New Data Suggests IT and Security Leaders Are Ignorant of Generative AI Threats

Research finds bans are ineffective

ExtraHop®, a leader in cloud-native network detection and response (NDR), today released The Generative AI Tipping Point, a new research report that found enterprises are struggling to understand and address the security concerns that come with employee generative AI use.

The report, which analyzes organizations’ plans for securing and governing the use of generative AI tools, details cognitive dissonance among security leaders as the technology increasingly becomes a mainstay at work. According to the findings, 73% of IT and security leaders admit their employees sometimes or frequently use generative AI tools or large language models (LLMs) at work, yet they aren’t sure how to appropriately address the associated security risks.

Security isn’t the top priority

When asked, IT and security leaders are more concerned about getting inaccurate or nonsensical responses (40%) than about security-centric issues such as exposure of customer and employee personally identifiable information (PII) (36%), exposure of trade secrets (33%), and financial loss (25%).

Generative AI bans prove ineffective

Almost a third (32%) of respondents shared that their organization has banned the use of generative AI tools, a proportion similar to the share who are very confident in their ability to protect against AI threats (36%). Despite these bans, only 5% say employees never use these tools at work, signaling that the bans are ineffective.

Organizations want more guidance, especially from the government

Although nearly three-quarters (74%) of those surveyed have invested or are planning to invest in generative AI protections or security measures this year, IT and security leaders want more guidance. A majority (90%) of respondents want the government involved in some way, with 60% favoring mandatory regulations and 30% supporting government standards that businesses can adopt at their own discretion.

Basic hygiene is lacking

More than four in five (82%) are very or somewhat confident their current security stack can protect against threats from generative AI tools. However, fewer than half have invested in technology that helps their organization monitor the use of generative AI. Furthermore, only 46% have policies in place governing acceptable use, and only 42% train users on the safe use of these tools.

Following the launch of ChatGPT in November 2022, enterprises have had less than a year to fully weigh the risks and rewards of generative AI tools. Amid rapid adoption, business leaders need to better understand how their employees use generative AI so they can identify gaps in their security protections and ensure that data and intellectual property are not improperly shared.

“There is a tremendous opportunity for generative AI to be a revolutionary technology in the workplace,” said Raja Mukerji, Co-founder and Chief Scientist, ExtraHop. “However, as with all emerging technologies we’ve seen become a staple of modern businesses, leaders need more guidance and education to understand how generative AI can be applied across their organization and the potential risks associated with it. By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

The full report is available to download, along with an accompanying blog post.

This research was conducted by Censuswide in Fall 2023.

About ExtraHop

ExtraHop is the cybersecurity partner enterprises trust to reveal the unknown and unmask the attack. The company’s Reveal(x) 360™ platform is the only network detection and response platform that delivers the 360-degree visibility needed to uncover the cybertruth. When organizations have full network transparency with ExtraHop, they see more, know more, and stop more cyberattacks. Learn more at www.extrahop.com.

© 2023 ExtraHop Networks, Inc. Reveal(x), Reveal(x) 360, Reveal(x) Enterprise, and ExtraHop are registered trademarks or trademarks of ExtraHop Networks, Inc.


Data & News supplied by www.cloudquote.io
Stock quotes supplied by Barchart
Quotes delayed at least 20 minutes.
By accessing this page, you agree to the following
Privacy Policy and Terms and Conditions.