Addressing Data Protection Challenges in the Age of Generative AI-Powered Language Models
Next DLP ("Next"), a leader in data protection, today announced the addition of ChatGPT policy templates to the company’s Reveal platform, which uncovers risk, educates employees, and fulfills security, compliance, and regulatory needs. The new policy templates respond to the dramatic increase in the use of large language model platforms across the company’s global customer base.
With the new policies, customers gain enhanced monitoring and protection for employees using ChatGPT. The first policy educates employees on the potential risks associated with using the service: it triggers when an employee visits the ChatGPT website and can remind the user of proper corporate data usage protocols. The second policy detects sensitive information such as internal project names, credit card numbers, or Social Security numbers in ChatGPT conversations, enabling organizations to take preventive measures against unauthorized data sharing. These policies are just two of many possible configurations that protect organizations whose employees use ChatGPT.
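To illustrate the kind of detection the second policy describes, the sketch below shows a minimal, hypothetical example of pattern-based sensitive-data screening. It is not Next’s implementation: the regular expressions, the Luhn checksum used to filter card-number candidates, and the placeholder project names are all assumptions made for illustration.

```python
import re

# Hypothetical detectors for illustration only; a production DLP policy would
# rely on the vendor's own detectors, validation logic, and policy actions.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")      # candidate payment card numbers
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")        # U.S. Social Security number format
PROJECT_NAMES = {"Project Falcon", "Project Orion"}  # placeholder internal project names


def luhn_valid(candidate: str) -> bool:
    """Return True if the digit string passes the Luhn checksum (basic card validation)."""
    digits = [int(c) for c in candidate if c.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return len(digits) >= 13 and checksum % 10 == 0


def find_sensitive(text: str) -> list[str]:
    """Flag likely card numbers, SSNs, and known project names in outbound text."""
    findings = []
    for match in CARD_RE.finditer(text):
        if luhn_valid(match.group()):
            findings.append(f"credit card number: {match.group()}")
    findings.extend(f"SSN: {m.group()}" for m in SSN_RE.finditer(text))
    findings.extend(f"project name: {name}" for name in PROJECT_NAMES if name in text)
    return findings


if __name__ == "__main__":
    prompt = "Summarize the Project Orion roadmap. Test card 4111 1111 1111 1111."
    for finding in find_sensitive(prompt):
        print("FLAGGED -", finding)
```

In practice, a DLP platform would combine far richer detectors with policy actions such as user coaching prompts, blocking, or alerting, rather than a standalone check like this one.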
"As AI-powered language models continue to revolutionize the way we interact with technology, it's essential for organizations to stay ahead of the curve in terms of data protection and have visibility into how employees are leveraging these tools," said Constance Stack, Chief Executive Officer of Next DLP. "The ability for Next to quickly pivot and create new policies in tandem with the speed of technology is propelling the company to the head of the insider risk and data protection market."
After surveying customers’ use of ChatGPT,1 the company found:
- Over 50% of organizations have at least one employee who has used ChatGPT on a company-issued device
- Among larger organizations the rate is much higher: 97% have seen their employees use ChatGPT
- One in ten endpoints across the Reveal network has accessed ChatGPT
“CISOs cannot bury their heads in the sand when it comes to generative AI platforms like ChatGPT. The reality is that employees are turning to these tools for efficiency and often insert sensitive company information into the platform to increase the speed and agility of their operations, exposing their company’s data in the process,” said Chris Denbigh-White, Security Strategist at Next. “The Reveal platform is built to remain flexible, adapting to changes in company workflows and incorporating new policies as new technology is accessed and used within any enterprise.”
For more information on the Reveal Platform and the new ChatGPT visibility and adaptive controls, visit https://www.nextdlp.com/resources/blog/chatgpt-data-protection-policies.
1 Research methodology: Next used anonymized data from the Reveal platform to assess ChatGPT usage and inform the development of the policy templates.
About Next DLP
Next DLP ("Next") is a leading provider of insider risk and data protection solutions. Next is disrupting the legacy data loss prevention market with a user-centric, flexible, cloud-native, AI/ML-powered solution built for today's threat landscape. The Reveal Platform by Next uncovers risk, educates employees, and fulfills security, compliance, and regulatory needs. The company's leadership brings decades of cyber and technology experience from Fortra (f.k.a. HelpSystems), Digital Guardian, Forcepoint, Mimecast, IBM, Cisco, and Veracode. Next is trusted by organizations big and small, from the Fortune 100 to fast-growing healthcare and technology companies. For more information, visit www.nextdlp.com.
Contacts
Media Contact for Next DLP:
Danielle Ostrovsky
Hi-Touch PR
410-302-9459
Ostrovsky@hi-touchpr.com