Irregular Raises $80 Million to Set the Security Standards for Frontier AI

Already generating millions in annual revenue, Irregular partners with leading labs such as OpenAI and Anthropic to assess advanced models against real-world threats and to define the security frameworks needed for safe deployment

SAN FRANCISCO, CALIFORNIA / ACCESS Newswire / September 17, 2025 / Irregular, the world's first frontier AI security lab, today announced it has raised $80 million in funding led by Sequoia Capital and Redpoint Ventures, with participation from Swish Ventures, as well as from notable angel investors including Wiz CEO Assaf Rappaport and Eon CEO Ofir Ehrlich. Formerly known as Pattern Labs, Irregular has reached millions in annual revenue. It works side by side with the world's leading AI labs, such as OpenAI and Anthropic, to evaluate how next-generation AI models may themselves carry out real-world threats, such as evading antivirus software or taking autonomous offensive actions, and to develop the defenses needed before deployment.

As AI adoption accelerates, the security risks are more advanced than most organizations realize. Frontier labs like OpenAI, Anthropic, and Google DeepMind were built to make AI powerful and safe; Irregular was founded with the mission to make it secure.

The company runs controlled simulations on frontier AI models to test both their potential for misuse in cyber operations and their resilience when targeted by attackers. This gives AI creators and deployers a secure way to uncover vulnerabilities early and to build the safeguards they need before deployment.

"Irregular has taken on an ambitious mission to make sure the future of AI is secure as it is powerful," said Dan Lahav, Co-Founder and CEO of Irregular. "AI capabilities are advancing at breakneck speed; we're building the tools to test the most advanced systems way before public release, and to create the mitigations that will shape how AI is deployed responsibly at scale."

Irregular's work already shapes industry standards. The company's evaluations are cited in OpenAI's system cards for GPT-4, o3, o4-mini, and GPT-5; the UK government and Anthropic use Irregular's SOLVE framework, with Anthropic applying it to vet cyber risks in Claude 4; and Google DeepMind researchers recently cited the company in a paper on evaluating the emerging cyberattack capabilities of AI. The company also co-authored a whitepaper with Anthropic presenting a novel approach to using Confidential Computing technologies to strengthen the security of AI model weights and the privacy of user data, and co-authored a seminal paper with RAND on AI model theft and misuse, helping shape Europe's policy discussions on AI security and setting a benchmark for the field.

"The real AI security threats haven't emerged yet," said Shaun Maguire, Partner at Sequoia Capital. "What stood out about the Irregular team is how far ahead they're thinking. They're working with the most advanced models being built today and laying the groundwork for how we'll need to make AI reliable in the years ahead.״

About Irregular
Irregular is the first frontier AI security lab, working to mitigate the cybersecurity risks posed by advanced AI models while protecting those models from cyberattacks. By partnering with leading frontier labs like OpenAI and Anthropic, Irregular evaluates foundation models for both their potential for misuse in cyber operations and their resilience when targeted by attackers. With deep roots in both AI and cybersecurity, the team is redefining how we secure the next generation of AI. Irregular is building the tools, testing methods, and scoring frameworks that will help organizations deploy AI safely, securely, and responsibly.

Learn more at www.irregular.com

Media contact

Itai Singer, TellNY
itai@tellny.com

SOURCE: Irregular



View the original press release on ACCESS Newswire
