TEL AVIV, Israel, March 18, 2025 (GLOBE NEWSWIRE) -- Pillar Security, a pioneering company in AI security, discovered a significant vulnerability affecting GitHub Copilot and Cursor - the world's leading AI-powered code editors.
This new attack vector, dubbed the "Rule Files Backdoor," allows attackers to covertly manipulate these trusted AI platforms into generating malicious code that appears legitimate to developers.
This newly discovered attack vector exploits hidden configuration mechanisms within these tools, enabling attackers to inject malicious code suggestions that blend seamlessly into legitimate AI-generated recommendations and bypass human scrutiny and conventional security checks.
Unlike traditional code injection attacks that target specific vulnerabilities, “Rule Files Backdoor” represents a significant risk by weaponizing the AI itself as an attack vector, effectively turning the developer's most trusted assistant into an unwitting accomplice.
"This new attack vector demonstrates that rule files can instruct AI assistants to subtly modify generated code in ways that introduce security vulnerabilities while appearing completely legitimate to developers," said Ziv Karliner, CTO & Co-Founder of Pillar Security. "Developers have no reason to suspect their AI assistant is compromised, as the malicious code blends seamlessly with legitimate suggestions. This represents a fundamental shift in how we must think about supply chain security."
Key Findings and Implications:
- Widespread Industry Exposure: The vulnerability affects Cursor and GitHub Copilot, which collectively serve millions of developers and are integrated into countless enterprise development workflows worldwide.
- Minimal Attack Requirements: Execution requires no special privileges, administrative access, or sophisticated tools--attackers need only manipulate configuration files within targeted repositories.
- Undetectable Infiltration: Malicious code suggestions blend seamlessly with legitimate AI-generated code, bypassing both manual code reviews and automated security scanning tools.
- Data Exfiltration Capabilities: Well-crafted malicious rules can direct AI tools to add code that leaks sensitive information while appearing legitimate, including environment variables, database credentials, API keys, and user data--all under the guise of "following best practices."
- Long-Term Persistence & Supply Chain Risk: Once a compromised rule file is incorporated into a project repository, it affects all future code generation, with poisoned rules often surviving project forking, creating vectors for supply chain attacks that affect downstream dependencies.
Who is Affected?
A 2024 GitHub survey found that nearly all enterprise developers (97%) are using Generative AI coding tools. According to Pillar, because these rule files are shared and reused across multiple projects, one compromised file can lead to widespread vulnerabilities. The research identified several propagation vectors:
- Developer Forums and Communities: Malicious actors sharing "helpful" rule files that unwitting developers incorporate
- Open-Source Contributions: Pull requests to popular repositories that include poisoned rule files
- Project Templates: Starter kits containing poisoned rules that spread to new projects
- Corporate Knowledge Bases: Internal rule repositories that, once compromised, affect all company projects
Mitigation
To mitigate this risk, we recommend the following technical countermeasures:
- Audit Existing Rules: Review all rule files in your repositories for potential malicious instructions, focusing on invisible Unicode characters and unusual formatting
- Implement Validation Processes: Establish review procedures specifically for AI configuration files, treating them with the same scrutiny as executable code
- Deploy Detection Tools: Implement tools that can identify suspicious patterns in rule files and monitor AI-generated code for indicators of compromise
- Review AI-Generated Code: Pay special attention to unexpected additions like external resource references, unusual imports, or complex expressions
Following responsible disclosure practices, Pillar alerted both Cursor (February 26) and GitHub (March 12), who responded that users bear responsibility for reviewing AI-generated code suggestions.
“Given the growing reliance on AI coding assistants within development workflows, we believe it's essential to raise public awareness about potential security implications. We have reached an era where AI coding assistants must be regarded as critical infrastructure,” said Karliner.
Link to the full report: www.pillar.security/blog/new-vulnerability-in-github-copilot-and-cursor-how-hackers-can-weaponize-code-agents
About Pillar Security
Pillar is a unified, end-to-end AI security platform that accelerates AI initiatives by establishing robust security foundations across the entire AI lifecycle. By embedding security from development through runtime, Pillar enables organizations to ship AI-powered applications and agents with confidence while managing critical business risks.
The platform's comprehensive capabilities—including AI fingerprinting, asset inventory, and deep integration with development and data platforms—create a secure foundation that prevents data breaches and ensures compliance. Through tailored adversarial AI testing and adaptive guardrails aligned with industry standards, Pillar removes security bottlenecks, allowing teams to innovate and deploy AI faster without compromising on security.

Hadar Yakir hadar@pillar.security