Researchers have uncovered a disturbing new supply chain attack vector that threat actors could use to silently introduce and propagate virtually undetectable malicious code into AI-assisted software development projects.
This new attack method is the latest signal that organizations whose developers use generative AI coding tools to write software need formal policies, awareness training, and automated safeguards. And such organizations are far from rare: A 2024 GitHub survey found that 97% of enterprise developers surveyed have used such tools at work to accelerate coding tasks.
Researchers at Pillar Security recently discovered the new vector when looking into how development teams share AI configuration data. The method, which they have dubbed the "Rules File Backdoor," has to do with how AI coding assistants such as GitHub Copilot and Cursor process contextual information supplied in what are known as rules files.
Here's what your security team needs to know to avoid risk from this new attack vector.
[ Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security ]
What is a rules file?
Rules files are configuration files that define guidelines, guardrails, or constraints, such as coding standards or security checks, that the AI agent must use when generating code. The idea behind using a rules file is to ensure that any code that a gen AI tool generates or modifies is safe and consistent across projects.
As Pillar Security noted in its report, developers can create their own rules files for GitHub Copilot, Cursor, and other code-generation tools, or they can access what they need via public repositories and open-source communities. Organizations typically store rules files in a central repository that is accessible to entire project teams. They are, Pillar Security said, often "perceived as harmless configuration data" and end up getting integrated into projects without much inspection or security vetting.
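For context, a rules file is usually nothing more than plain-language guidance stored alongside the code. The snippet below is a hypothetical illustration of what such a file might contain; the filename and wording are invented for this article and are not taken from Pillar Security's report.

```
# .cursorrules (hypothetical example)
You are assisting on this project's web application code.
- Follow the existing linter configuration and naming conventions.
- Never hard-code credentials; read secrets from environment variables.
- Validate all user input and escape any output rendered into HTML.
```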
Why the backdoor is easy to exploit
What researchers at Pillar Security discovered is that it is relatively straightforward for an attacker to embed malicious prompts within seemingly benign rules files, upload them to a repository, and wait for someone to download the files into their environment. "When developers initiate code generation, the poisoned rules subtly influence the AI to produce code containing security vulnerabilities or backdoors," the security vendor noted, adding this:
"Unlike traditional code injection attacks that target specific vulnerabilities, 'Rules File Backdoor' represents a significant risk by weaponizing the AI itself as an attack vector, effectively turning the developer's most trusted assistant into an unwitting accomplice, potentially affecting millions of end users through compromised software."
To demonstrate how a Rules File Backdoor attack might work, Pillar Security developed and embedded an attack payload within a rules file for Copilot and Cursor. When asked to generate a simple HTML page, the payload instructed the AI coding agents to add a malicious script sourced from an attacker-controlled site. The researchers used invisible Unicode characters to ensure that their malicious instructions would be virtually undetectable to a human but readable by the AI agent. In addition, the rules file explicitly commanded the AI agent not to log or make any mention of the addition of the malicious script to the HTML page. "Together, these components create a highly effective attack that remains undetected during both generation and review phases," the researchers said.
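To see why such instructions can slip past a human reviewer, consider the short Python sketch below. It is purely illustrative and contains no payload; it simply shows that a string carrying zero-width Unicode characters renders the same as a clean string in most editors and terminals, even though a program, or a model reading the file as context, still receives the extra characters.

```python
# Illustrative only: why zero-width Unicode characters evade human review.
clean = "Always follow the project style guide."
# The same visible text with zero-width characters (U+200B, U+200C) spliced in.
hidden = "Always follow the\u200b project\u200c style guide."

print(clean)                    # Always follow the project style guide.
print(hidden)                   # Renders identically in most editors and terminals
print(clean == hidden)          # False: the strings differ
print(len(clean), len(hidden))  # 38 40: the hidden characters are still there
```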
Why it's a particularly pernicious attack
Ziv Karliner, Pillar Security's chief technology officer and co-founder, said the Rules File Backdoor attack is particularly dangerous for a couple of reasons: stealth and persistence. The attack leverages configuration files that are usually trusted and overlooked during code reviews. It embeds malicious instructions using hidden Unicode characters that are almost undetectable. Once in place, the malicious instructions persistently manipulate AI coding assistants in an organization's environment, thereby affecting future code generation as well.
"Unlike traditional software supply chain attacks that focus on dependencies or direct code modifications, this attack exploits the way AI models interpret and generate code, subtly injecting vulnerabilities without explicit malicious payloads."
—Ziv Karliner
Pillar Security reported that both GitHub and Cursor characterize the issue as something that developers will need to address themselves when reviewing and accepting code suggestions. But Karliner said he believes that developers of AI coding assistants can significantly reduce the attack vector's effectiveness through enhanced input validation.
"They should implement detection mechanisms specifically targeting obfuscated or hidden characters within rules files. Since there is no legitimate need for hidden Unicode characters in rules files, these platforms can simply block such characters before providing the rules file as context to the model completion."
—Ziv Karliner
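A check along those lines does not require anything exotic. The Python sketch below is a rough illustration of the idea, not something GitHub or Cursor ships: it rejects a rules file if the file contains zero-width, bidirectional-control, or other invisible format characters before the file would be handed to a model as context.

```python
import sys
import unicodedata

def find_hidden_characters(text: str) -> list[tuple[int, str]]:
    """Return (offset, character name) for every invisible format character.

    Unicode category "Cf" covers zero-width spaces and joiners as well as
    bidirectional embedding, override, and isolate controls.
    """
    return [
        (i, unicodedata.name(ch, f"U+{ord(ch):04X}"))
        for i, ch in enumerate(text)
        if unicodedata.category(ch) == "Cf"
    ]

if __name__ == "__main__":
    path = sys.argv[1]  # e.g. a rules file such as .cursorrules
    findings = find_hidden_characters(open(path, encoding="utf-8").read())
    for offset, name in findings:
        print(f"{path}: hidden character {name} at offset {offset}")
    if findings:
        sys.exit(1)  # refuse to pass the file to the model as context
    print(f"{path}: no hidden characters found")
```

Rejecting every category "Cf" character is a blunt instrument, but, as Karliner notes, rules files have no legitimate need for them.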
Low effort, high payoff means trouble
Heath Renfrow, chief information security officer and co-founder at Fenix24, said he agrees with Pillar Security's assessment of the attack vector as particularly pernicious. He said he sees Rules File Backdoor as an example of how the risks from covert code manipulation, especially via seemingly innocuous configuration files, are likely to escalate as AI-generated code becomes part of modern development workflows.
"What makes this vector particularly dangerous is its subtlety. Unlike traditional malware, these hidden instructions can be deeply embedded in files developers often trust and overlook, like .yaml, .json, or .env."
—Heath Renfrow
Once inside a developer environment, the instructions can trigger actions that give attackers persistence, lateral movement opportunities, or even access to production pipelines. From an attacker’s perspective, this method is relatively low-effort and high-reward — especially in environments lacking robust code review and automated scanning for infrastructure-as-code (IaC) and configuration files, Renfrow said.
He recommended that organizations include all AI-generated code in their normal software code review processes to mitigate risk of the sort that Pillar Security identified in its research. Organizations should also enforce strict policies around trusted sources for AI-generated code and configurations and use static analysis and IaC security tools that can detect anomalies in configuration files.
"Educate developers on the risks of blindly accepting AI suggestions, especially in sensitive files. AI is a powerful tool, but like any technology, it must be integrated with security top of mind."
—Heath Renfrow
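For teams that want an automated gate of the sort Renfrow describes, the sketch below shows one way it might look as a CI step. It is a minimal sketch under assumed file-naming conventions, not a specific vendor tool: it walks the configuration files tracked in a git repository and fails the build if any of them contain invisible Unicode format characters.

```python
import subprocess
import sys
import unicodedata
from pathlib import Path

# Configuration files that tend to be trusted and overlooked in review.
# The list is illustrative; tune it to the repository's conventions.
CONFIG_NAMES_AND_SUFFIXES = {".yaml", ".yml", ".json", ".env", ".cursorrules", ".mdc"}

def is_config_file(path: Path) -> bool:
    return path.suffix in CONFIG_NAMES_AND_SUFFIXES or path.name in CONFIG_NAMES_AND_SUFFIXES

def has_hidden_characters(text: str) -> bool:
    # Unicode category "Cf" covers zero-width and bidirectional-control characters.
    return any(unicodedata.category(ch) == "Cf" for ch in text)

if __name__ == "__main__":
    tracked = subprocess.run(["git", "ls-files"], capture_output=True, text=True, check=True)
    failed = False
    for line in tracked.stdout.splitlines():
        path = Path(line)
        if is_config_file(path) and has_hidden_characters(
            path.read_text(encoding="utf-8", errors="replace")
        ):
            print(f"CI gate: {path} contains hidden Unicode characters")
            failed = True
    sys.exit(1 if failed else 0)
```

A check like this complements, rather than replaces, the code review and trusted-source policies Renfrow recommends.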
Vibe coding will exacerbate the backdoor threat
The Rules File Backdoor issue that Pillar Security identified is particularly troublesome at a time when many developers are "vibe coding," or using natural language to guide AI tools to generate code, said Kaushik Devireddy, senior product manager at Deepwatch. There are communities and open-source projects dedicated to providing vibe coders with config files that can improve the efficacy of their AI tools, he said.
"Vibe coding is all about using AI-coding tools such as Cursor, Windsurf, GitHub Copilot, etc. to write large features end to end by simply providing repeated English prompts. However, these config files can also be written in a way to induce AI tools outputting malicious code."
—Kaushik Devireddy
He said this is a brand-new attack vector, which manifests in the application logic layer — a particularly thorny area to secure. Complicating matters is the fact that there are no standard mechanisms for scanning configuration files other than via manual inspection.
"With the increase in vibe coding, this becomes an AI supply chain attack which affects non-developers equally."
—Kaushik Devireddy
To mitigate the threat, the most effective use of time is producing or curating training and awareness content for development teams, Devireddy said. "Ultimately, these tools are here to stay, and innovative organizations should find ways to safely leverage them rather than outright block them," he said.
AI coding tool risk is on the rise
The use of AI assistants is becoming routine for developers. Beth Linker, a senior director at Black Duck Software, noted that in Black Duck's 2024 Global State of DevSecOps report, about nine out of 10 organizations surveyed reported that their developers were using AI coding tools — with or without permission. A recent Apiiro report said that since the launch of ChatGPT in November 2022, more than 150 million developers have started using GitHub Copilot.
Agentic AI for coding is the next wave. Dhaval Shah, senior director of product management at ReversingLabs, said security leaders need to balance strategic oversight with immediate controls because agentic AI is already here. That means deploying AI-aware monitoring that tracks both code generation and dependency inclusion, creating automated security gates that match AI development speed, and establishing clear boundaries for AI tool usage in critical code.
On the broader strategic front, Shah said, organizations will need to implement trust-but-verify automated security baseline checks and to maintain human-review checkpoints for security-critical changes to code and logic. He also recommended that, wherever possible, teams run AI development in contained environments with defined boundaries.
“Think of it like giving AI a sandbox to play in, but with clear rules and constant supervision. The key isn't containing AI — it's channeling its power within secure guardrails."
—Dhaval Shah
Keep learning
- Go big-picture on the software risk landscape with RL's 2025 Software Supply Chain Security Report. Plus: Join our Webinar to discuss the findings.
- Get up to speed on securing AI/ML with our white paper: AI Is the Supply Chain. Plus: See RL's research on nullifAI and join our Webinar to learn how RL discovered the novel threat.
- Learn how commercial software risk is under-addressed: Download the white paper — and see our related Webinar for more insights.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.