Vibe coding, the practice of using large language models (LLMs) and prompts to write usable code, has been garnering plenty of attention lately.
In February, Andrej Karpathy, the former senior director of AI at Tesla and a former research scientist and founding member at OpenAI, shared a Twitter post about how he was experimenting with what he had dubbed "vibe coding," a “new kind of coding … where you fully give in to the vibes, embrace exponentials, and forget that the code even exists” to see where it takes you.
“I ask for the dumbest things like ‘decrease the padding on the sidebar by half’ because I'm too lazy to find it. I ‘Accept All’ always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it.”
—Andrej Karpathy
“It's not too bad for throwaway weekend projects, but still quite amusing,” he concluded. (Karpathy did not respond to several email requests for further comment.)
Despite Karpathy's underwhelming endorsement of vibe coding, venture capital firms, including Y Combinator and a16z, are excitedly talking it up in podcasts, saying that vibe coding offers new possibilities for startups trying to create products and develop ideas with small staffs and budgets.
Meanwhile, some IT experts see worrisome implications in vibe coding. Vibe coding and agentic coding are like no-code application building on steroids, but their security challenges are largely underappreciated. Here's what your application security (AppSec) team needs to know.
[ Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security ]
Don’t minimize the security concerns of AI-based code
Chris Hughes, the CEO of Aquia, spotted Karpathy's vibe-coding tweet and wrote about it on his Resilient Cyber blog. Vibe coding demonstrates the potential of AI and LLMs in coding, he said, but those racing to use LLMs to write software are paying little attention to the business-critical topic of AppSec, especially if the practice is ever extended to the enterprise. Already, he noted, Y Combinator has stated that 25% of the startups in its current cohort have codebases that are almost entirely AI-generated. Now throw in vibe coding, Hughes said, where developers, "if we want to call them that, it remains a question," are relying on prompt engineering and writing little of the code themselves, if any.
"If the vibe coding wave and broader code-by-prompt engineering activity isn’t accompanied by specific instructions related to secure coding and applications, what does the future of the cybersecurity landscape look like for these nonchalantly developed applications?”
—Chris Hughes
Security worries start to mount, Hughes said, when developers and organizations prioritize speed to market or revenue over security, and when you add the reality that many large frontier models are trained on large swaths of open-source software rather than on proprietary databases of presumably more secure code.
A recent ReversingLabs post covered a report by Pillar Security researchers, who discovered a new attack method, dubbed the "Rules File Backdoor," that "represents a significant risk by weaponizing the AI itself as an attack vector, effectively turning the developer's most trusted assistant into an unwitting accomplice." RL wrote that the Pillar report is just the latest signal that organizations whose developers are using generative AI coding tools to write software must have formal policies, awareness training, and automated safeguards in place.
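Automated safeguards can start small. The attack Pillar describes abuses the rules files that AI coding assistants read for project-specific instructions, so one modest check is to flag characters in those files that a human reviewer cannot see. The following is a minimal sketch in Python; the file names it searches for and the Unicode category it flags are illustrative assumptions, not details drawn from the Pillar report.

```python
# Minimal sketch: flag invisible "format" characters (zero-width spaces,
# bidirectional controls, etc.) hiding in AI assistant rules files.
# The file names below are assumptions for illustration only.
import pathlib
import unicodedata

RULES_FILE_NAMES = [".cursorrules", ".windsurfrules", "copilot-instructions.md"]
SUSPECT_CATEGORIES = {"Cf"}  # Unicode "format" characters are invisible in most editors

def find_hidden_characters(repo_root: str) -> list[tuple[str, int, str]]:
    findings = []
    for name in RULES_FILE_NAMES:
        for path in pathlib.Path(repo_root).rglob(name):
            text = path.read_text(encoding="utf-8", errors="replace")
            for offset, char in enumerate(text):
                if unicodedata.category(char) in SUSPECT_CATEGORIES:
                    findings.append((str(path), offset, unicodedata.name(char, hex(ord(char)))))
    return findings

if __name__ == "__main__":
    for file_path, offset, char_name in find_hidden_characters("."):
        print(f"{file_path}: offset {offset}: {char_name}")
```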
Vibe coding piles on to software risk
While the fun side of vibe coding is undeniable, the practice carries very real security concerns, said Janet Worthington, a senior security and risk analyst with Forrester Research.
LLMs can hallucinate, she noted, and they sometimes suggest vulnerable, insecure, and even non-existent open-source libraries.
"Humans tend to place more faith in generated code than is warranted, Therefore, we might take whatever the output is at face value, not scrutinizing it the same way we would a fellow developer's code.”
—Janet Worthington
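Hallucinated dependencies are one place where that scrutiny can be partly automated. As a minimal sketch, and assuming a Python project, a team could at least confirm that assistant-suggested packages exist on the public PyPI index before anyone installs them; existence, of course, says nothing about whether a package is trustworthy.

```python
# Minimal sketch: verify that LLM-suggested dependencies actually exist on
# PyPI before running "pip install". A squatted or malicious package will
# still pass this check, so this is a floor, not a safeguard on its own.
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        return False

if __name__ == "__main__":
    suggested = ["requests", "flask", "totally-made-up-llm-package"]  # example input
    for package in suggested:
        verdict = "found" if package_exists_on_pypi(package) else "NOT FOUND (possible hallucination)"
        print(f"{package}: {verdict}")
```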
Companies using AI coding assistants still need to do proper AppSec testing, such as static application security testing, software composition analysis, and secrets detection, Worthington said. “This doesn't even take into account reliability, accessibility, scalability, performance, or quality," she added.
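As one small illustration of the secrets-detection piece, a check like the following can run over generated code before it is committed. It is a hedged sketch with deliberately simplified patterns; production scanners use far larger rule sets plus entropy analysis.

```python
# Minimal sketch: scan files for a few obvious secret patterns. Real
# secrets-detection tools cover many more credential formats.
import re
import sys

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hard-coded API key or token": re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{12,}['\"]"),
    "private key header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def scan(path: str) -> list[str]:
    findings = []
    with open(path, encoding="utf-8", errors="replace") as handle:
        for line_number, line in enumerate(handle, start=1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append(f"{path}:{line_number}: possible {label}")
    return findings

if __name__ == "__main__":
    for file_path in sys.argv[1:]:
        for finding in scan(file_path):
            print(finding)
```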
Enterprises thinking about using AI assistants need to consider issues around governance, security, legal, and third-party risk, Worthington said.
“They need to ensure that their intellectual property is not being used for training, that their sensitive data is not leaked by the LLM, and consider the copyright and legal liabilities that may arise from using generative AI tools."
—Janet Worthington
But it's one thing to use generative AI assistants as part of the software development lifecycle and another to embrace vibe coding, she said. “Generative AI assistants can aid software developers by suggesting or generating code, helping to debug issues, and writing unit test cases and documentation. Studies have suggested that this can help with productivity, serving as an assistant, not as a replacement.”
'Vibe me some malware'
Could vibe coding be used by attackers who prompt LLMs with commands such as “Vibe me some malware”? “Attackers are already using generative AI tools to help them craft better phishing messages. If generative AI can help developers be more productive, it can also help attackers be more productive,” Worthington said.
Another expert, Viswanath Chirravuri, software security director at the consultancy Thales Cyber & Digital, said that while vibe coding can be attractive to startups looking to get things off the ground, it will never be a secure option for enterprises in critical verticals because of its inherent insecurity.
“My top concern is that vibe coding will be the No. 1 reason for increases in supply chain security concerns in the market if it is not handled properly. That is where I believe vibe coding will add significant risks to supply chain security.”
—Viswanath Chirravuri
Chirravuri said he did not see a lot of value in vibe coding for very sensitive business lines such as medicine, health care, and government. “It is all about the risk they are accepting — do they have the awareness of all the flaws that we are identifying in vibe coding?” he asked.
Where vibe coding can work, Chirravuri said, is in quick-win situations, such as quickly building a prototype to demonstrate how something can be done. “That solves the problem,” he said. “But when I want production-grade software code, where I do not want supply chain risks built into it, that is where I am bringing in human involvement from developers. Is vibe coding scalable across industries? No, not today, not tomorrow, and never, in my opinion.”
Andrew Bolster, the senior R&D manager at Black Duck, is also wary, noting the pitfalls of this "supposedly magical and freeing approach to the occasionally opaque world of software engineering."
“Every day, there are instances online of people vibing their way through a conversational development process only to end up with a sprawling mess of code that can’t be understood by the ‘author’ (if they really are the author in this context) and that falls over at the touch of a feather.”
—Andrew Bolster
From a security perspective, this is horrifying, said Bolster. “Some of the most impactful and pernicious vulnerabilities come from subtle interactions between complex software components that evolve and change over time — think Log4j, Apache Struts, or even Heartbleed, which [combined] caused billions of dollars of damage. As vibe coding and AI-assisted development continues to develop, it is tempting to clutch our collective pearls and bemoan the death of programming.”
Those massive security vulnerabilities from the past provide a sobering reality check that proves “that LLM-assisted systems still need those software security systems and processes” and that applications must be written using more than vibes, said Bolster.
“As an industry, we should be focusing on wrapping up those tools and processes in a way that we can integrate LLM-derived development as a partner, not as a replacement, and ultimately protect our bottom line by building trust in our software."
—Andrew Bolster
Keep learning
- Go big-picture on the software risk landscape with RL's 2025 Software Supply Chain Security Report. Plus: See our Webinar for discussion about the findings.
- Get up to speed on securing AI/ML with our white paper: AI Is the Supply Chain. Plus: See RL's research on nullifAI and replay our Webinar to learn how RL discovered the novel threat.
- Learn how commercial software risk is under-addressed: Download the white paper — and see our related Webinar for more insights.
- Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.