An updated version of the OWASP Top 10 for LLM Applications has been released, with a key outline of software risk for large language model development and more, including unbounded security, vector, and embedding vulnerabilities; system prompt leakage; and excessive agency.
The update comes on the heels of new OWASP guidance for organizations seeking to protect generative AI models and an analysis of current application security (AppSec) tools in the market.
OWASP's LLM project lead, Steve Wilson, said in a recent statement:
"We're two years into the generative AI boom, and attackers are using AI to get smarter and faster. Security leaders and software developers need to do the same. Our new resources arm organizations with the tools they need to stay ahead of these increasingly sophisticated threats."
Here's what you need to know about the updated OWASP Top 10 for LLM Applications, its new software risk outline, the tooling landscape guide — and what's needed to adequately secure AI and machine-learning apps.
[ Special Report: Secure Your Organization Against AI/ML Threats | See Webinar: The MLephant in the Room ]
OWASP Top 10 for LLM now outlines key areas of software risk
The new OWASP Top 10 for LLM Applications outlines the top 10 risks, vulnerabilities, and mitigations for developing and securing gen AI and LLM applications across the development, deployment, and management lifecycle. These applications can include static prompt augmented applications, agentic applications, LLM extensions, and generally complex applications.
Unbounded consumption
The new unbounded consumption risk expands on what was previously the denial-of-service threat. It includes risks involving resource management and unexpected costs — a pressing issue in large-scale LLM deployments, said OWASP project co-leader Scott Clinton.
"With unbounded consumption, we're talking about the financial and capacity issues people are running into as they're starting to run larger and larger applications on top of large language models. Unbounded consumption is not only just denial of service, but also the consumption of the resources, the financial components of that, the risk associated with that."
—Scott Clinton
Unbounded consumption can happen when an LLM generates outputs based on input queries or prompts. Inference is a critical function of LLMs, involving the application of learned patterns and knowledge to produce relevant responses or predictions.
Attacks designed to disrupt service, deplete a target's financial resources, or even steal intellectual property by cloning a model's behavior all depend on a common class of security vulnerability in order to succeed. Unbounded consumption occurs when an LLM application allows users to conduct excessive and uncontrolled inferences, leading to risks such as denial of service, economic losses, model theft, and service degradation. The high computational demands of LLMs, especially in cloud environments, make them vulnerable to resource exploitation and unauthorized usage.
The project explained that the new vector and embeddings entry is a response to the OWASP community's requests for guidance on securing retrieval-augmented generation (RAG) and other embedding-based methods, now core practices for grounding model outputs, OWASP's Clinton said. "When we first started with RAG, it was one of those areas that was theoretical; the architectures were just starting to evolve,” he said. “But in most applications today, it's almost a default.”
“Before, it was sort of an emerging area. Now we've got some real-world examples, so it was really important to provide more detailed guidance because now these architectures are becoming the backbone of a lot of AI systems.”
—Scott Clinton
OWASP explained that vectors and embeddings vulnerabilities present significant security risks in systems using RAG with LLMs. Weaknesses in how vectors and embeddings are generated, stored, or retrieved can be exploited by malicious actions to inject harmful content, manipulate model outputs, or access sensitive information.
System prompt leakage
The project also added a new system prompt leakage item to address an area with real-world exploits. Many applications assume prompts are securely isolated, but recent incidents have shown that developers cannot safely assume that information in these prompts remains secret, it explained.
Prompt leakage creates a risk when system prompts or instructions used to steer the behavior of the model contains sensitive information that was not intended to be discovered. System prompts are designed to guide the model's output based on the requirements of the application but may inadvertently contain secrets. When discovered, this information can be used to facilitate other attacks.
Excessive agency
The excessive agency risk in the list has been expanded, as has the use of agentic architectures, which can give an LLM more autonomy. With LLMs acting as agents or in plug-ins, unchecked permissions can lead to unintended or risky actions, making this entry more critical than ever, OWASP noted.
"When we first designed the item, it was sort of theoretical. What we've done now with the expansion of it is start to just capture what we've seen actually happening in the wild. It's more of a tuning than it is a huge lift when it comes to expanding it."
—Scott Clinton
An LLM-based system is often granted a degree of agency by its developer to undertake actions in response to a prompt, the project explained. The decision over which extension to invoke may also be delegated to an LLM "agent" to dynamically determine based on input prompt or LLM output. Agent-based systems will typically make repeated calls to an LLM using output from previous invocations to ground and direct subsequent invocations, Clinton said.
"Agentic applications are going to be a key concern as people start to implement more automation. The agency they're giving to these applications is much more than what chatbots have historically been given. It's opening up a huge issue in that particular space.”
—Scott Clinton
OWASP Top 10 for LLM Applications targets Gen AI
The new OWASP Top 10 for LLM includes guidance for preparing and responding to deepfake events (focusing on risk assessment, threat actor identification, incident response, awareness training, and event types) and establishing centers of excellence for gen AI security (designed to develop security policies, foster collaboration, build trust, advance ethical practices, and optimize AI performance). The new "AI Security Solution Landscape Guide" offers insights into both open-source and commercial solutions for securing LLMs and gen AI applications.
Deepfakes — images, videos, or audio recordings created or manipulated using deep neural networks — pose significant threats, said Matthew Walsh, team lead and senior data scientist in the CERT division at Carnegie Mellon University's Software Engineering Institute. "Given the rising frequency and sophistication of deepfake attacks, organizations must have clear guidance on how to prepare for and respond to these incidents," he said.
One reason guidance on deepfakes is needed is that their use is rising sharply, Walsh said, citing data from the AIAAIC (AI, Algorithmic, and Automation Incidents and Controversies) and the AI Incident Database. The data shows that between 2022 and 2023, there was nearly a fivefold increase in deepfake attacks. "Businesses across various sectors, government agencies, and private citizens have become targets of these attacks," Walsh said.
"While deepfakes are often associated with individual defamation or fake content, these attacks can also be used for financial fraud, identity theft, and the spread of misinformation. To mitigate this risk, organizations must prioritize campaigns that raise employee awareness about the deepfake threat."
—Matthew Walsh
Guidance is also needed because deepfake technology is becoming increasingly sophisticated. "Advances in generative adversarial networks (GANs), variational auto-encoders (VAEs), and diffusion models have made it easier to produce highly realistic videos, images, and audio that are virtually indistinguishable from genuine media," he noted. "As these generative methods become more powerful and accessible, organizations must implement robust detection technologies to identify deepfakes. Additionally, organizations should invest in training programs that help employees to spot fraudulent content before it can cause harm."
Rapid response is critical when dealing with deepfake attacks, Walsh said. "Disinformation spread via deepfakes can quickly go viral, especially on social media platforms. In some cases, deepfakes have been used in live social engineering attacks, adding to the urgency," he said.
"Organizations must be prepared with an incident-response plan well in advance of an attack. Waiting until a deepfake event occurs to implement a response strategy is not an option — timely, coordinated action is needed to limit damage and prevent the further spread of false information."
—Matthew Walsh
Henry Patishman, executive vice president for identity verification solutions at Regula, a forensic devices and identity verification firm, said the deepfake guidance provided by the OWASP team is timely and should be taken up by all businesses around the world, regardless of size, industry, and location. "'The OWASP Guide to Preparing and Responding to Deepfake Events' very clearly outlines the current threats and guidance on how to deal with some specific events," Patishman said. "This guide acts as a great starting point for organizations to understand the threat and begin developing their own internal strategies."
Patishman added that, according to a survey conducted by Regula in August 2024, about half of all businesses worldwide reported cases of audio or video deepfake fraud in the past year. "This represents more than a 12% rise in audio deepfakes and almost a 20% rise in video deepfakes when compared to a similar study conducted in 2022," he said.
"This threat is not industry-specific, with all surveyed industries — crypto, financial services, aviation, technology, healthcare, telecom, and law enforcement — showing more than 40% of companies within each industry experiencing deepfakes."
—Henry Patishman
AI centers of excellence: Bring your teams together
The new OWASP team is also offering guidance for organizations to create their own AI security center of excellence (CoE) that brings together essential stakeholders from security, legal, data science, and operations to develop comprehensive security practices. J. Stephen Kowski, field CTO at the computer and network security company SlashNext, said the guidance is very practical, offering clear frameworks for policy development and implementation across organizations.
"The biggest challenge lies in coordinating cross-functional teams while maintaining operational efficiency and keeping pace with rapidly evolving threats."
—J. Stephen Kowski
Sean Wright, head of application security at the fraud prevention company Featurespace, said another important consideration when established a CoE is that the wide-scale adoption of AI is still new.
"[There] are still many unknowns as well as constant shifts in things such as regulations and compliance. Having a mechanism in place to ensure that your organization is able to adapt effectively as well as remain compliant is incredibly important."
—Sean Wright
As organizations increasingly adopt AI technologies, the associated risks for those organizations grow as well, said Jason Soroko, senior fellow at the digital certificate provider Sectigo. "Establishing an AI security center of excellence helps proactively manage these risks by developing and implementing secure practices throughout every stage of AI projects, from data collection to model deployment," he said.
Soroko said it also means having a dedicated team responsible for staying updated on emerging threats and mitigation strategies, ensuring that the organization’s AI initiatives remain secure and effective.
"A good center of excellence that involves risk will include executive team members who own the risk of a company and can help guide a cross-functional team. Top-down approaches to risk are usually best."
—Jason Soroko
Iftach Ian Amit, founder and CEO of the automated cloud infrastructure security firm Gomboc.ai, said that since AI tools are a force multiplier, organizations can deliver more with less. But that also means proper quality and safety assurances need to be embedded into its use.
"Gen AI is not inherently safe and needs to be augmented with the right guardrails and mechanisms that are specific to the organization using it. A CoE that provides both the policy as well as the processes and tools to do so would enable faster and more secure adoption of AI."
—Iftach Ian Amit
MJ Kaufmann, an author and instructor at O'Reilly Media, said that by pooling expertise from domains such as cybersecurity, data science, compliance, and risk management, an AI CoE can develop and enforce consistent security protocols. "A CoE can even be more effective because of the very fact that they draw expertise from a breadth of domains," she said.
However, Kaufman said that while the OWASP guidance is effective for larger and well-funded organizations, "not every organization has the resources to implement a CoE or the AI investment and adoption to warrant it."
"Building and maintaining a CoE demands a significant investment in personnel, technology, and ongoing training. This can be a barrier for smaller organizations or companies with tighter budgets. Allocating resources effectively while showing short-term value to stakeholders can be challenging, especially in companies with limited AI budgets."
—MJ Kaufmann
Indeed, CoEs might be a can of worms for many organizations, said Casey Bleeker, CEO and co-founder of the secure gen AI services platform SurePath AI. "While an AI security CoE is valuable when you have the right talent and skills, most organizations don’t have the expertise or the appropriate scope," Bleeker said.
"Is the CISO now responsible for overseeing the data science team’s work and monitoring and enforcing policies in technology areas they don’t have exposure to today? Are legal and risk departments responsible for defining every single use case employees are 'approved' to use AI for when the organization has no way to even monitor or enforce application of those policies?"
—Casey Bleeker
Bleeker said OWASP's recommendation should instead be viewed by most organizations as aspirational, not reachable for a few years. He said you first need to answer what the CoE will define, how it will measure adherence, and whether the teams have the right tools and people.
Get to know the AI security tooling landscape
The OWASP team did not stop with the new Top 10 for LLM. Its "AI Security Solution Landscape Guide" aims to serve as a comprehensive reference, offering insights into both open-source and commercial solutions for securing LLMs and gen AI applications. By categorizing existing and emerging security solutions, it can provide organizations with guidance to effectively address risks identified in the OWASP Top 10 LLM vulnerabilities list, the group said.
SurePath's Bleeker said that because most customers are just now understanding the basics of AI technology and there are misconceptions about how it functions and its associated risks, the tooling guidance was warranted.
"We meet with customers daily with wildly varying levels of understanding, and there is always some foundational element of misunderstanding because these are complex problems that sprawl across legal, compliance, technology, and data."
—Casey Bleeker
AI in security can be misunderstood, Bleeker said. Many security solutions on the market protect against AI-generated threats or use AI in detecting threats but play no part in securing the actual use of AI, he said.
Featurespace's Wright said that AI brings hype and that security tool marketing hype is worrisome. "This thankfully seems to be simmering down, but we still see some products making some bold claims," he said.
"My advice is, if you are considering purchasing a tool, make sure that you first validate many of its claims and ensure that the product does what it says it can do. There is a tremendous risk attached to believing in a tool will cover you from a security perspective when in fact it doesn't."
—Sean Wright
Bleeker said solutions to secure model deployments are often small improvements in web application firewall frameworks that protect against DDoS attacks, prompt engineering, or data exfiltration. "We see most of these risks being obviated long term by best practices, such as not allowing raw input from end users to LLM models," he said.
Solutions for securing model training are often focused on data cleansing or redaction, classification of source data, and the labeling and tagging of trained models based on the data sources used for input, Bleeker said. "This is often coupled with AI governance processes to ensure safe practices were followed and documented but contains no active enforcement of policy during training or after deployment when in use," he said.
"End-user security is often too focused on just shadow AI use, leaving massive gaps on private model access controls and internal data access controls leaving internal data leakage unaddressed."
—Casey Bleeker
Dhaval Shah, senior director of product management at ReversingLabs, wrote recently to describe how securing the ecosystem around ML models is more critical than ever. Shah described in technical detail how new ML malware detection capabilities in Spectra Assure, ReversingLabs' software supply chain security platform, ensure that your environment remains safe at every stage of the ML model lifecycle:
- Before you bring a third-party LLM model into your environment, check for unsafe function calls and suspicious behaviors and prevent hidden threats from compromising your system
- Before you ship or deploy an LLM model that you’ve created, ensure that it is free from supply chain threats by thoroughly analyzing it for any malicious behaviors
- Make sure models saved in risky formats such as Pickle are meticulously scanned to detect any potential malware before they can impact your infrastructure
Sponsorship program key to expanding OWASP guidance
Along with the OWASP Top 10 LLM refresh, the project announced a new sponsorship program. “By offering sponsorship opportunities for the project, the OWASP Top 10 for LLM and Gen AI Project aims to ensure the project has the resources necessary to empower its large collaborative community with the resources to help create and capture the latest research insights and guidance on securing generative AI/LLM applications and the evolving landscape openly and transparently that benefits the industry,” the project stated.
OWASP's Clinton said some things were different about this project than some of the other OWASP projects. “One of the key things that's different is that we made an early determination that we need to do something more than just a list. So we created another set of content. We want to focus on solutions and new initiatives"
In addition to the updated OWASP Top 10 for LLM Applications list and tooling guide, Clinton noted the OWASP LLM team will be focusing on four initiative areas: CTI research, threat intelligence research, red teaming, and data collection. "We're always open to new opportunities, but that's the road map we have right now."
Keep learning
- Get up to speed on securing AI/ML systems and software with our Special Report. Plus: See the Webinar: The MLephant in the Room.
- Learn how you can go beyond the SBOM with deep visibility and new controls for the software you build or buy. Learn more in our Special Report — and take a deep dive with our white paper.
- Upgrade your software security posture with RL's new guide, Software Supply Chain Security for Dummies.
- Commercial software risk is under-addressed. Get key insights with our Special Report, download the related white paper — and see our related Webinar for more insights.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.