As technology leadership pushes ever harder to deeply embed AI agents into software development lifecycles — in some cases, even using agentic AI to replace midlevel developers — application security (AppSec) is about to go from complex to considerably more complicated.
The industry is abuzz with hype about how agentic AI could handle functions either semi- or fully autonomously, but as always with hot new technology, the security implications have yet to be fully assessed, said Aquia chief executive Chris Hughes.
“While there is tremendous potential and nearly unlimited use cases, there are also key security considerations and challenges."
—Chris Hughes
As with so many transformational advances, security teams will get nowhere by trying to obstruct agentic AI. Security leaders and teams must prepare the organization for these new AI agents with new visibility, controls, and governance for the entire software development lifecycle (SDLC).
Here's what your AppSec team needs to know about what's coming with agentic AI — and how to manage risk with increasing SDLC complexity.
[ Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security ]
The agentic AI genie is out of the bottle
Agentic AI, artificial intelligence systems designed to make decisions and take actions autonomously within business systems, is not a new phenomenon. But enhancements to natural-language processing (NLP) and the advanced reasoning of large language models (LLMs) are making agentic AI capable of making more complex, chained decisions — and then adapting them to less well defined business use cases.
These increases in the capabilities and versatility of agentic AI are appealing to enterprises. Gartner estimates that by 2028, about 35% of software will utilize AI agents — and that the agents will make it possible to automate at least 15% of today’s day-to-day work decisions. This estimate encompasses automatable tasks across a range of business functions, from sales to project management.
Tom Coshow, senior director analyst at Gartner, wrote recently that “software developers are likely to be some of the first affected, as existing AI coding assistants gain maturity.”
Recent coverage by Axios claims that agentic AI is poised to land in 2025. And Meta’s Mark Zuckerberg told Joe Rogan in a recent interview that in 2025, Meta and other companies will have an AI “that can effectively be a sort of midlevel engineer.”
Because these advances are so exciting from an engineering perspective, there’s no putting the agentic AI genie back in the bottle, experts agree, even though agentic AI will bring significant technology and business risks to the application stack.
Agentic AI builds on low-code and no-code
In many ways, agentic AI is extending what the low-code and no-code movement started years ago in its push to arm citizen developers and streamline development workflows. Many of today’s coding assistants and automated AI agents evolved from low-code/no-code platforms.
Agentic AI is poised to blow up the business process layer, replacing the business logic behind business process workflows, Ed Anuff, chief product officer at DataStax, wrote in a recent think piece about AI agents at The New Stack. In many cases, it will handle a huge chunk of the work that engineering teams do today, whether for integrations or for whole new applications.
“When agentic AI is applied to business process workflows, it can replace fragile, static business processes with dynamic, context-aware automation systems."
—Ed Anuff
Know the risks of agentic AI in development
In many ways, agentic AI will serve to abstract security problems. Organizations will need to build safeguards and governance around how the agents operate, the security of the code, and the security of the models that run them. At the same time, teams must maintain and improve the traditional guardrails for the security and quality of code and logic produced by humans or by AI, said Dhaval Shah, senior director of product management for ReversingLabs.
"Securing AI in development is like playing chess where the pieces move by themselves. With AI in development, not everything that can be secured can be seen, and not everything that can be seen can be secured.”
—Dhaval Shah
In particular, agentic AI ratchets up software supply chain security risks, Shah said, explaining that the addition of AI agents to the development workflow challenges traditional models in two big ways.
"First, AI agents blur traditional trust boundaries by seamlessly mixing proprietary, open-source, and generated code, making traditional software composition analysis ineffective. Second, they introduce new dependencies we can't easily track or verify, from model weights to training data, creating blind spots in our security monitoring.”
—Dhaval Shah
Shah said there are three major risks that AppSec pros will need to stay ahead of as agentic AI takes hold within their development organizations: dependency chain opacity, an expanded attack surface, and emergent behaviors.
Dependency chain opacity
As AI agents and coding assistants are tasked with autonomously selecting and integrating dependencies, supply chain blind spots will grow larger and more plentiful, Shah said. “Agentic AI creates blind spots in our security visibility. Unlike human developers, who might carefully vet a library, AI can pull from numerous sources simultaneously, making traditional dependency tracking insufficient,” he said.
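To make that concrete, here is a minimal sketch of one guardrail a team might bolt onto its pipeline: any dependency an AI agent proposes that is not on a pre-approved allowlist gets routed to human review instead of landing silently in a build. The package names and allowlist here are hypothetical, and a real program would pair this with software composition analysis and lockfile verification.

```python
# Minimal sketch (hypothetical allowlist and package names): flag any
# dependency an AI agent proposes that is not pre-approved, so it is
# routed to human review rather than merged automatically.

APPROVED = {"requests", "cryptography", "pydantic"}  # hypothetical allowlist

def review_ai_dependencies(proposed: list[str]) -> list[str]:
    """Return the subset of AI-proposed packages that need human vetting."""
    return [pkg for pkg in proposed if pkg.split("==")[0].lower() not in APPROVED]

if __name__ == "__main__":
    # Packages an AI coding assistant might pull in during one task (hypothetical)
    suggestions = ["requests==2.32.3", "left-pad-py==0.1.0", "pydantic==2.7.1"]
    for pkg in review_ai_dependencies(suggestions):
        print(f"NEEDS REVIEW: {pkg} is not on the approved dependency list")
```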
Expanded attack surface
As agentic AI-driven coding assistants grow more sophisticated in executing multistep, chained software engineering tasks, they’ll be touching and interacting with a broader range of systems, applications, and APIs. This is going to expand the attack surface of not only the applications but the development stack itself.
“This interconnected nature creates a broader attack surface where a single weak link can compromise the entire workflow. For example, an AI agent coordinating a supply chain could be exploited to inject malicious instructions across multiple systems.”
—Dhaval Shah
Emergent behaviors
As AI collaborates with human developers, emergent vulnerabilities may arise from unforeseen interactions between AI-generated snippets and hand-crafted code, Shah said. “This blend can create novel, complex failure modes that defy traditional testing and threat models.”
For example, research is already emerging that shows attackers turning their sights to open AI models as a vehicle for novel malware attack techniques. ReversingLabs research recently outlined one such scheme that targeted the machine learning model-sharing platform Hugging Face with models containing malicious code designed to evade the platform’s security scanning mechanism.
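To see why pickled model files are such a ripe target, consider a rough sketch of the kind of check a scanner might perform. Python's pickle format can invoke arbitrary callables at load time, and the standard library's pickletools module can surface those instructions without ever loading the file. This is illustrative only, assuming a simple module blocklist; it does not reflect ReversingLabs' or Hugging Face's actual detection logic.

```python
# Minimal sketch: use pickletools to flag pickled model files that import
# code-execution primitives. The blocklist below is an assumption for
# illustration, not any vendor's detection logic.

import pickletools

SUSPICIOUS = {"os", "posix", "nt", "subprocess", "builtins", "runpy", "socket"}

def flag_suspicious_pickle(data: bytes) -> list[str]:
    """Return human-readable findings for risky opcodes in a pickle stream."""
    findings = []
    strings_seen = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings_seen.append(arg)
        if opcode.name == "GLOBAL" and arg.split()[0] in SUSPICIOUS:
            findings.append(f"GLOBAL import of '{arg}'")
        if opcode.name == "STACK_GLOBAL" and len(strings_seen) >= 2 \
                and strings_seen[-2] in SUSPICIOUS:
            findings.append(f"STACK_GLOBAL import of '{strings_seen[-2]}.{strings_seen[-1]}'")
        if opcode.name == "REDUCE":
            findings.append("REDUCE opcode (a callable is invoked at load time)")
    return findings

if __name__ == "__main__":
    import os
    import pickle

    # A benign-looking object whose __reduce__ runs a shell command on unpickling
    class Payload:
        def __reduce__(self):
            return (os.system, ("echo pwned",))

    for finding in flag_suspicious_pickle(pickle.dumps(Payload())):
        print("WARNING:", finding)
```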
How security teams can come together
Security professionals will need to collaborate to stay abreast of the risks presented by agentic AI and create the right blend of visibility and controls over an increasingly complicated SDLC. OWASP recently introduced new threats-and-mitigations guidance focused on agentic AI, complete with concrete threat modeling information and advice on early mitigation strategies.
Aquia's Hughes said that a recent thought piece titled "Governing AI Agents," by Noam Kolt of the Governance of AI Lab at Hebrew University, should be required reading for AppSec teams.
“As we prepare to see pervasive agent use and implementation, we need to address many issues related to agentic governance."
—Chris Hughes
ReversingLabs' Shah said security leaders need to balance strategic oversight with immediate controls, because agentic AI is already here. That means deploying AI-aware monitoring that tracks both code generation and dependency inclusion, creating automated security gates that match AI development speed, and establishing clear boundaries for AI tool usage in critical code.
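What might an automated security gate that keeps pace with AI-generated changes look like? Here is a minimal sketch, assuming hypothetical commit metadata fields and path-naming conventions: routine AI-generated changes pass on green scans, while anything that touches security-critical code is held for a human.

```python
# Minimal sketch (hypothetical metadata and path conventions): a pre-merge
# gate that lets routine AI-generated changes through automated checks but
# forces human review when they touch security-critical code.

from dataclasses import dataclass

CRITICAL_PATHS = ("auth/", "crypto/", "payments/")  # assumed naming convention

@dataclass
class Change:
    files: list[str]
    ai_generated: bool
    scans_passed: bool  # result of SCA/SAST checks already run in CI

def gate(change: Change) -> str:
    if not change.scans_passed:
        return "block"
    touches_critical = any(f.startswith(CRITICAL_PATHS) for f in change.files)
    if change.ai_generated and touches_critical:
        return "require-human-review"
    return "allow"

if __name__ == "__main__":
    print(gate(Change(["auth/token.py"], ai_generated=True, scans_passed=True)))
    # -> require-human-review
```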
On the broader strategic front, Shah said organizations will need to implement trust-but-verify automated security baseline checks and maintain human-review checkpoints for security-critical changes to code and logic. He also recommended that, wherever possible, teams run AI development in contained environments with defined boundaries.
“Think of it like giving AI a sandbox to play in, but with clear rules and constant supervision. The key isn't containing AI — it's channeling its power within secure guardrails."
—Dhaval Shah
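In practice, a sandbox with clear rules can start small. The sketch below, a Unix-only illustration with assumed limits, runs AI-generated code in a throwaway working directory with a CPU cap, a file-size cap, and a wall-clock timeout; real containment would layer on container or VM isolation and network policy.

```python
# Minimal sketch (Unix-only, hypothetical limits): run AI-generated code in a
# throwaway directory with CPU-time and file-size caps plus a wall-clock
# timeout. Illustrative only; not a substitute for container/VM isolation.

import resource
import subprocess
import tempfile

def run_contained(script: str, timeout_s: int = 10) -> subprocess.CompletedProcess:
    def limit_resources():
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))              # 5s CPU cap
        resource.setrlimit(resource.RLIMIT_FSIZE, (1_000_000,) * 2)  # 1 MB file writes

    with tempfile.TemporaryDirectory() as scratch:
        return subprocess.run(
            ["python3", "-I", "-c", script],  # -I: isolated mode, ignores user site
            cwd=scratch,
            capture_output=True,
            text=True,
            timeout=timeout_s,
            preexec_fn=limit_resources,       # applied in the child before exec
        )

if __name__ == "__main__":
    result = run_contained("print('hello from the sandbox')")
    print(result.stdout.strip())
```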
Keep learning
- Get up to speed on securing AI/ML with our white paper: AI Is the Supply Chain. Plus: See RL's research on nullifAI and join our Webinar to learn how RL discovered the novel threat.
- Upgrade your software security posture with RL's new essential guide, Software Supply Chain Security for Dummies.
- Learn how commercial software risk is under-addressed: Download the white paper — and see our related Webinar for more insights.
- Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.