RL Blog

Topics

All Blog PostsAppSec & Supply Chain SecurityDev & DevSecOpsProducts & TechnologySecurity OperationsThreat Research

Follow us

XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBluesky

Subscribe

Get the best of RL Blog delivered to your in-box weekly. Stay up to date on key trends, analysis and best practices across threat intelligence and software supply chain security.

Spectra Assure Free Trial

Get your 14-day free trial of Spectra Assure for Software Supply Chain Security

Get Free TrialMore about Spectra Assure Free Trial
Blog
Events
About Us
Webinars
In the News
Careers
Demo Videos
Cybersecurity Glossary
Contact Us
reversinglabsReversingLabs: Home
Privacy PolicyCookiesImpressum
All rights reserved ReversingLabs © 2026
XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBlueskyRSSRSS
Back to Top
ReversingLabs: The More Powerful, Cost-Effective Alternative to VirusTotalSee Why
Skip to main content
Contact UsSupportLoginBlogCommunity
reversinglabs
AppSec & Supply Chain SecurityMarch 4, 2026

AI-native AppSec: What it is — and why it matters

AI coding is a game-changer — and requires AI-powered application security to fight fire with fire.

John P. Mello Jr.
John P. Mello Jr., Freelance technology writer.John P. Mello Jr.
FacebookFacebookXX / TwitterLinkedInLinkedInblueskyBlueskyEmail Us
Fight fire with fire

Anyone in the software industry who still hasn’t accepted the fact that  security that’s bolted on after the fact rather than built in isn’t all that secure is likely to see the light once AI-aided development increases volume and multiplies vulnerabilities. 

In fact, AI has already prompted even laggard organizations to shift left and adopt Secure by Design principles in their development pipelines, offsetting AI-aided development with AI-native security.

Building AI directly into the architecture of a security platform changes what the platform can do, said Randolph Barr, CISO of Cequence Security.

AI-native means built, not bolted. It should not be a feature enhancement.

Randolph Barr

Barr compared what’s happening now to what occurred during the shift from running applications on premises to using software as a service (SaaS). “Organizations that re-architected for the cloud achieved scalability and resilience, while those that simply layered SaaS wrappers over legacy systems created long-term operational friction,” he said. 

In the case of baked-in AI-native security, Barr continued, it can read code, build context graphs, link findings, and drive workflows for prioritization and remediation. “This lets automated, contextual decision making happen across the SDLC,” he said

If AI is used only after scans of the code to generate summaries, it’s just adding to output, Barr said. “Event-driven pipelines, graph- or vector-based representations, and learning-centric workflows that really improve signal quality … are all signs of a true AI-native system,” he said.

Here's what you need to know about AI-native application security (AppSec) — and why it matters for advancing your software risk management.

[ See webinar: Develop Your Playbook for AI-Driven Software Risk ]

Fighting AI with AI 

AppSec teams have to keep pace with modern development velocity, said Melody (MJ) Kaufmann, an author and instructor at O’Reilly Media. “AI-generated code increases both speed and risk,” she said.

Eric Schwake, director of cybersecurity strategy at Salt Security, agreed that building AI into the architecture of security platforms is necessary with AI-driven software development. “AI-driven platforms can continuously discover assets, correlate behavior across environments, and identify patterns that humans would miss," he said.

And while development is accelerating, AI’s benefits are also accruing to the bad guys. “Fight fire with fire,” advised Brett Smith, a software developer at SAS.

We have to fight back against aggressive attacks powered by AI, using AI for things like root-cause analysis, anomaly detection, and predictive threat modeling.

Brett Smith

With AI, defenses are quickly outmatched by offensive capability, he said. “Attackers are already using AI to fuzz our defenses. They generate polymorphic malware and script complex attacks at machine speed. If your defense relies on human analysts staring at dashboards, the battle is already lost.”

But a note of caution was sounded by Willy Leichter, CMO of PointGuard AI. “As AI becomes core to security, it also expands the attack surface and becomes a prime target," he said.

Nor is AI-native integration the final word in software security, said Steven Swift, managing director of Suzu Testing. 

Building AI into security platforms is most helpful when it’s another layer added into an existing, well-functioning security stack. It is not a good replacement for a security stack.

Steven Swift

AI-native security is most appropriate when a platform naturally requires prompts that fit into existing context windows, he said. “AI is great for speed and when a nondeterministic answer is acceptable.”

Upping speed and raising risk

Faster attacks require faster responses. Eran Kinsbruner, vice president of product marketing at Checkmarx, noted that an AI coding vulnerability can be exploited in less than an hour. Organizations naturally want to release software to the market faster, but they can’t sacrifice quality and security to speed, he said.

With AI-native security, they can continue driving velocity, but with built-in, baked-in safeguards throughout the entire process. They can continuously deliver software and value to their customers while increasing the velocity.

Eran Kinsbruner

That helps security teams more effectively deal with the AI threat landscape, he said, looking for prompt injection and system prompt leakage early in the software development lifecycle (SDLC), address excessive agency, and tackle new threats such as  unbound consumption, and vector and embedding weaknesses.

All of this will take some time to figure out, said Saumitra Das, vice president of engineering at Qualys, but shifting security earlier in the SDLC is going to have to become the norm. 

Waiting for SecOps to come back and tell you what to fix later will not work in the new world of AI-generated software.

Saumitra Das

A changed threat landscape

One of the greatest risks arises because large language models (LLMs) and autonomous agents introduce unpredictability, delegation, and decision making into application environments, said Rosario Mastrogiacomo, chief strategy officer at Sphere Technology Solutions and author of AI Identities: Governing the Next Generation of Autonomous Actors.

Traditional AppSec tools were designed to detect vulnerable code patterns, not to evaluate reasoning systems that can dynamically call APIs, escalate tasks, or chain actions together.

Rosario Mastrogiacomo

What’s needed, he said, are AI-native AppSec platforms that can model agent behavior over time, detect when an agent’s actions drift from its baseline, and trace complex API chains across services. Such platforms “can analyze intent signals, flag prompt injection attempts, and monitor tool invocation patterns,” he said. 

“In dynamic, multi-agent ecosystems, the risk is no longer a single flawed line of code, Mastrogiacomo said. “It’s a cascade of decisions. AI-native AppSec is built to observe and interpret those cascades before they become incidents.”

Maria Paula Ariza, a senior security engineer at Iru, cautioned that security teams need to be cautious about treating everything that AI-native tools produce as fact.

There are many instances where a tool will flag code as an issue simply because it lacks full context of the feature it’s reviewing — even tools that claim to take into account full feature context. In other cases, a finding may be marked as critical when it does not truly warrant that level of severity, again due to limited context.

Maria Paula Ariza

But such caution has always been warranted, she said. “Most security tools have similar limitations. In my opinion, there will always need to be a security professional involved to validate findings and provide the final layer of judgment and verification.”

Ever more alert fatigue

Security teams were being overwhelmed by security alerts long before AI-generated code juiced development speed beyond what human security teams can handle, said Goh Ser Yoong, CISO of the Ryt Bank in Kuala Lumpur, Malaysia, and a member of the ISACA Emerging Trends Working Group. But AI-native tools don’t suffer from alert fatigue.

An AI-native AppSec tool would be able to detect, generate the exact patched code, and submit a pull request along with it. This will help enable the cybersecurity team to review a proposed PR fix along with the engineering team, thus reducing mean time to remediate.

Goh Ser Yoong

Nonetheless, he said, security teams can’t blindly accept all the PR submissions without understanding the underlying logic. “Those auto-fixes could introduce a high volume of technical debt,” Yoong said.

They could also become an attack surface. “Those fixes may not arrive if the tool itself is vulnerable to being tricked into omitting certain vulnerabilities and skip sending the alerts because it has been told to ignore and mark such vulnerabilities as safe without humans finding it out,” he warned.

Sphere’s Mastrogiacomo said AI-native AppSec marks a shift in how we think about AppSec in an era of autonomous systems. 

As AI agents increasingly provision access, generate code, and interact with live production systems, the security perimeter is no longer just infrastructure or software. It includes decision-making entities.

Rosario Mastrogiacomo

Organizations have to think of AI systems as governed actors, not just tools, he said. “That means embedding identity controls, behavioral monitoring, and lifecycle oversight from day one. Security in the AI era is not about slowing down innovation; it’s about ensuring autonomy does not outpace accountability.”

Learn how to develop your own AI security playbook in this webinar with Doug Levin and RL's Tomislav Peričin.

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the the webinar to discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags:AppSec & Supply Chain SecurityArtificial Intelligence (AI)/Machine Learning (ML)

More Blog Posts

ReversingLabs: Home
Solutions
Secure Software OnboardingSecure Build & ReleaseProtect Virtual MachinesIntegrate Safe Open SourceGo Beyond the SBOM
Increase Email Threat ResilienceDetect Malware in File Shares & StorageAdvanced Malware Analysis SuiteICAP Enabled Solutions
Scalable File AnalysisHigh-Fidelity Threat IntelligenceCurated Ransomware FeedAutomate Malware Analysis Workflows
Products & Technology
Spectra Assure®Software Supply Chain SecuritySpectra DetectHigh-Speed, High-Volume, Large File AnalysisSpectra AnalyzeIn-Depth Malware Analysis & Hunting for the SOCSpectra IntelligenceAuthoritative Reputation Data & Intelligence
Spectra CoreIntegrations
Industry
Energy & UtilitiesFinanceHealthcareHigh TechPublic Sector
Partners
Become a PartnerValue-Added PartnersTechnology PartnersMarketplacesOEM Partners
Alliances
Resources
BlogContent LibraryCybersecurity GlossaryConversingLabs PodcastEvents & WebinarsLearning with ReversingLabsWeekly Insights Newsletter
Customer StoriesDemo VideosDocumentationOpenSource YARA Rules
Company
About UsLeadershipCareersSeries B Investment
EventsRL at RSAC
Press ReleasesIn the News
Pricing
Software Supply Chain SecurityMalware Analysis and Threat Hunting
Request a demo
Menu

How agentic AI flips the trust model

As AppSec shifts focus from the components to data, your strategy needs updating. Are you on top of your trust debt?

Learn More about How agentic AI flips the trust model
How agentic AI flips the trust model

MCP rug-pull attack worries mount

This new class of AI tool supply chain attack highlights how trust of agents can be exploited.

Learn More about MCP rug-pull attack worries mount
MCP rug-pull attack worries mount

Can AppSec keep pace with AI coding?

AI lets software teams generate code at a rate faster than security can validate it. One way to win the race: more AI.

Learn More about Can AppSec keep pace with AI coding?
Can AppSec keep pace with AI coding?
Finger on map

LLMmap puts its finger on ML attacks

Researchers show how LLM fingerprinting can be used to automate generation of customized attacks.

Learn More about LLMmap puts its finger on ML attacks
LLMmap puts its finger on ML attacks
Trust model flips
MCP attacks
AI coding racing