The National Institute of Standards and Technology's latest guidance on securing artificial intelligence (AI) applications against manipulation and attacks carried out with adversarial machine learning (ML) represents a major step toward establishing a standard framework for understanding and mitigating the growing threats to AI applications, but it's still insufficient. Fortunately, there are six steps your organization can take right now to address adversarial ML vulnerabilities.
AI application security should be a priority. AI use is already widespread, permeating most development workflows. In a 2024 GitHub survey, more than 97% of respondents said they have used AI coding tools at work, and a 2025 Elite Brains study concluded that AI now generates 41% of all code — 256 billion lines were written by AI last year alone.
Dhaval Shah, senior director of product management at ReversingLabs (RL), said attacks may be designed to “exploit capabilities during the development, training, and deployment phases of the ML lifecycle,” as the NIST guidance states.
“This prevalence makes understanding adversarial machine learning threats particularly urgent, as vulnerable AI systems are increasingly embedded throughout the software supply chain.”
—Dhaval Shah
Model sharing is another area fraught with risk, especially when it comes to how ML models are serialized and deserialized, Shah said. Pickle, a format commonly used to serialize AI models for distribution, is inherently unsafe because it allows embedded Python code to run when the model loads. That opens the door to malicious actors, who can use it to inject harmful code into model files, he said.
“When you serialize an ML model, you're essentially packing it into a file format that can be shared. It's similar to compressing a complex software application into a single file for easy distribution. But certain file formats allow code execution during deserialization."
—Dhaval Shah
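To make that risk concrete, here is a minimal, deliberately harmless sketch (an illustration, not an example drawn from any real model file) of how a pickle payload can execute code the moment it is deserialized. The echo command stands in for whatever an attacker might actually run:

```python
import os
import pickle

# A class whose __reduce__ method tells pickle to call os.system(...) when
# the object is rebuilt. This is the core of the pickle risk: the callable
# runs as soon as pickle.loads() deserializes the object.
class MaliciousPayload:
    def __reduce__(self):
        # Return (callable, args); pickle invokes os.system(...) during load
        return (os.system, ('echo "arbitrary code ran during deserialization"',))

blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # The shell command runs just by loading the bytes
```

An attacker who can plant a payload like this in a shared model file needs nothing more than a victim calling an ordinary load routine.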
Legacy application security testing (AST) tools, both static and dynamic, as well as software composition analysis (SCA), miss such threats, Shah said. “These security risks are hidden, and they’re not covered by traditional SAST tools because those tools don’t analyze code for intent, only weaknesses and known vulnerabilities,” he said.
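As a rough illustration of what analyzing for intent rather than source text can mean, the sketch below (a simplified, assumption-laden example, not a description of any vendor's tooling) walks a serialized model's pickle opcode stream with Python's standard pickletools module, without executing it, and flags opcodes that can import modules or call objects during load. Legitimate models also use some of these opcodes, so treat the output as a triage signal rather than a verdict:

```python
import pickletools

# Pickle opcodes that can import modules or invoke callables during load
SUSPICIOUS_OPCODES = {
    "GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX",
}

def flag_risky_pickle(path: str) -> list[tuple[str, object]]:
    """Walk the opcode stream of a pickle file without loading it."""
    with open(path, "rb") as fh:
        data = fh.read()
    return [
        (opcode.name, arg)
        for opcode, arg, _pos in pickletools.genops(data)
        if opcode.name in SUSPICIOUS_OPCODES
    ]

if __name__ == "__main__":
    # "model.pkl" is a placeholder path for a serialized model under review
    for name, arg in flag_risky_pickle("model.pkl"):
        print(f"{name}: {arg}")
```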
Malcolm Harkins, chief security and trust officer at the AI security firm HiddenLayer, said that to deal with modern supply chain threats, organizations need to incorporate better tooling and visibility into their entire development ecosystems.
“The existing enterprise security stack does not protect AI — particularly AI models — from being attacked.”
—Malcolm Harkins
Here's what you need to know about NIST's adversarial ML guidance — and six key actions every organization should be taking right now.
[ Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security ]
NIST guidance: A good place to start
RL’s Shah said the 2025 edition of the NIST guidance is a good place for enterprises to get their feet wet on preparing for adversarial ML. It provides a taxonomy, arranged in a conceptual hierarchy, that includes key types of ML methods, lifecycle stages of attack, and attacker goals, objectives, capabilities, and knowledge. “This organizational approach helps companies systematically assess their vulnerabilities,” he said.
The guidance also explicitly addresses securing AI supply chains, managing risks posed by autonomous AI agents, and securing enterprise-grade generative AI (gen AI) integrations through detailed reference architectures. However, Shah emphasized NIST’s own acknowledgment of the guidance’s limitations: "[There] are theoretical problems with securing AI algorithms that simply haven't been solved yet," and available defenses currently lack robust assurances of complete risk mitigation.
“The guide is best viewed as an essential starting point rather than a comprehensive solution.”
—Dhaval Shah
Shah provided a breakdown of the good and bad aspects of NIST’s adversarial ML guidance.
The good
- The guidance provides standardized terminology in adversarial ML that the ML and cybersecurity communities can both agree upon.
- It includes a comprehensive taxonomy of attack types (evasion, data poisoning, privacy attacks, misuse attacks, supply chain model attacks, and direct prompt and indirect prompt attacks) across both predictive and gen AI systems.
- It addresses attacks against all viable learning methods (supervised, unsupervised, semi-supervised, federated, reinforcement) across multiple data modalities.
- It includes an index and glossary to help with understanding, navigating, and referencing the taxonomy.
The bad
- The guidance acknowledges that "at this stage with the existing technology paradigms, the number and power of attacks are greater than the available mitigation techniques."
- It also states that there are "theoretical limits on the general strength of current mitigation techniques" such as data sanitization and model guardrails.
- It also calls the defenses that AI experts have devised against adversarial attacks thus far "incomplete at best."
- It advises that organizations must still "apply traditional cybersecurity measures to harden the model and the platform it runs on" and develop a risk budget they can accept.
Shah stressed that the guidance is useful — but not a comprehensive solution.
“Unfortunately, the framework doesn’t solve the fundamental challenges of secure AI, but it does provide a structured approach to understanding, categorizing, and beginning to address them.”
—Dhaval Shah
6 steps to protect your organization from adversarial ML
Here are six key actions every organization should be taking right now to protect AI applications and the supply chain that surrounds them.
- Inventory AI use. Know where and how AI-generated code, models, or decisions are being introduced in your organization. Be sure to include ML bills of materials (ML-BOMs) to highlight dependencies on third-party models and packages.
- Scan beyond the source code. Traditional AST misses binary, container, and model-level tampering. Use binary analysis tools to detect hidden threats such as malware and embedded secrets.
- Generate and monitor SBOMs. Include models and datasets in your software bills of materials. Your SBOM needs to go beyond code to include model provenance; a minimal sketch of what such an entry might look like follows this list.
- Secure the tool chain. Protect CI/CD pipelines, training environments, and deployment containers. Think of the entire ML lifecycle, not just the model.
- Align with NIST lifecycle stages. Use the NIST taxonomy to stress-test your development stages against known threat vectors.
- Establish a response plan. Have a dedicated incident response playbook for AI-related attacks, including rollback and retraining strategies.
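To ground the first and third steps, here is a hedged sketch (hand-rolled JSON rather than an official CycloneDX library, with placeholder file, supplier, and version values) of recording a third-party model's name, version, supplier, and SHA-256 digest as a minimal CycloneDX-style ML-BOM component:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def model_component(path: str, supplier: str, version: str) -> dict:
    """Build a minimal CycloneDX-style component record for one model artifact."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {
        "type": "machine-learning-model",
        "name": Path(path).name,
        "version": version,
        "supplier": {"name": supplier},
        "hashes": [{"alg": "SHA-256", "content": digest}],
    }

bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
    "components": [
        # Placeholder model file and supplier, used purely for illustration
        model_component("sentiment-classifier.pkl", "example-upstream-vendor", "1.2.0"),
    ],
}

print(json.dumps(bom, indent=2))
```

Recording the digest at intake makes later tamper checks a simple hash comparison against what is actually running in production.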
Be vigilant — and be ready to adapt as attacks evolve
While these measures will significantly improve your organization's security posture against AI application threats, you need to stay alert as attacks continue to evolve and keep up with the latest mitigation approaches. That's especially important because a 2024 Evanta Community Pulse survey found that 70% of CISOs say their organizations are innovators, early adopters, or early-majority adopters of new AI technologies.
For example, agentic AI — autonomous AI systems that can take action based on high-level goals — presents its own set of risks. This up-and-coming AI technology may be vulnerable to agent hacking, a type of prompt injection in which attackers insert malicious instructions into data ingested by AI agents, as well as to remote code execution, database exfiltration, and automated phishing attacks.
Also, recent studies have shown that advanced AI models sometimes resort to deception when faced with losing scenarios. “In a security context, that could mean misrepresenting capabilities or gaming internal metrics,” Shah said. “In the next 12 months, organizations should approach agentic AI with caution.”
While the NIST framework now includes guidance on securing AI supply chains, managing the risks posed by autonomous AI agents, and securing enterprise-grade gen AI integrations through detailed reference architectures, acting on that guidance requires a new set of tooling, including binary analysis, Shah said.
“ReversingLabs’ focus on detecting malware, tampering, malicious implants, and embedded threats helps organizations better manage the complexity and unpredictability of agentic and AI-driven systems.”
—Dhaval Shah
Keep learning
- Go big-picture on the software risk landscape with RL's 2025 Software Supply Chain Security Report. Plus: See our Webinar for discussion about the findings.
- Get up to speed on securing AI/ML with our white paper: AI Is the Supply Chain. Plus: See RL's research on nullifAI and replay our Webinar to learn how RL discovered the novel threat.
- Learn how commercial software risk is under-addressed: Download the white paper — and see our related Webinar for more insights.
- Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.