Like just about every part of business today, cybersecurity has been awash in promises of what AI can do for its tools and processes. In fact, cybersecurity vendors have touted the power of algorithmic detection and response for years.
But risk management professionals and application security (AppSec) teams need to recognize that the relationship between AI and cybersecurity extends far beyond enhancing algorithms or adding generative AI features to the security tool stack. In short, AI can undermine enterprise security just as well as enhance it.
Malcolm Harkins, chief security and trust officer for Hidden Layer, said that while AI is being used by hackers for deepfakes or automated attacks, the bigger threat to organizations is how they develop AI apps and processes.
“[AI] itself is a completely different tech stack: different file types, model types, and totally different ways of being susceptible to attack. And to be blunt, the existing enterprise security stack does not protect AI — particularly AI models — from being attacked.”
—Malcolm Harkins
Don't let the cybersecurity promise that AI makes blind you to the need to secure the AI systems deployed in the enterprise, including all AI-developed software running in your organization. Here's why you need to update your strategy — and your security tooling — for the AI age.
[ See Special Report: Secure Your Organization Against AI/ML Threats ]
The AI blind spot
As the pace of embedding AI in enterprise systems accelerates, there is a general awareness that AI will add risk to the technology infrastructure and business processes that it supports. As a result, the corporate world has been rolling out AI risk-governance boards. Too many of them, however, have implemented AI governance policies that rely on traditional security controls, Harkins said.
In a recent analysis of the most common types of threats to the AI stack that security researchers have uncovered, Harkins found that they can be grouped in these three categories:
- Threats to AI models: data poisoning, model evasion, model theft.
- Threats from malicious input: prompt injection, code injection
- Threats to artifacts in the AI supply chain: code execution, malware delivery, lateral movement
Harkins then assessed the strength of present controls, including static application security testing (SAST), dynamic AST, and vulnerability and malware scans.
The result was a color-coded spreadsheet showing controls that couldn’t manage a particular AI risk, controls that provided only indirect protection or partial coverage, and, in green, controls sufficient for the AI risk. The spreadsheet was devoid of green.
“Models today are not only vulnerable; they're easily exploitable. Our research is proving that all the time.”
—Malcolm Harkins
The open question is whether attackers are using these flaws. Many security leaders have told Harkins that they currently view threats to AI as a low priority because they aren’t seeing attacks against it with any regularity.
But, Harkins noted, "The absence of evidence doesn’t prove the evidence of absence. If I don’t have logging and monitoring purpose-built for AI models, how am I ever going to know an attack occurred?”
Where to get started on securing AI
This blind spot with AI is the basis for a recent RSA 360 article that Harkins wrote urging enterprises to start getting serious about bolstering the AI-specific controls they have in place. He’s been a champion for best practices and standards free of vested interests and vendor hype.
One effort Harkins hopes security practitioners get behind is the Coalition for Secure AI (CoSAI), which sets security standards and frameworks for securing tech from unique AI risks. More standards are expected from the group on model signing that will be similar to what the AppSec world has done with code signing, Harkins said.
As groups such as CoSAI start to tackle standards and cross-industry cooperation, security leaders can, little by little, start adding AI visibility and controls, Harkins said. His advice: “Start embedding AI visibility and awareness into your existing security practices."
One example: If you have an existing threat intelligence program, you should be embedding more feeds that cover attacks against AI. And third-party risk management programs should be asking questions about how vendors use AI.
Most importantly, security teams with asset management and vulnerability management programs should find a way to build out an AI inventory and ways to enumerate AI flaws. To ameliorate fears that that will further strain the vulnerability management team with even more vulns to prioritize, Harkins said, “We might use AI to help in that.”
Invest in AI security — and the right tools for the job
In order to fund it all, Harkins said, CISOs and other risk leaders need to be crafty and aware of when AI initiatives are being vetted. If an AI initiative gets $25 million, then it should only follow that at least some of those funds should be carved out to manage cyber-risk.
With machine learning driving the next generation of technology, the security risks associated with model sharing — and specifically issues within ML models such as serialization — are becoming increasingly significant, Dhaval Shah, senior director of product management at ReversingLabs, wrote recently. Vulnerabilities in serialization and deserialization are common across programming languages and applications, and they present specific challenges in machine learning workflows. For instance, Pickle, which is frequently used in AI, is especially prone to such risks, Shah wrote.
He said organizations need to stay ahead of these evolving threats with advanced detection and mitigation solutions, such as modern ML malware detection and protection. Shah had more advice:
- Before you bring a third-party LLM model into your environment, check for unsafe function calls and suspicious behaviors and prevent hidden threats from compromising your system.
- Before you ship or deploy an LLM model that you’ve created, ensure that it is free from supply chain threats by thoroughly analyzing it for any malicious behaviors.
- Models saved in risky formats such as Pickle should be meticulously scanned to detect any potential malware before they can impact your infrastructure.
Keep learning
- Get up to speed on securing AI/ML systems and software with our Special Report. Plus: See the Webinar: The MLephant in the Room.
- Learn how you can go beyond the SBOM with deep visibility and new controls for the software you build or buy. Learn more in our Special Report — and take a deep dive with our white paper.
- Upgrade your software security posture with RL's new guide, Software Supply Chain Security for Dummies.
- Commercial software risk is under-addressed. Get key insights with our Special Report, download the related white paper — and see our related Webinar for more insights.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.