The practical and secure implementation of artificial intelligence systems within organizations — starting with the exploration of tools, applications, supply chains, and other components necessary to deploy AI successfully — is the focus of a new report by the Cloud Security Alliance (CSA).
For security practitioners, the main concern about AI is that it creates entirely new attack vectors that traditional cybersecurity frameworks weren't designed to handle. The 71-page report about AI tools and applications is the third in a series by the alliance on AI organizational responsibilities.
Here are five key takeaways from the CSA report to help you get your AI ducks in a row.
[ Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security ]
1. AI security involves traditional and specific security challenges
While AI can help make security teams more efficient through automation, for example, with algorithmic detection and response, it's a double-edged sword, bringing new threats such as prompt injections, said Dev Nag, CEO and founder of QueryPal.
“While conventional security focuses on things like network breaches and access control, AI systems can be compromised through adversarial attacks that subtly manipulate their mathematical foundations — like carefully crafted inputs that cause misclassification or prompt injection attacks that override system constraints.”
—Dev Nag
Ken Huang, co-chair of the CSA’s AI Safety Working Groups, said traditional injection attacks are easy to mitigate with parameterized statements and input validation. With prompt injection attacks, it's a whole new ballgame.
“With prompt injection, there's no perfect way to address the attack.”
—Ken Huang
HP Newquist, executive director of the Relayer Group and author of The Brain Makers: Genius, Ego & Greed in the Quest for Machines That Think,” said securing AI systems brings some fundamental differences.
“In typical security environments, the primary thing is you don't want your data extracted. The primary thing with AI data systems is you don’t want your data corrupted. Security teams have to be careful to not allow intruders to actually corrupt their AI’s data, because corrupting the data changes the way the AI works."
—HP Newquist
Adam Ennamli, chief risk and security officer at the General Bank of Canada, said security teams are dealing with statistical systems that can be manipulated in new ways that can evade usual controls.
“Data and model poisoning and extraction attacks revolve around the fact that you can trick these systems through carefully crafted inputs in ways that traditional security tools can't detect. Traditional security assumes deterministic systems. AI breaks that assumption.”
—Adam Ennamli
Patrick Appiah-Kubi, portfolio director for cloud computing, cybersecurity technology, and information assurance at the University of Maryland Global Campus’s School of Cybersecurity & Information Technology, cited some additional ways that AI systems can challenge security teams in nontraditional ways:
- Model stealing: Attackers can reverse-engineer AI models by querying them and analyzing the outputs. This can lead to the theft of intellectual property and the creation of malicious models that mimic the behavior of legitimate ones.
- Bias and fairness: AI models can inadvertently learn and propagate biases present in the training data. This can lead to unfair or discriminatory outcomes, which is a significant concern in areas such as fraud detection and identity verification.
- Explainability and transparency: AI models, especially deep learning ones, are often seen as black boxes because their decision-making processes are not easily interpretable. This lack of transparency can make it difficult to understand and trust the AI's decisions, especially in critical security applications.
- Overreliance on AI: If organizations become too dependent on AI systems, complacency will set in. Human oversight is still crucial, since AI could miss novel or sophisticated threats that it wasn't trained to recognize.
2. Third-party and supply chain risk management is essential
With the entire AI ecosystem requiring thorough assessments, clear agreements, and ongoing monitoring, third-party supply chain management is crucial, said QueryPal's Nag. That's because most organizations deploying AI rely heavily on external components — whether that's foundation models, training data, or specialized hardware.
“Each of these introduces unique risks. A compromised third-party model could have backdoors or biases baked in that are extremely difficult to detect.”
—Dev Nag
Even "open" models aren't really verifiable in the same way that open source code is, Nag said. “The training data could violate privacy regulations or contain toxic content that gets encoded into model weights and be impossible to see until used in production.”
Nag said the hardware supply chain could also be targeted to introduce vulnerabilities at the chip level. “These risks compound because AI systems tend to be tightly coupled—a vulnerability in one component can cascade through the entire pipeline,” he said.
Ennamli said that supply chain management and security issues are emerging from the inherent layered dependencies, as well as the opacity of modern AI systems.
“You're not just securing code and infrastructure anymore. You're trying to validate the behavior of black-box models built on the basis of other black-box models, each with their own complex training histories and potential vulnerabilities, which you don't know because of either intellectual property concerns or contractual limitations. There's no equivalent yet to code review for neural networks.”
—Adam Ennamli
3. Organizations need clear policies for employee use of AI tools
To balance the innovation that AI can usher in with security and ethical considerations, companies need clear AI usage policies. That's because AI systems are more unpredictable — and can potentially fail in dangerous ways.
Without proper guardrails, employees might inadvertently leak sensitive data by sharing it with external AI models, generate biased or harmful content that creates legal liability, or build critical business processes around AI tools without considering their limitations and failure modes, Nag said. That risk is increased with "shadow" use of AI.
“Shadow AI — the unauthorized use of AI tools — poses major risks, since these systems may not meet security and compliance requirements. The misuse of AI can scale rapidly across an organization before problems are detected.”
—Dev Nag
General Bank of Canada's Ennamli said policy gaps can create major risks because employees are essentially given almost unsupervised access to powerful statistical inference engines without guardrails. “They could feed in data, most of the time, sensitive data, and treat outputs as authoritative without validation, especially when they match what they were looking for,” he said, “and create shadow AI decision systems that bypass normal controls.”
The CSA’s Huang said that if you have a clear policy — and a clear program — you can reduce risk from shadow AI.
“The bigger issue isn't the security of the design and the controls surrounding it, but the degradation of business processes as they become dependent on uncontrolled, opaque, boxes. Shadow AI, like shadow IT, will always exist. It's really hard to get rid of.”
—Ken Huang
4. Adding AI into your SOCs offers key benefits — but requires care
Careless integration of AI into critical systems such as security operations centers (SOCs) can harm your organization. For example, if AI systems aren’t properly aligned with operational needs, they can cause operational disruptions. And they can cause financial losses too, said the University of Maryland's Appiah-Kubi.
“AI projects often require substantial investments. If these projects fail to deliver the expected outcomes or exceed budget allocations, they can drain financial resources, potentially leading to cutbacks or even bankruptcy.”
—Patrick Appiah-Kubi
Appiah-Kubi said that careless implementation of AI systems can also cause reputational damage. “AI systems that behave unethically or make biased decisions can severely damage a company's reputation. Negative incidents can quickly escalate on social media, leading to long-term brand erosion,” he said.
Without proper oversight, AI systems can also introduce new security risks, Appiah-Kubi said, because the systems might be susceptible to adversarial attacks or data poisoning, which can compromise their integrity and effectiveness.
Improperly implemented AI systems can lead to noncompliance with regulations, resulting in legal penalties and fines. “Ensuring that AI systems adhere to relevant laws and standards is crucial to avoid these issues,” Appiah-Kubi warned.
5. AI governance introduces new responsibilities — and roles
AI systems represent an entirely different governance responsibility compared with traditional IT management, and that makes clear role definitions and specialized skills a new requirement. With the need for new roles to manage AI risk, organizations will need talent and specific skills in areas such as monitoring for model drift, validating output in quality and scope, tracing data lineage, and detecting unwanted biases, Ennamli said. That is a challenge given traditional expertise.
“These are fundamentally different skills from traditional IT, as you're managing a statistical system that learns and changes, not just maintaining a stable predictable infrastructure."
—Adam Ennamli
The CSA's Huang recommended that organizations consider creating a role: chief AI officer.
“AI innovation is so fast, you need someone who is familiar with the AI. If your CISO is not familiar with AI, you are behind because this is a race between defensive security and the hackers.”
—Ken Huang
The need for AI model development and maintenance will also open up roles for AI engineers and data scientists, who will be responsible for developing, training, and maintaining AI models. This requires specialized skills in machine-learning algorithms, data preprocessing, and model optimization, Appiah-Kubi said. “Clear role definitions ensure that these tasks are handled by individuals with the appropriate expertise, reducing the risk of errors and inefficiencies,” he said.
Appiah-Kubi said too that new roles will also be required for data scientists and analysts who have strong skills in data cleaning, statistical analysis, and data visualization. “Clear roles help ensure that data is handled correctly, maintaining its integrity and quality,” he said.
With AI, continuous monitoring, reporting and improvement are essential
With AI systems' rapid evolution, ensuring ongoing compliance, security, and ethical use is essential, Ennamli explained. AI systems are inherently dynamic, and that's something that introduces new challenges.
“They drift as input distributions change, new attack vectors are discovered, regulations evolve, and user behavior adapts to new market trends. The traditional IT goal of system stability still applies, but you have a new dimension that gets layered in. You need continuous adaptation.”
—Adam Ennamli
QueryPal's Nag said one essential area organizations need to focus on is hardening their ML model operations (MLOps) infrastructure — the tools and processes needed to deploy, monitor, and maintain AI systems reliably over time.
“This includes capabilities for reproducible training, systematic testing, automated monitoring for drift and degradation, and rapid rollback when issues are detected. Without this foundation, organizations will struggle to operate AI safely at scale, regardless of their policies and governance structures. The technological backbone has to match the operational ambitions.”
—Dev Nag
In its new report, the CSA predicted that as AI technologies advance and their adoption accelerates across industries, the importance of robust governance, security measures, and ethical considerations will continue to grow. That means that getting your security strategy in place now is key:
“Organizations must stay informed about emerging AI regulations, evolving best practices, and new security threats specific to AI systems. By staying proactive and adapting to emerging trends, organizations can harness the power of AI while minimizing risks and upholding their ethical and security responsibilities."
Keep learning
- Boost your SOC triage efforts with advanced file analysis. Learn why — then get the White Paper.
- Learn how to do more with your SOAR with our Webinar: Enhance Your SOC With Threat Intelligence Enrichment.
- Get schooled by the lessons of Layer 8: See Dr. Jessica Barker on The Human Elements Driving Cyber Attacks.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.