The explosive growth in the use of generative artificial intelligence (gen AI) has overwhelmed enterprise IT teams. To keep up with the demand for new AI-based features in software, and to deliver software faster in general, development teams have embraced machine learning (ML)-based AI coding tools.
Hugging Face, a leading AI development platform, announced in September 2024 that it had hit a milestone: hosting 1 million ML models, up from just 300,000 in 2023. That fast growth comes with a price. Increasing complexity makes software supply chain security essential.
With the rise of AI coding, the race to secure the software supply chain is heating up. AI/ML models are now distributed and consumed like any other software package, giving threat actors more avenues for attack. Here are key takeaways from the AI/ML risk section of ReversingLabs' 2025 Software Supply Chain Security Report.
[ Download: 2025 Software Supply Chain Security Report | See the SSCS Report Webinar ]
AI and supply chain risk gets real, fast
When AI began making headlines with the introduction of OpenAI’s ChatGPT in 2022, it became clear to security practitioners that threat actors would tap the technology to improve long-standing attack methods such as spearphishing and malware. In 2024, security teams began to grapple with the new opportunities that AI and ML, and the technology ecosystem that supports them, are creating for malicious actors.
Attackers now have multiple avenues to choose from when targeting software supply chains, exploiting weak links to infiltrate sensitive development or IT organizations where AI technology is being used. In February 2025, ReversingLabs threat researcher Karlo Zanki discovered two malicious ML models residing on Hugging Face that managed to evade the platform’s security scanning feature.
The malicious ML models found on Hugging Face used Python’s popular Pickle format, which serializes ML models for storage and sharing. As Dhaval Shah, senior director of product management at RL, recently wrote in a technical blog post, Pickle files are “inherently unsafe” because they allow embedded Python code to run when the model is loaded. Despite this, Pickle remains a widely used file format that won’t be going away anytime soon.
In his post, Shah stressed that the hidden Python code in these ML models could have serious consequences: executing malicious commands, inserting malware onto internal systems, sending unauthorized communications, or even corrupting other locally installed Pickle files.
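To make the mechanism concrete, here is a minimal, deliberately harmless sketch of what Shah describes: Python's `__reduce__` hook lets a pickled object name any callable to invoke at load time. The `Payload` class and the echoed message are illustrative only, not taken from the models found on Hugging Face.

```python
import os
import pickle

# Harmless proof of concept: __reduce__ tells pickle what to call at
# load time, so an attacker can smuggle an arbitrary function call
# into what looks like model data.
class Payload:
    def __reduce__(self):
        # A real attack would invoke something far worse than echo.
        return (os.system, ("echo 'code executed during model load'",))

blob = pickle.dumps(Payload())

# The victim merely "loads a model" -- the embedded call runs anyway.
pickle.loads(blob)
```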
RL researchers have also documented a steady string of open-source software (OSS) supply chain attacks on platforms such as npm and the Python Package Index (PyPI), the primary package repositories that AI/ML developers frequent. The recent discoveries by Zanki and the RL research team, dubbed nullifAI, showcase how Picklescan, the tool Hugging Face uses to detect suspicious Pickle files, failed to flag the two malicious ML models as unsafe.
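Scanners in this class work by inspecting a pickle's opcode stream for dangerous imports without ever loading the file. The sketch below, built only on Python's standard pickletools module, illustrates the general idea; the SUSPICIOUS deny list and the scan_pickle helper are hypothetical, and production scanners cover far more cases.

```python
import pickletools

# Illustrative deny list; real scanners track many more dangerous globals.
SUSPICIOUS = {("os", "system"), ("builtins", "eval"), ("builtins", "exec"),
              ("subprocess", "Popen")}

def scan_pickle(data: bytes) -> list[str]:
    """Flag dangerous imports in a pickle's opcode stream without loading it."""
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        # In older protocols, an import is a single GLOBAL opcode whose
        # argument is the "module name" pair.
        if opcode.name == "GLOBAL":
            module, _, name = arg.partition(" ")
            if (module, name) in SUSPICIOUS:
                findings.append(f"offset {pos}: {module}.{name}")
        # Protocol 4+ assembles imports on the stack via STACK_GLOBAL,
        # and corrupted streams abort genops entirely -- two reasons
        # static pickle scanning is hard to get right.
    return findings
```

Deliberately malformed files can crash a scan like this before the dangerous import is ever reached, even though the payload still executes on load; that, broadly, is how the nullifAI models slipped past Picklescan.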
OWASP leads the charge on AI/ML development best practices
While the supply chain threats tied to AI and ML infrastructure seem to be outpacing the security community’s ability to manage such risks, the Open Worldwide Application Security Project (OWASP) foundation has undertaken important efforts to get a handle on the problem. In November 2024, OWASP released its Top 10 Risks for LLM Applications. The resource lists the most prominent risks facing AI and ML infrastructure today, such as prompt injection, unbounded consumption, vector and embedding weaknesses, system prompt leakage, and excessive agency.
OWASP also released CycloneDX v1.6 last year, which introduced a machine-readable format for software bills of materials (SBOMs) that can be applied to ML models. Shortly after, OWASP released its LLM AI Security and Governance Checklist, which raises the bar for development teams by promoting AI and ML security best practices.
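As an illustration of what such machine-readable transparency looks like, here is a minimal CycloneDX-style ML-BOM skeleton. The model name, version, and digest placeholder are hypothetical, and the CycloneDX v1.6 specification remains the authoritative reference for the schema.

```python
import json

# Illustrative CycloneDX-style ML-BOM skeleton; consult the CycloneDX
# v1.6 specification for the authoritative schema.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",   # component type for ML models
            "name": "example-classifier",       # hypothetical model name
            "version": "1.0.0",
            "hashes": [
                # Pin the exact artifact so downstream consumers can verify it.
                {"alg": "SHA-256", "content": "<sha256-of-model-file>"},
            ],
        }
    ],
}

print(json.dumps(ml_bom, indent=2))
```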
These resources from OWASP are a great place for organizations to start. However, discoveries such as nullifAI make it increasingly clear that more advanced tools for assessing software supply chain security are now required. The Enduring Security Framework working group recommends application security (AppSec) tools that employ binary analysis and reproducible builds.
AI risk makes software supply chain security essential
Legacy AppSec tooling lags behind supply chain risk, and the resulting tools gap leaves enterprises exposed to AI/ML risks across ML infrastructure and commercial software products that feature AI capabilities. This exposure makes it essential for development, AppSec, and third-party cyber-risk management (TPCRM) teams to vet their AI and ML infrastructure for supply chain risks.
AI and ML are now fully interconnected with the software supply chain. That makes tooling that lets security teams identify unsafe function calls and suspicious or malicious behaviors in ML files, particularly in risky formats such as Pickle and in packages from the primary OSS repositories, critical for managing software risk across your enterprise.
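For teams that must load Pickle-based models despite the risk, Python's own documentation describes one mitigation: override Unpickler.find_class so that only an explicit allow list of globals can be imported. A minimal sketch, with an illustrative allow list:

```python
import io
import pickle

# Allow list of globals a model file may import; everything else is refused.
# The entries here are illustrative only.
ALLOWED = {("collections", "OrderedDict")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Every import in the pickle stream funnels through find_class,
        # so denying unknown globals blocks __reduce__-style payloads.
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_load(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

An allow list like this is defense in depth, not a substitute for scanning and vetting model files before they reach production systems.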
Dive deeper into the state of AI/ML risk with RL’s 2025 Software Supply Chain Security Report.
Keep learning
- Go big-picture on the software risk landscape with RL's 2025 Software Supply Chain Security Report. Plus: See our Webinar for discussion about the findings.
- Get up to speed on securing AI/ML with our white paper: AI Is the Supply Chain. Plus: See RL's research on nullifAI and replay our Webinar to learn how RL discovered the novel threat.
- Learn how commercial software risk is under-addressed: Download the white paper — and see our related Webinar for more insights.
- Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.