Companies pursuing internal AI development with models from Hugging Face and other open source repositories need to focus on supply chain security and on checking those models for vulnerabilities.
The popular Python pickle serialization format gives attackers a way to inject malicious code that executes on a victim's machine when a model is loaded with PyTorch.
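The mechanism is pickle's `__reduce__` hook: an object can tell the deserializer to call an arbitrary function with arbitrary arguments, and that call fires at load time. The sketch below uses a harmless `eval("2 + 2")` as a stand-in payload; the class name and the payload are illustrative only, and a real attacker would substitute something like `os.system`.

```python
import pickle


class Malicious:
    """Any class can hijack deserialization via __reduce__."""

    def __reduce__(self):
        # pickle stores a (callable, args) pair; pickle.loads invokes it.
        # Here that means eval("2 + 2") runs at load time. A real payload
        # would call os.system or download further malware instead.
        return (eval, ("2 + 2",))


payload = pickle.dumps(Malicious())

# Deserializing attacker-controlled bytes executes the embedded call;
# the "loaded object" is simply the callable's return value.
result = pickle.loads(payload)
print(result)
```

This is why the Python documentation warns to never unpickle untrusted data, and why safer model formats such as safetensors avoid executable serialization entirely.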
Researchers at ReversingLabs have discovered two malicious machine learning (ML) models on Hugging Face, the leading hub for sharing AI models and applications.