Microsoft Security’s artificial intelligence (AI) security team recently shared its findings from a multi-year study that involved red teaming 100 generative AI (GenAI) products.
Researchers have spotted two machine learning (ML) models containing malicious code on Hugging Face Hub, the popular online repository for datasets and pre-trained models.
Companies pursuing internal AI development with models from Hugging Face and other open source repositories need to focus on supply chain security and vet those models for vulnerabilities.
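One practical starting point for that kind of vetting is to prefer weight formats that cannot embed executable code. The sketch below is a minimal illustration, assuming the open source huggingface_hub library and a hypothetical fetch_weights_if_safe helper; it downloads only safetensors files from a repository and refuses pickle-based formats, which should be scanned before use.

```python
# Minimal sketch of one supply-chain check: accept only safetensors weights
# (which cannot carry executable pickle payloads) and flag pickle-based files.
from huggingface_hub import HfApi, hf_hub_download

# File extensions that use Python pickle under the hood and can execute code on load.
PICKLE_SUFFIXES = (".bin", ".pt", ".pth", ".pkl")

def fetch_weights_if_safe(repo_id: str) -> list[str]:
    """Download safetensors weight files from a Hugging Face repo, rejecting pickle-based formats."""
    files = HfApi().list_repo_files(repo_id)
    flagged = [f for f in files if f.endswith(PICKLE_SUFFIXES)]
    if flagged:
        raise RuntimeError(
            f"{repo_id} ships pickle-based weights {flagged}; "
            "scan them with a tool such as picklescan before loading."
        )
    return [
        hf_hub_download(repo_id=repo_id, filename=f)
        for f in files
        if f.endswith(".safetensors")
    ]

# Example usage (hypothetical policy, real public repo):
# paths = fetch_weights_if_safe("bert-base-uncased")
```

The design choice is deliberately conservative: rather than loading pickle-based files and inspecting them afterward, when deserialization may already have run attacker-controlled code, the check rejects them up front and leaves scanning as a separate, explicit step.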