ReversingLabs threat researchers recently identified nullifAI, a novel attack technique used on an ML model hosted on the Hugging Face platform.
In his blog post on the discovery, RL Threat Researcher Karlo Zanki notes that the technique allows for the distribution of malware by abusing Pickle file serialization.
nullifAI is a novel attack technique that not only impacts the AI community, but also exposes security challenges inherent in Pickle serialization.
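For context, the risk Zanki describes stems from how Python's Pickle format works: loading a pickle file can invoke arbitrary callables during deserialization. The sketch below is a generic, minimal illustration of that behavior, not the actual nullifAI payload; the `Payload` class and its harmless `print` call are hypothetical stand-ins.

```python
import pickle

class Payload:
    # Any object can define __reduce__ to tell the unpickler which
    # callable to invoke (and with what arguments) while loading.
    def __reduce__(self):
        # A benign stand-in; an attacker could instead return something
        # like (os.system, ("<malicious command>",)).
        return (print, ("code executed during unpickling",))

# Serialize the object as an attacker would when planting a model file.
malicious_bytes = pickle.dumps(Payload())

# Merely loading the bytes runs the callable -- no method call needed.
pickle.loads(malicious_bytes)
```

This is why loading an untrusted pickle-based ML model file is equivalent to running untrusted code.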
In this webinar, Zanki and Paul Roberts will unpack the discovery of nullifAI and pinpoint how AI is threatening software supply chains and enterprise security at large.
In this session, you’ll learn:
- ✓ How RL researchers discovered the nullifAI attack technique on Hugging Face
- ✓ How it exploited a weakness in Pickle file serialization
- ✓ Why LLMs and other ML models need to be vetted for risk, just like software
- ✓ How cybercriminals are more easily targeting enterprises through AI
*Attend live and receive an attendance certificate that can be used toward CPE credits.
Register Now!