Summary: Cybersecurity researchers uncovered two malicious machine learning models on Hugging Face that used "broken" pickle files to evade detection. The models execute a reverse shell upon loading, underscoring the security risks of pickle serialization. The incident is considered more of a proof-of-concept than an active attack.
Affected: Hugging Face
Keypoints:
- Two malicious ML models, identified as glockr1/ballr7 and who-r-u0000/0000000000000000000000000000000000000, were uncovered on Hugging Face.
- The models used broken pickle files to evade existing security tools such as Picklescan (see the sketch after this list).
- The incident underscores the persistent security risks of using the pickle serialization format to distribute machine learning models.
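
The underlying risk is that pickle is not a passive data format: deserializing a stream executes opcodes, and an object's `__reduce__` hook can make `pickle.load` invoke an arbitrary callable. The sketch below is a minimal, harmless illustration of this mechanism, with a benign `print` standing in for the reverse-shell command described in the report; the `Payload` class name is purely illustrative. Truncating the stream after the payload mirrors the "broken" pickle idea: the early opcodes still run before deserialization fails, which is how a scanner that expects a well-formed stream can be bypassed.

```python
import pickle

class Payload:
    # pickle calls __reduce__ when serializing; the returned (callable, args)
    # pair is invoked during deserialization. A benign print() stands in here
    # for the reverse-shell command the malicious models reportedly used.
    def __reduce__(self):
        return (print, ("payload executed during unpickling",))

blob = pickle.dumps(Payload())

# "Break" the stream by dropping the trailing STOP opcode. Opcodes are
# processed sequentially, so the payload still runs before the load fails.
broken = blob[:-1]

try:
    pickle.loads(broken)
except Exception as exc:  # exact exception varies by protocol/implementation
    print(f"deserialization failed after the payload ran: {exc}")
```

Running the snippet prints the payload message first and the load error second, which is why loading untrusted models via `pickle.load` is unsafe and safer weight formats such as safetensors are generally recommended.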
Source: https://thehackernews.com/2025/02/malicious-ml-models-found-on-hugging.html