Researchers Find Serious AI Bugs Exposing Meta, Nvidia, and Microsoft Inference Frameworks

Cybersecurity researchers have identified critical vulnerabilities in AI inference engines from Meta, Nvidia, Microsoft, and several open-source projects, caused by unsafe use of ZeroMQ sockets combined with Python's pickle deserialization. These flaws could allow attackers to execute arbitrary code, escalate privileges, and compromise AI infrastructure and developer tools.

Key Points

  • Vulnerabilities in popular AI inference engines stem from exposing ZeroMQ sockets and then deserializing received data with Python's pickle module.
  • The root cause is a pattern called ShadowMQ, propagated through code reuse and copy-pasting across projects.
  • Several vulnerabilities, including CVE-2024-50050 and CVE-2025-30165, remain unpatched or partially mitigated.
  • Successful exploitation could lead to arbitrary code execution, privilege escalation, and model theft within AI clusters.
  • Additionally, JavaScript injection vulnerabilities in Cursor’s browser and IDEs pose risks of credential theft and malware distribution.
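The ShadowMQ pattern described above boils down to feeding bytes received over a network socket directly into `pickle.loads`, which lets a sender trigger arbitrary callables during deserialization. A minimal, standard-library-only sketch (the `Malicious` class and `safe_loads` helper are illustrative, not from the affected projects; the ZeroMQ transport is omitted since the risk lies in the deserialization step):

```python
import io
import pickle

class Malicious:
    """Stand-in for an attacker-crafted payload."""
    def __reduce__(self):
        # On unpickling, pickle calls len("pwned") -- a harmless stand-in
        # for any attacker-chosen callable such as os.system.
        return (len, ("pwned",))

# What vulnerable services effectively do with socket data:
payload = pickle.dumps(Malicious())
print(pickle.loads(payload))  # attacker-controlled call already ran

# One common mitigation: an Unpickler that refuses to resolve any
# class or function, so only primitive types can be loaded.
class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked: {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

print(safe_loads(pickle.dumps({"tokens": [1, 2, 3]})))  # plain data loads
try:
    safe_loads(payload)  # the crafted payload is rejected
except pickle.UnpicklingError as exc:
    print("rejected:", exc)
```

In practice, the sturdier fix is to avoid pickle for untrusted peers entirely and use a schema-bound format such as JSON or msgpack for inter-process messages.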

Read More: https://thehackernews.com/2025/11/researchers-find-serious-ai-bugs.html