Anthropic Leak and Mercor AI Attack: Takeaways for Enterprise AI Security

Two recent incidents, the Anthropic source-code leak and the Mercor supply-chain compromise, show that AI security failures are already happening and are exposing sensitive data, internal systems, and proprietary technology. Enterprises must prioritize preventing human-error and supply-chain exposures across AI integrations, open-source dependencies such as LiteLLM, and cloud ecosystems such as Microsoft 365. #Anthropic #LiteLLM

Key Points

  • Two high-profile April 2026 incidents (Anthropic leak and Mercor supply-chain attack) exposed source code and customer data, respectively.
  • The Anthropic incident resulted from a release packaging error that publicly exposed internal files and Claude Code source code.
  • The Mercor incident was a supply-chain compromise in which malicious code in the LiteLLM open-source library delivered credential-stealing malware that harvested API keys and other data.
  • Both incidents highlight that human error, insecure integrations, and shared dependencies are primary drivers of AI risk, rather than advanced attacks on the models themselves.
  • AI systems amplify the speed and scale of data exposure, making small mistakes and existing security gaps far more damaging.
  • Enterprises should focus on prevention: reducing user-driven risk, protecting sensitive data across AI workflows, securing email/collaboration, and gaining visibility into AI interactions.
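One preventive control implied by the supply-chain point above is verifying that a fetched dependency artifact matches a digest pinned at review time (pip's `--require-hashes` mode automates this for Python packages). A minimal sketch of the underlying check, with a hypothetical artifact and digest:

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Compare a downloaded package artifact against a pinned SHA-256 digest.

    In practice `expected_sha256` comes from a lockfile recorded when the
    dependency was vetted; a mismatch means the artifact was altered
    upstream and must not be installed.
    """
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Hypothetical example: digest pinned when the release was reviewed.
pinned = hashlib.sha256(b"vetted-release-bytes").hexdigest()
print(verify_artifact(b"vetted-release-bytes", pinned))    # True
print(verify_artifact(b"tampered-release-bytes", pinned))  # False
```

This catches silent tampering of an already-pinned version; it does not help if a malicious version is vetted and pinned in the first place, which is why dependency review remains necessary.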

MITRE Techniques

  • [T1195] Supply Chain Compromise – Malicious code was introduced into a widely used open-source dependency (LiteLLM), enabling downstream compromise: ("Malicious code embedded in LiteLLM within open source repositories")
  • [T1552.001] Credentials in Files – Injected credential-stealing malware was used to harvest API keys and other credentials from code/configurations and data flows: ("injecting credential-stealing malware to harvest API keys and other data")
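The T1552.001 exposure above works because secrets are sitting in plain text where injected code can read them. A minimal detection sketch for that pattern: scan text for strings that look like hardcoded API keys. The patterns here are illustrative assumptions (an `sk-ant-` style key prefix and a generic `*_KEY = "…"` assignment); production scanners such as gitleaks or trufflehog ship far larger rule sets.

```python
import re

# Hypothetical rule set: an Anthropic-style key prefix plus a generic
# assignment of a long secret to an api_key / secret_key variable.
KEY_PATTERNS = [
    re.compile(r"sk-ant-[A-Za-z0-9_-]{10,}"),
    re.compile(r"(?i)(?:api|secret)_?key\s*[=:]\s*['\"][A-Za-z0-9_-]{20,}['\"]"),
]

def find_candidate_secrets(text: str) -> list[str]:
    """Return substrings that look like hardcoded credentials."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

sample = 'API_KEY = "abcdefghij0123456789abcd"\nmodel = "claude"'
print(find_candidate_secrets(sample))  # ['API_KEY = "abcdefghij0123456789abcd"']
```

Running such a scan in CI, combined with moving secrets into a vault or environment injection, shrinks what credential-stealing malware can harvest from code and configuration files.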

Indicators of Compromise

  • [Threat Actor] Mercor attack attribution – Team PCP, Lapsus$ (actors named as responsible for the supply-chain compromise)
  • [Software/Package] Compromised dependency – LiteLLM (open-source library used to connect applications to AI services)
  • [Organization/System] Affected entities and environments – Anthropic, Mercor, Microsoft 365 (organizations/systems mentioned as impacted or representative of common enterprise environments)
  • [File/Asset] Leaked source code – Claude Code source code (Anthropic release packaging error exposed internal files and source code)

Read more: https://www.proofpoint.com/us/blog/threat-insight/mercor-anthropic-ai-security-incidents