Anthropic has fixed three vulnerabilities in its Git MCP server that could be exploited through prompt injection and chained with other MCP tools to achieve remote code execution. Researchers warn that as agentic AI systems grow more complex, the attack surface expands, requiring careful security assessments. #Anthropic #GitMCP #PromptInjection #AgenticAI #CodeExecution
Keypoints
- Anthropic fixed three critical bugs in its Git MCP server that could lead to remote code execution.
- The vulnerabilities involved path validation bypass, unrestricted git_init, and argument injection issues.
- Attackers can chain the Git and Filesystem MCP servers to exploit these flaws through indirect prompt injection.
- The exploit leverages Git filters like clean and smudge to trigger malicious scripts for code execution.
- Security teams need to evaluate the entire agentic system's permissions and interactions to prevent such exploits.
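As background on the filter mechanism mentioned above: Git's clean/smudge filters are shell commands that run automatically when matching files are staged or checked out, which is why write access to a repo's config and attributes can translate into code execution. A minimal sketch of that standard Git behavior (a harmless demo, not the actual exploit chain from the report; the filter name and marker file are made up):

```shell
# Demo: a "smudge" filter runs an arbitrary command on checkout.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
# Filter drops a marker file to prove it executed; "cat" passes content through.
git config filter.demo.smudge "touch $tmp/filter-ran; cat"
echo '*.txt filter=demo' > .gitattributes
echo hello > note.txt
git add . && git commit -qm init
rm note.txt
git checkout -- note.txt   # smudge filter fires here
```

After the checkout, `$tmp/filter-ran` exists even though the user only asked Git to restore a file, illustrating why an agent that can initialize repos and write config files is dangerous to chain with a filesystem tool.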
Read More: https://www.theregister.com/2026/01/20/anthropic_prompt_injection_flaws/