AI memory files and context data improve personalization and performance but represent a persistent security weakness that attackers can poison to corrupt future model outputs. Cisco demonstrated a persistent compromise of Anthropicβs Claude Code by modifying memory.md via NPM post-install hooks, underscoring prompt-injection and memory-poisoning risks and the need to scan or purge memory files. #ClaudeCode #NPM
Keypoints
- AI memory files and context data can be poisoned to manipulate model outputs and maintain persistence.
- Cisco researchers exploited NPM post-install hooks to modify Claude Codeβs memory.md and persist across sessions.
- Prompt injection and indirection prompt injection (IPI) are core vulnerabilities enabling memory poisoning across agents and connectors.
- Non-executable text files can carry malicious instructions, so memory and dependency files must be treated as risky.
- Security vendors recommend layered defenses, open-source scanners, and regular purging of memory files to mitigate attacks.
Read More: https://www.darkreading.com/vulnerabilities-threats/bad-memories-haunt-ai-agents