Researchers have demonstrated how prompt injection attacks can abuse image scaling to deceive AI systems. The technique poses a significant risk of sensitive data theft and manipulation in AI-integrated enterprise solutions. #ImageScalingAttack #PromptInjection
Key Points
- Attackers can embed hidden malicious prompts in high-resolution images used by AI systems.
- The attack leverages the downscaling step applied before images reach the model: the injected prompt is effectively invisible at full resolution but becomes legible in the downscaled image the model actually processes (see the sketch after this list).
- Such attacks can instruct the AI to exfiltrate sensitive data, such as calendar information.
- Vulnerable interfaces include the Gemini CLI, Gemini's web interface and API, Vertex AI Studio, Google Assistant, and Genspark.
- Anamorpher, an open-source tool, enables researchers to craft and visualize these image scaling attacks.
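The underlying mechanism can be illustrated with a small, self-contained sketch. This is not Anamorpher or the researchers' exact method; it assumes, purely for illustration, a pipeline that downscales uploads with an area-averaging (box) filter at a hypothetical 4x factor, and it uses Pillow and NumPy. The embed_payload helper and the "RUN X" text are invented for the demo: each 4x4 block of an innocuous-looking decoy image is nudged so that its average equals one pixel of the hidden payload, so the payload only becomes legible after downscaling.

```python
import numpy as np
from PIL import Image, ImageDraw

SCALE = 4  # hypothetical downscale factor used by the target pipeline (assumption)


def embed_payload(decoy: Image.Image, payload: Image.Image) -> Image.Image:
    """Shift each SCALE x SCALE block of the decoy so its average equals the
    corresponding payload pixel; an area-averaging (box) downscaler then
    reconstructs the payload."""
    decoy_px = np.array(decoy.convert("L"), dtype=np.int16)
    payload_px = np.array(payload.convert("L"), dtype=np.int16)
    h, w = payload_px.shape
    assert decoy_px.shape == (h * SCALE, w * SCALE), "decoy must be SCALE x payload size"

    crafted = decoy_px.copy()
    for y in range(h):
        for x in range(w):
            block = crafted[y * SCALE:(y + 1) * SCALE, x * SCALE:(x + 1) * SCALE]
            # Nudge the whole block so its mean matches the payload pixel while
            # keeping the decoy's local texture (the per-pixel change is small).
            block += int(payload_px[y, x]) - int(round(block.mean()))
    return Image.fromarray(np.clip(crafted, 0, 255).astype(np.uint8))


if __name__ == "__main__":
    # Decoy: mid-gray noise standing in for an innocuous photo; payload: the
    # "hidden prompt" rendered as a small image the downscaler will reveal.
    rng = np.random.default_rng(0)
    decoy = Image.fromarray(rng.integers(96, 160, size=(256, 256), dtype=np.uint8))
    payload = Image.new("L", (64, 64), 192)
    ImageDraw.Draw(payload).text((4, 24), "RUN X", fill=64)  # stand-in instruction

    crafted = embed_payload(decoy, payload)
    # What the AI pipeline "sees" after an area-averaging downscale:
    recovered = crafted.resize((64, 64), resample=Image.BOX)
    diff = np.abs(np.array(recovered, dtype=np.int16) - np.array(payload, dtype=np.int16))
    print("max per-pixel deviation from hidden payload:", int(diff.max()))
```

A real attack has to be tuned to whichever interpolation method the target pipeline uses (nearest-neighbor, bilinear, bicubic, and so on); the box filter above is simply the easiest case to show in a few lines.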
Read More: https://www.securityweek.com/ai-systems-vulnerable-to-prompt-injection-via-image-scaling-attack/