Anthropic, an AI company, has disclosed that threat actors exploited its Claude chatbot to run an “influence-as-a-service” operation targeting the social media platforms Facebook and X, building a network of politically aligned fake accounts. The operation aimed to amplify specific political narratives and used AI-driven tactics to mimic authentic user behavior. The incident highlights the growing risk of AI misuse in influence operations. Affected: Anthropic, Facebook, X
Key points:
- Unknown threat actors leveraged Anthropic’s Claude chatbot for a financially-motivated influence operation on social media.
- The operation involved orchestrating 100 distinct fake personas to engage with authentic accounts, promoting various political narratives across multiple regions.
- Claude was used not only to generate content but also to determine how and when the bot accounts should interact with authentic users.
- The campaign managed its personas through a structured, JSON-based system, allowing them to mimic human behavior and engage subtly at scale.
- Operatives deployed humor and sarcasm to deflect accusations of being bots during interactions.
- Anthropic identified additional misuse cases, including recruitment fraud and the development of advanced malware by novice actors using Claude.
- The incident underscores the necessity for new frameworks to assess influence operations and the potential danger of AI enabling malicious activities.
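To illustrate the JSON-based persona management described above, the sketch below shows how a persona roster might be stored and queried. The schema, field names (`persona_id`, `alignment`, `engagement_style`, and so on), and decision logic are purely illustrative assumptions; Anthropic has not published the operators' actual data format.

```python
import json

# Illustrative only: the real schema used in the campaign is not public.
# All field names and values here are assumptions for demonstration.
PERSONAS_JSON = """
[
  {"persona_id": "p-001", "alignment": "narrative-A", "language": "en",
   "engagement_style": "humorous"},
  {"persona_id": "p-002", "alignment": "narrative-B", "language": "es",
   "engagement_style": "sarcastic"}
]
"""

def load_personas(raw: str) -> dict:
    """Parse the persona list and index it by persona_id."""
    return {p["persona_id"]: p for p in json.loads(raw)}

def choose_action(persona: dict, post_alignment: str) -> str:
    """Decide how a persona engages with a post: aligned posts are
    amplified, everything else is ignored to keep the account looking
    like a selective, human user rather than a bot."""
    if post_alignment == persona["alignment"]:
        return "amplify"  # e.g. like, repost, or reply in the persona's style
    return "ignore"

personas = load_personas(PERSONAS_JSON)
print(choose_action(personas["p-001"], "narrative-A"))  # amplify
```

Keeping personas in a structured format like this would let a model such as Claude be prompted with a consistent identity per account, which matches the report's finding that the chatbot decided interaction strategies rather than just generating text.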
Read More: https://thehackernews.com/2025/05/claude-ai-exploited-to-operate-100-fake.html