The Sysdig Threat Research Team observed a new attack, dubbed LLMJacking, that uses stolen cloud credentials to access cloud-hosted LLM services and potentially monetize that access. The operation targeted multiple providers and used a reverse proxy to broker access to compromised accounts, with the attackers probing quotas and logging configurations to stay under the radar. #LLMJacking #ClaudeV2 #AWSBedrock #OAIReverseProxy #CVE-2021-3129
Key Points
- LLMJacking uses stolen cloud credentials to access cloud-hosted LLM services.
- Targets ten services across major providers, including Anthropic Claude, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter, and Vertex AI.
- A broader key-checking script tests stolen credentials against all ten services to determine which are usable and what quotas apply.
- An LLM reverse proxy (OAI Reverse Proxy) is used to centrally manage access to multiple LLM accounts.
- Attackers test access with legitimate API calls (InvokeModel), issued via simple CLI commands, and probe with invalid parameters to confirm whether credentials work.
- They query logging configuration (GetModelInvocationLoggingConfiguration) to learn whether prompts and completions are being delivered to S3/CloudWatch, avoiding accounts where they are logged.
- Potential impact includes more than $46,000 per day in LLM consumption costs if quotas are maxed out across regions.
- The attack emphasizes detection and logging as key to both defense and potential evasion, with recommended cloud-logging and monitoring measures.
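The detection angle above can be sketched as a simple CloudTrail triage routine. This is an illustrative sketch, not Sysdig's detection logic: the event names (InvokeModel, GetModelInvocationLoggingConfiguration) and the invalid max_tokens_to_sample value come from the report, while the simplified record layout and the user-agent check are assumptions for the example.

```python
# Recon call named in the report: returns S3/CloudWatch logging config.
SUSPICIOUS_RECON = {"GetModelInvocationLoggingConfiguration"}

def triage(record: dict) -> list[str]:
    """Return reasons a (simplified) CloudTrail record resembles
    LLMJacking activity described in the report."""
    reasons = []
    name = record.get("eventName", "")
    if name in SUSPICIOUS_RECON:
        reasons.append("logging-configuration reconnaissance")
    if name == "InvokeModel":
        params = record.get("requestParameters") or {}
        # The attackers probed access with max_tokens_to_sample = -1,
        # an invalid value that still confirms the credential works.
        if params.get("max_tokens_to_sample", 0) < 0:
            reasons.append("invalid max_tokens_to_sample probe")
        ua = record.get("userAgent", "")
        if "OAI" in ua and "Reverse Proxy" in ua:
            reasons.append("OAI Reverse Proxy user-agent")
    return reasons
```

In practice such logic would run against CloudTrail logs delivered to S3 or a SIEM; the point is that the probes described above leave distinctive, matchable artifacts.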
MITRE Techniques
- [T1078] Valid Accounts – Use stolen cloud credentials to access the cloud environment and run LLM services. [The credentials were obtained from a popular target: a system running a vulnerable version of Laravel.]
- [T1059] Command and Scripting Interpreter – Interact with hosted LLMs via the CLI; attackers leveraged straightforward CLI commands to invoke models. [Cloud vendors have simplified the process of interacting with hosted cloud-based language models by using straightforward CLI commands.]
- [T1036] Masquerading – Use of a user-agent matching OAI Reverse Proxy to appear legitimate when accessing LLM models. [A user-agent that matches OAI Reverse Proxy was seen attempting to use LLM models.]
- [T1530] Data from Cloud Storage – Access logging and data-delivery configuration (S3/CloudWatch) to understand where prompts and responses are stored or logged. [GetModelInvocationLoggingConfiguration returns the S3 and CloudWatch logging configuration, if enabled.]
- [T1071] Application Layer Protocol – Legitimate API calls (InvokeModel) to AWS Bedrock endpoints, illustrating use of cloud APIs for model invocation. [The InvokeModel call is logged by CloudTrail; the attackers sent an otherwise legitimate request but specified "max_tokens_to_sample" as -1.]
- [T1562] Impair Defenses – Check whether prompt logging is enabled so that activity details stay hidden from observation. [This check is done to hide the details of their activities from any detailed observations.]
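The credential-validation step behind T1078 can be illustrated with a minimal multi-service key checker in the spirit of the keychecker tool referenced in the report (not its actual code). The `Prober` callables are hypothetical stand-ins for real per-service API probes, such as a cheap model-list request.

```python
from typing import Callable, Dict

# Hypothetical prober: given a credential, return a quota/usability
# description for one service, or raise if the credential is rejected.
Prober = Callable[[str], str]

def check_key(key: str, probers: Dict[str, Prober]) -> Dict[str, str]:
    """Try one stolen credential against every service; keep the ones
    that accept it, mirroring the 'which services and what quotas'
    check described in the key points."""
    usable: Dict[str, str] = {}
    for service, probe in probers.items():
        try:
            # A real probe would call the service's API over the network.
            usable[service] = probe(key)
        except Exception:
            continue  # credential rejected by this service
    return usable
```

The design point is that each probe is a legitimate, low-cost API call, which is exactly why this activity is hard to distinguish from normal usage without behavioral baselines.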
Indicators of Compromise
- [IP Addresses] – 83.7.139.184, 83.7.157.76, 83.7.135.97, 73.105.135.228
- [Domain/URL] – github.com/kingbased/keychecker (key-checking tool used by the attackers)
- [Vulnerability] – CVE-2021-3129 (Laravel vulnerability exploited to obtain credentials)
- [Cloud/Model IDs] – anthropic.claude-v2, anthropic.claude-v3 (and 2 more models)
- [File/Hash] – 2 more hashes (not disclosed)
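The published IP indicators can be matched directly against CloudTrail's `sourceIPAddress` field. A minimal sketch, assuming the standard CloudTrail record layout:

```python
# IP indicators published in the report.
IOC_IPS = {"83.7.139.184", "83.7.157.76", "83.7.135.97", "73.105.135.228"}

def matches_ioc(record: dict) -> bool:
    """True if a CloudTrail record's caller IP is a known LLMJacking IoC."""
    return record.get("sourceIPAddress") in IOC_IPS
```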
Read more: https://sysdig.com/blog/llmjacking-stolen-cloud-credentials-used-in-new-ai-attack/