GTIG AI Threat Tracker: Adversaries Leverage AI for Vulnerability Exploitation, Augmented Operations, and Initial Access

Google Threat Intelligence Group reports that threat actors are increasingly using generative AI across the full attack lifecycle, from vulnerability discovery and malware development to reconnaissance, information operations, and supply-chain compromise. The report highlights PROMPTSPY, Operation Overload, TeamPCP/UNC6780, APT27, UNC6201, and other clusters as examples of AI-enabled abuse, while also describing Google’s defensive actions such as disabling malicious assets and improving protections. #PROMPTSPY #OperationOverload #TeamPCP #UNC6780 #APT27 #UNC6201 #GoogleThreatIntelligenceGroup

Key Points

  • GTIG says adversaries have moved from early AI experimentation to industrial-scale use of generative models in offensive workflows.
  • A threat actor reportedly used AI to develop a zero-day exploit intended for a mass exploitation event.
  • AI is being used to speed malware creation, obfuscation, and defense evasion, including decoy logic and polymorphic code.
  • PROMPTSPY is a notable Android backdoor that uses Gemini for autonomous device interaction, persistence, and command generation.
  • Attackers are using LLMs for reconnaissance, phishing preparation, and agentic workflows to automate multi-stage operations.
  • Information operations are also being enhanced through AI-generated synthetic media, including voice cloning and deepfake-style content in Operation Overload.
  • Supply-chain attacks against AI ecosystems, including OpenClaw skills and GitHub repositories such as LiteLLM, show that AI platforms themselves are becoming targets.

MITRE Techniques

  • [T1592.001] Gather Victim Host Information: Hardware – Threat actors used AI to identify a target’s exact computer make/model and requested photos showing the person using the device (‘identify the exact make and model of a computer used by a high-value target’).
  • [T1591.002] Gather Victim Org Information: Business Relationships – Adversaries prompted models to generate third-party relationships for large enterprises (‘generate detailed third-party relationships of large enterprises’).
  • [T1591.004] Gather Victim Org Information: Identify Roles – Threat actors used LLMs to map organizational hierarchies in departments such as finance, internal security, and HR (‘generate detailed organizational hierarchies’).
  • [T1587.001] Develop Capabilities: Malware – AI-assisted research was used to develop malware families including CANFAIL and LONGSTREAM (‘develop malware, such as CANFAIL and LONGSTREAM’).
  • [T1587.004] Develop Capabilities: Exploits – AI was used to identify and weaponize a 2FA bypass zero-day (‘development of an exploit’).
  • [T1588.002] Obtain Capabilities: Tools – Actors downloaded community tools like CLIProxyAPI for API key aggregation and proxying (‘downloaded specialized, community-developed middleware projects’).
  • [T1588.005] Obtain Capabilities: Exploits – Threat actors leveraged AI to obtain known exploits for targeted systems (‘obtain known exploits of vulnerabilities’).
  • [T1588.006] Obtain Capabilities: Vulnerabilities – Adversaries used AI to research vulnerabilities in target systems (‘research known vulnerabilities of targeted systems’).
  • [T1588.007] Obtain Capabilities: Artificial Intelligence – Automated pipelines were used to register premium LLM accounts across providers (‘programmatically exploit the registration flows’).
  • [T1566] Phishing – LLMs were used to research targets and craft higher-fidelity phishing lures (‘craft higher-fidelity phishing lures’).
  • [T1027.014] Obfuscated Files or Information: Polymorphic Code – PROMPTFLUX used automated code modification to vary signatures (‘vary file signatures and bypass legacy security controls’).
  • [T1027.016] Obfuscated Files or Information: Junk Code Insertion – CANFAIL and LONGSTREAM included decoy code to hide malicious behavior (‘decoy code to help disguise the malicious nature’).
  • [T1090.003] Proxy: Multi-hop Proxy – APT27 used AI to support multi-hop ORB network management (‘multi-hop configurations’).
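To illustrate why the signature-variation techniques above (T1027.014 and T1027.016) defeat naive hash-based detection, here is a minimal, benign sketch (not code from the report; the helper name is ours): appending inert junk bytes leaves a file's functional content identical while changing its cryptographic hash, so two copies of the same payload no longer match a single hash signature.

```python
import hashlib
import os

def junk_variant(payload: bytes, junk_len: int = 16) -> bytes:
    """Append random inert bytes so each copy has a distinct hash.

    This is the core idea behind hash-signature evasion: the functional
    content is unchanged, but per-file hash matching no longer works.
    """
    return payload + b"\x00" + os.urandom(junk_len)

base = b"print('hello')"  # stands in for any file content
a, b = junk_variant(base), junk_variant(base)

# Same functional prefix, different file hashes
assert a.startswith(base) and b.startswith(base)
assert hashlib.sha256(a).hexdigest() != hashlib.sha256(b).hexdigest()
```

This is why defenders pair hash-based indicators with behavioral detections, which key on what the code does rather than what its bytes hash to.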

Indicators of Compromise

  • [Malware / tool names] AI-enabled malware and tooling discussed in the report – PROMPTSPY, CANFAIL, LONGSTREAM, HONESTCUE, PROMPTFLUX, OpenClaw, OneClaw, Hexstrike, Strix
  • [Threat actor / cluster names] Actors and clusters tied to activity – TeamPCP (UNC6780), UNC2814, UNC6201, UNC5673, APT27, APT45, TEMP.Hex
  • [Domains / URLs] Infrastructure used by malware and model access – generativelanguage.googleapis.com, GitHub-hosted resources
  • [Model / API identifiers] AI services referenced in malicious workflows – gemini-2.5-flash-lite, Gemini API, Claude API, OpenAI API
  • [Software / repository names] Targeted or abused software packages and repos – LiteLLM, BerriAI, Trivy, Checkmarx, wooyun-legacy
  • [File / package names] Compromised packages and credential-stealing components – SANDCLOCK, malicious OpenClaw skill packages, ChatGPT Account Auto-Registration Tool
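As a hypothetical triage aid (our sketch, not tooling from the report), the string indicators above can be swept across proxy or application logs. Note the caveat: generativelanguage.googleapis.com and gemini-2.5-flash-lite are legitimate Google service identifiers, so a match is only an investigative lead, not evidence of compromise.

```python
# Hypothetical triage helper: flag log lines mentioning the report's
# network indicator or model identifier. Hits are leads, not verdicts.
IOC_STRINGS = [
    "generativelanguage.googleapis.com",
    "gemini-2.5-flash-lite",
]

def flag_lines(log_lines):
    """Return (line_number, matched_ioc, line) for every IOC match."""
    hits = []
    for lineno, line in enumerate(log_lines, start=1):
        lowered = line.lower()
        for ioc in IOC_STRINGS:
            if ioc in lowered:
                hits.append((lineno, ioc, line.strip()))
    return hits

sample = [
    "GET https://example.com/index.html 200",
    "POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-lite:generateContent",
]
for lineno, ioc, line in flag_lines(sample):
    print(f"line {lineno}: matched {ioc}")
```

In practice such matches would be correlated with process lineage and destination context (for example, an unexpected binary making model API calls) before escalation.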


Read more: https://cloud.google.com/blog/topics/threat-intelligence/ai-vulnerability-exploitation-initial-access/