GTIG AI Threat Tracker: Distillation, Experimentation, and (Continued) Integration of AI for Adversarial Use | Google Cloud Blog

In late 2025, GTIG observed widespread misuse of generative AI, including an uptick in model extraction (“distillation”) attempts and AI-augmented operations such as reconnaissance, hyper-personalized phishing, and AI-assisted malware development. Notable examples include HONESTCUE, a downloader that called Gemini’s API to generate its second-stage code, and COINBAIT, a phishing kit built with AI-assisted code generation and hosted on legitimate services. (#HONESTCUE #COINBAIT)

Key Points

  • Google detected and disrupted increased model extraction (distillation) activity used to clone model capabilities and steal proprietary logic.
  • State-backed actors (DPRK, PRC, Iran, Russia) used Gemini and other LLMs to accelerate reconnaissance, target profiling, and generate nuanced, localized phishing lures.
  • Threat actors experimented with agentic AI concepts and incorporated LLMs into tooling and malware development, but no breakthrough autonomous capabilities were observed in the wild.
  • HONESTCUE was observed outsourcing stage-two code generation to Gemini and executing compiled C# payloads in memory to evade disk-based detection.
  • COINBAIT, a phishing kit likely tied to UNC5356, used AI-generated SPA code (React) and legitimate cloud services (Lovable AI, Supabase) to improve evasion and scale.
  • Underground services (e.g., Xanthorox) advertise bespoke malicious AI while often chaining commercial models and MCP servers; attackers also harvest API keys from vulnerable open-source tools.

MITRE Techniques

  • [T1566] Phishing – LLMs used to generate “hyper-personalized, culturally nuanced lures” and rapport-building phishing messages.
  • [T1059] Command and Scripting Interpreter – Attackers craft malicious command-line instructions and socially engineer victims into pasting them into a terminal (‘copy and paste a malicious command into the command terminal’).
  • [T1204] User Execution – Campaigns rely on victims executing attacker-provided commands that install malware (‘This command will download and install malware.’).
  • [T1105] Ingress Tool Transfer – Malware samples download second-stage payloads from web/CDN locations (e.g., the Discord CDN) via WebClient-style requests and then execute them (‘download and executes another piece of malware’).
  • [T1027] Obfuscated Files or Information – Fileless techniques and in-memory compilation avoid leaving disk artifacts (‘fileless secondary stage … compile and execute the payload directly in memory.’).
  • [T1055] Process Injection – Assemblies are loaded and executed in memory via reflective loading (e.g., Assembly.Load) to run payload entry points without writing files (‘load this byte array into memory as a .NET assembly using ‘System.Reflection.Assembly.Load’ … execute the entry point’).
  • [T1071] Application Layer Protocol – LLM APIs, CDNs, and web services used as part of C2 infrastructure and to host malicious content and shared AI transcripts (‘command-and-control (C2 or C&C) development and data exfiltration’).
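The in-memory (“fileless”) pattern referenced under T1027 and T1055 can be illustrated with a benign Python analog: second-stage source code arrives as a string, is compiled to bytecode in memory, and its entry point is invoked without anything being written to disk. This is only a conceptual sketch; HONESTCUE itself generates and compiles C# and loads it reflectively via Assembly.Load.

```python
# Benign analog of fileless second-stage execution: the "payload"
# source exists only as an in-memory string.
source = 'def entry_point():\n    return "ran entirely in memory"\n'

# Compile to a code object; no file is ever created on disk.
code_obj = compile(source, "<in-memory>", "exec")

# Execute the code object to define entry_point, roughly analogous
# to loading a byte array as an assembly with Assembly.Load.
namespace = {}
exec(code_obj, namespace)

# Resolve and invoke the entry point.
result = namespace["entry_point"]()
print(result)
```

Because nothing touches disk, file-based antivirus scanning never sees the second stage; detection has to rely on memory scanning or behavioral telemetry instead.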

Indicators of Compromise

  • [Malware] examples and context – HONESTCUE (downloader/launcher outsourcing code generation to Gemini), COINBAIT (AI-accelerated phishing kit), ATOMIC (macOS infostealer targeting browser data and wallets).
  • [Domains / Services] infrastructure and hosting – lovable.app (Lovable AI and Supabase used for image hosting and backend), Discord CDN (hosting final payloads), Cloudflare (proxying phishing domains).
  • [Code artifacts / Strings] forensic fingerprints – ‘? Analytics:’ log-message prefix found in COINBAIT source code, indicating AI-generated verbose developer logging (‘? Analytics: Initializing…’).
  • [APIs / Platforms] abused platforms and key-harvest contexts – Gemini API (used by HONESTCUE); MCP servers, Hexstrike, Crush, and LibreChat-AI chained by underground toolkits such as Xanthorox.
  • [Credential / Key theft] compromised platforms enabling abuse – One API and New API platforms (examples of services whose API keys are harvested via default credentials, XSS, and exposed endpoints).
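Defenders can hunt for the ‘? Analytics:’ logging prefix noted above as a source-code fingerprint. A minimal sketch, assuming locally mirrored phishing-kit files and a `.js` extension filter (both illustrative choices, not from the original report):

```python
from pathlib import Path

# String fingerprint from COINBAIT's AI-generated verbose logging.
FINGERPRINT = "? Analytics:"

def find_fingerprint(root: str) -> list[str]:
    """Return paths of .js files under root containing the fingerprint."""
    hits = []
    for path in Path(root).rglob("*.js"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files
        if FINGERPRINT in text:
            hits.append(str(path))
    return hits
```

A single string match is weak evidence on its own; in practice it would be combined with other COINBAIT indicators (Lovable AI hosting, Supabase backends) before triage.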


Read more: https://cloud.google.com/blog/topics/threat-intelligence/distillation-experimentation-integration-ai-adversarial-use/