Adversaries Exploiting Proprietary AI Capabilities, API Traffic to Scale Cyberattacks

In Q4 2025, GTIG observed threat actors escalating from experimental prompts to systematic exploitation of LLMs like Gemini for reconnaissance, phishing, malware development, and post-compromise activity. Model extraction and AI-powered frameworks such as HONESTCUE and COINBAIT, along with misuse by actors like UNC6418 and APT42, underscore the growing abuse of commercial AI APIs.

Key Points

  • GTIG documented a shift from probing to repeatable exploitation of LLMs for offensive operations.
  • Model extraction or "distillation" attacks use legitimate API queries to replicate proprietary models without network breaches.
  • State-backed and sophisticated actors used LLMs for OSINT, target profiling, and multilingual, culturally accurate phishing.
  • AI-assisted malware frameworks like HONESTCUE and COINBAIT used AI-generated code to evade detection, while marketplaces such as Xanthorox sold AI-enabled tooling at scale.
  • Google detected and disabled impacted assets and protected internal model logic, but warns of increasing interest in autonomous AI attack capabilities.
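The distillation attack noted above works because the harvesting traffic is indistinguishable from ordinary API use. A minimal sketch of the idea, where `teacher_api` is a purely hypothetical local stand-in for a proprietary model endpoint (no real vendor API is called):

```python
import json

# Hypothetical stand-in for a proprietary model's API endpoint.
# In a real distillation attack, the adversary would call the
# vendor's hosted model with ordinary, legitimate-looking requests.
def teacher_api(prompt: str) -> str:
    canned = {
        "capital of France?": "Paris",
        "2 + 2 = ?": "4",
    }
    return canned.get(prompt, "unknown")

def collect_distillation_pairs(prompts):
    """Harvest (prompt, completion) pairs via normal API queries.

    Each call is a routine inference request, which is why this
    technique requires no network breach to pull off.
    """
    return [{"prompt": p, "completion": teacher_api(p)} for p in prompts]

# The harvested pairs would then serve as supervised fine-tuning
# data for a smaller "student" model that imitates the teacher.
pairs = collect_distillation_pairs(["capital of France?", "2 + 2 = ?"])
print(json.dumps(pairs, indent=2))
```

Defenses against this pattern therefore focus on API-side signals (query volume, coverage of the input space, per-account rate limits) rather than perimeter security, since each individual request is legitimate.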

Read More: https://thecyberexpress.com/gtig-ai-threat-tracker/