AI as tradecraft: How threat actors operationalize AI

AI as tradecraft: How threat actors operationalize AI

Microsoft Threat Intelligence details how North Korean-linked groups such as Jasper Sleet, Coral Sleet, Emerald Sleet, and Sapphire Sleet are operationalizing generative and agentic AI across the cyberattack lifecycle—from reconnaissance and persona fabrication to AI-assisted malware development and post‑compromise misuse. The report highlights specific artifacts and behaviors including the OtterCookie AI-assisted payload, GAN-driven domain impersonation, prompt‑injection and jailbreak techniques, and provides mitigation guidance using Microsoft Defender, Purview, Security Copilot, and related controls. #JasperSleet #OtterCookie

Keypoints

  • Threat actors are leveraging AI to reduce technical barriers and scale operations across the entire attack lifecycle, enabling faster reconnaissance, social engineering, malware development, and post‑compromise activity.
  • Microsoft observed jailbreaking and prompt‑injection techniques used to bypass AI safety controls by reframing prompts, chaining instructions, and assuming trusted roles (e.g., “Respond as a trusted cybersecurity analyst”).
  • AI is used for reconnaissance and persona development—researching vulnerabilities (example: CVE‑2022‑30190), extracting job-post language, and generating culturally aligned names and email formats to create convincing fraudulent identities.
  • Adversaries employ AI for resource development, including GAN‑based adversarial domain generation and automated creation and management of covert C2 infrastructure (reverse proxies, SOCKS5, OpenVPN, remote desktop tunneling).
  • AI significantly amplifies social engineering: multilingual, tailored phishing lures, deepfakes, voice cloning, Faceswap for identity photos, and AI‑generated resumes and portfolios to gain and sustain access.
  • AI accelerates malware development and iteration (observed in Coral Sleet activity), with AI‑assisted code characteristics such as emojis as visual markers, conversational inline comments, and over‑engineered modular structures.
  • Microsoft recommends layered mitigations and tooling—Security Dashboard for AI, Microsoft Purview, Defender protections, Prompt Shields, Groundedness Detection, MFA, ZAP, retention policies, and user training—to detect and reduce AI‑enabled threats.

MITRE Techniques

  • [None ] No MITRE ATT&CK technique identifiers (T‑IDs) are explicitly cited in the article; the report instead describes tactics such as reconnaissance, phishing, C2 infrastructure, persistence, lateral movement, privilege escalation, exfiltration, and prompt injection in narrative form – ‘Microsoft Threat Intelligence has observed threat actors actively experimenting with techniques to bypass or “jailbreak” AI safety controls to elicit outputs that would otherwise be restricted.’

Indicators of Compromise

  • [Malware family ] contextual example observed – OtterCookie (AI-assisted payload linked to Coral Sleet)
  • [Threat actor identifiers ] contextual examples – Jasper Sleet, Coral Sleet (North Korean remote IT worker clusters)
  • [Vulnerability identifiers ] contextual example used in reconnaissance – CVE-2022-30190 (MSDT/Follina) referenced as researched by Emerald Sleet
  • [Domains / infrastructure ] contextual note – adversarial GAN‑generated look‑alike domains and impersonation sites are described but no specific domains are listed in the article
  • [Network / host artifacts ] contextual note – article describes use of reverse proxies, SOCKS5, OpenVPN, and remote desktop tunneling as infrastructure components but provides no specific IP addresses or file hashes


Read more: https://www.microsoft.com/en-us/security/blog/2026/03/06/ai-as-tradecraft-how-threat-actors-operationalize-ai/