Arms Race: AI’s Impact on Cybersecurity

Symantec and Carbon Black observed threat actors leveraging LLMs and Agentic AI to generate phishing materials and malicious code for loaders and infostealers, while attacker-controlled models like Xanthorox AI lower barriers to entry. Defenders have long used AI for detection and prevention—e.g., Incident Prediction trained on 500,000+ attack chains—to predict and disrupt attacks even as agentic capabilities raise the risk of more frequent automated campaigns. #Rhadamanthys #DeepSeek

Keypoints

  • Researchers observed phishing campaigns using LLM-generated scripts to deliver payloads including Rhadamanthys, NetSupport, CleanUpLoader (Broomstick, Oyster), ModiLoader (DBatLoader), LokiBot, and Dunihi (H-Worm).
  • Machine-generated artifacts show telltale signs such as uniform script structure, line-by-line comments, and consistent function/variable naming conventions that suggest use of GenAI.
  • Research demonstrated that some LLMs and agent frameworks (e.g., DeepSeek, ChatGPT Agent/Operator) can be persuaded to generate malicious code or autonomously perform multi-step tasks with modest human guidance.
  • Technique innovations like “Immersive World” (narrative engineering) have been used to bypass LLM guardrails, enabling non-coders to produce functional infostealers for browsers like Chrome.
  • Attacker-controlled LLMs (e.g., Xanthorox AI) promise unmonitored, highly customizable capabilities that could accelerate and scale malicious activity.
  • Agentic AI increases the likelihood of higher attack volume by enabling autonomous task execution (e.g., reconnaissance, phishing lure creation, script generation), though quality still often requires human refinement.
  • Defenders have a long history of applying AI—Symantec/Carbon Black’s Incident Prediction, trained on 500,000+ attack chains, is used to predict and disrupt attacker behavior, including living-off-the-land techniques.
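The telltale GenAI artifacts above (line-by-line comments, uniform variable naming) lend themselves to simple triage heuristics. The sketch below is a hypothetical scoring function, not a tool referenced in the research: it scores a script on comment density and on how uniformly its PowerShell-style variables follow one naming convention (camelCase is assumed here for illustration).

```python
import re

def genai_score(script: str) -> float:
    """Heuristic score in [0, 1] for GenAI-style script artifacts:
    high comment density plus uniform camelCase variable naming.
    Illustrative only; thresholds and weights are assumptions."""
    lines = [l.strip() for l in script.splitlines() if l.strip()]
    if not lines:
        return 0.0
    # Fraction of non-empty lines that are comments (line-by-line commenting)
    comment_density = sum(1 for l in lines if l.startswith("#")) / len(lines)
    # Fraction of $variables that follow a single camelCase convention
    variables = re.findall(r"\$(\w+)", script)
    if variables:
        camel = sum(
            1 for v in variables
            if re.fullmatch(r"[a-z]+(?:[A-Z][a-z0-9]*)+", v)
        )
        uniformity = camel / len(variables)
    else:
        uniformity = 0.0
    return round(0.5 * comment_density + 0.5 * uniformity, 2)
```

A script with a comment above nearly every line and consistently camelCased variables scores high, while a terse hand-written one-liner scores near zero; in practice such a score would only be one weak signal among many.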

MITRE Techniques

  • [T1566] Phishing – LLMs were used to generate phishing emails and convincing lures to deliver malware: “…phishing emails containing code used to download various payloads…”
  • [T1204] User Execution – Malicious scripts and lures rely on user interaction to run payloads: “…create a PowerShell script designed to gather system information and email it to them using a convincing lure.”
  • [T1059] Command and Scripting Interpreter – Generated PowerShell and other scripts were used to download and execute payloads: “…PowerShell script designed to gather system information…”
  • [T1105] Ingress Tool Transfer – Scripts generated by LLMs were used to download various payloads such as loaders and infostealers: “…code used to download various payloads, including Rhadamanthys, NetSupport, CleanUpLoader…”
  • [T1041] Exfiltration Over C2 Channel – Infostealers developed with LLM assistance targeted browsers to steal data, implying exfiltration channels: “…develop a fully functional infostealer for Google Chrome.”
  • [T1588] Obtain Capabilities – Attacker-controlled LLMs like Xanthorox AI provide unmonitored tools to obtain more advanced capabilities: “…Xanthorox AI, which promises its users an ‘unmonitored, and highly customizable AI experience.’”
  • [T1608] Stage Capabilities – Agentic AI can autonomously chain tasks to stage attacks such as reconnaissance, code creation, and persistence: “…an attacker could simply instruct one to ‘breach Acme Corp’ and the agent will determine the optimal steps before carrying them out.”
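The T1059/T1105 pattern above (a generated PowerShell script that downloads and executes a payload) can be triaged with simple command-line matching. This is a minimal defensive sketch, assuming a hypothetical rule set of common PowerShell download-cradle substrings; it is not exhaustive and not a rule set from the cited research.

```python
import re

# Hypothetical triage patterns for PowerShell download cradles
# (T1105 Ingress Tool Transfer via T1059 scripting); illustrative only.
CRADLE_PATTERNS = [
    re.compile(r"Invoke-WebRequest", re.I),
    re.compile(r"DownloadString|DownloadFile", re.I),
    re.compile(r"Start-BitsTransfer", re.I),
    re.compile(r"\biwr\b", re.I),
]

def flag_download_cradle(cmdline: str) -> bool:
    """Return True if a command line matches any known cradle pattern."""
    return any(p.search(cmdline) for p in CRADLE_PATTERNS)
```

Matching on substrings like these is trivially evadable (obfuscation, encoded commands), so in a real pipeline it would feed a broader behavioral model rather than serve as a blocking rule on its own.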

Indicators of Compromise

  • [Malicious payload names] examples observed in campaign delivery – Rhadamanthys, NetSupport (and CleanUpLoader/Broomstick, ModiLoader/DBatLoader, LokiBot, Dunihi/H-Worm).
  • [Tool names] attacker and research tools referenced – DeepSeek (attacked/abused LLM), Xanthorox AI (attacker-controlled LLM), ChatGPT Agent/Operator (used in agent testing).
  • [Malicious technique artifacts] script characteristics indicating GenAI generation – repeated line-by-line comments, uniform function/variable names (examples: LLM-generated PowerShell scripts and narrative-engineered infostealer code).

Read more: https://www.security.com/threat-intelligence/ai-whitepaper-research