Keypoints
- Researchers used unmodified, publicly available LLMs, multimodal image models, and text-to-speech systems to simulate realistic attacker capabilities.
- Targeted deepfakes (audio/video) are predicted to be a primary malicious AI use, enabling executive impersonation from training clips of under one minute.
- AI lowers the cost and technical barrier for influence operations, including cloning websites and producing fake media outlets.
- Generative models can alter source code to help malware evade detection, though maintaining functionality after obfuscation is difficult.
- Multimodal AI can analyze public imagery for reconnaissance (e.g., identifying facilities or industrial assets), but human analysis is still needed to produce actionable intelligence.
- Creating believable spoofs and bypassing live-consent protections remain practical challenges, requiring human-in-the-loop refinement or additional techniques.
- Organizations should treat executive likenesses, website branding, and public imagery as part of the attack surface and prepare for stealthier AI-enabled threats.
MITRE Techniques
- [T1566] Phishing – Deepfakes and cloned media were used to craft social-engineering content for influence operations: ‘Deepfakes can be generated to impersonate executives using open-source tools…’
- [T1027] Obfuscated Files or Information – Generative AI was used to modify source code to evade detection, increasing obfuscation: ‘Generative AI can help malware evade detection by altering source code, though maintaining functionality post-obfuscation remains a challenge.’
- [T1595] Active Scanning – Multimodal models assisted reconnaissance by processing public imagery to locate vulnerable or sensitive facilities: ‘Multimodal AI can process public imagery for reconnaissance purposes…’
- [T1204] User Execution – AI-created audio/video deepfakes and spoofed websites are intended to induce victims to act (e.g., follow instructions or click links): ‘AI-generated audio and video can enhance social engineering campaigns.’
- [T1497] Virtualization/Sandbox Evasion – The report anticipates self-augmenting malware that adapts to avoid detection and sandboxing, requiring stealthier detection methods: ‘…self-augmenting malware that evades detection, necessitating stealthier detection methods.’
Indicators of Compromise
- [URL] Report download – https://go.recordedfuture.com/hubfs/reports/cta-2024-0319.pdf (report PDF describing experiments)
- [Domain] Source / analysis pages – recordedfuture.com, go.recordedfuture.com
Recorded Future’s Insikt Group conducted hands-on experiments using unmodified, off-the-shelf LLMs, multimodal image models, and text-to-speech systems to simulate what realistically available attacker resources could achieve. They demonstrated that deepfake creation workflows can produce convincing executive impersonations from under one minute of target audio or video used for model training, and that open-source tools already lower the barrier for producing both audio and video fakes. These capabilities enable more convincing social-engineering artifacts (calls, clips, and spoofed media) that attackers can pair with cloned websites and fabricated outlets for influence operations.
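To make the workflow concrete, here is a minimal sketch of the kind of open-source voice-cloning pipeline the report describes, using the Coqui TTS library's XTTS v2 model as one publicly available option; the reference clip, script text, and file names are illustrative assumptions, not the report's actual setup.

```python
# Sketch of an open-source voice-cloning pipeline of the kind the report
# tested; uses the Coqui TTS library (pip install TTS). The reference clip
# "executive_sample.wav" and the script text are hypothetical placeholders.
from TTS.api import TTS

# XTTS v2 supports zero-shot voice cloning from a short reference recording,
# on the order of the sub-minute samples the experiments used.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="This is a test sentence rendered in the cloned voice.",
    speaker_wav="executive_sample.wav",  # short recording of the target voice
    language="en",
    file_path="cloned_output.wav",
)
```

The zero-shot pattern is why short public clips of executives are sufficient input: no per-target fine-tuning step is required.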
On the malware side, researchers used generative models to automatically transform source code in an attempt to evade detection, illustrating how AI-assisted obfuscation can complicate signature and heuristic detection, though it often breaks functionality and therefore requires iterative human refinement. They also evaluated multimodal pipelines that analyze public imagery to locate assets or industrial sites, showing that AI can accelerate reconnaissance (e.g., identifying likely targets or facility layouts) but typically needs human analysis to convert results into precise operational plans. Across cases, successful attacks frequently relied on human-in-the-loop steps, either to polish spoofed content for believability or to validate obfuscated malware, while technical obstacles such as live-consent bypasses and post-obfuscation stability remain nontrivial.
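On the detection side of the obfuscation finding, here is a minimal sketch (not from the report) of why exact signatures fail against AI-rewritten variants while fuzzy hashing retains a weak signal; it assumes the python-ssdeep binding and two hypothetical sample files.

```python
# Sketch (not from the report): any byte-level rewrite changes an exact hash
# completely, defeating signature lookups, while a fuzzy hash such as ssdeep
# can still score similarity between the original and a rewritten variant.
# Assumes the python-ssdeep binding (pip install ssdeep) and made-up files.
import hashlib
import ssdeep

original = open("sample_original.py", "rb").read()
rewritten = open("sample_rewritten.py", "rb").read()  # AI-transformed variant

# Exact matching: the digests differ entirely after any transformation.
print(hashlib.sha256(original).hexdigest() == hashlib.sha256(rewritten).hexdigest())

# Fuzzy matching: returns a 0-100 similarity score; functionally equivalent
# rewrites often still score above zero, giving defenders a weak signal.
score = ssdeep.compare(ssdeep.hash(original), ssdeep.hash(rewritten))
print(f"ssdeep similarity: {score}")
```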
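For the imagery-reconnaissance finding, the following sketch shows what an off-the-shelf multimodal query looks like, using the OpenAI Python SDK as one example; the model name, prompt, and image URL are assumptions for illustration, not the pipeline the researchers actually ran.

```python
# Sketch of a multimodal image-analysis query of the kind used in the
# reconnaissance experiments; uses the OpenAI Python SDK (pip install openai).
# Model name, prompt, and image URL are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe the buildings and equipment visible in this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/public-facility-photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

As the report notes, output like this accelerates triage of public imagery but still requires a human analyst to turn descriptions into actionable intelligence.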
Defensive implications include treating executive likenesses, website branding, and publicly posted imagery as exploitable attack surface, preparing detection strategies for adaptive/self-augmenting malware behavior, and accounting for lower-cost, higher-fidelity influence capabilities enabled by widely available generative tools.
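One concrete control implied by the branding point is monitoring lookalike domains for reuse of brand assets; the sketch below compares favicon bytes between a legitimate domain and a suspect one. The domains are hypothetical, and a production program would add perceptual hashing, certificate transparency, and WHOIS telemetry.

```python
# Sketch of a brand-cloning check: fetch favicons from a legitimate domain
# and a suspected lookalike and compare their digests. Domains are made up;
# exact-byte matching only catches clones that copy assets verbatim.
import hashlib
import requests

def favicon_digest(domain: str) -> str:
    resp = requests.get(f"https://{domain}/favicon.ico", timeout=10)
    resp.raise_for_status()
    return hashlib.sha256(resp.content).hexdigest()

legit = favicon_digest("example-corp.com")    # your organization's domain
suspect = favicon_digest("example-c0rp.com")  # hypothetical lookalike domain

if legit == suspect:
    print("Suspect site serves an identical brand asset: likely a clone.")
```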
Read more: https://www.recordedfuture.com/adversarial-intelligence-red-teaming-malicious-use-cases-ai