Cybersecurity researchers have identified MalTerminal, the earliest known example of malware with Large Language Model (LLM) capabilities; it was potentially a proof-of-concept tool. Threat actors are increasingly leveraging AI models in malware development and phishing attacks, complicating defense strategies. #MalTerminal #LLMThreats
Key points
- MalTerminal is the earliest known malware with embedded LLM capabilities, using GPT-4 to generate malicious code.
- AI models are now integrated into malware tools like PROMPTSTEAL and PromptLock, enhancing their functionality.
- Threat actors are using AI to bypass email security through hidden prompts and sophisticated phishing campaigns.
- New attack techniques exploit vulnerabilities such as Follina (CVE-2022-30190) to drop malware and disable antivirus solutions.
- AI-powered hosting platforms are exploited for large-scale, cost-effective phishing and credential-stealing attacks.
Read More: https://thehackernews.com/2025/09/researchers-uncover-gpt-4-powered.html