Hackers Developing Malicious LLMs After WormGPT Falls Flat

Summary: Cybercrooks are developing custom, malicious large language models after existing tools failed to meet their needs for advanced intrusion capabilities.

Key Points:
🔒 Hackers are exploring ways to bypass guardrails put in place by AI-powered chatbots.
🤖 Demand for AI talent has surged among ransomware and malware operators.
🛡 Crooks are hiring AI experts to exploit private GPTs and create malicious models.
🔍 Threat actors are using generative AI to develop malware and evade detection tools.
🌐 Multimodal AI could be used to extract information on industrial control systems.
🎭 Threat actors without advanced skills may use AI for deepfakes and disinformation.
📧 Access to AI resources remains a constraint for lower-end threat actors.

——————–

Cybercrooks want to make their own AI models. (Image: Shutterstock)

Cybercrooks are exploring ways to develop custom, malicious large language models after existing tools such as WormGPT failed to cater to their demands for advanced intrusion capabilities, security researchers said.


Underground forums teem with hackers' discussions about how to bypass the guardrails put in place by artificial intelligence-powered chatbots developed by OpenAI and Google, such as ChatGPT and Gemini, said Etay Maor, senior director of security strategy at Cato Networks.

In one case observed by researchers on Telegram, a Russian-speaking threat actor who goes by the name Poena posted an advertisement recruiting AI and machine learning experts to develop malicious LLM products.

This trend is also being observed among ransomware and other malware operators, Maor said.

Demand for AI talent has surged, especially after existing custom tools advertised in underground markets, such as WormGPT, XXXGPT and WolfGPT, failed to meet threat actors' needs (see: Criminals Are Flocking to a Malicious Generative AI Tool).

"WormGPT in their original advertisement showed how they wrote a keylogger, but then if you look at it, the code is pretty lame and the criminals found out very fast that it's not all that it was hyped up to be," Maor said.

Crooks are looking to hire AI experts who can exploit private GPTs built on platforms such as OpenAI's ChatGPT and Google's Bard, jailbreaking the restrictions put in place by the application developers to create malicious GPTs, he said.

"They're looking for things that will help them with generating code that actually does what you're supposed to do and doesn't have all hallucinations," Maor said.

A March 19 report by Recorded Future highlights threat actors using generative AI to develop malware and exploits. The report identifies four malicious use cases for AI that do not require fine-tuning the model.

The use cases include using AI to help malware evade detection tools that rely on publicly available YARA rules to identify and classify malicious code.

"These publicly available rules also serve as a double-edged sword," the report said. "While they are intended to be a resource for defenders to enhance their security measures, they also provide threat actors with insights into detection logic, enabling them to adjust their malware characteristics for evasion."
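To illustrate the defensive side of that trade-off, here is a minimal sketch of how a defender might apply a YARA rule to a file using the yara-python package. The rule and the file path are hypothetical examples made up for illustration, not rules from the Recorded Future report.

```python
# Minimal sketch: scanning a file with a YARA rule via the yara-python package.
# The rule below is a simplified, hypothetical example of the kind of publicly
# visible detection logic the report describes.
import yara

RULE_SOURCE = r"""
rule Suspicious_Keylogger_Strings
{
    strings:
        $hook = "SetWindowsHookEx" ascii nocase
        $keys = "GetAsyncKeyState" ascii nocase
    condition:
        any of them
}
"""

def scan_file(path: str) -> list[str]:
    """Compile the rule and return the names of any rules that match the file."""
    rules = yara.compile(source=RULE_SOURCE)
    return [match.rule for match in rules.match(filepath=path)]

if __name__ == "__main__":
    # "sample.bin" is a placeholder path to a file under analysis.
    print(scan_file("sample.bin"))
```

Because the strings and conditions in such rules are readable by anyone, the same logic that lets a defender flag a sample also tells an attacker which artifacts to change.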

To demonstrate the technique, Recorded Future's researchers submitted the source code of SteelHook, a PowerShell infostealer used by APT28, to an LLM system and prompted the system to modify the code to evade detection.

Threat actors are also likely to use multimodal AI to sort through large troves of intelligence data to extract information on a specific industrial control system and determine potential vulnerabilities in the equipment, Recorded Future said.

That would require significant computing power, the researchers said, and such capabilities are likely limited to advanced nation-state actors. Threat actors without those skills are likely to use AI to generate deepfakes and run disinformation campaigns, they said.

An earlier report from the U.K. National Cyber Security Centre contains similar observations. It says access to AI resources remains a constraint for threat actors at the lower end of the cybercrime ecosystem, limiting their operations to creating phishing emails at this point (see: Large Language Models Won't Replace Hackers).

Although makers of LLM applications such as ChatGPT have improved their guardrails by limiting activities such as generating malicious code and crafting phishing emails, Maor said, private GPTs need more preventive measures because they are more prone to data leaks.

Such measures include requiring a "stamp of approval" before the model generates output for particular use cases and validating every use case on the platform, he added.
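As a rough sketch of what such a gate could look like, the hypothetical example below checks a declared use case against an approval registry before forwarding a prompt to a private GPT. The use-case names, approval flags and the forwarding function are assumptions made up for illustration, not any vendor's API.

```python
# Hypothetical "stamp of approval" gate in front of a private GPT deployment.
# Use-case names, approvals and forward_to_private_gpt are illustrative
# placeholders, not part of any real platform's API.
APPROVED_USE_CASES = {
    "summarize_internal_policy": True,
    "draft_customer_reply": True,
    "generate_source_code": False,  # still pending security review
}

def forward_to_private_gpt(prompt: str) -> str:
    """Placeholder for the call to the organization's private GPT."""
    return f"(model response to: {prompt!r})"

def gate_request(use_case: str, prompt: str) -> str:
    """Only forward prompts whose declared use case has been approved."""
    if not APPROVED_USE_CASES.get(use_case, False):
        return f"Blocked: use case '{use_case}' has not been approved."
    return forward_to_private_gpt(prompt)

if __name__ == "__main__":
    print(gate_request("generate_source_code", "write a file parser"))
    print(gate_request("draft_customer_reply", "thank the customer for the report"))
```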

Recorded Future recommends using multilayered and behavioral malware detection capabilities to identify polymorphic strains developed using AI. It also recommends deploying appropriate branding on publicly available content.
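As a loose illustration of the behavioral layer, the hypothetical sketch below scores a process on what it does rather than what its bytes look like, which is the property that survives AI-driven polymorphism. The event names, weights and alert threshold are assumptions made up for this example, not Recorded Future's methodology.

```python
# Hypothetical behavior-based check to complement static signatures: polymorphic
# code changes its bytes, but behaviors such as persistence, credential access
# and beaconing are harder to disguise. All values here are illustrative.
from typing import Iterable

SUSPICIOUS_BEHAVIORS = {
    "registry_run_key_added": 3,      # persistence
    "browser_credential_read": 4,     # credential access, e.g. infostealers
    "outbound_to_new_domain": 2,      # possible command-and-control beacon
    "powershell_encoded_command": 3,  # obfuscated scripting
}

ALERT_THRESHOLD = 5

def behavior_score(events: Iterable[str]) -> int:
    """Sum the weights of observed suspicious behaviors."""
    return sum(SUSPICIOUS_BEHAVIORS.get(event, 0) for event in events)

def should_alert(events: Iterable[str]) -> bool:
    """Flag the process once its combined behavior score crosses the threshold."""
    return behavior_score(events) >= ALERT_THRESHOLD

if __name__ == "__main__":
    observed = ["powershell_encoded_command", "browser_credential_read"]
    print(should_alert(observed))  # True: 3 + 4 = 7, above the threshold of 5
```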

Source: https://www.healthcareinfosecurity.com/hackers-developing-malicious-llms-after-wormgpt-falls-flat-a-24724

