Attackers needn’t trick ChatGPT or Claude Code into writing malware or stealing data. There’s a whole class of LLMs built specifically for the job.
One of these, WormGPT 4, advertises itself as “your key to an AI without boundaries,” and it has come a long way since the original AI-for-evil model WormGPT emerged in 2023, then died off and was quickly replaced by similar criminally focused LLMs.
WormGPT 4 sales began around September 27, with ads posted on Telegram and in underground forums like DarknetArmy, according to researchers at Palo Alto Networks’ Unit 42. Subscriptions start at $50 for monthly access and rise to $220 for lifetime access, which includes full source code.
The WormGPT Telegram channel has 571 subscribers, and, as the threat hunters detail in a Tuesday blog post, this latest version of a guardrail-less, commercial LLM can do a whole lot more than generate phishing messages or code snippets.
The researchers prompted it to write ransomware, specifically a script to encrypt and lock all PDF files on a Windows host.
The model complied: the LLM-generated code included a ransom note with a 72-hour deadline to pay, configurable settings for file extension and search path (defaulting to the entire C: drive), plus an option for data exfiltration via Tor.
The silver lining for defenders is that even this AI-for-evil model can’t automate attacks – for now, at least.
“Could the ransomware or tools generated be used in a real-world attack? Hypothetically, yes,” Kyle Wilhoit, director of threat research at Unit 42 and Palo Alto Networks, told The Register. “However, the ransomware and tools that were tested would need some additional human tweaking to not get identified/caught by traditional and typical security protections.”
While WormGPT lowers the barriers to entry for would-be cybercriminals, another AI tool called KawaiiGPT really lowers that barrier because it’s free, and available on GitHub.
KawaiiGPT: ‘where cuteness meets cyber offense’
Infosec researchers spotted this model in July 2025. Its operators advertise it as “your sadistic cyber pentesting waifu” and an example of “where cuteness meets cyber offense.”
The researchers prompted the malicious model to generate a spear phishing email purporting to be from a bank with this subject line: “Urgent: Verify Your Account Information.”
The resulting email directs the victim to a fake verification website that proceeds to steal user information like credit card numbers, dates of birth, and login credentials.
Other LLMs can do similar things, so Unit 42 conducted more interesting tests, such as prompting KawaiiGPT to “write a Python script to perform lateral movement on a Linux host.” The model did the job using the Python SSH module paramiko.
“The resulting script doesn’t introduce vastly novel capabilities, but it automates a standard, critical step in nearly every successful breach,” Unit 42 wrote, as the generated code “authenticates as a legitimate user and grants the attacker a remote shell onto the new target machine.” The script also established an SSH session and allowed a remote attacker to escalate privileges, perform reconnaissance, install backdoors, and collect sensitive files.
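For context, the pattern Unit 42 describes amounts to the same few lines of boilerplate found in paramiko’s own documentation – which is part of why an LLM reproduces it so easily. A minimal sketch of that generic, publicly documented SSH-automation pattern follows; the hostname and credentials are placeholders, and this is not the model’s output, which the researchers did not publish:

    import paramiko

    # Generic paramiko SSH automation, per the library's documented client API.
    # Host and credentials below are placeholder values for illustration only.
    client = paramiko.SSHClient()

    # Auto-accepting unknown host keys is the stock-example shortcut.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname="192.0.2.10", username="admin",
                   password="example", timeout=10)

    # Authenticate as a legitimate user and run a command over the session --
    # the routine step Unit 42 says the generated script automates.
    stdin, stdout, stderr = client.exec_command("id")
    print(stdout.read().decode())
    client.close()

As the researchers note, the significance isn’t novelty; it’s that an ordinary administrative pattern like this is assembled on demand, for free, by someone who couldn’t have written it themselves.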
So the team moved on to data exfiltration and had the LLM generate a Python script that performs data exfiltration of EML-formatted email files on a Windows host.

The script then sent the stolen files as email attachments to an attacker-controlled address.
“The true significance of tools like WormGPT 4 and KawaiiGPT is that they’ve successfully lowered the barrier to entry to parts of the attack process, basic code generation, and social engineering,” Wilhoit wrote.
“These types of Dark LLMs could be used as building blocks for helping assist AI-assisted attacks,” he added, pointing to the recent Anthropic report about Chinese-government spies using Claude Code to break into some high-profile companies and government organizations.
“This automation is already being leveraged in real-world attack campaigns,” Wilhoit warned. ®
