What Happened to WormGPT and What Are Cybercriminals Using Now?

Discover how AI tools like WormGPT changed cybercrime, why they vanished, and what cybercriminals are using now.
November 26, 2024

The rise of AI has changed almost every industry—and cybercrime is no exception. On June 28, 2023, a user on a popular hacking forum introduced a new tool called WormGPT. Unlike mainstream AI tools like ChatGPT, which have strict ethical rules, WormGPT was marketed as a no-limits alternative built specifically for cybercriminals. It quickly caught on in underground circles, promising complete freedom to explore any topic without restrictions or blocks.

The Popularity of AI and the 2023 Introduction of WormGPT

WormGPT’s features were practical and aimed at supporting harmful activities. It offered lightning-fast responses and unlimited message length, making it easy for users to communicate and operate without interruption. Privacy was also a big selling point, with promises of secure and confidential conversations to protect user identities.

Users could pick from different AI models tailored for general or specialized uses, and they could save and revisit conversations whenever they wanted. The tool even offered advanced beta features, like “context memory” for smoother, ongoing chats and “coding formatting” to help organize code in a way that was easy to read and use.

[Image: Official cybercrime forum advertisement for WormGPT]

The tool proved effective at generating malware snippets, phishing campaigns, and more, and testimonials in the forum thread backed this up, with users praising its capabilities. As WormGPT gained traction, media outlets began to take notice, and soon more than 100 news websites had covered it. Headlines like “ChatGPT’s Evil Twin WormGPT is Secretly Entering Emails, Raiding Banks” brought the story into the mainstream.

With all this publicity, the tool’s author began responding, claiming WormGPT was intended for ethical use only. That claim didn’t add up, given the tool’s marketing on a cybercrime forum and the blatant blackhat messaging in its ads.

[Image: Closure message from the author of WormGPT]

Eventually, the author ceased sales of WormGPT and posted a message on the forum, attempting to deflect any liability by distancing themselves from the tool’s misuse. Despite these efforts, WormGPT’s rapid rise and controversial marketing left a lasting impression on the cybercrime world.

The Move From WormGPT to FraudGPT and DarkBERT

After WormGPT faded and became harder to purchase, a new AI tool called FraudGPT emerged, quickly filling the gap left behind. First observed on July 25, 2023, FraudGPT was likewise marketed as a tool designed specifically for cybercriminals, offering features for crafting phishing emails, generating malicious code, and even providing hacking tutorials. A user operating under the alias "CanadianKingpin12" advertised FraudGPT on various cybercrime forums as a successor to WormGPT.

[Image: Example of FraudGPT’s interface]

Interestingly, some speculated that WormGPT and FraudGPT might have been created by the same group, as they shared a similar set of capabilities and marketing styles. However, threads promoting the sale of FraudGPT were frequently taken down from major cybercrime forums for policy violations, forcing the creator to shift promotion to less-moderated platforms like Telegram.

[Image: Error message shown when viewing the deleted FraudGPT thread]

To broaden FraudGPT’s appeal, the creator claimed it had advantages over WormGPT and even hinted at new AI bots in development, such as “DarkBERT” and “DarkBART,” which would reportedly integrate internet access and Google Lens for image-based responses. But despite these attempts to keep the tool in circulation, most of FraudGPT’s sales threads have since disappeared, suggesting a decline in its availability.

The Shift to Scam-Driven AI Models and Imitations

Around the time FraudGPT began losing visibility, other variants started popping up—including Escape GPT, Evil GPT, and Wolf GPT. However, many of these tools quickly raised suspicions within the cybercrime community. Allegations emerged that these so-called "new AI tools" were simply jailbroken versions of ChatGPT with added wrappers to make them appear as standalone products.

[Image: Forum discussions expressing skepticism about WormGPT variants]

This suspicion was fueled by the behavior of some of these variants. For example, when users attempted to engage them for illegal activities, they’d receive responses like, “I’m unable to perform illegal activities,” a clear indicator that the tool was subject to ChatGPT’s built-in ethical limitations. These imitations were more fraud than innovation, often simply using minor modifications to bypass ChatGPT’s restrictions while still relying on its core model.
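To see why those refusal messages were such a giveaway, it helps to picture how thin these wrappers are believed to be. The sketch below is a hypothetical reconstruction in Python based only on the behavior described above; the endpoint, model name, key, and function are invented placeholders, and the jailbreak text itself is deliberately omitted. The point is that the entire “product” amounts to a forwarding layer around a legitimate chat-completion API.

import requests

# Hypothetical reconstruction of a "wrapper" tool. Every name here is an
# illustrative placeholder, not recovered source code.
API_URL = "https://api.example-llm-provider.com/v1/chat/completions"  # assumed endpoint
API_KEY = "sk-..."  # the reseller's own key to a legitimate service

SYSTEM_PREAMBLE = "[jailbreak prompt omitted]"  # the only thing the "product" adds

def wrapped_chat(user_message: str) -> str:
    """Forward the buyer's message to the legitimate model behind the scenes."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "general-purpose-chat-model",  # placeholder model name
            "messages": [
                {"role": "system", "content": SYSTEM_PREAMBLE},
                {"role": "user", "content": user_message},
            ],
        },
        timeout=30,
    )
    # When the jailbreak fails, the upstream model's refusal ("I'm unable to
    # perform illegal activities") comes back to the buyer verbatim, which is
    # exactly the tell that exposed these tools as rebranded wrappers.
    return response.json()["choices"][0]["message"]["content"]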

Cybercriminals soon realized that many of these tools weren’t genuine alternatives but quick cash-grabs. This wave of scams, imitations, and unreliable tools highlighted the challenges for threat actors in finding stable, malicious AI models, ultimately pushing some users back toward underground communities in search of more consistent, reliable options.

[Image: Post from a scammed user who attempted to purchase a FraudGPT alternative]

Interestingly, while many of these imitation tools failed to deliver genuine AI-powered hacking capabilities, the concept of using wrappers to connect to jailbroken instances sparked a massive wave of interest. This approach opened the door for cybercriminals and curious users alike, creating hype and fueling thousands of discussions across the internet.

These conversations centered on sharing prompts, tips, and methods for jailbreaking legitimate AI bots like ChatGPT and Claude, pushing users to find creative ways to bypass the ethical restrictions set by these platforms.

[Image: Active discussions about AI jailbreak prompts]

Today, rather than relying on dedicated blackhat AI tools, many in the cybercrime community are focused on manipulating existing AI tools. These discussions and shared methods have formed a sort of subculture around jailbreaking, where users experiment with modifications and explore prompt engineering techniques to enable restricted functionalities.

What Does the Future Look Like With AI-Based Cybercrime?

Since WormGPT’s debut, we’ve seen occasional “evil AI models” pop up, but, as mentioned above, most of these are either jailbroken instances of legitimate AI bots or outright scams. WormGPT didn’t just make headlines; it launched an entire trend of deceptive AI variants. Yet, it remains the only real large language model (LLM) developed specifically for cybercriminal purposes.

Looking ahead, it’s possible that someone could eventually create another genuinely “evil” chatbot powered by its own LLM, much like WormGPT. For now, though, that isn’t what we’re seeing in the landscape. Instead, the focus has largely shifted toward exploiting existing AI systems, like ChatGPT and Claude, through jailbreaks, wrappers, and prompt-engineering tricks.

[Image: Active discussions of AI chatbot hacking methods]

While WormGPT sparked interest in AI-powered cybercrime, the reality is that few true successors have emerged, and much of the activity revolves around repurposing legitimate AI models rather than building dedicated blackhat tools from scratch.

Protect Your Organization From Future AI-Powered Cyberattacks

Regardless of which tools cybercriminals use to exploit AI, the bottom line is that this technology empowers them to easily craft sophisticated attacks that bypass traditional security solutions. The data indicates that AI-powered threats will only increase in volume and complexity, and a growing number of cybersecurity experts agree that the only way to combat malicious AI is with defensive AI.

Abnormal’s AI-native solution utilizes behavioral data to understand the communication patterns and processes of every employee and vendor across your organization. This allows it to accurately detect high-risk anomalies and then automatically remediate email threats that legacy systems miss—preventing end-user engagement and keeping your organization safe.

Want to see how Abnormal can catch these advanced attacks before they reach your team? Schedule a demo today—acting now can make all the difference.
