FraudGPT: The AI Tool Cybercriminals Are Using to Outsmart Companies

Malicious versions of AI models like ChatGPT are now accessible on the Dark Web, with subscription prices starting at around $200 per month. These tools are used by cybercriminals to generate phishing emails, malicious scripts, and other materials for scams targeting both individuals and businesses.

Generative AI went mainstream at the end of 2022 with the launch of ChatGPT, and it’s here to stay. But its “dark side” has grown just as quickly. AI-powered scams rose throughout 2023 and have continued climbing into 2025, even as the EU introduces regulations for these technologies.

A recent report by the IJSR CSEIT highlights FraudGPT as a key milestone in cybercrime. This subscription-based AI is capable of efficiently generating phishing emails and fake websites. FraudGPT was first detected by Netenrich in July 2023 through the Dark Web and Telegram channels.

Another similar tool, WormGPT, simplifies phishing email creation. WormGPT is derived from GPT-J, an open-source model developed by EleutherAI in 2021, comparable to GPT-3 in capabilities.

FraudGPT: The Evil Twin of ChatGPT

FraudGPT is not free like OpenAI’s ChatGPT. Cybercriminals pay around $200 per month, or $1,700 annually, for access. This “evil twin” of ChatGPT can write malicious code, generate malware marketed as undetectable, and create phishing pages, hacking tools, and scam emails.

Trustwave ran comparative tests between FraudGPT and ChatGPT and found notable differences. While ChatGPT can be coaxed into generating code and phishing emails with careful prompting, it appends warnings, and the resulting emails are generally less convincing. FraudGPT, by contrast, produces highly persuasive malicious content without such guardrails.

WormGPT is cheaper, with versions ranging from $60/month to $550/year. Its advanced v2 edition can be customized for $5,000.

AI-Generated Images, Voice, and Video

The most convincing scams are the most successful. AI is used to create synthetic audio, images, and video, enhancing techniques like pig butchering, where attackers build trust over time before scamming victims.

Scammers are also using AI to impersonate government officials or celebrities in video calls and social media, often promoting fake cryptocurrency investments. Deepfakes and manipulated media are increasingly part of these campaigns.

As Zac Amos from ReHack commented: “FraudGPT is a stark reminder that cybercriminals will continually evolve their methods to maximize impact.”

Written by:

Valeria Contreras
