WormGPT: What you need to know about the cybercriminal's answer to ChatGPT

Should we be concerned about a malicious cousin to ChatGPT? Here’s everything you need to know.

When ChatGPT was made available to the public on November 30, 2022, the AI chatbot took the world by storm. 

The software was developed by OpenAI, an AI research company. ChatGPT is a natural language processing tool that can answer queries and provide information based on data gleaned from sources including books and online web pages, and it has since become a valued tool for on-the-fly information gathering, analysis, and writing tasks for millions of users worldwide. 

Also: What is ChatGPT and why does it matter? Here’s what you need to know

While some experts believe the technology could prove as disruptive as the internet itself, others note that ChatGPT demonstrates ‘confident inaccuracy.’ Students in droves have been caught plagiarizing coursework by way of the tool, and unless datasets are verified, tools such as ChatGPT could become unwitting vehicles for spreading misinformation and propaganda. 

Indeed, the US Federal Trade Commission (FTC) is investigating OpenAI over its handling of personal information and the data used to create its language model. 

Beyond data protection concerns, however, whenever a new technological innovation is made, so are pathways for abuse. It was only a matter of time before the AI chatbot was emulated for malicious purposes — and one such tool is now on the market, known as WormGPT.

What is WormGPT?

On July 13, researchers from cybersecurity firm SlashNext published a blog post revealing the discovery of WormGPT, a tool being promoted for sale on a hacker forum.

According to the forum user, the WormGPT project aims to be a blackhat “alternative” to ChatGPT, “one that lets you do all sorts of illegal stuff and easily sell it online in the future.”

Also: Scammers are using AI to impersonate your loved ones. Here’s what to watch out for

SlashNext gained access to the tool, described as an AI module based on the GPT-J language model. WormGPT has allegedly been trained with data sources including malware-related information — but the specific datasets remain known only to WormGPT’s author. 

It may be possible for WormGPT to generate malicious code, for example, or convincing phishing emails. 

What is WormGPT being used for?

WormGPT is described as “similar to ChatGPT but has no ethical boundaries or limitations.”

ChatGPT has a set of rules in place to try to stop users from abusing the chatbot. These include refusing to complete tasks related to criminality and malware. However, users are constantly finding ways to circumvent these limitations.

The researchers were able to use WormGPT to “generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.” The team was surprised at how well the language model managed the task, branding the result “remarkably persuasive [and] also strategically cunning.”

Also: Gmail will help you write your emails now: How to access Google’s new AI tool

While the researchers didn’t say whether they tested WormGPT’s malware-writing capabilities, it is plausible that the AI bot could generate malicious code, given that it lacks ChatGPT’s restrictions. 

Posts viewed by ZDNET on a Telegram channel, reportedly launched to promote the tool, indicate that the developer is creating a subscription model for access, priced from $60 to $700. A channel member, “darkstux,” alleges that WormGPT already has over 1,500 users.

Is WormGPT the same as ChatGPT?

No. ChatGPT has been developed by OpenAI, a legitimate and respected organization. WormGPT is not their creation and is an example of how cybercriminals can take inspiration from advanced AI chatbots to develop their own malicious tools. 

Will we see more tools like WormGPT in the future?

Even in the hands of novices and typical scammers, natural language models could turn basic, easily avoided phishing and BEC scams into sophisticated operations more likely to succeed. There’s no doubt that where money is to be made, cybercriminals will pursue the opportunity — and WormGPT is only the start of a new range of cybercriminal tools set to be traded in underground markets. 

Also: 6 skills you need to become an AI prompt engineer

It’s also unlikely that WormGPT is the only one out there. 

What do regulators say about the abuse of AI tools?

  • Europol: Europol said in the 2023 report, “The impact of Large Language Models on Law Enforcement,” that “it will be crucial to monitor […] development, as dark LLMs (large language models) trained to facilitate harmful output may become a key criminal business model of the future. This poses a new challenge for law enforcement, whereby it will become easier than ever for malicious actors to perpetrate criminal activities with no necessary prior knowledge.”
  • Federal Trade Commission: The FTC is investigating ChatGPT maker OpenAI over data usage policies and inaccuracy.
  • UK National Crime Agency (NCA): The NCA warns that AI could prompt an explosion in risk to young people and abuse.
  • UK Information Commissioner’s Office (ICO): The ICO has reminded organizations that their AI tools are still bound by existing data protection laws.

Can ChatGPT be used for illegal purposes?

A ChatGPT phishing email prompt. Screenshot via Charlie Osborne | ZDNET

Not without covert tactics, but with the right prompts, many natural language models can be persuaded to perform particular actions and tasks. 

ChatGPT, for example, can draft professional emails, cover letters, resumes, purchase orders, and more. This alone can remove some of the most common indicators of a phishing email: spelling mistakes, grammar issues, and signs that the sender is not writing in their first language. That could cause a headache for businesses attempting to detect suspicious messages and train their staff to recognize them.  

SlashNext researchers say, “cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalized to the recipient, thus increasing the chances of success for the attack.”

Also: 7 advanced ChatGPT prompt-writing tips you need to know

For step-by-step instructions on using ChatGPT for legitimate purposes, check out ZDNET’s guide on how to start using ChatGPT.

How much does ChatGPT cost?

ChatGPT is free to use. The tool can be used to answer general queries, write content and code, or generate prompts for everything from creative stories to marketing projects. 

There is also a subscription option, ChatGPT Plus, which costs $20 per month and provides users with access to ChatGPT even during peak times, faster response times, and priority access to improvements and fixes.

Also: How to access, install, and use AI ChatGPT-4 plugins (and why you should)
