WormGPT is an AI platform built specifically for scammers


But one expert says the real AI threats probably lie in the future

You may have heard of the artificial intelligence (AI) platform ChatGPT. Meet its evil twin, WormGPT. It was created by a hacker for sophisticated email phishing attacks.

Security researchers have gained access to the platform and asked it to produce fraudulent emails. They describe the results as “disturbing.”

In short, the researchers say WormGPT is very similar to ChatGPT. The main difference is that it doesn’t flinch when instructed to engage in illegal or unethical actions.

While some are sounding the alarm, Dominic Chorafakis, principal at cybersecurity firm Akouto, isn’t one of them. He says there is nothing special about the platform that changes the cybersecurity landscape any more than ChatGPT already has. The real trouble, he suggests, may lie ahead.

“I believe the advancements we are about to experience in AI will bring about changes that we can’t even imagine at this point,” Chorafakis told ConsumerAffairs.

“Having said that, even the most advanced tools available today cannot produce code as well as an intermediate software engineer, we’re just not there yet. A moderately experienced software engineer who wants to create malware can already use ChatGPT to develop sophisticated programs, you just have to know how to ask.”

In fact, Chorafakis says criminal organizations on the dark web have been offering malware as a service for many years. He says those services are far more capable than WormGPT or ChatGPT at creating malware – at least for now.

“The threat is that WormGPT does not even try to object to malicious requests, so you can ask it ‘help me develop malware in C# that can bypass anti-virus’ and it will not apply any ethical considerations before providing a response, where ChatGPT would refuse,” Chorafakis said.  

You just have to know how to ask

A cleverly worded malicious request can get the same results from ChatGPT. It will comply, not knowing what the real purpose is.

“Similarly, if you ask WormGPT to write an email to a CFO to trick him into clicking on a malicious link, it will happily do that where ChatGPT would refuse,” he said.  “With all this in mind, I believe that WormGPT is likely less dangerous than more advanced models that are legitimately available on the market.”

Still, the emergence of a malicious AI platform should be a warning to all. Scam emails may be harder to spot from now on. Paying close attention to any unsolicited communication and treating it with healthy skepticism is now more important than ever.
