Just as the world begins to process the potential impact of the artificial intelligence (AI) platform ChatGPT, news arrives of another AI breakthrough, one that could have scary implications.
According to London’s Daily Star, Microsoft engineers have produced AI that can reproduce any person’s voice after listening to just three seconds of audio. Called VALL-E, the technology can literally put words in your mouth.
Once the technology has your voice down pat, it can read a text script and produce speech that reportedly comes very close to the actual person’s tone and pacing. At first glance, the potential for criminal misuse seems enormous.
Take the grandparent scam, which usually targets victims at random. A caller claims to be a grandchild in trouble who needs cash.
The scam isn’t always effective because the voice, though disguised, isn’t close enough to the actual grandchild’s. But suppose a criminal were able to capture a few seconds of speech from a young person, create dialogue, then find the person’s family and target them with the scam. It could be much more effective.
Experts voice concerns
Sandy Fliderman, chief technology officer (CTO) at Industry FinTech, says there is no question that the potential misuse of AI is a big concern because consumers can be exploited in several ways.
“One of the ways that work too well is the deep fakes that are out there,” Fliderman told ConsumerAffairs. “Deep fakes are AI-generated videos and voices that impersonate real people like celebrities, politicians, and in recent cases, bankers.”
Fliderman says AI could also be used to hack passwords, as it learns more about individuals through social media. That will increase the need for stronger passwords.
“The potential for real abuse exists,” Arun Bahl, CEO of Aloe AI, told us. “ChatGPT writes with a tone of confidence that is completely independent of the accuracy of its words. At its core, it's a language model – not a knowledge model. We need to do a better job of making that distinction clear.”
Threat to children
Richard Gardner, CEO at Modulus, told us that there have already been reports of ChatGPT being used for nefarious purposes. Its ability to emulate particular styles of speech, he said, makes it useful to bad actors trying to impersonate young people.
“This could be used by child predators to create a more authentic rapport with children,” Gardner said. “Equally disturbing are reports that the technology has been utilized to create malware by hackers in order to engage in cybercrime.”
Mihae Ahn, vice president of marketing at ProServe IT, says the explosive growth of AI may pose security problems for companies that handle large amounts of data.
“Any new technology can compromise the safety of consumers when used with the wrong intent,” Ahn said. “The smarter the technology becomes, the bigger risks there are if no guardrail is put in place.”
To protect themselves, Fliderman says, people will have to be diligent. If something looks “weird,” he advises, stop what you’re doing.
As for ChatGPT, Bahl says it’s “amazing work” but people should remember that “it's not currently designed for accuracy. It's closer to a digital hallucination.”