But where there’s smoke, there’s fire. And according to the AI and scam experts we interviewed, that fire is getting hot.
The most dangerous AI tool in a bad actor’s kit is the deepfake (“deep learning” + “fake”): synthetic media in which a person’s likeness is grafted onto a video or image, virtually turning one person into another. With software similar to a Snapchat or TikTok filter, creative scammers can spin up all sorts of deepfakes to deceive the average person.
Who better to confirm that than Rijul Gupta, CEO and co-founder of DeepMedia, a company that specializes in both detecting and creating deepfakes? Gupta told ConsumerAffairs that phone and email phishing is just the start and that scammers are quickly moving toward deepfake videos. He said there’s even an open-source program that lets anyone deepfake a live video call using nothing more than a 10-second TikTok clip.
Holding your kid hostage
With other family emergency scams already trending, Gupta says parents need to be on alert for AI scams that use their children as bait.
“Imagine you’re a parent and you get a FaceTime request from your kid’s principal. You’ve seen them and spoken to them before and recognize the person you’re talking to on FaceTime, so you assume the video call is real,” Gupta said.
“The person on the other end could scam you out of thousands, saying something like ‘your kid broke something at school, it costs $10,000 to replace. Usually we’d have to file a police report but if you make a bank transfer today [for $500] we can resolve this issue quickly.’”
Gupta said a scammer could even use this technology to lure a child into a kidnapping for ransom, or worse.
"There’s a problem at school, you’ll need to drop off your kid at this location instead! There are a lot of frightening scenarios that could happen when someone deepfakes a live video call.”
They’ve already found a place in the office
Deepfakes are showing up in virtual meetings, too, says the FBI. In a recent internet crime report, the agency said it has seen scammers compromise a CEO’s or CFO’s email account, then use it to invite employees to virtual meetings where the scam unfolds.
“In those meetings, the fraudster would insert a still picture of the CEO with no audio, or a ‘deepfake’ audio through which fraudsters, acting as business executives, would then claim their audio/video was not working properly,” the agency wrote.
“The fraudsters would then use the virtual meeting platforms to directly instruct employees to initiate wire transfers or use the executives’ compromised email to provide wiring instructions.”
ChatGPT can turn amateur hackers into geniuses
ChatGPT is the current belle of the AI ball, but one privacy expert says it’s quickly turning into a beast. Dimitri Shelest, CEO and founder of OneRep, a company that automates the removal of unauthorized personal listings from the web to help people restore their privacy, told ConsumerAffairs that GPT-4, the latest version of ChatGPT, has generated a fresh wave of criticism from privacy experts.
“In the same way as writing or improving useful code, ChatGPT can be used to write code that steals data. More specifically, it can help build websites and bots that trick users into sharing their information at a scale that can potentially take social engineering scams to the industrial level,” Shelest told us.
He added that the telltale broken English of many cybercrooks is a thing of the past with ChatGPT, because AI helps them create credible, highly targeted phishing campaigns that the average person will find difficult to detect as fake.
How to spot a fake
The AI deepfakes ConsumerAffairs found floating around the web are impressive, but Gupta says there are telltale wrinkles that can give them away.
“You can recognize fake AI-generated images if the background is blurry or warped, if the face is asymmetrical and the shadowing doesn’t make sense, or if the teeth don’t look as sharp as they should,” he said.
As for voice cloning, he suggests listening for subtleties in emotion and accent to help spot a deepfake.
In his experience, however, those tips may be only stopgap measures.
“AI is evolving and getting more sophisticated every day, which makes it really difficult to determine what is real and what is fake,” he warned. “But as deepfake faces and voices become more advanced, these techniques will no longer be enough to spot a fake, making detection AI the only viable solution to keeping people safe.”