Scammers last week used deepfake technology to steal $25 million from a multinational Hong Kong firm.
Deepfake technology, a synthetic representation of a person’s likeness, is not new. Think Mark Hamill’s de-aged return as a young Luke Skywalker in a 2020 episode of ‘The Mandalorian.’
Artificial intelligence is likewise nothing new.
But when it launched at the end of 2022, ChatGPT made AI technology cheaply accessible to the masses, simultaneously setting off a race between nearly all of the mega-cap tech corporations (and a bunch of startups, too) to ship more powerful models.
The risks and active threats posed by this recent proliferation of AI have been flagged by experts for months: deepening socioeconomic inequity, economic disruption, algorithmic discrimination, misinformation, disinformation, political instability and a whole new era of fraud.
The past year has seen mounting cases of AI-generated deepfake fraud in a variety of formats: some attempting to squeeze money from unsuspecting civilians, some mocking artists and some humiliating celebrities at scale.
Last week, scammers armed with AI-generated deepfake technology stole around $25 million from a multinational corporation in Hong Kong, according to AFP.
Perhaps it would be more accurate to say that they were given the money.
A finance worker at the firm moved $25 million into designated bank accounts after talking to several senior officers, including the company’s chief financial officer, on a video conference call.
No one on the call, besides the worker, was real.
The worker said that, despite his initial suspicion, the people on the call both looked and sounded like colleagues he knew.
“Scammers found publicly available video and audio of the impersonation targets via YouTube, then used deepfake technology to emulate their voices … to lure the victim to follow their instructions,” acting Senior Superintendent Baron Chan told reporters.
All the deepfake fraud
This comes on the heels of an incident at the end of January in which fake, sexually explicit images of Taylor Swift went viral on social media. The images were generated using Microsoft's (MSFT) Designer AI image generator.
The issue of deepfake porn isn’t even new; Motherboard reported on deepfake celebrity porn in 2017. New versions of AI image generators have just made deepfake technology quicker and more accessible, resulting in an incident last year in which deepfake, pornographic images of female high school students were created and shared by their classmates.
In January 2023, a mother received an AI-generated phone call from scammers who said they had kidnapped her daughter. The scammers were asking for a ransom payment of $1 million. And though her daughter was safe at home and in bed, her AI-generated screams were more than convincing.
Deepfaked images of politicians and other public figures have made their rounds on social media over the past year, including those of former President Donald Trump and the Pope.
A fake, AI-generated robocall of President Joe Biden went out before the New Hampshire primary, telling voters not to participate.
Last month, a self-described “comedy AI” consumed each of comedian George Carlin’s specials, then published a seemingly new hour-long special in Carlin’s voice on YouTube.
A Europol report from March 2023 found that the danger of Large Language Models (LLMs) like ChatGPT in the hands of fraudsters is one of speed and scale.
“With the help of LLMs … phishing and online fraud can be created faster, much more authentically and at significantly increased scale,” the report reads.
Lou Steinberg, deepfake AI expert and founder of cyber research firm CTM Insights, said that as AI gets better, this problem will continue to get worse.
Identity hijacking: The next generation of identity theft
“In 2024, AI will run for President, the Senate, the House and the Governor of several states. Not as a named candidate, but by pretending to be a real candidate,” Steinberg said. “We’ve gone from worrying about politicians lying to us to scammers lying about what politicians said … and backing up their lies with AI-generated fake ‘proof.'”
“It’s ‘identity hijacking,’ the next generation of identity theft, in which your digital likeness is recreated and fraudulently misused,” he added.
The best safeguard against static deepfake images, he said, is micro-fingerprinting technology built into camera apps, designed to help social media platforms recognize whether an image is authentic or has been tampered with.
Any images that lack some sort of certificate of authenticity, according to Steinberg, cannot be trusted.
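That description is close in spirit to content-provenance efforts such as C2PA's Content Credentials. As a rough sketch of the idea only (not Steinberg's product, with the key handling heavily simplified, and with hypothetical function names), a camera app could hash the captured bytes and sign that fingerprint with a device key, producing the kind of credential a platform could later check:

```python
# Illustration only: hash-and-sign image provenance. Assumes the third-party
# `cryptography` package (pip install cryptography); real provenance standards
# such as C2PA embed signed manifests and certificate chains, not a bare signature.
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_capture(image_bytes: bytes, device_key: Ed25519PrivateKey) -> dict:
    """Fingerprint the captured bytes and sign the fingerprint with the camera's key."""
    fingerprint = hashlib.sha256(image_bytes).digest()
    return {
        "fingerprint": fingerprint.hex(),
        "signature": device_key.sign(fingerprint).hex(),
    }


# The signing key would normally live in the device's secure hardware, not in code.
device_key = Ed25519PrivateKey.generate()
credential = sign_capture(b"raw image bytes from the camera sensor", device_key)
print(credential["fingerprint"][:16], "...", credential["signature"][:16], "...")
```

Any later edit to the pixels changes the fingerprint, so the credential stops matching; the scheme only helps, of course, if platforms actually check it.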
When it comes to interactive deepfakes — phone calls and videos — Steinberg said the simple solution is to develop a code word to be used between family members and friends.
Companies, such as the corporation in Hong Kong, should establish policies requiring that nonstandard payment requests be confirmed with codewords or through a separate channel, Steinberg said. A video call cannot be trusted on its own; the officers on that call should be contacted separately and directly.
“What static and interactive deepfakes have in common is that most current approaches try to detect if something is fake, not if it’s real. That’s a flawed approach,” he told TheStreet. “Deepfakes keep getting more realistic, and there are a limitless number that can be generated. There is only one real you. It’s much easier to check if a small number of things are real vs if an infinite number are fake.”
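In code terms, that "check if it's real" posture looks like an allow-list rather than a detector: a platform verifies an attached credential against a small set of trusted signing keys and treats everything else as simply unverified. A minimal sketch, again assuming the hypothetical hash-and-sign credential above and the `cryptography` package:

```python
# Illustration only: verify an image against a short allow-list of trusted signing
# keys; anything that fails (or arrives with no credential) is marked "unverified".
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def is_authentic(image_bytes: bytes, credential: dict,
                 trusted_keys: list[Ed25519PublicKey]) -> bool:
    """Return True only if the bytes match the fingerprint and a trusted key signed it."""
    fingerprint = hashlib.sha256(image_bytes).digest()
    if fingerprint.hex() != credential.get("fingerprint"):
        return False  # bytes were altered after signing
    signature = bytes.fromhex(credential.get("signature", ""))
    for key in trusted_keys:
        try:
            key.verify(signature, fingerprint)
            return True  # matches one of the small number of known-real keys
        except InvalidSignature:
            continue
    return False  # unverifiable content is treated as untrusted, not proven fake
```

The platform never has to decide whether something "looks" fake; it only has to check a signature against a handful of keys it already trusts.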
The way forward
The deepfake war is one in which AI will be wielded on both sides, according to Steinberg.
New technologies designed to authenticate humanity or flag deepfake fraud need to be created and shipped to ensure the safety of any number of digital platforms, Steinberg said. And the government needs to make it “illegal for social media companies to accept deepfakes into a U.S.-based platform.”
“If social media platforms were liable for accepting deepfakes without a valid signature, they would implement controls,” he said. “Similarly, if audio and video carriers (phone companies, Zoom, Teams, etc.) were required by law to have a degree of certainty regarding who was connected, they would implement technologies that ensured the identity or had proof that the content they were presenting was legitimate.”
Though President Biden’s October 2023 executive order on AI mentions efforts to protect citizens against fraud, clear, enforceable legislation has yet to surface.
One cybersecurity expert told TheStreet last week that a technical or legal solution to deepfake porn, for one, might not exist and certainly won’t arrive anytime soon, suggesting instead that parents and teachers must work to form a new, more critical relationship with technology and social media.
“In the end, I am reminded of the economist’s term ‘negative externalities,'” AI researcher Gary Marcus wrote. “In the old days, factories generated pollution and expected everyone else to deal with the consequences. Now it’s AI developers who are expecting to do what they do scot-free, while society picks up the tab.”