Real-time deepfakes are a menace. How to protect yourself

You’ve probably seen deepfake videos on the internet that inject facsimiles of famous people into weird or funny situations – for example, a fake Tom Cruise doing “industrial cleaning” or, in a truly meta effort, an artificial Morgan Freeman extolling “the age of synthetic reality”.

Now imagine getting a phone call from someone who sounds exactly like your child, begging for emergency help. Same technology, but no one is laughing.

Cybersecurity experts say deepfake technology has advanced to the point where it can be used in real-time, allowing fraudsters to replicate someone’s voice, image and movements in a virtual call or meeting. The technology is also widely available and relatively easy to use, they say. And it keeps getting better.

“Thanks to AI tools that create ‘synthetic media’ or generate content, an increasing percentage of what we’re seeing is inauthentic and it’s getting harder to tell the difference,” warned the Federal Trade Commission.

Researchers say the technology for real-time deepfakes has been around for nearly a decade. What’s new is the range of tools available to make them.

“We know we are not prepared as a society” for this threat, said Andrew Gardner, vice president of research, innovation and AI at Gen.

Real-time deepfakes have been used to scare grandparents into sending money to simulated relatives, to get jobs at tech companies in a bid to gain insider information, to sway voters, and to swindle money out of lonely men and women. Fraudsters can copy a recording of someone’s voice that was posted online and use the captured audio to impersonate a victim’s loved one; a 23-year-old man is accused of swindling grandparents in Newfoundland out of $200,000 in just three days using this technique.

Tools to weed out this latest generation of deepfakes are also emerging, but they are not always effective and may not be accessible to you. That’s why experts advise taking a few simple steps to protect yourself and your loved ones from the new type of scam.

The term deepfake is short for a simulation powered by deep learning technology – artificial intelligence that ingests oceans of data to try to replicate something human, like having a conversation (e.g. ChatGPT) or creating an illustration (e.g. Dall-E). Gardner said it’s still an expensive and time-consuming proposition to develop these tools, but using them is comparatively quick and easy.

Yisroel Mirsky, an AI researcher and deepfake expert at Ben-Gurion University of the Negev, said technology had advanced to the point where it was possible to make a deepfake video from a single photo of a person and a “decent” clone of a voice with just three or four seconds of audio. But Gardner said the tools widely available to make deepfakes lag behind the state of the art; they require about five minutes of audio and one to two hours of video.

Regardless, thanks to sites like Facebook, Instagram, and YouTube, there are plenty of images and audio for fraudsters to find.

Mirsky said it’s easy to imagine an attacker searching Facebook to identify a potential target’s children, calling the child to record enough audio to clone their voice, and then using a deepfake of the child to beg the target for money to get out of some sort of jam.

Technology is becoming so efficient, he said, that you can clone a face or a voice with a basic gaming computer. And the software is “really point and click,” he said, easily available online and configurable with some basic programming.

To illustrate the effectiveness of real-time deepfakes, the LexisNexis Risk Solutions Government Group shared a video that David Maimon, a professor of criminology at Georgia State University, captured on the dark web of an apparent catfishing scam in progress. It showed an online chat between an older man and a young woman who was asking for a loan so she could meet the man in Canada. But in a third window, you could see that a man was actually speaking the words that came out of the woman’s mouth in a woman’s voice – she was a deepfake and he was a scammer.

This technique is known as reenactment, Mirsky and Wenke Lee of the Georgia Institute of Technology said in a paper published in 2020. It can also be used to “perform acts of defamation, cause lack of credibility, spread misinformation, and tamper with evidence,” they wrote. Another approach is substitution, in which the target’s face or body is placed on someone else, as in revenge porn videos.

But exactly how fraudsters are using the tools remains a mystery, Gardner said. That’s because we only know what they were caught doing.

Haywood Talcove, chief executive of the Government Group at LexisNexis Risk Solutions, said the new technology could bypass some of the security techniques that companies have been deploying in place of passwords. For example, he pointed to California’s two-step online identification process, which has users upload two things: a photo of their driver’s license or ID card and then a freshly taken selfie. Fraudsters can buy a fake California ID online for a few dollars, then use deepfake software to generate a matching face for the selfie. “It’s like a hot knife through butter,” he said.

Likewise, Talcove said financial firms need to stop using voice identification tools to unlock accounts. “I would be nervous if [at] my bank, my voice was my password,” he said. “Just using the voice, it doesn’t work anymore.” The same goes for facial recognition, he said, adding that the technology is at the end of its useful life as a form of access control.

The Cybercrime Support Network, a non-profit organization that helps individuals and businesses victimized online, often works with targets of romance scams and encourages people to video chat with their suitors to try to weed out the scammers. Ally Armeson, the network’s director of programs, said that just two or three years ago, they could tell clients to look for easy-to-detect glitches like frozen images. But in recent weeks, she said, the network has been contacted by scam victims who said they had video chatted for 10 or 20 minutes with their alleged suitor, “and it was absolutely the person in the pictures they sent me.”

She added: “Victims said, ‘The head kind of looked weird on the body, so it looked a little off.'” But it’s not uncommon for people to ignore warning signs, she said. “They want to believe the video is real, so they’ll ignore minor discrepancies.”

(Victims of romance scams in the United States reported losses of $1.3 billion last year.)

Real-time deepfakes also pose a dangerous new threat to businesses. Many companies are training employees to recognize phishing attacks from strangers, Mirsky said, but no one is really bracing for deepfake calls with the cloned voice of a colleague or boss.

“People will confuse familiarity with authenticity,” he said. “And as a result, people will fall for these attacks.”

How to protect yourself

Talcove offered a simple, low-tech way to protect against deepfakes impersonating a family member: Have a secret code word that all family members know, but criminals would not guess. If someone claiming to be your daughter, grandson or nephew calls, Talcove said, asking for the code word can separate the real loved ones from the fake ones.

“Every family now needs a code word,” he said.

Pick something simple and easily memorable that doesn’t need to be written down (and doesn’t get posted to Facebook or Instagram), he said, and then drill it into your family’s memory. “You need to make sure they know and practice, practice, practice,” Talcove said.

Gardner also endorsed code words. “I think preparation goes a long way” in defending against deepfake scams, he said.

Armeson said her network still tells people to look for certain clues on video calls, including their supposed suitor blinking too much or too little, eyebrows that don’t fit the face, hair in the wrong place, and skin that doesn’t match the person’s age. If the person is wearing glasses, check whether the reflection in the lenses looks realistic, the network advises – “deepfakes often fail to fully represent the natural physics of lighting.”

She also urges people to do these simple tests: Ask the other person on the video call to turn their head and place their hand in front of their face. These stunts can be revealing, she said, because deepfakes often haven’t been trained to do them realistically.

Still, she admitted, “we’re just playing defense.” Fraudsters will “always be ahead of us,” weeding out the flaws that reveal the scam, she said. “It’s annoying.”

Ultimately, she said, the most reliable way to debunk deepfakes may be to insist on an in-person meeting. “We have to be really analog about this. We cannot rely on technology alone.”

There are software tools that automatically look for AI-generated flaws and patterns in an effort to separate legitimate audio and video from fake ones. But Mirsky said “this is potentially a losing game” because, as technology improves, the telltale signs that used to give away fakes will disappear.

Mirsky and his team at Ben-Gurion University developed a different approach, called D-CAPTCHA, which operates on the same principle that some websites use to prevent bots from submitting online forms. A D-CAPTCHA system features a test designed to baffle current deepfakes in real time – for example, asking callers to hum, laugh, sing or just clear their throat.

The system, which has yet to be commercialized, could take the form of a waiting room for authenticating guests at confidential virtual meetings or an app that checks for suspicious phone calls. In fact, Mirsky said, “we can develop applications that can try to catch these suspicious calls and examine them before they connect.”

Gardner offered another hopeful note. The experiences people are having now with AI and apps like ChatGPT, he said, have made people quicker to question what’s real and what’s fake, and to look more critically at what they’re seeing.

“The fact that people are having these individual AI conversations is, I think, helping,” he said.

About The Times Utility Journalism Team

This article is from The Times utility journalism team. Our mission is to be essential to the lives of Southern Californians by publishing information that solves problems, answers questions and helps with decision making. We serve audiences in and around Los Angeles – including current Times subscribers and diverse communities that historically have not had their needs met by our coverage.

How can we be of service to you and your community? Email utility (at) latimes.com or one of our journalists: Matt Ballinger, Jon Healey, Ada Tseng, Jessica Roy and Karen Garcia.
