
Deepfakes Are Amazing. They’re Also Terrifying for Our Future.


Imagine this: You click on a news clip and see the President of the United States at a press conference with a foreign leader. The dialogue sounds real. The news conference looks real. You share it with a friend, who shares it with another. Soon, everyone has seen it. Only later do you learn that the president’s head was superimposed on someone else’s body. None of it ever actually happened.

Sound farfetched? Not if you’ve seen a certain wild video from YouTube user Ctrl-Shift Face. Since last August, it’s gotten almost 9.5 million views.

In it, comedian Bill Hader shares a story about his encounters with Tom Cruise and Seth Rogen. As Hader, a skilled impressionist, does his best Cruise and Rogen, those actors’ faces seamlessly, frighteningly melt into his own. The technology makes Hader’s impressions that much more vivid, but it also illustrates how easy—and potentially dangerous—it is to manipulate video content.

What Is a Deepfake?

The Hader video is an expertly crafted deepfake. Most deepfake technology is based on generative adversarial networks (GANs), a technique invented in 2014 by Ian Goodfellow, then a Ph.D. student and now a researcher at Apple.

GANs enable algorithms to move beyond classifying data into generating or creating images. They work by pitting two neural networks against each other: a generator that produces fake images, and a discriminator that tries to tell the fakes from real ones. Each network improves by trying to beat the other. Using as little as one image of a person, a well-trained GAN can create a video clip of that person. Samsung’s AI Center recently released research sharing the science behind this approach.

“Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters,” said the researchers behind the paper. “We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.”
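The adversarial back-and-forth described above can be sketched in a few dozen lines. Below is a toy, pure-Python GAN of my own construction (not the Samsung system, and vastly simpler than anything that makes video): a two-parameter generator learns to mimic samples from a fixed normal distribution by fooling a logistic-regression discriminator.

```python
import math
import random

random.seed(0)

def sigmoid(t):
    # Numerically safe logistic function.
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)

# "Real" data: samples drawn from a normal distribution centered at 4.0.
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator: turns noise z ~ N(0, 1) into a fake sample x = a*z + b.
a, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w*x + c), its estimate that x is real.
w, c = 0.1, 0.0

lr = 0.02
for step in range(4000):
    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    x_r, z = real_sample(), random.gauss(0.0, 1.0)
    x_f = a * z + b
    d_r, d_f = sigmoid(w * x_r + c), sigmoid(w * x_f + c)
    w -= lr * (-(1.0 - d_r) * x_r + d_f * x_f)
    c -= lr * (-(1.0 - d_r) + d_f)

    # Generator step: adjust (a, b) so the discriminator scores fakes as real.
    z = random.gauss(0.0, 1.0)
    x_f = a * z + b
    d_f = sigmoid(w * x_f + c)
    a -= lr * (-(1.0 - d_f) * w * z)
    b -= lr * (-(1.0 - d_f) * w)

# After training, generated samples should drift toward the real data.
fakes = [a * random.gauss(0.0, 1.0) + b for _ in range(1000)]
mean_fake = sum(fakes) / len(fakes)
```

Real deepfake systems replace these one-line models with deep convolutional networks and images instead of numbers, but the tug-of-war between generator and discriminator is the same.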

For now, this is only applied to talking-head videos. But when 47 percent of Americans watch their news through online video content, what happens when GANs can make people appear to dance, clap their hands, or do anything else their creators choose?

Why Are Deepfakes Dangerous?

Set aside, for a moment, the fact that more than 30 nations are actively engaged in cyberwar at any given time. Even then, the biggest concern with deepfakes might be projects like the ill-conceived website Deepnudes, which superimposed the faces of celebrities and ordinary women onto pornographic video content.

Deepnudes’ founder eventually canceled the site’s launch, fearing that “the probability that people will misuse it is too high.” Well, what else would people do with faked pornographic content?

“At the most basic level, deepfakes are lies disguised to look like truth,” says Andrea Hickerson, Director of the School of Journalism and Mass Communications at the University of South Carolina. “If we take them as truth or evidence, we can easily make false conclusions with potentially disastrous consequences.”

A lot of the fear about deepfakes rightfully concerns politics, Hickerson says. “What happens if a deepfake video portrays a political leader inciting violence or panic? Might other countries be forced to act if the threat was immediate?”

With the 2020 elections approaching and the continued threat of cyberattacks and cyberwar, we have to seriously consider a few scary scenarios:

→ Weaponized deepfakes will be used in the 2020 election cycle to further ostracize, insulate, and divide the American electorate.

→ Weaponized deepfakes will be used to influence not only the voting behavior, but also the consumer preferences, of hundreds of millions of Americans.

→ Weaponized deepfakes will be used in spear phishing and other known cybersecurity attack strategies to more effectively target victims.

This means that deepfakes put companies, individuals, and the government at increased risk.

“The problem isn’t the GAN technology, necessarily,” says Ben Lamm, CEO of the AI company Hypergiant Industries. “The problem is that bad actors currently have an outsized advantage and there are not solutions in place to address the growing threat. However, there are a number of solutions and new ideas emerging in the AI community to combat this threat. Still, the solution must be humans first.”

A New Peril: Deepfake Financial Scams

Do you remember your first robocall? Perhaps not: a few years ago, before most of us understood what they were, those automated phone calls could be pretty convincing. Luckily, the scam calls have been on the decline. The U.S. Federal Trade Commission reports that robocall complaints fell 68 percent in April and 60 percent in May, compared to the same periods in 2019.

However, audio deepfake technology could easily bolster the deceitful tactic. According to Nisos, an Alexandria, Virginia-based cybersecurity company, hackers are using machine learning to clone people’s voices. In one documented case, hackers used deepfake synthetic audio in an attempt to defraud a tech company.

Nisos shared the audio clip with Motherboard.

This came in the form of a voicemail message, which seemed to come from the tech company’s CEO. In the message, he asks an employee to call back and “finalize an urgent business deal.”

“The recipient immediately thought it suspicious and did not contact the number, instead referring it to their legal department, and as a result the attack was not successful,” Nisos notes in a July 23 white paper.


⚠️ What to do if you receive a suspicious voicemail ⚠️

→ Alert your company’s general counsel or another high-ranking executive. Often these social engineering schemes prey on lower-level employees.

→ You can return the call directly to get the potential hacker on the line. Nisos says that deepfake technology is “not sophisticated enough” to mimic a full phone call.

→ Get your company to prepare a series of “challenge questions” about information that is not publicly known. This should help vet the identity of the person on the other end of the call.
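That last step can be sketched as a short script. Everything here, from the questions to the `verify_caller` helper, is a hypothetical illustration of the idea, not anything Nisos prescribes:

```python
# Hypothetical challenge-question check for vetting a caller's identity.
# The questions and expected answers are illustrative placeholders; a real
# deployment would use privately agreed-upon facts and rotate them regularly.

CHALLENGES = {
    "What code name did we use for last quarter's acquisition?": "bluebird",
    "Which floor is the finance team on?": "3",
}

def verify_caller(answers):
    """Return True only if every challenge question is answered correctly."""
    return all(
        answers.get(question, "").strip().lower() == expected
        for question, expected in CHALLENGES.items()
    )

# A caller who knows every private answer passes; anyone else fails.
legit = {
    "What code name did we use for last quarter's acquisition?": "Bluebird",
    "Which floor is the finance team on?": "3",
}
assert verify_caller(legit)
assert not verify_caller({"Which floor is the finance team on?": "3"})
```

The point is that a cloned voice can only repeat what its creators know; a shared secret forces the caller to prove something the recording cannot.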


What’s Being Done to Fight Deepfakes?

Last summer, the U.S. House of Representatives’ Intelligence Committee sent a letter to Twitter, Facebook, and Google asking how the social media sites planned to combat deepfakes in the 2020 election. The inquiry came in large part after President Trump tweeted out a manipulated video of House Speaker Nancy Pelosi, a clip that had been crudely slowed down rather than generated with AI.

Earlier this year, Facebook took a positive step toward banning deepfakes. In a January 6 blog post, Monika Bickert, vice president of global policy management for Facebook, wrote that the company is making new efforts to “remove misleading manipulated media.”

Facebook is taking a specific, two-pronged approach to flagging and removing deepfakes. For an image to be taken down, it must meet the following criteria, according to the blog post:

  • It has been edited or synthesized–beyond adjustments for clarity or quality–in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.
  • It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

Satire and parody videos are still safe, though, as are videos that have been edited only to omit or change the order of words. That means manipulated media can still slip through the cracks. Notably, TikTok and Twitter have similar policies.
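Read as a rule, the policy removes media only when both criteria hold and the satire exemption does not. A minimal sketch of that logic (the flag names are mine, not Facebook's):

```python
def should_remove(misleadingly_edited: bool,
                  ai_synthesized: bool,
                  satire_or_parody: bool) -> bool:
    """Sketch of Facebook's stated two-pronged removal test.

    Media is removed only if it is BOTH misleadingly edited (beyond
    clarity/quality adjustments) AND the product of AI/ML synthesis,
    and it is not satire or parody. Flag names are illustrative.
    """
    if satire_or_parody:
        return False
    return misleadingly_edited and ai_synthesized

# A clip merely re-ordered by conventional editing fails the AI prong.
assert should_remove(True, False, False) is False
# An AI face-swap that puts words in someone's mouth meets both prongs.
assert should_remove(True, True, False) is True
```

The structure makes the loophole visible: a misleading edit made without AI, like the slowed-down Pelosi clip, never triggers removal.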

Meanwhile, government institutions like DARPA and researchers at schools like Carnegie Mellon, the University of Washington, Stanford University, and the Max Planck Institute for Informatics are also experimenting with deepfake technology. So is Disney. These organizations are looking at how to use GAN technology as well as how to combat it.
