
Google will not save us from the coming deluge of deepfakes — and Melania Trump is just the start

These videos “will significantly exacerbate the already existing problem we've got with disinformation undermining self-governance, democracy, and the rule of law.”

Google is making it very easy for anyone to find “deepfake” pornographic videos of first lady Melania Trump or President Trump's daughter Ivanka.

Searching Google for “Melania deepfake” or “Ivanka deepfake” returns a long list of direct links to videos in which the faces of Melania and Ivanka Trump have been superimposed onto porn stars. In one video, a performer with Ivanka Trump's face is shown masturbating to footage of her father announcing his candidacy for president.

The video of the first lady is credited to a user called RedKiller, who is responsible for a number of deepfakes featuring Hollywood actresses, such as Mila Kunis and Jennifer Love Hewitt.

While platforms such as Twitter, Reddit, and Pornhub have rid their networks of deepfake videos in recent months, the videos continue to proliferate online and remain easily accessible, because Google says it is extremely cautious about removing legal content from its search results, even when that content is highly offensive.

Of course, unlike the other platforms mentioned, Google doesn't host these deepfake videos; it simply indexes videos hosted on other sites. In the past, however, Google has downgraded content such as pirated movies and TV shows in its results.

The porn industry has once again become an early adopter of new technology, and the videos will no doubt cause embarrassment to the victims. However, the potential threat from deepfake videos goes far beyond porn. From turbocharging the fake news problem to undermining democracies and inciting violence, the ability to depict anyone saying anything has potentially catastrophic implications.

"[Deepfakes] will significantly exacerbate the already existing problem we've got with disinformation undermining self-governance, democracy and the rule of law,” Bobby Chesney, a law professor at the University of Texas and former White House adviser, told VICE News.

Even worse, the videos will likely make the public even more skeptical about the veracity of online content, while providing an easy opt-out to anyone in the public eye who says something they later regret.

“Any time now a politician is caught saying something inappropriate, illegal, or offensive, they have plausible deniability; they are now going to say this content is fake,” Hany Farid, an image forensics expert and chairman of the computer science department at Dartmouth College, told VICE News.

For years, the ability to create realistic videos showing someone saying or doing something they never said or did was the exclusive preserve of Hollywood studios. Today, with rapid advances in machine learning and artificial intelligence, anyone with a computer can produce a deepfake video.

“The technology is accelerating very, very quickly, and literally every few months we see advances in this space,” Farid said. “Now, this technology is being bundled up and it is being made accessible to a much broader audience.”

Creating the videos is relatively easy with a powerful computer graphics card, a couple of videos of the victim, and one of a number of easily accessible apps.

Deepfakes first appeared on Reddit last year and take their name from the user who first introduced the category of X-rated face-swapping videos to the site.

While they have so far been limited to porn, a video published last week by BuzzFeed offered a glimpse of how this technology could be used to undermine politics.

The video featured writer and director Jordan Peele voicing former President Barack Obama, highlighting how easy it has become to make a fake yet convincing audio and video clip of a high-profile individual.

Consider the following possible scenarios in which deepfake videos could be used to spread fake news:

  • A video of Barack Obama “revealing” he was born in Kenya.
  • A video of Donald Trump “admitting” to being a Russian spy.
  • A white police officer shooting an unarmed black man while shouting racial epithets.
  • Footage of Rohingya Muslims attacking government security forces in Myanmar.
  • Audio of Donald Trump accusing Russia of war crimes ahead of a U.N. Security Council meeting.
  • An Israel Defense Forces soldier killing a civilian.

Of course the videos will be debunked, but not before they've spread around the world in minutes. If their release is strategically timed, the consequences could be immediate and devastating.

“I think the place you will see [this technology] really causing violence and contributing to deaths first is when it is exploited in armed conflict situations,” Chesney said.

The legal status of deepfake videos is murky, with creators claiming protection under free speech laws.

But Chesney warned it is “not the case that anything goes, even in the United States with the First Amendment.”

“If I knowingly create a false image designed to ruin the business of my rival, if I do it on purpose, the fact that it takes the form of speech won't prevent me from being sued for that,” he said.

But those making these videos have a legal out: art.

“Art is where it gets tricky,” Chesney said. “If I'm a bad actor, when I’m called to account for a damaging deepfake, I would certainly claim it is satire.”

Since the law, in its current form, is unlikely to counter the problem, technology could provide an answer. However, the deepfake creators are well ahead of those looking to build software that can spot the fakes.

“We are going to have to start developing forensic techniques that can quickly and accurately find this content,” Farid said. “But of course that will be an arms race: we get better at it, the other side is trying to avoid detection, and this is going to be an ongoing battle.”

For now, the heavy lifting appears to be falling to the tech companies. Pornhub, Twitter, and Reddit have all removed deepfakes from their platforms, while Google has been more reluctant to act.

Yet even if all mainstream platforms remove deepfakes, dedicated sites have already been created to cater to the demand, with AdultDeepFakes.com claiming to be the “best source of celebrity porn videos,” hosting hundreds of videos purporting to show stars such as Emma Stone, Ariana Grande, and Naomi Watts.

Ultimately, it will be up to everyone to be more skeptical of what they see online. The flip side of that skepticism, however, is that anyone will be able to cry “fake” when exposed.

For example, Trump could credibly claim the "Access Hollywood" tape that emerged during a critical stage of the 2016 presidential election was fake.

Trump confirmed the tape's authenticity when it first leaked in 2016 but subsequently told White House staff that it was fake; because this technology had yet to mature, his claim never stuck.

Situations like this are a direct threat to the democratic process.

“I'm not sure how we can have democratic elections, or we can have democracy, if we as an electorate can't agree on basic facts,” Farid said.

Cover image: US first lady Melania Trump arrives at the Flagler Museum in Palm Beach, Florida, April 18, 2018, for a visit with the wife of Japanese Prime Minister Shinzo Abe, Akie. (RHONA WISE/AFP/Getty Images)