“Deepfake” videos like that Gal Gadot porn are only getting more convincing — and more dangerous

Researchers want to make "deepfake" technology even more advanced, but they have no answers for the ethical concerns it raises.

A Reddit user named "deepfakes" posted a video of Gal Gadot's face superimposed on a porn star's body last December. Other than Gadot's eyes not blinking, the video was a reasonably convincing fake.
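That blink tell hints at how detection can work: early face-swap models were trained mostly on open-eyed photos, so their fakes blink far less than a real person, who blinks roughly 15 to 20 times a minute at rest. As a minimal sketch of that idea, the Python below counts blinks from per-frame eye-aspect-ratio (EAR) values, which in practice would come from a face-landmark library; the thresholds here are illustrative assumptions, not values from any published detector.

```python
# Minimal blink-rate heuristic for spotting early deepfakes.
# Assumes per-frame eye aspect ratio (EAR) values were already extracted
# with a face-landmark library; all thresholds are illustrative.

EAR_CLOSED = 0.21          # EAR below this is treated as "eye closed" (assumed)
MIN_CLOSED_FRAMES = 2      # consecutive closed frames that count as one blink
HUMAN_BLINKS_PER_MIN = 15  # rough lower bound for a person at rest

def count_blinks(ear_per_frame: list[float]) -> int:
    """Count blinks as runs of >= MIN_CLOSED_FRAMES closed-eye frames."""
    blinks = 0
    closed_run = 0
    for ear in ear_per_frame:
        if ear < EAR_CLOSED:
            closed_run += 1
        else:
            if closed_run >= MIN_CLOSED_FRAMES:
                blinks += 1
            closed_run = 0
    if closed_run >= MIN_CLOSED_FRAMES:  # video ends mid-blink
        blinks += 1
    return blinks

def looks_fake(ear_per_frame: list[float], fps: float) -> bool:
    """Flag a face video whose blink rate is implausibly low."""
    minutes = len(ear_per_frame) / fps / 60
    if minutes == 0:
        return False
    blinks_per_min = count_blinks(ear_per_frame) / minutes
    # A rate far below the human norm (e.g. under ~5/min) is suspect.
    return blinks_per_min < HUMAN_BLINKS_PER_MIN / 3
```

Later generations of fakes learned to blink, which is why heuristics like this one age quickly and detection remains an arms race.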

Now, just about anyone's face can be put into almost any video, with no clear technical fix and no serious consequences for those who do it, as Motherboard reported. And the technology is only likely to become more convincing.

Gadot's fake porn set off a brief but highly publicized wave of amateur trolls posting face-swapped videos online. Reddit, Twitter, and other sites soon banned this "involuntary" pornography. But "deepfakes" videos, as they've come to be known, often remain on mainstream sites when they're made for laughs. Videos swapping in Nicolas Cage's face, for example, are particularly popular.

While celebrities and public figures are rightly concerned about the implications of the technology, its roots aren't in the dark corners of the internet.

Last week in Vancouver, the computer graphics world gathered at the SIGGRAPH conference to discuss its trade. The annual conference brings together the motion picture industry, computing giants like Google and Adobe, and leading computer science research institutions to present their latest research.

Researchers at the Technical University of Munich, along with colleagues from Stanford, presented their work to make deepfakes even more advanced. But when an audience member asked them how they planned to confront the ethical implications of these easily replicable fake videos, nobody had a satisfying answer. The researchers apparently hadn’t considered how the technology they were developing could be misused.

The U.S. government thinks that's a problem. This year, the Defense Advanced Research Projects Agency, or DARPA, will spend more than $28 million developing ways to detect and debunk deepfake videos. But even if DARPA helps fund technology that can detect fake videos, that doesn't mean people won't be able to make them and distribute them quickly.

This segment originally aired August 20, 2018 on VICE News Tonight on HBO.