To fix the problem of deepfakes we must treat the cause, not the symptoms | Matt Beard

Once technology is released, it’s like herding cats. Why do we continue to let the tech sector manage its own mess?

We haven’t yet seen a clear frontrunner emerge as the Democratic candidate for the 2020 US election. But I’ve been interested in another race – the race to see which buzzword is going to be a pivotal issue in political reporting, hot takes and the general political introspection that elections bring. In 2016 it was “fake news”. “Deepfake” is shaping up as one of the leading candidates for 2020.

This week the US House of Representatives intelligence committee asked Facebook, Twitter and Google what they were planning to do to combat deepfakes in the 2020 election. And it’s a fair question. With a bit of work, deepfakes could be convincing and misleading enough to make fake news look like child’s play.

Deepfake, a portmanteau of “deep learning” and “fake”, refers to AI software that can superimpose a digital composite of one person’s face on to existing video of another – and sometimes synthesise their voice as well.

The term first rose to prominence when Motherboard reported on a Reddit user who was using AI to superimpose the faces of film stars on to existing porn videos, creating (with varying degrees of realness) porn starring Emma Watson, Gal Gadot, Scarlett Johansson and an array of other female celebrities.

However, there are also a range of political possibilities. Filmmaker Jordan Peele highlighted some of the harmful potential in an eerie video produced with Buzzfeed, in which he literally puts his words in Barack Obama’s mouth. Satisfying or not, hearing Obama call US president Trump a “total and complete dipshit” is concerning, given he never said it.

Just as concerning as the potential for deepfakes to be abused is that tech platforms are struggling to deal with them. For one thing, their content moderation issues are well documented. Most recently, a doctored video of Nancy Pelosi, slowed and pitch-edited to make her appear drunk, was tweeted by Trump. Twitter did not remove the video, YouTube did, and Facebook de-ranked it in the news feed.

For another, they have already tried, and failed, to moderate deepfakes. In a laudably fast response to non-consensual pornographic deepfakes, Twitter, Gfycat, Pornhub and other platforms acted to remove them and to develop technology to help them do it.

However, once technology is released it’s like herding cats. Deepfakes are a moving target, and as soon as moderators find a way of detecting them, people will find a workaround.

But while there are important questions about how to deal with deepfakes, we’re making a mistake by siloing them off from broader questions and looking for exclusively technological solutions. We made the same mistake with fake news, where the prime offender was seen to be tech platforms rather than the politicians and journalists who had created an environment where lies could flourish.

The furore over deepfakes is a microcosm of the larger social discussion about the ethics of technology. It’s pretty clear the software shouldn’t have been developed: it has led, and will continue to lead, to far more harm than good. And the lesson wasn’t learned. Recently the creator of an app called “DeepNude”, designed to give a realistic approximation of how a woman would look naked based on a clothed image, cancelled the launch fearing “the probability that people will misuse it is too high”.

What the legitimate use for this app might be, I don’t know, but the response is revealing in how predictable it is. Reporting triggers some level of public outcry, at which point tech developers suddenly realise the error of their ways. Theirs is the conscience of hindsight: feeling bad after the fact rather than proactively looking for ways to advance the common good, treat people fairly and minimise potential harm. By now we should know better and expect more.

Why then do we continue to let the tech sector manage its own mess? Partly because doing otherwise is difficult, but also because we’re still addicted to the promise of technology even as we come to criticise it. Technology is a way of seeing the world. It’s a kind of promise – that we can bring the world under our control and bend it to our will. Deepfakes afford us the ability to manipulate a person’s image. We can make them speak and move as we please, with a ready-made, if weak, moral defence: “No people were harmed in the making of this deepfake.”

But in asking for a technological fix to deepfakes, we’re fuelling the same logic that brought us here. Want to solve Silicon Valley? There’s an app for that! Eventually, maybe, that app will work. But we’re still treating the symptoms, not the cause.

The discussion around ethics and regulation in technology needs to expand to include more existential questions. How should we respond to the promises of technology? Do we really want the world to be completely under our control? What are the moral costs of doing this? What does it mean to see every unfulfilled desire as something that can be solved with an app?

Yes, we need to think about the bad actors who are going to use technology to manipulate, harm and abuse. We need to consider the now obvious fact that if a technology exists, someone is going to use it to optimise their orgasms. But we also need to consider what it means when the only place we can turn to solve the problems of technology is itself technological.

Big tech firms have an enormous set of moral and political responsibilities and it’s good they’re being asked to live up to them. An industry-wide commitment to basic legal standards, significant regulation and technological ethics will go a long way to solving the immediate harms of bad tech design. But it won’t get us out of the technological paradigm we seem to be stuck in. For that we don’t just need tech developers to read some moral philosophy. We need our politicians and citizens to do the same.

At the moment we’re dancing around the edges of the issue, playing whack-a-mole as new technologies arise. We treat tech design and development like it’s inevitable. As a result, we aim to minimise risks rather than look more deeply at the values, goals and moral commitments built into the technology. As well as asking how we stop deepfakes, we need to ask why someone thought they’d be a good idea to begin with. There’s no app for that.

• Matt Beard is a fellow at The Ethics Centre and the author of Ethical by Design: Principles for Good Technology.
