Sorry, but I’ve lost my faith in tech evangelism

There are too many worrying developments in tech – traumatised moderators, AI bias, facial recognition – to be anything but pessimistic about the future

For my sins, I get invited to give a few public lectures every year. Mostly, the topic on which I’m asked to speak is the implications for democracy of digital technology as it has been exploited by a number of giant US corporations. My general argument is that those implications are not good, and I try to explain why I think this is the case. When I’ve finished, there is usually some polite applause before the Q&A begins. And one particular question always comes up: “Why are you so pessimistic?”

The interesting thing about that is the way it reveals as much about the questioner as it does about the lecturer. All I have done in my talk, after all, is to lay out the grounds for concern about what networked technology is doing to our democracies. Mostly, my audiences recognise those grounds as genuine – indeed as things about which they themselves have been fretting. So if someone regards a critical examination of these issues as “pessimistic” then it suggests that they have subconsciously imbibed the positive narrative of tech evangelism.

An ideology is what determines how you think even when you don’t know you’re thinking. Tech evangelism is an example. And one of the functions of an ideology is to stop us asking awkward questions. Last week Vice News carried another horrifying story about the dark underbelly of social media. A number of Facebook moderators – those who spot and delete unspeakable content uploaded to the platform – are suing the company and one of its subcontractors in an Irish court, saying they suffered “psychological trauma” as a result of poor working conditions and a lack of proper training to prepare them for viewing some of the most horrific content seen anywhere online. “My first day on the job,” one of them, Sean Burke, reported, “I witnessed someone being beaten to death with a plank of wood with nails in it.” A few days later he “started seeing actual child porn”.

Facebook employs thousands of people like Burke worldwide, generally via subcontractors. All the evidence we have suggests that the work is psychologically damaging and often traumatic. The soothing tech narrative is that Facebook is spending all this money to ensure that our social-media feeds are clean and unperturbing – an example, supposedly, of corporate social responsibility. The question that is never asked is: why does Facebook allow anybody to post anything they choose – no matter how grotesque – on its platforms, when it has total control of those platforms? You know the answer: it involves growth and revenues, and the traumatisation of employees is just an unfortunate byproduct of its core business. They’re collateral damage.

Or take machine learning, the tech obsession du jour. Of late, engineers have discovered that “bias” is a big problem with that technology. Actually, it’s just the latest manifestation of GIGO – garbage in, garbage out – except now it’s BIBO: bias in, bias out. And there’s a great deal of sanctimonious huffing and puffing in the industry about it, accompanied by trumpeted determination to “fix” it. The trouble is that, as Julia Powles and Helen Nissenbaum pointed out in a recent scorching paper, “addressing bias as a computational problem obscures its root causes. Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate.”

But this idea – that bias is a problem for which there is no technological fix – is anathema to the tech industry, because it threatens to undermine the deterministic narrative that AI will be everywhere Real Soon Now and the rest of us will just have to get used to it.

Worse still (for the tech companies), it might give someone the idea that maybe some kinds of tech should actually be banned because they are societally harmful. Take facial recognition technology, for example. We already know that it is poor at recognising members of some ethnic groups, and researchers are trying to make it more inclusive. But in doing so they implicitly accept that the technology itself is acceptable.

That tacit acceptance is actually buying into the tech-deterministic narrative, though. The question we should be asking – as the legal scholar Frank Pasquale says – is whether some of these technologies should be outlawed, or at least licensed only for socially productive uses, much as radioactive isotopes are for medical purposes. And as regards some of the really dangerous applications of this stuff – for example face-classifying AI, which is already being explored (and, it seems, deployed in China) as a way of inferring sexual orientation, tendencies toward crime and so on just from images of faces – shouldn’t we be asking whether this kind of research should be allowed at all? And if anyone regards that as a pessimistic thought, then can I respectfully suggest that maybe they haven’t been paying attention?

What I’ve been reading

Listen up, libertarians
Capitalism needs the state more than the state needs it – a terrific essay on Aeon by a great economist, Dani Rodrik. It should be required reading in Silicon Valley.

His defects are manifest
Big tech’s big defector: the title of an interesting New Yorker profile of Roger McNamee, an early investor in Facebook who eventually saw the light and is now repenting.

More haste, less speed
Speed reading is for skimmers, slow reading is for scholars, according to David Handel on Medium.
