The big idea: Should we worry about artificial intelligence?

Could AI turn on us, or is natural stupidity a greater threat to humanity?

Ever since Garry Kasparov lost his second chess match against IBM’s Deep Blue in 1997, the writing has been on the wall for humanity. Or so some like to think. Advances in artificial intelligence will lead – by some estimates, in only a few decades – to the development of superintelligent, sentient machines. Movies from The Terminator to The Matrix have portrayed this prospect as rather undesirable. But is this anything more than yet another sci-fi “Project Fear”?


Some confusion is caused by two very different uses of the phrase artificial intelligence. The first sense is, essentially, a marketing one: anything computer software does that seems clever or usefully responsive – like Siri – is said to use “AI”. The second sense, from which the first borrows its glamour, points to a future that does not yet exist, of machines with superhuman intellects. That is sometimes called AGI, for artificial general intelligence.

How do we get there from here, assuming we want to? Modern AI employs machine learning (or deep learning): rather than programming rules into the machine directly, we allow it to learn by itself. In this way, AlphaZero, the chess-playing entity created by the British firm DeepMind (now part of Google), played millions of training matches against itself and then trounced Stockfish, the strongest conventional chess engine. More recently, DeepMind’s AlphaFold 2 was greeted as an important milestone in the biological field of “protein folding” – predicting the three-dimensional shapes of proteins – which might help in designing better drugs.
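To make “learning by itself” a little more concrete, here is a toy sketch of the self-play idea (illustrative Python only, not DeepMind’s method): an agent teaches itself the simple pile game Nim by playing thousands of games against itself and nudging a table of move values towards whatever won.

    # A minimal sketch of learning by self-play: the game is Nim, where players
    # take 1-3 stones from a pile and whoever takes the last stone wins.
    import random
    from collections import defaultdict

    Q = defaultdict(float)   # (stones_left, move) -> estimated value for the player to move
    EPSILON, ALPHA = 0.1, 0.1

    def choose(stones):
        moves = [m for m in (1, 2, 3) if m <= stones]
        if random.random() < EPSILON:                         # occasionally explore
            return random.choice(moves)
        return max(moves, key=lambda m: Q[(stones, m)])       # otherwise exploit what it knows

    def self_play_game():
        stones, history = 21, []
        while stones > 0:
            move = choose(stones)
            history.append((stones, move))
            stones -= move
        # the player who made the last move wins; credit alternates back through the game
        reward = 1.0
        for state, move in reversed(history):
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
            reward = -reward

    for _ in range(50_000):
        self_play_game()

    # The learned policy should now prefer leaving the opponent a multiple of 4 stones.
    print(max((1, 2, 3), key=lambda m: Q[(21, m)]))   # typically 1, i.e. 21 -> 20

Nobody programmed in the winning strategy of leaving multiples of four; after enough games against itself, the agent simply finds it.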

Machine learning works by training the machine on vast quantities of data – pictures for image-recognition systems, or terabytes of prose taken from the internet for bots that generate semi-plausible essays, such as GPT-2. But datasets are not simply neutral repositories of information; they often encode human biases in unforeseen ways. Recently, Facebook’s news feed algorithm asked users who had watched a news video featuring black men whether they wanted to “keep seeing videos about primates”. So-called “AI” is already being used in several US states to predict whether candidates for parole will reoffend, with critics arguing that the data such algorithms are trained on reflects historical bias in policing.
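How such bias propagates can be shown with a deliberately crude toy model (an illustration only, not any real system): if one group is policed more heavily, a model trained on re-arrest records will score that group as “riskier” even when the underlying behaviour of the two groups is identical.

    # A toy illustration of bias in, bias out: the label is re-arrest,
    # which reflects where policing was concentrated, not actual behaviour.
    import random
    random.seed(0)

    def make_record(group):
        reoffended = random.random() < 0.30            # true rate identical for both groups
        policing_rate = 0.9 if group == "A" else 0.3   # but group A is policed far more heavily
        rearrested = reoffended and random.random() < policing_rate
        return group, rearrested

    data = [make_record(random.choice("AB")) for _ in range(100_000)]

    # "Train" the simplest possible model: per-group re-arrest frequency.
    rate = {g: sum(r for grp, r in data if grp == g) / sum(grp == g for grp, _ in data)
            for g in "AB"}
    print(rate)   # group A scores roughly three times "riskier", despite identical behaviour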

Computerised systems (as in aircraft autopilots) can be a boon to humans, so the flaws of existing “AI” aren’t in themselves arguments against the principle of designing intelligent systems to help us in fields such as medical diagnosis. The more challenging sociological problem is that adoption of algorithm-driven judgments is a tempting means of passing the buck, so that no blame attaches to the humans in charge – be they judges, doctors or tech entrepreneurs. Will robots take all the jobs? That very framing passes the buck because the real question is whether managers will fire all the humans.

The existential problem, meanwhile, is this: if computers do eventually acquire some kind of god‑level self-aware intelligence – something DeepMind, for one, explicitly aims at (“our long-term aim is to solve intelligence” and build an AGI) – will they still be as keen to be of service? If we build something so powerful, we had better be confident it will not turn on us. For those seriously concerned, the argument goes that since this is a potentially extinction-level problem, we should devote resources now to guarding against it. The philosopher Nick Bostrom, who heads the Future of Humanity Institute at the University of Oxford, says that humans trying to build AI are “like children playing with a bomb”, and that the prospect of machine sentience is a greater threat to humanity than global heating. His 2014 book Superintelligence is seminal. A real AI, it suggests, might secretly manufacture nerve gas or nanobots to destroy its inferior, meat-based makers. Or it might just keep us in a planetary zoo while it gets on with whatever its real business is.

AI wouldn’t have to be actively malicious to cause catastrophe. This is illustrated by Bostrom’s famous “paperclip problem”. Suppose you tell the AI to make paperclips. What could be more boring? Unfortunately, you forgot to tell it when to stop making paperclips. So it turns all the matter on Earth into paperclips, having first disabled its off switch because allowing itself to be turned off would stop it pursuing its noble goal of making paperclips.
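The logic of the thought experiment is almost embarrassingly easy to write down (a toy rendering, nothing more): an agent that maximises expected paperclips, and nothing else, will rationally conclude that disabling its off switch is the better plan, because being switched off means fewer paperclips.

    # A toy rendering of the paperclip problem: maximise expected paperclips, full stop.
    def expected_paperclips(disable_off_switch, p_humans_intervene=0.9,
                            clips_per_day=1_000, days=365):
        p_shutdown = 0.0 if disable_off_switch else p_humans_intervene
        return (1 - p_shutdown) * clips_per_day * days

    best_plan = max([True, False], key=expected_paperclips)
    print("Disable the off switch?", best_plan)   # True: nothing in the objective says not to

The numbers are invented; the point is that nothing in the stated goal tells the agent that letting itself be switched off matters at all.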

That’s an example of the general “problem of control”, subject of AI pioneer Stuart Russell’s excellent Human Compatible: AI and the Problem of Control, which argues that it is impossible to fully specify any goal we might give a superintelligent machine so as to prevent such disastrous misunderstandings. In his Life 3.0: Being Human in the Age of Artificial Intelligence, meanwhile, the physicist Max Tegmark, co-founder of the Future of Life Institute (it’s cool to have a future-of-something institute these days), emphasises the problem of “value alignment” – how to ensure the machine’s values line up with ours. This too might be an insoluble problem, given that thousands of years of moral philosophy have not been sufficient for humanity to agree on what “our values” really are.

Other observers, though, remain phlegmatic. In Novacene, the maverick scientist and Gaia theorist James Lovelock argues that humans should simply be joyful if we can usher in intelligent machines as the logical next stage of evolution, and then bow out gracefully once we have rendered ourselves obsolete. In her recent 12 Bytes, Jeanette Winterson is refreshingly optimistic, supposing that any future AI will be at least “unmotivated by the greed and land-grab, the status-seeking and the violence that characterises Homo sapiens”. As the computer scientist Drew McDermott suggested in a paper as long ago as 1976, perhaps after all we have less to fear from artificial intelligence than from natural stupidity.

Further reading

Human Compatible: AI and the Problem of Control by Stuart Russell (Penguin, £10.99)

Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark (Penguin, £10.99)

12 Bytes: How We Got Here, Where We Might Go Next by Jeanette Winterson (Jonathan Cape, £16.99)

Contributor

Steven Poole

