Human Compatible by Stuart Russell review – AI and our future

Creating machines smarter than us could be the biggest event in human history – and the last

Here’s a question scientists might ask more often: what if we succeed? That is, how will the world change if we achieve what we’re striving for? Tucked away in offices and labs, researchers can develop tunnel vision, the rosiest of outlooks for their creations. The unintended consequences and shoddy misuses become afterthoughts – messes for society to clean up later.

Today those messes spread far and wide: global heating, air pollution, plastics in the oceans, nuclear waste and babies with badly rewritten DNA. All are products of neat technologies that solve old problems by creating new ones. In the inevitable race to be first to invent, the downsides are dismissed, unexplored or glossed over.

In 1995, Stuart Russell wrote the book on AI. Co-authored with Peter Norvig, Artificial Intelligence: A Modern Approach became one of the most popular course texts in the world (Norvig worked for Nasa; in 2001, he joined Google). In the final pages of the last chapter, the authors posed the question themselves: what if we succeed? Their answer was hardly a ringing endorsement. “The trends seem not to be too terribly negative,” they offered. A lot has happened since: Google and Facebook for starters.

In Human Compatible, Russell returns to the question and this time does not hold back. The result is surely the most important book on AI this year. Perhaps, as Richard Brautigan’s poem has it, life is good when we are all watched over by machines of loving grace. But Russell, a professor at the University of California, Berkeley, sees darker eventualities. Creating machines that surpass our intelligence would be the biggest event in human history. It may also be the last, he warns. Here he makes the convincing case that how we choose to control AI is “possibly the most important question facing humanity”.

Russell has picked his moment well. Tens of thousands of the world’s brightest minds are now building AIs. Most work on one-trick ponies – the “narrow” AIs that process speech, translate languages, spot people in crowds, diagnose diseases, or whip people at games from Go to StarCraft II. But these are a far cry from the field’s ultimate goal: general purpose AIs that match, or surpass, the broad-based brainpower of humans.

It is not a ludicrous ambition. From the start, DeepMind, the AI group owned by Alphabet, Google’s parent company, set out to “solve intelligence” and then use that to solve everything else. In July, Microsoft signed a $1bn contract with OpenAI, a US outfit, to build an AI that mimics the human brain. It is a high stakes race. As Vladimir Putin said: whoever becomes the leader in AI “will become the ruler of the world”.

Russell doesn’t claim we are nearly there. In one section he sets out the formidable problems computer engineers face in creating human-level AI. Machines must know how to turn words into coherent, reliable knowledge; they must learn how to discover new actions and order them appropriately (boil the kettle, grab a mug, toss in a teabag). And like us, they must manage their cognitive resources so they can reach good decisions fast. These are not the only hurdles, but they give a flavour of the task ahead. Russell suspects it will keep researchers busy for another 80 years, but stresses the timing is impossible to predict.

Even with apocalypse camped on the horizon, this is a wry and witty tour of intelligence and where it may take us. And where exactly is that? A machine that masters all the above would be a “formidable decision maker in the real world”, Russell says. It would absorb vast amounts of information from the internet, TV, radio, satellites and CCTV and with it gain a more sophisticated understanding of the world and its inhabitants than any human could ever hope for.

What could possibly go right? In education, AI tutors would maximise the potential of every child. In medicine, AIs would master the vast complexity of the human body, letting us banish disease. As digital personal assistants they would put Siri and Alexa to shame: “You would, in effect, have a high-powered lawyer, accountant, and political advisor on call at any time.”

And what of the downsides? Without serious progress on AI safety and regulation, Russell foresees messes aplenty and his chapter on misuses of AI is grim reading. Advanced AI would hand governments such extraordinary powers of surveillance, persuasion and control that “the Stasi will look like amateurs”. And while Terminator-style killer robots are not about to eradicate humanity, drones that select and kill individuals based on their faceprints, skin colour or uniforms are entirely feasible. As for jobs, we may no longer make a living by providing physical or mental labour, but we can still supply our humanity. Russell notes: “We will need to become good at being human.”

What’s worse than a society-destroying AI? A society-destroying AI that won’t switch off. It’s a terrifying, seemingly absurd prospect that Russell devotes much time to. The idea is that smart machines will suss out, as per HAL in 2001: A Space Odyssey, that goals are hard to achieve if someone pulls the plug. Give a superintelligent AI a clear task – to make the coffee, say – and its first move will be to disable its off switch. The answer, Russell argues, lies in a radical new approach where AIs have some doubt about their goals, and so will never object to being shut down. He moves on to advocate “provably beneficial” AI, whose algorithms are mathematically proven to benefit their human users. Suffice to say this is a work in progress. How will my AI deal with yours?

Let’s be clear: there are plenty of AI researchers who ridicule such fears. After the philosopher Nick Bostrom highlighted potential dangers of general purpose AI in Superintelligence (2014), a US thinktank, the Information Technology and Innovation Foundation, gave its Luddism award to “alarmists touting an artificial intelligence apocalypse”. This was indicative of the dismal debate around AI safety, which is on the brink of descending into tribalism. The danger that comes across here is less an abrupt destruction of the species, more an inexorable enfeeblement: a loss of striving and understanding, which erodes the foundations of civilisation and leaves us “passengers in a cruise ship run by machines, on a cruise that goes on forever”.

Human Compatible is published by Allen Lane (£25). To order a copy go to or call 020-3176 3837. Free UK p&p over £15, online orders only. Phone orders min p&p of £1.99.


Ian Sample

The Guardian
