Rule by robots is easy to imagine – we’re already victims of superintelligent firms | John Naughton

The ruthless behaviour of corporations gives us some idea of what we need to avoid in a future run by machines

In 1965, the mathematician I J “Jack” Good, one of Alan Turing’s code-breaking colleagues during the second world war, started to think about the implications of what he called an “ultra-intelligent” machine – ie “a machine that can surpass all the intellectual activities of any man, however clever”. If we were able to create such a machine, he mused, it would be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control”.

Note the proviso. Good’s speculation has lingered long in our collective subconscious, occasionally giving rise to outbreaks of fevered speculation. These generally focus on two questions. How long will it take us to create superintelligent machines? And what will it be like for humans to live with – or under – such machines? Will they rapidly conclude that people are a waste of space? Does the superintelligent machine pose an existential risk for humanity?

The answer to the first question can be summarised as “longer than you think”. And as for the second question, well, nobody really knows. How could they? Surely we’d need to build the machines first and then we’d find out. Actually, that’s not quite right. It just so happens that history has provided us with some useful insights into what it’s like to live with – and under – superintelligent machines.

They’re called corporations, and they’ve been around for a very long time – since about 1600, in fact. Although they are powered by human beings, they are in fact nonhuman entities to which our legal systems grant the status of legal personhood. We can therefore regard them as artificial superintelligences because they possess formidable capacities for rational behaviour, reasoning, perception and action. And they have free will: they can engage in purposeful behaviour aimed at achieving self-determined goals. They possess and deploy massive resources of financial capital and human expertise. And they are, in principle at least, immortal: they can have life spans that greatly exceed those of humans, and some are capable of surviving catastrophes that kill millions of people. Just think of how many of the big German corporations of the 1930s – companies such as Thyssen, BASF, Mercedes, Siemens, Bosch, Volkswagen – are still prospering today.

So if corporations are the superintelligences de nos jours, what can that tell us about living with superintelligent machines? On the positive side, such entities are capable of accomplishing astonishing things – from building a new city, road or rail network, to indexing the world wide web, connecting 2.24 billion people, scanning all the world’s books, launching heavy rockets into space (and bringing them back safely), etc.

But these superintelligent entities have other characteristics too. The most disturbing one is that they are intrinsically sociopathic – they are AIs that stand apart from the rest of society, existing for themselves and only for themselves, caring nothing for the norms and rules of society, and obeying only the letter (as distinct from the spirit) of the law.

That is not to say that corporations don’t regularly dissemble, proclaiming their “corporate social responsibility”, ethical standards, environmental awareness or the “values” implicit in their brands. They do. For the most part, though, this is just cant, designed to burnish the public image of the corporation.

Artificial Intelligence has various definitions, but in general it means a program that uses data to build a model of some aspect of the world. This model is then used to make informed decisions and predictions about future events. The technology is used widely, to provide speech and face recognition, language translation, and personal recommendations on music, film and shopping sites. In the future, it could deliver driverless cars, smart personal assistants, and intelligent energy grids. AI has the potential to make organisations more effective and efficient, but the technology raises serious issues of ethics, governance, privacy and law.
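The pattern described above – use data to build a model of some aspect of the world, then use the model to make predictions – can be sketched in a few lines. This is an illustrative toy, not any particular company’s system: the data points and the choice of a straight-line model are invented for the example.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b: 'building a model' from data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Invented observations, e.g. an input measurement vs. an outcome we care about.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

a, b = fit_line(xs, ys)          # the learned "model of the world"
prediction = a * 5.0 + b         # an informed prediction for an unseen input
```

The real systems behind speech recognition or recommendations use vastly larger models and datasets, but the loop is the same: fit parameters to observed data, then apply the fitted model to new cases.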

The interesting thing about the tech companies is that, until recently, we failed to notice that they were just corporations too. At the beginning, we were entranced by their young founders with their open-eyed “missions” to avoid being evil, enable people to broadcast themselves, connect the world, organise its information and build global communities. We were likewise seduced by their colourful, playful workplaces, free gourmet food, on-site massages and prevailingly hipster ethos, so didn’t notice that under all that gloss there lurked ruthless capitalist machines intent on harvesting as much data on our daily existence as they could.

Fortunately, after two years of scandals, the scales are now beginning to fall from our eyes and we are seeing these outfits for what they are: mere corporations. Anyone who has been on the receiving end of a tech company’s rage – as this newspaper was on the Friday night that we broke the Cambridge Analytica story – would be hard put to say what the difference is between Facebook and, say, a tobacco company whose cover has been blown. Which is why the recent revelations by the New York Times that Facebook had been employing a slimy PR firm to discredit the company’s critics by linking them to George Soros – a long-time target of antisemitic conspiracy theories – no longer came as a shock. When you’re a sociopathic corporation, the ends always justify the means. So the challenge we will one day face is whether we can design superintelligent machines that are better behaved.

What I’m reading

Brain on a stick
The website New Atlas reports on the commodification of machine-learning technology. Intel is now selling a plug-in neural network on a USB stick.

Where’s the beef?
“I Found the Best Burger Place in America. And Then I Killed It”. An intriguing Thrillist essay by Kevin Alexander.

I hear where you’re coming from…
The Intercept reports that Amazon has a patent for recognising accents. Good news for spooks. Less good for privacy.

Contributor

John Naughton

The Guardian
