If tech experts worry about artificial intelligence, shouldn’t you as well? | John Naughton

A recent study of 1,000 leaders in the technology sector found more fear than hope about the continuing growth of AI

Fifty years ago last Sunday, a computer engineer named Douglas Engelbart gave a live demonstration in San Francisco that changed the computer industry and, indirectly, the world. In the auditorium, several hundred entranced geeks watched as he used something called a “mouse” and a special keypad to manipulate structured documents and showed how people in different physical locations could work collaboratively on shared files, online.

It was, said Steven Levy, a tech historian who was present, “the mother of all demos”. “As windows open and shut and their contents reshuffled,” he wrote, “the audience stared into the maw of cyberspace. Engelbart, with a no-hands mic, talked them through, a calm voice from Mission Control as the truly final frontier whizzed before their eyes.” That 1968 demo inspired a huge new industry based on networked personal computers using graphical interfaces, in other words, the stuff we use today.

Engelbart was a visionary who believed that the most effective way to solve problems was to augment human abilities and develop ways of building collective intelligence. Computers, in his view, were “power steering for the mind” – tools for augmenting human capabilities – and this idea of augmentation has been the backbone of the optimistic narrative of the tech industry ever since.

The dream has become a bit tarnished in the last few years, as we’ve learned how data vampires use the technology to exploit us at the same time as they provide free tools for our supposed “augmentation”. The argument in favour varies from company to company and from application to application. Spreadsheets, word-processing, computer-assisted design (CAD) and project-planning tools are unquestionably a boon, for example. Google search can be seen as a memory prosthesis for humanity or an excuse for never retaining any factual knowledge. But it’s not clear what kind of cognitive augmentation – if any – is provided by Instagram or Facebook. And as for Twitter...

There is, however, one kind of recent tech development for which the augmentation argument is continually made: artificial intelligence. AI is unfailingly portrayed by its evangelists as a technology that really can amplify human capabilities. In its current manifestation, it's basically machine-learning, and its augmentation potential is often impressive. One thinks, for example, of how image-analysis software can rapidly process tens of thousands of retinal scans and reliably flag those that need the attention of an eye surgeon.

But there are also lots of cases where machine-learning produces perverse outcomes and biased predictions, and there's widespread concern about building a networked world around such a powerful but flawed technology. To try to get a balanced assessment of the risk, the Pew Research Center recently put the following question to a formidable panel of experts: “By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them?”

The answers are revealing and sobering. While all of the respondents were people who understood the empowering potential of the technology, the overwhelming impression one gets from them is one of concern. They see widespread use of AI as leading to a loss of human agency, as decision-making on key aspects of life is ceded to code-driven “black box” systems. They fear the potential for abuse in surveillance systems designed for profit (aka social media), effects that are already much in evidence. They see existential risks for many kinds of employment. While some expect new kinds of jobs to emerge, others worry about massive unemployment, widening economic divides and the social upheavals and populist uprisings that could be triggered by them.

Many of them perceive and welcome the augmentation potential of AI, but others fear that our increasing dependence on the technology will erode our capacity to think for ourselves, take independent action or interact effectively with others. And some foresee mayhem as the technology’s capacity to disrupt democratic processes increases (eg with “deep fake” video).

The first thing to say about this survey is that it is not a statistically representative opinion poll of the Mori/YouGov type. It is simply a serious attempt to canvass the opinions of a wide range of well-informed thinkers, many of whom are au fait with the tech industry and its core technologies and corporations. At the very least, their reflections provide a welcome contrast to the current frenzied boosterism about “AI everywhere” from a tech industry that sees ethics as occupational therapy for underemployed philosophers and other “losers”.

But most of all, it made me wonder what Engelbart would have made of the debate. “Someone once called me ‘just a dreamer’,” he once said. “That offended me, the ‘just’ part; being a real dreamer is hard work. And it really gets hard when you start believing in your dreams.” Especially when they turn into nightmares.

What I’m reading

Net gains
Benedict Evans’s “End of the Beginning” presentation at venture capital firm a16z’s annual technology conference. An astute and insightful take on the evolution of the internet to date by one of its shrewdest observers.

App antics
“Gaming the App Store”, a fascinating blog post by veteran app developer David Barnard, studies how Apple’s monopolistic control goes hand in hand with dereliction of duty.

Fact files
Truth Decay is an interesting project (and a neat pun) by the Rand Corporation.

Contributor

John Naughton

The Guardian
