Rishi Sunak’s AI safety summit appears slick – but look closer and alarm bells start ringing | Chris Stokel-Walker

The prime minister wants progress on this tech to be his legacy, but in truth he is failing to equip us for the challenges it brings

The UK’s AI safety summit opens at Bletchley Park this week, and is the passion project of Rishi Sunak: a prime minister desperate for a good news story as his government looks down the barrel of a crushing election defeat.

Sunak appears to want progress on AI to become his lasting legacy. Last week, he delivered a speech about the risks of AI if weaponised by terrorists and cybercriminals, and published a series of documents on “frontier AI”, an industry term for generative AI tools such as ChatGPT and DALL-E. He even unveiled a UK AI safety institute.

The message was clear. The slick – albeit very behind in the polls – Stanford MBA grad who likes to holiday in California had, to use a favoured phrase of his, “got to grips” with the problem. The British people, according to Sunak, “should have peace of mind that we’re developing the most advanced protections for AI of any country in the world”.

And now the summit. Sunak has lured 100 leading lights from the world of AI to Bletchley Park, including representatives from the world’s biggest tech companies and leaders from across the globe. But – and this is a big but – there are precious few civil society representatives, much to the annoyance of more than 100 signatories of an open letter published on Monday, who have warned the meeting will achieve little with such a narrow guest list.

It’s hard to disagree with them. The agenda for the summit goes heavy on the existential risks of a Terminator-style AI gaining super-intelligent sentience. But many of the academics and campaigners who have been studying the space for longer than it has been Sunak’s hobbyhorse, and who will only be watching the livestream because they weren’t invited to the party, say that risk is a straw man designed to distract from the bigger issues inherent in AI.

There are more pressing problems, they warn. One is misrepresentation and bias against minorities. Type “doctor” or “CEO” into a generative AI image creator and you’ll be shown a row of middle-aged, white male faces. And with the government saying it wants police forces to integrate surveillance AI into their operations, those issues of representation have real-world ramifications.

The other hole in the event’s agenda is AI’s environmental impact, which places a massive drain on our planet’s natural resources. AI’s power consumption is likely to eclipse that of many large countries within a matter of years, and yet the issue is paid only lip service in the discussion paper.

The terminology used in the materials to promote the event, with its mention of frontier AI, feels heavily skewed towards the status quo. It echoes the name of the Frontier Model Forum, a talking shop hastily put together by the tech industry to make it look as if it is self-policing, and so ward off regulation. The use of industry language suggests Sunak will remain supine in the face of big tech’s latest innovation. In many ways, it is no surprise. The prime minister has made no bones about the fact he wants to encourage tech companies to develop AI within the UK, hoping to reap the benefits it can offer the economy. He has also hinted he won’t nag them too much about safety. In fact, Sunak is so keen to buddy up to the tech representatives that he’s hanging out with Elon Musk on X, formerly Twitter, in a livestreamed event after the conclave.

I’ve spent the last year speaking to experts in the field for a book, to be published next year, on the enormous impact this wave of AI will have on our lives. And in the past month, I have seen UK government representatives boast about the central role Britain will have in regulating the technology in the years to come. I sat in a conference hall in Amsterdam watching Viscount Camrose, the UK minister for AI and intellectual property, call the summit “historic”. Last week I watched a livestream of the deputy prime minister, Oliver Dowden, hyping up the event.

But I’ve seen who’s on the tech companies’ side of the table. I’ve seen who’s on the government’s side. And I’ve seen the fierce intelligence of the people left out of the summit, in the cold this November. I’m not holding my breath for positive results and a new AI accord that meets the challenges we face.

  • Chris Stokel-Walker is the author of How AI Ate the World, to be published in May 2024

