The UK’s AI safety summit opens at Bletchley Park this week, and is the passion project of Rishi Sunak: a prime minister desperate for a good news story as his government looks down the barrel of a crushing election defeat.
Sunak appears to want progress on AI to become his lasting legacy. Last week, he delivered a speech about the risks of AI if weaponised by terrorists and cybercriminals, and published a series of documents on “frontier AI”, an industry term for generative AI tools such as ChatGPT and DALL-E. He even unveiled a UK AI safety institute.
The message was clear. The slick – albeit trailing badly in the polls – Stanford MBA grad who likes to holiday in California had, to use a favoured phrase of his, “got to grips” with the problem. The British people, according to Sunak, “should have peace of mind that we’re developing the most advanced protections for AI of any country in the world”.
And now the summit. Sunak has lured 100 leading lights in AI to Bletchley Park, including representatives of the world’s biggest tech companies and political leaders from across the globe. But – and this is a big but – there are precious few civil society representatives, much to the annoyance of the more than 100 signatories of an open letter published on Monday, who warn the meeting will achieve little with such a narrow guest list.
It’s hard to disagree with them. The agenda for the summit goes heavy on the existential risk of a Terminator-style AI gaining super-intelligent sentience. But many of the academics and campaigners who have been studying the space for longer than it has been Sunak’s hobbyhorse, and who will only be watching the livestream because they weren’t invited to the party, say that risk is a straw man designed to distract from the bigger issues inherent in AI.
There are more pressing problems, they warn. One is misrepresentation of and bias against minorities. Type “doctor” or “CEO” into a generative AI image creator and you’ll be shown a row of middle-aged white male faces. And with the government saying it wants police forces to integrate surveillance AI into their operations, those issues of representation have real-world ramifications.
The other hole in the event’s agenda is AI’s environmental impact: the technology is a massive drain on our planet’s natural resources. AI’s power consumption is likely to eclipse that of many large countries within a matter of years, and yet the issue is paid only lip service in the discussion paper.
The terminology used in the materials promoting the event, with its mention of frontier AI, feels heavily skewed towards the status quo. It echoes the name of the Frontier Model Forum, a talking shop for the tech industry hastily put together to give the impression the sector can police itself and so ward off regulation. The use of industry language suggests Sunak will remain supine in the face of big tech’s latest innovation.

In many ways, it is no surprise. The prime minister has made no bones about the fact that he wants to encourage tech companies to develop AI within the UK, hoping to reap the benefits it can offer the economy. He has also hinted he won’t nag them too much about safety. In fact, Sunak is so keen to buddy up to the tech representatives that he’s hanging out with Elon Musk on X, formerly Twitter, in a livestreamed event after the conclave.
I’ve spent the last year speaking to experts in the field for a book, to be published next year, on the enormous impact this wave of AI will have on our lives. And in the past month, I have seen UK government representatives boast about the central role Britain will play in regulating the technology in the years to come. I sat in a conference hall in Amsterdam watching Viscount Camrose, the UK minister for AI and intellectual property, call the summit “historic”. Last week I watched a livestream of the deputy prime minister, Oliver Dowden, hyping up the event.
But I’ve seen who’s on the tech companies’ side of the table. I’ve seen who’s on the government’s side. And I’ve seen the fierce intelligence of the people left out of the summit, out in the cold this November. I’m not holding my breath for positive results, or for a new AI accord that meets the challenges we face.
Chris Stokel-Walker is the author of How AI Ate the World, to be published in May 2024