Artificial intelligence, robots and a human touch | Letters

Deborah O’Neill on the failings of automation at Tesla and elsewhere, and Matt Meyer and Nick Lynch on the House of Lords AI select committee report

Elon Musk’s comment that humans are underrated (Humans replace robots at flagging Tesla plant, 17 April) doesn’t come as much of a surprise, even though his company is at the forefront of the technological revolution. Across industries, CEOs are wrestling with the balance between humans and increasingly cost-effective and advanced robots and artificial intelligence. However, as Mr Musk has discovered, getting a machine to cover every possibility creates a large web of interconnected elements that can overcomplicate the underlying problem. This is why so many organisations fail when they try to automate everything they do. Three key mistakes I see time and again in these situations are missing the data basics, applying the wrong strategy, and losing the human touch.

There are some clear cases where automation works well: low-value, high-repetition tasks, or even complex ones where additional data will give a better outcome – for example, using medical-grade scanners on mechanical components to identify faults not visible to the human eye. But humans are better at reacting to unlikely, extreme, or unpredictable edge cases – for example, being aware that a music festival has relocated and extra cider needs to go to stores near the new venue rather than the previous location.

Regardless of industry, it’s only by maintaining a human touch – thinking and seeing the bigger picture – that automation and AI can add the most value to businesses.
Deborah O’Neill
Partner, Oliver Wyman

• The House of Lords report (Cambridge Analytica scandal ‘highlights need for AI regulation’, theguardian.com, 16 March) outlining the UK’s potential to be a global leader in artificial intelligence – and its calls for governmental support of businesses in the field and education to equip people to work alongside AI in the jobs of the future – should be welcomed for two reasons. First, it recognises the potential of UK-based AI companies to benefit the economy. Supporting these fast-growing companies to ensure that they continue to scale – and eventually exit – here should be a strategic priority, particularly at a time when a new generation of fast-growth providers, such as Prowler.io and Benevolent AI in life sciences, and ThoughtRiver in legal tech, is emerging to build on an impressive track record of AI innovation in the UK, from Alan Turing to DeepMind.

Second, it acknowledges that AI can contribute significantly to businesses’ competitive advantage – a view that too few UK businesses seem to appreciate at a time when media coverage of the topic is dominated by scaremongering about job losses, security threats, ethics, and bias. It’s refreshing to see a more positive narrative about AI and the workplace starting to emerge. What we now need from the business world is openness to the opportunities that AI creates, continuing and expanding on the positivity of this report, and leadership in sharing successes in this area that others can learn from.
Matt Meyer
CEO, Taylor Vinters

• The announcement from the House of Lords that Britain must “lead the way” on the regulation of artificial intelligence (AI) highlights the current climate of concern around the ways that AI could impact society – in particular, fears of weaponised AI used by militaries and other unethical uses. But there are many other applications where “ethical” AI is crucial – in making accurate medical diagnoses, for example.

There is no doubt that AI will transform how society operates, and that improper use needs to be safeguarded against. However, creating ethical AI algorithms will take more than just an announcement. It will require far greater collaboration between governments, industry, and technology experts. By working with those who understand AI, regulators can put in place standards that protect us while ensuring AI can augment humans safely, so that we can still reap its full potential.
Dr Nick Lynch
The Pistoia Alliance

