We still need humans to identify sexually explicit images online – for now

Jeremy Hunt’s claim that technology could soon automatically spot and block ‘sexting’ among under-18s is a little premature, if not misconceived. But we still rely on real people to identify images of abuse online, and it’s no easy job

When Peter, an analyst at the Internet Watch Foundation (IWF), is on “hashing” duty, he might look at 1,000 images of child sexual abuse in a single day. His job is to filter them. Some of the photographs the IWF picks up on its trawls of the web, or that members of the public send to the organisation, fall outside criminal boundaries: one might, for example, show a toddler building a sandcastle. Others depict monstrous abuse. Sitting in an upstairs office in a Cambridge business park, with the blinds drawn as a precaution, Peter – one of 13 analysts at the IWF – dutifully clicks through the daily queue of images and videos, marking the difference. Every hour he takes a break. “Sometimes you see something that takes you by surprise,” says the former RAF intelligence analyst, “and you have to take a long sit-down.” But each photo he hashes as abusive – from Category C (indecent) to Category A (penetrative) – can swiftly be blocked wherever it appears on the public internet. That is why Peter, a father of two, does the job.

On Tuesday, Jeremy Hunt suggested it might not be necessary for much longer. Technology exists, he said, that can “identify sexually explicit images and prevent [them] being transmitted”; this could facilitate a complete bar on sexting for under-18s. Well, says Peter, he isn’t redundant yet. “It would be amazing,” he says, in a room across the hallway from where IWF staff have just finished a mindfulness session, “if there was a magic brush that could do this kind of job.” Almost all of the “hashing” process runs automatically. The IWF, along with many police forces, uses PhotoDNA, a service Microsoft makes freely available to them. Once an analyst such as Peter has set it in motion, the software takes a digital fingerprint of the image (the “hash”), and adds it to a list of 130,000 hashes the IWF has logged so far. Running the list against all the images uploaded to their platforms, Google, Facebook and Twitter – among others – locate and wipe out any replicas they may inadvertently be hosting. In the past, paedophiles could crop a photograph or change its file format to fool the hash. But since PhotoDNA was released in 2009, that has become harder. Analysts now spend less time on the same, endlessly recurring stock of images.
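
PhotoDNA’s algorithm is proprietary, but the idea of a fingerprint that survives re-encoding can be sketched. Below is a minimal Python illustration using a “difference hash”, a far simpler cousin of PhotoDNA; the file names and the matching threshold are assumptions for the example, not anything the IWF or Microsoft actually uses.

```python
# A minimal sketch of perceptual hashing, in the spirit of -- but much
# simpler than -- PhotoDNA (which is proprietary). Unlike a plain file
# checksum, a "difference hash" survives resizing and format changes.
# Requires the Pillow library (pip install Pillow).
from PIL import Image

def dhash(path, size=8):
    """Return a 64-bit perceptual hash of the image at `path`."""
    # Shrink to (size+1) x size greyscale pixels, discarding fine detail.
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            # Each bit records whether brightness rises or falls
            # between horizontally adjacent pixels.
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Two copies of the same image -- one re-saved as a JPEG, say -- will sit
# a small Hamming distance apart, so the match survives re-encoding.
# The threshold of 10, and both file names, are illustrative assumptions.
if hamming(dhash("original.png"), dhash("resaved.jpg")) <= 10:
    print("likely the same image")
```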

If there is any benefit to the process still relying on the interpretation of a human eye, it comes in the rare, overwhelming rush of a possible rescue. Shortly after Peter joined the IWF, in 2015, he saw photographs of a girl he did not recognise from the churn of imagery recycled from old videos. It seemed as if her actions were being prompted from outside the frame. “I shouted,” he recalls. Over the following hours, he and the rest of the team frantically mined the images for clues – the clothes on a handrail, the wallpaper – before passing the results to police. A 12-year-old was found and freed from the man who had groomed her online for years. It felt “amazing”.

Mostly, though, the analysts stay sane by learning how to switch off, by attending regular mandatory counselling sessions, and through an impressive, if slightly bug-eyed, sense of corporate cheer. One wall inside the hotline office is covered by a large and forcibly bright mural of Sir Giles Gilbert Scott’s red telephone boxes. On a noticeboard, under the label “pet’s corner”, hang goofy pictures of ducks and horses, next to an academic paper on compassion fatigue. Since nobody’s surname is used anywhere in the organisation (“a lot of people on the internet hate us,” says Chris, the hotline manager. “They think we’re ruining it”), staff choose pictures of up-and-at-’em icons – Tom Selleck, Steve McQueen, Penelope Pitstop – to represent themselves on the noticeboard. Hourly pauses are more than a welfare policy. Seeing horror after horror, Chris explains, may incline staff to read abuse into innocent photos.

Could intelligent, unfeeling machines take the job entirely out of human hands, as Hunt suggests? Historically, most image filtering – especially sensitive material involving sex and violence – has needed human guidance. In 2014, Wired magazine picked up on Facebook’s outsourced army of moderators in the developing world, who checked brutal footage against site guidelines, in return for $500 (£400) a month, and little or no support to cope with the inevitable burnout and trauma. It seemed likely to the Register, a slangy and sharp tech website, that the health secretary had simply misunderstood the field: “telling teenagers they are not allowed to do something in order to stop them from doing it has been a uniquely unsuccessful strategy for parents and governments alike throughout history”.

But machine-learning entrepreneurs were less quick to criticise. Though David Lissmyr, the founder of Sightengine, believes teenagers will always find a way to sext, he thinks that in theory, “you could create an app that would block the vast majority of images” in real time.

Before 2012, computer image recognition was laughably unreliable. On the messageboards of Y Combinator, a Silicon Valley incubator for start-ups, people swapped stories of software mistaking pastrami sandwiches for pornography. But in that year’s ImageNet competition, an annual geek-off in which teams compete to discern whether, for example, an image contains a tiger or not, one team, led by Alex Krizhevsky, “crushed the competition,” says Lissmyr. Rather than laboriously teaching an algorithm how to see the world – this is a cat, this a cloud – Krizhevsky’s team fed millions of images to a deep-learning program and, using “neural networks”, the program learned on its own. Sightengine’s equivalent, having examined hundreds of thousands of nipples, penises and labia, can tell a client in milliseconds if a photograph counts as “NSFW”. Humans are only involved in the most marginal cases – and these are fed back to the program, smartening it up further. Lately, Facebook, Google and Twitter have bought up a number of companies such as Lissmyr’s. The results are starting to show. In May, Facebook announced that its AI had reported more offensive images than humans had for the first time.
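
Sightengine’s model is not public, but the broad recipe such classifiers follow can be sketched: take a network pretrained on millions of ordinary images and retrain only its final layer on labelled examples. Below is a minimal Python sketch using PyTorch; the folder of “safe” and “nsfw” training images is a hypothetical stand-in for the kind of data a company like Sightengine would hold.

```python
# A sketch of the transfer-learning recipe behind classifiers like
# Sightengine's (whose actual model is proprietary). A network
# pretrained on ImageNet is reused; only its final layer is retrained.
# Requires PyTorch and torchvision.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical dataset layout: training_images/safe/*.jpg and
# training_images/nsfw/*.jpg -- an assumption for this example.
dataset = datasets.ImageFolder("training_images", preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False               # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2) # new two-class head

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                 # one pass, for illustration
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```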

Two days after Hunt aired his thoughts, another AI story broke. The team behind iCop (Identifying and Catching Originators in Peer-to-peer networks) announced that they had built a toolkit capable of spotting images of child abuse with a high degree of accuracy. “Instead of looking at thousands of images,” says Awais Rashid, of Lancaster University, analysts may only need to examine “tens”. Stopping teenagers from sexting will always be difficult. But, soon, technology might spare the eyes of more men and women such as Peter.
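
The iCop toolkit itself is not detailed here, but the triage idea Rashid describes is straightforward to sketch: score every image in a queue with a classifier and surface only the most suspect few for human review. In the Python sketch below, classifier_score is a hypothetical stand-in for a trained model such as the one outlined above.

```python
import random

def classifier_score(image_path):
    # Hypothetical stand-in for a trained model's estimate (0 to 1)
    # that an image is abusive; a real system would run something like
    # the classifier sketched above.
    return random.random()

def triage(image_paths, budget=10):
    # Rank the whole queue by the model's suspicion and pass only the
    # top `budget` items to a human analyst -- "tens" rather than
    # thousands of images.
    return sorted(image_paths, key=classifier_score, reverse=True)[:budget]

# Example: an analyst reviews 10 images out of a queue of 5,000.
queue = [f"image_{i}.jpg" for i in range(5000)]
for path in triage(queue):
    print("review:", path)
```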

Memphis Barker
