Age checks, trolls and deepfakes: what’s in the online safety bill?

The legislation will place duties of care on tech companies to protect users from harmful content

The online safety bill returns to the House of Commons on Tuesday with the government pledging to introduce an important change: criminal liability for tech executives whose platforms persistently fail to protect children from harm online.

It is the latest alteration to a piece of legislation that has triggered debates about a range of issues, from free speech to dealing with trolls and proper age checking for pornography sites. Here is a quick run-through of the bill as it stands.

How does the bill work?

The cornerstone of the bill is the duties of care it will place on tech companies to protect users from harmful content. The legislation will apply to platforms that host user-generated content, which covers social media services such as Twitter, TikTok and Facebook, and search engines such as Google. Although many of these services are based outside the UK, if they are accessible to UK users then they are in the scope of the bill’s powers.

All tech firms covered by the bill will have to protect all users from illegal content. The sort of content that platforms will need to remove includes child sexual abuse material, revenge pornography, material selling illegal drugs or weapons, and terrorist content.

Tech platforms will also have a duty of care to keep children safe online. This will involve preventing children from accessing harmful content and ensuring that age limits on social media platforms – the minimum age is typically 13 – are enforced. Platforms will have to explain in their terms of service how they enforce these age limits and what technology they use to police them.

In relation to both of these duties, tech firms will have to carry out risk assessments detailing the threats their services might pose in terms of illegal content and keeping children safe. They will then have to explain how they will mitigate those threats – for example through human moderators or using artificial intelligence tools – in a process that will be overseen by Ofcom, the communications regulator. This is expected to come into force by the end of the year.

What are the punishments for companies under the legislation?

Ofcom will have a range of regulatory powers under the bill. At the top end, it will be able to impose fines of up to £18m, or 10% of global turnover, whichever is greater – a big number if the offender is a company such as Meta, which generated revenue of just under $118bn in 2021. In the most extreme cases, Ofcom will be able to have rogue sites blocked from operating by ordering payment providers, advertisers and internet service providers to stop working with them. Ofcom will also have the power to issue enforcement notices under the bill, telling companies and platforms to improve how they operate.

Can executives go to jail under the legislation?

Even before the government conceded to backbench rebels on Monday, tech executives faced the threat of a two-year jail sentence under the legislation if they hindered an Ofcom investigation or a request for information.

Now, they also face the threat of a two-year jail sentence if they persistently ignore Ofcom enforcement notices telling them they have breached their duty of care to children. In the face of tech company protests about criminal liability, the government is stressing that the new offence will not criminalise executives who have “acted in good faith to comply in a proportionate way” with their duties.

Nonetheless, it will sharpen the minds of social media executives. The new offence will target senior managers who “connive” in “ignoring enforceable requirements”.

Are there other criminal offences?

The bill will introduce a range of criminal offences for England and Wales. These include encouraging people to self-harm, sharing pornographic “deepfake” images, taking and sharing “downblousing” images, cyberflashing (sending an unsolicited sexual image), and sending or posting a message that conveys a threat of serious harm.

How does it deal with pornography and age verification?

If a platform publishes pornography, it will need to have “robust” processes in place to check that a user is not underage. How that is to be done is up to the platform – there are a number of tools that can be used to check a user’s age – but it will be vetted by Ofcom. The government has said any age assurance method used by pornography sites would have to protect users’ data, reflecting privacy campaigners’ concerns that requiring users of porn websites to log in could make it easier to collect – and leak – data on an individual’s viewing habits.

Will it protect adults from online trolls and abuse?

Under a previous iteration, the bill placed a duty of care on large platforms to address content that was harmful but not illegal. This alarmed free speech advocates on the Conservative backbenches and elsewhere, so it has been removed. Instead, tech firms will be required to remove certain types of “legal but harmful” content if it is already banned under their terms of service, under a clause that tries to ensure platforms pay more than just lip service to their content rules. Adults will also have the option of screening out certain types of harmful content if they so choose. This includes posts that are abusive, or incite hatred on the basis of race, ethnicity, religion, disability, sex, gender reassignment or sexual orientation.


Dan Milmo Global technology editor

The Guardian
