Social media companies in the US brace to battle onslaught of legal challenges

State and federal lawsuits and bills with far-reaching regulatory implications for TikTok, Meta and others come to a head this year

Social media companies in the United States are bracing themselves to battle an onslaught of new state and federal legislation and legal challenges with far-reaching regulatory implications this year.

The majority of US state legislatures have introduced or passed bills attempting to reform how social media giants moderate their content and increase security measures for American users.

Elsewhere on the legal front, the supreme court will hear no fewer than four high-profile cases against tech giants, ranging from liability in terrorist attacks to alleged censorship of conservative viewpoints on their platforms.

State and federal lawsuits, two of which were announced this month, also take aim at how social media apps and their highly effective algorithms negatively affect the mental health of American teenagers.

On 6 January, Seattle public schools and Kent school district filed a lawsuit against TikTok, Instagram, Facebook, YouTube and Snapchat, alleging they promote “harmful content to youth, such as pro-anorexia and eating disorder content”. In a statement, Seattle public schools said: “We cannot ignore the mental health needs of our students and the role that social media companies play.”

Only a couple of weeks later, Utah’s Republican governor, Spencer Cox, announced that the state plans to file a similar lawsuit, saying the yet-to-be-filed complaint will be aimed at protecting young people.

The concerns cited by public officials about the effects of social media on teenagers are not unfounded. In 2021, the Facebook data scientist turned whistleblower Frances Haugen leaked internal documents to the Wall Street Journal showing that teens who used Instagram experienced harm as a result of “social comparison, social pressure, and negative interactions with other people”.

In 2019, the company now called Meta – the parent of Facebook and Instagram – obtained market research showing that Instagram left 40% of American teenage users feeling they had to project a perfect image, and left them thinking they were unattractive or did not have enough money, the leaked documents revealed. One in five teens in Meta’s market research said Instagram made them feel worse about themselves, and teenagers reported the app exacerbated their existing mental health issues.

Meta’s CEO, Mark Zuckerberg, responded to the Wall Street Journal’s report about the leaked documents and Haugen’s congressional testimony by calling it a “mischaracterization of the research into how Instagram affects young people”. He pointed to a Facebook Newsroom response, penned by its vice-president and head of research, Pratiti RayChoudhury, which said: “It is simply not accurate that this research demonstrates Instagram is ‘toxic’ for teen girls. The research actually demonstrated that many teens we heard from feel that using Instagram helps them when they are struggling with the kinds of hard moments and issues teenagers have always faced.”

While pressure is mounting on public officials to legally address the harms social media causes to children, states have been waging a legislative war against social media platforms over content moderation for the past two years. Politico reported that 34 states introduced or passed more than 100 bills primarily attempting to ban censorship or restrict hate speech. As legislative sessions kick off this year, that number is expected to increase.

A California law that went into effect this January requires social media companies to disclose their hate speech, extremism and disinformation policies in their terms of service.

“Californians deserve to know how these platforms are impacting our public discourse, and this action brings much-needed transparency and accountability to the policies that shape the social media content we consume every day,” said the California governor, Gavin Newsom, after he signed the bill last fall.

Starting next year, tech companies will also have to provide data about how those policies are enforced in biannual reports to the California attorney general, Rob Bonta.

Legislation passed by conservative lawmakers in Texas and Florida argues that social media platforms are censoring rightwing political speech. Appellate courts struck down Florida’s law, arguing it violated the first amendment, but upheld the legislation in Texas with one judge saying it “chills censorship”. The supreme court narrowly ruled to temporarily block the Texas law last May.

This week, the supreme court asked the US solicitor general, Elizabeth Prelogar, to weigh in on whether states can stop social media companies from removing some forms of political rhetoric from their platforms. Because the court has asked for Prelogar’s opinion on the stalled cases, its ruling is expected to be delayed until the next term, which begins in October 2023.

In late February, the supreme court will hear arguments in two controversial cases – Gonzalez v Google and Twitter v Taamneh – both of which raise questions about whether social media platforms are liable for spreading Islamic State content that ultimately contributed to the 2015 Paris attacks and the 2017 Istanbul nightclub attack. Google has argued publicly that a ruling against the company would undermine Section 230 of the Communications Decency Act, harming “free expression online” and making the internet less safe from spam and offensive content.

This month, a federal ban took effect prohibiting TikTok on all government-issued devices unless employees are using the app for law enforcement or national security purposes. The ban came on the heels of public concerns raised by the FBI director, Christopher Wray, about China using TikTok to infiltrate American users’ cellphones, collect personal data and peddle influence. More than 20 states have followed suit, requiring TikTok to be permanently removed from state-issued devices, the Associated Press reported.

Like state officials’ concerns about the harms of social media to young people, federal and state governments’ heightened anxiety about TikTok’s security implications isn’t entirely baseless.

ByteDance, TikTok’s parent company, confirmed in December two China-based and two US-based employees tasked with investigating press leaks had improperly accessed the personal data of a BuzzFeed and a Financial Times reporter, CNN reported. ByteDance fired all four employees and TikTok’s CEO, Shou Chew, called the breach “unacceptable” and a misuse of the employees’ authority.

For some lawmakers, like the Florida senator Marco Rubio, banning TikTok on government-issued devices doesn’t go far enough.

“This isn’t about creative videos – this is about an app that is collecting data on tens of millions of American children and adults every day,” Rubio wrote in a statement on his website. “We know it’s used to manipulate feeds and influence elections. We know it answers to the People’s Republic of China. There is no more time to waste on meaningless negotiations with a CCP-puppet company. It is time to ban Beijing-controlled TikTok for good.”

ByteDance has argued that its US user data is held in the United States and Singapore, not China, and that the Chinese government has never asked the company to provide it with data.

MacKenzie Ryan

The Guardian
