Australia’s updated misinformation code still fails to tackle large-scale group messaging and needs tougher requirements for digital media companies to report on harm reduction, according to the media regulator.
The Australian Communications and Media Authority said it welcomed the new voluntary code of practice on disinformation and misinformation, released in late December, but signalled it would continue to push for powers to compel social media companies to hand over information about how they are combating misinformation and dealing with complaints.
Acma’s push was endorsed by the Morrison government in March 2022, and by the new communications minister, Michelle Rowland, who suggested in June that social media companies could soon be forced to turn over data on posts and audience figures so the government can decide whether to tighten laws on misinformation.
Online misinformation in Australia is self-regulated, with the tech industry's peak body, the Digital Industry Group Inc (Digi) – whose members include Google, Apple, Meta, Twitter and TikTok – responsible for drawing up the voluntary code.
That code was updated last month to redefine harm as communication containing a “serious and credible” threat, dropping the requirement that the harm must be “imminent”.
It also added a commitment to allow users to access “general information about … use of recommender systems,” such as recommended tweets or TikTok’s For You page, as well as access to “options related to [their] content”.
The code also removed the requirement for transparency reporting from services with fewer than 1 million monthly active users in Australia, a change it said would encourage greater participation by smaller platforms.
In a statement to Guardian Australia, Acma noted “several improvements” in the revised code and praised changes that provide greater transparency about where users can go to make a complaint.
“However, the revised code does not address all the Acma’s concerns outlined in our submission … including the development of a more robust reporting framework and the expansion of the code to cover the propagation of mis- and disinformation on messaging services that facilitate large-scale group messaging,” the statement said.
Mass or orchestrated direct messages have played a role in spreading false child abduction rumours in India via WhatsApp, and in the death tax scare campaign via Facebook Messenger at Australia’s 2019 election.
Acma said it would review the revised code before providing further advice to the government but noted that in 2021 it had asked for “stronger regulatory oversight over platform activities and recommended new regulatory powers”.
The social media companies that are signatories to the code publish reports on their efforts to combat misinformation, but Acma has said it wants formal information-gathering powers, including the ability to request Australia-specific data on the effectiveness of measures to address disinformation and misinformation. It has also called for “reserve powers” to introduce binding rules and codes of conduct.
“We continue to support the need for these powers to allow the Acma to take further action if required,” it said.
In December Digi’s managing director, Sunita Bose, said the code was “an important safeguard for Australians against the harms that arise from mis- and disinformation” and Digi was committed to its “continued improvement”.
“We’ve closely examined feedback and made updates that strengthen the code in a range of areas,” she said.
“As mainstream platforms get better in their approaches to mis- and disinformation, this content and behaviour is likely to proliferate elsewhere online. That’s why we’re also making changes today to make it easier for smaller companies to adopt the code.”