Australian Media Watchdog Gains New Powers to Target Media Misinformation

Social media apps on a smartphone in this file photo. Chandan Khanna/AFP via Getty Images
Lis Wang

Australia’s media regulator will gain new powers to impose a compulsory code of conduct to target misinformation and disinformation on digital platforms.

The Australian Communications and Media Authority (ACMA) will be given new information-gathering and record-keeping powers to respond to misinformation and disinformation in a federal effort to create transparency around major digital platforms.

The new information-gathering powers will be similar to those that already exist for the telecommunications and broadcasting sectors, but the ACMA will also gain the ability to register an enforceable industry code to replace the voluntary, self-regulatory code that digital platforms have adopted.

Federal Minister for Communications Michelle Rowland announced on Jan. 20 that the new powers mark a major step forward in addressing the rapid increase in online misinformation and disinformation.

“The new framework will focus on systemic issues which pose a risk of harm on digital platforms, rather than individual pieces of content posted online,” Rowland said in the statement.

“Digital platforms will continue to be responsible for the content they host and promote to users.

“In balancing freedom of expression with the need to address online harm, the code and standard-making powers will not apply to professional news and authorised electoral content, nor will the ACMA have a role in determining what is considered truthful.”

The Albanese government will undertake public and industry consultation and release draft legislation in the first half of 2023, with a Bill to be introduced to Parliament later in the year.

“The key here always is about keeping Australians safe, and we know that, unfortunately, misinformation has the potential to cause great harms, including harms to social order and threats to our democracy,” Rowland told ABC News in an interview.

Code of Practice for Disinformation and Misinformation

The ACMA currently oversees the Australian Code of Practice on Disinformation and Misinformation, which is administered by the Digital Industry Group Inc. (DIGI).

DIGI currently counts eight major technology companies, Adobe, Apple, Google, Meta, Microsoft, Redbubble, TikTok, and Twitter, which are also the code's founding signatories. These companies have adopted the code's mandatory commitments and nominated additional opt-in commitments through public disclosures on the DIGI website.

DIGI has welcomed the new misinformation and disinformation oversight powers granted to the ACMA.

Managing Director Sunita Bose said in a statement on Jan. 20, “DIGI is committed to driving improvements in the management of mis- and disinformation in Australia, demonstrated through our track record of work with signatory companies to develop and strengthen the industry code.”

“We welcome that this announcement aims to reinforce DIGI’s efforts and that it formalises our long-term working relationship with the ACMA in relation to combatting misinformation online.”

Under the code of practice, launched by DIGI on Feb. 22, 2021, participating companies release annual transparency reports detailing their efforts under the code, which help track online misinformation and disinformation in Australia over time.

On Dec. 22, 2022, DIGI released an updated code in response to stakeholder feedback received through a planned review of the code.

“This code is an important safeguard for Australians against the harms that arise from mis- and disinformation. DIGI is committed to the code’s continued improvement over time in response to the evolving digital environment and feedback from the community,” Bose said in a statement.

“As mainstream platforms get better in their approaches to mis- and disinformation, it’s likely to proliferate elsewhere online. That’s why we’re also making changes today to make it easier for smaller companies to adopt the code.”

AI to Target Misinformation

Facebook and its parent company, Meta, currently use artificial intelligence (AI) algorithms to detect unwanted content, such as misinformation and hate speech, with high accuracy.

Rowland said that the regulator needs to be empowered to ensure platforms such as Twitter and Facebook follow the code.

“They include artificial intelligence and consumer complaints. But, again, we need to make sure that all of those elements are working properly. And ultimately, this is about making sure that misinformation and disinformation is kept to a minimum,” Rowland told ABC News.

Facebook has implemented a range of policies and products to target misinformation on the platform, including adding warnings and more context to content rated by third-party fact-checkers, reducing the distribution of that content, and removing posts or comments that contain misinformation.

However, in recent years Facebook, along with other Big Tech companies such as Google and Twitter, has faced criticism from conservatives who say their voices have been suppressed, an accusation the companies deny.

Current AI tools can both flag certain posts and comments for review and automatically find new posts and comments that are similar to ones previously identified as misinformation.
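
As a rough illustration only, and not Meta's actual code, the Python sketch below shows how similarity-based flagging can work in principle: a new post is flagged for review when its embedding sits close to posts already labelled as misinformation. The library, model name, threshold, and example posts are all assumptions made for the demonstration.

```python
# A minimal sketch, not Meta's system: flag new posts that are semantically
# close to examples previously labelled as misinformation.
# Assumes the sentence-transformers and numpy packages are installed; the model
# name, threshold, and example posts are illustrative placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical posts previously rated false by third-party fact-checkers.
known_misinfo = [
    "Miracle cure X eliminates the virus overnight",
    "Voting machines secretly changed millions of ballots",
]
known_vecs = model.encode(known_misinfo, normalize_embeddings=True)

def flag_for_review(post: str, threshold: float = 0.8) -> bool:
    """Flag a post if it is semantically close to known misinformation."""
    vec = model.encode([post], normalize_embeddings=True)[0]
    # With normalized embeddings, cosine similarity is a plain dot product.
    similarity = float(np.max(known_vecs @ vec))
    return similarity >= threshold

print(flag_for_review("This miracle cure gets rid of the virus in one night"))  # likely True
print(flag_for_review("Sydney will be sunny with a top of 25C today"))          # likely False
```

In practice, anything flagged this way would typically go to human reviewers or fact-checkers rather than being acted on automatically.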

Facebook has also launched SimSearchNet++, an image-matching model that operates on images uploaded to Facebook and Instagram and is part of Meta’s end-to-end image indexing and matching system.
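
SimSearchNet++ itself is not public, but the general idea of matching re-uploaded or slightly altered copies of a known image can be illustrated with a much simpler stand-in technique, perceptual hashing. In the sketch below, the file names and distance threshold are illustrative assumptions, not details of Meta's system.

```python
# A minimal sketch using perceptual hashing, a simpler stand-in for an
# image-matching model: near-duplicate images produce similar hashes.
# Assumes the Pillow and imagehash packages are installed; file names and the
# distance threshold are illustrative placeholders.
import imagehash
from PIL import Image

# Hash of an image previously labelled by fact-checkers (hypothetical file).
known_hash = imagehash.phash(Image.open("debunked_meme.jpg"))

def matches_known_image(path: str, max_distance: int = 8) -> bool:
    """Return True if the uploaded image is a near-duplicate of the known one."""
    candidate_hash = imagehash.phash(Image.open(path))
    # Subtracting two hashes gives the Hamming distance between them.
    return (candidate_hash - known_hash) <= max_distance

print(matches_known_image("new_upload.jpg"))
```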

Lis Wang
Author
Lis Wang is an Australia-based reporter covering a range of topics including health, culture, and social issues. She has a background in design. Lis can be contacted at [email protected]