Brace for the New ‘Mega-Censor’ Who Will Determine What Is Correct and True

If there is any doubt as to ACMA’s role as a state censor, impending laws say government information cannot be labelled misinformation or disinformation.
The icons of mobile apps are seen on the screen of a smartphone in New Delhi, India, on May 26, 2021. (Sajjad Hussain/AFP via Getty Images)
Graham Young
Commentary

With various Australia Day arguments circulating, the question arises: what does it mean to be Australian?

Is Australia a place, or is it a set of ideas and sensibilities?

One piece of legislation and one set of regulations might end those debates forever, as they will censor everything so that we must all agree with the government.

The legislation is the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023, and the regulations are the Draft Online Safety (Relevant Electronic Services—Class 1A and Class 1B Material) Industry Standard 2024.

Both are under the control of the Australian Communications and Media Authority (ACMA).

Originally a simple regulator, formed from the merger of the Australian Broadcasting Authority and the Australian Communications Authority in 2005, ACMA is morphing into a mega-censor, capable of directing or curtailing public political discussion and eavesdropping without a warrant on those merely suspected of potential terrorism and crimes.

The ‘Misinformation and Disinformation’ Bill

The Combatting Misinformation and Disinformation Bill will allow ACMA to regulate anything it deems misinformation (unintentionally wrong content) or disinformation (intentionally wrong content) on websites.

Large media companies that are members of the Australian Press Council are exempted and left for the Press Council to deal with.

Smaller players, like blogs, political and organisational websites, niche internet publishers, and many of the authors on aggregated publishing platforms like Substack, would come under its control.

Through the use of imposed standards of conduct, ACMA will be able to determine the “truth” of what they publish.

If there is any doubt as to ACMA’s role as a state censor, the legislation specifically states that government information cannot be misinformation or disinformation.

Yes, please read that last sentence again and let it sink in—governments never make errors or lie.

Treasurer Jim Chalmers hands down the 2023 Budget in the House of Representatives at Parliament House in Canberra, Australia, on May 9, 2023. (Martin Ollman/Getty Images)

Ironically, this enhancement of the ACMA’s power is based on misinformation itself.

Originally, the Australian Competition and Consumer Commission (ACCC) was asked to look at the question of online misinformation, disinformation, and malinformation.

The commission’s report recommended that misinformation not be regulated, and it limited disinformation to things like “doctored and dubbed video footage misrepresenting a political figure’s position on issues” or “information incorrectly alleging that a public individual is involved with illegal activity.”

In the case of disinformation, the commission also thought that the law already adequately covered things like false and misleading advertising and defamation, but suggested that ACMA monitor the situation.

This was not very encouraging from the point of view of state censors who might want more justification for increased powers, but it still left a thin gap that a crafty bureaucrat could try to widen.

Ignoring the ACCC’s reservations about misinformation, the ACMA ploughed ahead, commissioning academics at the News and Media Research Centre at the University of Canberra, which has long been lobbying for censorship in Australia, to investigate the “harms” caused by online misinformation.

The academics produced a report, COVID-19: Australian News & Misinformation Longitudinal Study, which leveraged the threat of COVID to argue that misinformation could cause harm serious enough to justify regulating it.

Except that it actually proves the opposite. Each of the five propositions they tested as misinformation was either true or a matter of opinion.

If you thought masks made little difference, you were misinformed. Yet the Cochrane Collaboration meta-analysis completed last year confirms you were, in fact, correct.

If you thought there were risks with the mRNA vaccines, you were misinformed. The large number of vaccine-injured, greater than for all other vaccines combined, says you were actually correct.

People who thought politicians exaggerate were classified as misinformed. If you suspected authorities were flying by the seat of their pants without proper scientific evidence, you were also classified as misinformed.

And if you realised that, for most people, COVID was not a serious risk and could be treated using supplements and over-the-counter drugs, you were also misinformed.

An ivermectin bottle next to a positive blood sample of COVID-19. (Novikov Aleksey/Shutterstock)

Ironically, when you re-analyse the data using the correct values for misinformation, you find that the people most likely to be well-informed are those who spent the most time on internet platforms, particularly social media. Yet it is precisely those sites that the government and ACMA have in their sights and want to control.

The misinformation and disinformation bill will intersect with the industry standard to deputise service providers to spy on Australians who use email, social media, chat rooms, and end-to-end encrypted services.

The New ‘Online Safety’ Standard

Like the misinformation bill, which invokes COVID-19 as a real threat that might justify action, the Standard invokes the potential harm caused by child abuse material and terrorism.

That is thin-end-of-the-wedge stuff.

In the normal, rule-of-law world, it is well-recognised that threats can be posed by child exploitation and terrorism, but there are fail-safes to ensure the privacy and rights of innocent citizens are not interfered with.

To do what the commissioner is asking service providers to do for her would, in the rule-of-law world, require, at the very least, a warrant from a judge, and the warrant would need to be requested by someone with proper training, like a law officer.

What we will have under this proposal is employees of tech companies, without necessarily any relevant training, using algorithms of variable accuracy to monitor the accounts of millions of users and make ad hoc decisions as to what they may, or may not, say or do.

The Standard is administered by the eSafety Commissioner, Julie Inman Grant, who should know exactly what risks are entailed when the government asks businesses to do its censorship for it, because her previous job was at Twitter (now X), where “she set up and drove the company’s policy, safety, and philanthropy programs across Australia, New Zealand, and Southeast Asia.”

As the Twitter Files demonstrate, the U.S. government used the platform and other social media companies to vandalise the First Amendment rights of Americans.

Photo illustration featuring the Twitter logo. (Leon Neal/Getty Images)

Ms. Inman Grant has a progressive attitude to free speech, telling a World Economic Forum seminar that, “I think we’re going to have to think about a recalibration of a whole range of human rights that are playing out online, from freedom of speech to the freedom to—you know—be free from online violence.”

Perhaps her CV and speaking activities explain why she has got into a major stoush with X, whose owner she disparages to the Davos crowd.

Elon Musk is the new champion of genuine free speech, and his empire comes under her purview.

It’s certainly puzzling why the eSafety Commissioner is tangling with Twitter.

The commission’s own research shows only 3 percent of Australians had a “negative experience” on Twitter, compared with the 30 percent who said they had one on Facebook; the Twitter figure was also lower than for email, SMS/MMS, websites, Instagram, chat apps, and Snapchat.

There is certainly cause for concern about online child abuse, but surely this is a role for specialised law enforcement. In any case, I would think algorithms would be fairly accurate in detecting potential child abuse material.

Governments Weaponising Censorship

Terrorism is another matter. We’ve seen in the United States various stalwart citizens labelled as potential terrorists because of the church they attend or their position on pornography in schools.

In an increasingly polarised world, even in the longest-established democracies, the political party in power is finding ways to weaponise the bureaucracy and, through it, extract cooperation from corporates to spy on and persecute its opponents.

A system where government information is defined as true by legislative fiat would seem to lend itself to the abuse of power by ideologically motivated commissioners.

Australia, lacking a free speech guarantee in its Constitution, is in a deteriorating position.

In the old days, when “telling it like it is” was thought of as an Australian virtue and “cocking a snook at authority” a national pastime, this position would have been seen as “un-Australian.”

And that’s exactly what it is.

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.
Graham Young
Graham Young is the executive director of the Australian Institute for Progress. He is the editor and founder of OnlineOpinion.com.au and has conducted qualitative polling on Australian politics since 2001. Mr. Young has contributed to The Australian newspaper, The Australian Financial Review, and is a regular on ABC Radio Brisbane.