Australian Government Considering European-Style AI Law

Industry and Science Minister Ed Husic said Australians want stronger protections on AI.
Monica O’Shea

The Australian government is considering regulating artificial intelligence with a specific AI Act, similar to what has been adopted in Europe.

An “Australian AI Act” is one of three options being considered by the government to regulate the technology in high-risk settings.

The Department of Industry, Science, and Resources unveiled a discussion paper outlining this plan as part of a four-week consultation.

“These options include adapting existing regulatory frameworks to introduce additional guardrails on AI, or creating new frameworks such as through framework legislation or by introducing an Australian AI Act,” the paper stated (pdf). 

In March, the European Union passed its AI Act, the first law of its kind, which regulates AI based on assessed risk levels.

For example, systems that use AI to identify people remotely and in real time using biometrics are considered an unacceptable risk and are banned. Content created or modified using AI, while not considered high-risk, will need to be clearly labelled as AI-generated.

The law came into force in August.

“If adopted, Australia’s approach would come more closely into line with jurisdictions including the European Union, and the proposed approaches of Canada and the United Kingdom, who join Australia as signatories to the multilateral Bletchley Declaration,” the consultation paper said.

Australia will also include mandatory guardrails for AI systems in high-risk settings.

These include enabling human control or intervention in AI, testing AI systems, disclosing when AI is used, keeping records, engaging stakeholders, and publishing an accountability process for AI.

The government has released a voluntary AI Safety Standard to give businesses a head start before such guardrails potentially become mandatory.

Minister for Industry and Science Ed Husic said the government had listened to Australians’ calls for stronger protections on AI.

“Australians know AI can do great things, but people want to know there are protections in place if things go off the rails,” he said.

Husic said the government is starting to put those protections in place, noting that business had called for greater clarity around using AI safely.

“We need more people to use AI and to do that we need to build trust,” he said.

‘Too Hard’ Basket

However, Coalition Shadow Minister for Communications David Coleman and Shadow Minister for Digital Economy Paul Fletcher raised concerns that Labor is leaving this issue in the “too hard” basket.

The Opposition said the Labor government continues its “meandering and indecisive approach to artificial intelligence policy” when there is a need for clarity and direction.

“Of course we need to be alive to the risks associated with this technology and its implications for legislation and regulations, but the Albanese government must also provide leadership and start making decisions,” the shadow ministers said.

Fletcher and Coleman said Australia needs more action “on such a critical issue” beyond holding roundtables, commissioning reports, and announcing advisory bodies.

“After more than two years in government and seven months after this advisory body was established, the best Labor can do is to issue yet another discussion paper,” they said.

Reaction to the Proposal

University of Queensland Business School Postdoctoral Research Fellow Steve Lockey said he was pleased to see the government take proactive action in developing guardrails for AI, especially in high-risk contexts.

“Our research on public attitudes towards AI in Australia and around the world shows that people want regulation, that they are more comfortable with independent, external regulation, and that they are more likely to trust AI systems that adhere to trustworthy principles and practices and organisations that provide assurances of trustworthiness,” he said.

He said his research reveals that 70 percent of Australians believe AI should be regulated and more than 90 percent believe trustworthy AI is important.

“The Australian government has cited this research, and I am pleased to see that it has clearly taken public sentiment and expectations around AI regulation and governance into account in the development of the 10 guardrails,” Lockey said.

University of South Australia Associate Professor Vitomir Kovanović said he was pleased to see the Australian government taking the task seriously and looking to learn from regulators in other parts of the world, particularly Canada and the EU.

“AI will have a profound impact on every aspect of Australian society, and it is important to ensure that AI is not causing harm but is instead used in safe and productive ways,” he said.

Monica O’Shea
Author
Monica O’Shea is a reporter based in Australia. She previously worked as a reporter for Motley Fool Australia, Daily Mail Australia, and Fairfax Regional Media.