The Australian government is considering regulating artificial intelligence with a specific AI Act, similar to what has been adopted in Europe.
An “Australian AI Act” is one of three options the government is considering for regulating the technology in high-risk settings.
The Department of Industry, Science and Resources unveiled a discussion paper outlining the plan as part of a four-week consultation.
The European Union passed its AI Act, the first law of its kind, in March. It establishes regulations that control AI according to assessed risk levels.
For example, systems that use AI to identify people remotely and in real time using biometrics are deemed an unacceptable risk and are banned. Content created or modified using AI, while not considered high-risk, will need to be clearly labelled as AI-generated.
The law came into force in August.
“If adopted, Australia’s approach would come closer into line with jurisdictions including the European Union, and proposed approaches of Canada, and the United Kingdom, who join Australia as signatories to the multilateral Bletchley Declaration,” the consultation paper said.
The Australian proposal also includes mandatory guardrails for AI systems in high-risk settings.
These include enabling human control or intervention in AI, testing AI systems, disclosing when AI is used, keeping records, engaging stakeholders, and publishing an accountability process for AI.
Minister for Industry and Science Ed Husic said the government had heard and listened to Australians wanting stronger protections on AI.
Husic said the government is starting to put those protections in place, noting that business had called for greater clarity around using AI safely.
‘Too Hard’ Basket
However, Coalition Shadow Minister for Communications David Coleman and Shadow Minister for Digital Economy Paul Fletcher raised concerns that Labor is leaving the issue in the “too hard basket.”

The Opposition said the Labor government is continuing its “meandering and indecisive approach to artificial intelligence policy” at a time when clarity and direction are needed.
“Of course we need to be alive to the risks associated with this technology and its implications on legislation and regulations, but the Albanese government must also provide leadership and start making decisions,” the ministers said.
Fletcher and Coleman said Australia needs more action “on such a critical issue” beyond holding roundtables, commissioning reports, and announcing advisory bodies.
Reaction to the Proposal
University of Queensland Business School Postdoctoral Research Fellow Steve Lockey said he was pleased to see the government take proactive action in developing guardrails for AI, especially in high-risk contexts.

He said his research shows 70 percent of Australians believe AI should be regulated, and more than 90 percent believe trustworthy AI is important.
“The Australian government has cited this research, and I am pleased to see that it has clearly taken public sentiment and expectations around AI regulation and governance into account in the development of the 10 guardrails,” Lockey said.
University of South Australia Associate Professor Vitomir Kovanović said he was pleased to see the Australian government taking the task seriously and looking to learn from regulators in other parts of the world, particularly Canada and the EU.
“AI will have a profound impact on the whole aspect of Australian society and it is important to ensure that AI is not causing harm but instead used in safe and productive ways,” he said.