Over 100 AI experts and concerned individuals have signed an open letter calling for the next Australian government to establish an AI safety institute (AISI) to manage AI risks before “it’s too late.”
The letter has been sent to political parties ahead of the May federal election.
AI experts urged politicians to deliver on the commitments that the Australian government made at the Seoul AI Summit in May 2024.
However, Australia is currently the only signatory that has not established an AISI.
“It sets a dangerous precedent for Australia to formally commit to specific actions but fail to follow through,” said Greg Sadler, CEO of Good Ancestors Policy and coordinator of Australians for AI Safety.
Toby Ord, a senior researcher at Oxford University and a board member of the Centre for the Governance of AI, said Australia risked ending up with little say over the AI systems that affect it.
“An Australian AI Safety Institute would allow Australia to participate on the world stage in guiding this critical technology that affects us all,” he said.
The letter also highlighted that while organisations have made massive investments to accelerate AI capabilities, minimal funding has been dedicated to understanding and addressing its risks.
Mandatory AI Guardrails
At the same time, the letter called for the enactment of an “AI Act” that would require AI developers and deployers in the country to implement mandatory guardrails in their products.

While the government has consulted with the sector on safe and responsible AI and received advice about imposing mandatory guardrails on high-risk systems, experts believe it is time for the next parliament to turn discussion into action.
University of the Sunshine Coast Professor Paul Salmon, who is also the founder of the Centre for Human Factors and Sociotechnical Systems, supported the implementation of an AI Act, saying it would ensure that risks are effectively managed.
“We are fast losing the opportunity to ensure that all AI technologies are safe, ethical, and beneficial to humanity,” he said.
Meanwhile, Yanni Kyriacos, Director of AI Safety Australia and New Zealand, pointed out that the country currently lacks a legal framework to assure Australians that AI is safe for adoption.
“Robust assurance justifies trust. We’re all excited about the potential opportunities of AI, but not enough work is currently happening to address genuine safety concerns,” he said.
“It’s easy to understand why Australians are hesitant to adopt AI while these big issues are outstanding.”
Among those surveyed, the top concerns were AI acting in conflict with human interests (misalignment), misuse of AI by bad actors, and unemployment caused by AI.
In addition, nine in ten respondents wanted the government to set up a new regulatory body for AI.