Greens Senator Warns Australia Not ‘Nimble Enough’ to Deal With Surge in AI Capabilities

‘I think if you look at the last three years, you can see how non-nimble the parliament has been,’ said Greens Senator David Shoebridge.
A man interacts with a 3D printable humanoid robot in Hanover, northern Germany, on March 31, 2025. Ronny Hartmann/AFP via Getty Images
Alfred Bui

Greens Senator David Shoebridge has called on the Australian federal parliament to be more nimble in addressing the risks around AI development.

At a recent online event on AI safety, Shoebridge said one of the greatest challenges was getting the parliament to respond fast enough.

“We can’t spend eight years working out a white paper before we roll out regulation in this space,” he said.

“We can’t see a threat emerging and say, ‘Okay, cool, we’re going to begin a six-year parliamentary process in order to work out how we respond to a high-risk deployment of AI.’

“We need to be much more nimble, and we need the resources and assistance in parliament to get us there. And I think if you look at the last three years, you can see how non-nimble the parliament has been.”

The senator also noted that while some work on AI safety had managed to get attention, not much progress had been made.

“What’s come out? Where’s the product from parliament? Where is the AI Safety Act? Where is the national regulator?” he asked.

“Where’s the resource agency that can help parliament navigate through this bloody hard pathway we’re going to have to do in the next three years?”

Shoebridge’s remarks came as research from Epoch AI, an AI research institute, revealed a surge in the intelligence of AI models, with some mastering PhD-level science in just a few months.

Greens’ Proposal for National AI Regulator

To address AI risks, Shoebridge said the Greens would put forward a standalone “AI Act” to legislate guardrails and create a national regulator.

“We don’t call it an AI Safety Institute, but it has the functions of an AI Safety Institute,” he said.

“So it’s well-resourced. It’s a national regulator. And its focus is on, first of all, guiding parliament so that we get the right regulatory models in place, strict guardrails, strict guidelines, and they’re legislated.”

The senator further stated that the proposed national AI regulator would have a team of on-call, highly qualified experts led by an independent statutory officer to test high-risk deployments of AI.

The expert team would also be responsible for establishing a reliable process to test AI models before they are deployed, identifying any risks in real time.

In addition, the Greens would propose setting up a “digital rights commissioner” to protect digital rights and oversee the impacts of AI on those rights.

“I would think of a digital rights commissioner as a kind of an ombudsman in the [digital] space to ensure that our data isn’t being fed without our consent into large language models, [and] to put in remedy so that if that happens, people are held to account, and our data is removed from training data sets,” Shoebridge said.

Greens Senator David Shoebridge speaks at an event in Sydney, Australia, on Jan. 26, 2019. (Cole Bennetts/Getty Images)

Legal Expert Says Liability Already a Hazy Area

Jisoo Kim, a law professor at the University of Sydney and co-founder of Clear AI, said there were existing challenges with identifying where problems start or occur in AI automation processes.

“Automation makes liability hard at a general level; it’s just harder to pin liability on a company or a person if they can say, ‘Well, it was the system what done it. It wasn’t me,’” she said.

“And if the technology underlying that automated system is in any way unpredictable, which some of the AI is, or we don’t understand it, it makes it even harder to pin things like liability and to hold companies responsible.

“Another reason why we need to be thinking about things like guard rails [is] to ensure that systems are safe before they go out and monitoring and auditing that goes on afterwards.”

Alfred Bui
Author
Alfred Bui is an Australian reporter based in Melbourne and focuses on local and business news. He is a former small business owner and has two master’s degrees in business and business law. Contact him at [email protected].