Uncertainty Over Whether ‘Deepfakes’ Will Be Allowed at the Next Australian Election

A Senate committee couldn’t agree on how to handle the issue, with four of its six members dissenting.
Participants chat in front of an electronic image of a soldier before the closing session of the Responsible AI in the Military Domain (REAIM) summit in Seoul on Sept. 10, 2024. Humans, not artificial intelligence, should make the key decisions on using nuclear weapons, a global summit on AI in the military domain agreed on Sept. 10 in a non-binding declaration. Jung Yeon-je/AFP via Getty Images

The inability of the Senate Inquiry into Adopting Artificial Intelligence (AI) to agree on recommendations to Parliament has left open the possibility that “deepfake” ads could be screened in the lead-up to the next federal election.

Four of the six members disagreed with the Committee’s interim report, issuing two dissenting reports.

The official report contains five recommendations, including that the government implement “voluntary codes relating to watermarking and credentialling of AI-generated content” before the next election.

The potential introduction of mandatory codes would be explored, with the aim of having them in place two elections from now.

In one of the dissenting reports, Inquiry Deputy Chair Greens Senator David Shoebridge said the recommendations would allow deepfake political ads to “mislead voters or damage candidates’ reputations” in the period before the next election because the interim report failed to propose the “urgent remedies” needed to protect the democratic process.

Instead, he argued, a temporary, targeted ban on political deepfakes should be introduced to protect voters at next year’s election.

“Under current laws, it would be legal to have a deepfake video pretending to be the Prime Minister or the Opposition Leader saying something they never, in fact, said as long as this is properly authorised under the Electoral Act,” Shoebridge said.

“That falls well below community expectations of our electoral regulation.”

Those concerns were echoed by independent Senator David Pocock, who said rules outlawing the use of deepfake videos and voice clones would be critical before the next federal election and could be refined by the time of the 2029 poll.

“Suggestions that we need to go slowly in the face of rapidly changing use of AI seem ill-advised,” he said. “There should be a swift move to put laws in place ahead of the next federal election that rule out the use of generative AI.”

Too Rushed: Coalition

The two Coalition Committee members—Senators James McGrath and Linda Reynolds—issued another dissenting report for the opposite reasons, saying they would not support quick legislative reforms or measures to govern truth in political advertising.

They said Australia should move only after reviewing how U.S. laws perform during its federal election on Nov. 5.

“The Coalition members of the committee are concerned that, should the government introduce a rushed regulatory AI model with prohibitions on freedom of speech in an attempt to protect Australia’s democracy, the cure will be worse than the disease,” their report said.

The AI inquiry was set up in March to investigate the risks and opportunities of the technology. It held six public hearings and heard testimony from academics, scientists, technology firms, and social media companies.

Its recommended restrictions could apply to generative AI models such as ChatGPT, Microsoft CoPilot, and Google Gemini, as well as social media platforms.

Other recommendations included extending mandatory rules for AI used in high-risk settings to apply to election material and increasing efforts by the government to boost AI literacy, including among parliamentarians and government agencies.

The AI inquiry’s final report is expected in November.

AEC Says Deepfakes Already Used Worldwide

The Australian Electoral Commission’s (AEC) submission to the Inquiry cited numerous examples of deepfake video and audio being used across the world.

Just before the U.S. New Hampshire presidential primary in January this year, a robocall—reported to have likely used AI voice cloning technology impersonating U.S. President Joe Biden—urged voters to skip the primary election.

(Illustration by The Epoch Times, Getty Images, Shutterstock)

In Pakistan, jailed former Prime Minister Imran Khan claimed party election victory in a video created using AI.

In India, an AI-generated video of deceased former Tamil Nadu Chief Minister M. Karunanidhi praised the leadership of his son, the current Tamil Nadu chief minister, ahead of elections in May.

Prior to the Indonesian election in February, a deepfake of deceased former President Suharto circulated, endorsing his former political party.

And ahead of the South Korean election in April, the National Election Commission reportedly detected 388 pieces of AI-generated media content that breached the country’s newly revised election law, which bans AI-generated deepfakes in political campaign material within 90 days of an election.

The AEC said its existing powers were directed at the integrity of the electoral process, and it had no authority over the content of election communications.

Even if granted such power, it told the Committee that it was “concerned about the current lack of potential legislative tools and [its own] internal technical capabilities to enable us to detect, evaluate, and respond to information manipulation about the electoral process generated by that technology.”

AAP contributed to this report.
Rex Widerstrom
Rex Widerstrom is a New Zealand-based reporter with over 40 years of experience in media, including radio and print. He is currently a presenter for Hutt Radio.