Artificial intelligence (AI) poses not only technological risks but could also help create viruses that threaten humanity, says one think tank.
Speaking at a recent Senate inquiry hearing on AI, Good Ancestors Policy CEO Greg Sadler said AI could assist with manufacturing dangerous viruses, a task normally possible only in leading laboratories.
“In March 2022, a paper was published in [the] Nature Machine Intelligence [scientific journal] detailing how an AI intended to find new drugs instead designed 40,000 novel and lethal molecules in less than six hours,” he said.
“Similarly, a 2023 study showed that students were able to use ChatGPT to suggest potential pandemic pathogens, explain how they could be made from DNA ordered online, and supply the names of DNA synthesis companies unlikely to screen orders to ensure that they don’t include dangerous DNA sequences.”
Sadler also noted that in the second study, ChatGPT’s safeguards failed to prevent the application from providing that kind of dangerous assistance.
“It’s realistic that AIs could help people build bioweapons during the next term of government,” he said.
The CEO added that the threat had prompted the U.S. government to take action, including issuing an executive order in October 2023.
While Sadler has raised the issue with several Australian government departments, he has not seen evidence of risk management measures similar to those taken by the United States.
At the same time, Sadler said there was a huge gap between investment in safety and investment in capability.
“A lot of [AI] safeguards are quickly defeated. They’re not up to safeguarding the kinds of capabilities that future models might have,” he said.
“For every $250 (US$168) spent on driving AI capability, only $1 is spent on AI safety.”
Proposal to Establish AI Safety Institute
To address biosecurity and other types of risks, Soroush Pour, CEO of AI safety research company Harmony Intelligence, proposed establishing an AI safety institute in Australia.
According to Pour, the institute would focus on developing the technical capability needed to support the government in responding to threats.
In addition, he said the institute would need to be paired with a strong regulator that could enforce mandatory policies such as third-party testing, effective shutdown capabilities, and safety incident reporting.
“If we do all of this, if we respond effectively, not only can we keep Australians safe, but we can also be a net exporter of AI assurance and defence technologies,” Pour said.
Regarding a regulatory framework for AI safety, Sadler suggested Australia consider the SB-1047 AI bill introduced in the U.S. state of California.
He explained that the Californian framework offered Australia a practical way forward, as it placed a positive obligation on developers to ensure that their AI models were safe and did not pose a risk to public safety.
“It [SB-1047] says if developers don’t go through those safeguarding processes, they could be held liable for any catastrophic harms that their models do cause,” he said.
“Additionally, it puts an obligation on those developers to assist other parties in providing third-party verification.”
Furthermore, Sadler stated that the Californian framework required developers to have the ability to turn off their AI if it became dangerous.
The legislation cleared a key hurdle on Aug. 15 and will next proceed to California’s lower house in the coming weeks.