AI Could Be Used to Create Viruses, Bioweapons: Think Tank Warns

‘It’s realistic that AIs could help people build bioweapons during the next term of government,’ Good Ancestors Policy CEO Greg Sadler said.

Artificial intelligence (AI) poses more than technological risks: it could also be used to create viruses that threaten humanity, one think tank has warned.

Speaking at a recent Senate inquiry hearing on AI, Good Ancestors Policy CEO Greg Sadler said AI could assist with manufacturing dangerous viruses, something that is normally possible only in leading laboratories.

“In March 2022, a paper was published in [the] Nature Machine Intelligence [scientific journal] detailing how an AI intended to find new drugs, but instead designed 40,000 novel and lethal molecules in less than six hours,” he said.

“Similarly, a 2023 study showed that students were able to use ChatGPT to suggest potential pandemic pathogens, explain how they could be made from DNA ordered online, and supply the names of DNA synthesis companies unlikely to screen orders to ensure that they don’t include dangerous DNA sequences.”

Sadler also noted that in the second study, ChatGPT’s safeguards failed to prevent the application from providing that kind of dangerous assistance.

“It’s realistic that AIs could help people build bioweapons during the next term of government,” he said.

The CEO added that the threat had prompted the U.S. government to take action, including issuing an executive order in October 2023.

While Sadler has raised the issue with several Australian government departments, he has not seen any evidence of risk-management measures similar to those taken by the United States.

At the same time, Sadler said there was a huge gap between investment in safety and investment in capability.

“A lot of [AI] safeguards are quickly defeated. They’re not up to safeguarding the kinds of capabilities that future models might have,” he said.

“For every $250 (US$168) spent on driving AI capability, only $1 is spent on AI safety.

“So driving AI safety research to actually address these problems before they arrive is critical.”

Proposal to Establish AI Safety Institute

To address biosecurity and other types of risks, Soroush Pour, the CEO of AI safety research company Harmony Intelligence, proposed establishing an AI safety institute in Australia.

According to Pour, the institute would focus on developing the technical capability needed to support the government in responding to threats.

In addition, he said the institute would need to be paired with a strong regulator that could enforce mandatory policies such as third-party testing, effective shutdown capabilities, and safety incident reporting.

“If we do all of this, if we respond effectively, not only can we keep Australians safe, but we can also be a net exporter of AI assurance and defence technologies,” Pour said.

Regarding a regulatory framework for AI safety, Sadler suggested Australia consider SB-1047, an AI bill introduced in the legislature of the U.S. state of California.

He explained that the Californian framework offered Australia a practical way forward, as it placed a positive obligation on developers to ensure that their AI models were safe and did not pose a risk to public safety.

“It [SB-1047] says if developers don’t go through those safeguarding processes, they could be held liable for any catastrophic harms that their models do cause,” he said.

“Subsequently, it puts an obligation on those developers to assist other parties in providing third-party verification.”

Furthermore, Sadler stated that the Californian framework required developers to be able to shut down their AI models if they became dangerous.

The SB-1047 bill targets AI models trained using more than 10^26 floating-point operations of computing power and costing at least US$100 million to develop.
After the bill was introduced, tech companies and AI experts raised concerns about its impact on innovation in the sector.

The legislation cleared a key hurdle on Aug. 15 and will next proceed to California’s lower house in the coming weeks.

Alfred Bui
Author
Alfred Bui is an Australian reporter based in Melbourne and focuses on local and business news. He is a former small business owner and has two master’s degrees in business and business law. Contact him at [email protected].