The CEO of a charity focused on AI says each generation of the technology presents new risks that humanity will need to contend with.
Greg Sadler, CEO of Good Ancestors Policy (GAP), who testified before the Australian Parliament at an inquiry hearing in August 2024, said the current generation was manageable, but that the future was harder to predict because of the way artificial intelligence can develop.
AI Developing AI
Sadler said AI was expanding at dramatic rates, as evidenced by the growth of companies like Nvidia.

“I saw an announcement from Nvidia’s CEO that they’re already using AI to design AI chips,” he said.
“The algorithms are getting much more efficient, so smaller and cheaper models can perform as well as previous expensive models.
“And the amount of data that’s being fed into these models is growing dramatically.”
Nvidia currently uses ChipNeMo, a custom AI model, to support its chip development process.
Another factor, according to Sadler, is “recursive self-improvement,” a process in which an AI system iteratively improves itself, which could lead to an “intelligence explosion.”
The CEO cited the example of AlphaGo, a computer program designed to play the board game Go.
Developed by DeepMind Technologies, which Google acquired in 2014, AlphaGo honed its play by competing against itself millions of times.
![China's 19-year-old Go player Ke Jie prepares to make a move during the second match against Google's artificial intelligence programme AlphaGo in Wuzhen, eastern China's Zhejiang province on May 25, 2017. (STR/AFP via Getty Images)](/_next/image?url=https%3A%2F%2Fimg.theepochtimes.com%2Fassets%2Fuploads%2F2025%2F02%2F10%2Fid5807236-GettyImages-688097326.jpg&w=1200&q=75)
In 2017, AlphaGo defeated the world’s top-ranked Go player, Ke Jie, in three consecutive games.
“That’s sort of another way that AI can generate data to make itself more capable,” Sadler said.
“So what all of those things point to is that you could have this intelligence explosion where AI models become rapidly more capable than they are today.”
In cybersecurity, AI quickly went from being unable to hack programs to being highly adept at it.

Sadler cited a study in which a researcher gave an AI model prompts to execute cyber attacks.

Earlier models were unable to carry out such requests, but last year, an AI model could exploit almost 90 percent of newly discovered cybersecurity vulnerabilities.
“So it went very suddenly from no relevant capability to quite incredibly dangerous capability in sort of a single generation,” Sadler said.
“So we think about these risks as being far away, but it’s also possible that just one more generation of AI progress could turn risks that currently don’t exist into risks that are sort of really present.”