AI Undergoing an ‘Intelligence Explosion’ That Could Extend Beyond Human Oversight: CEO

In one example, AI models went from novice-level hacking ability to identifying 90 percent of newly discovered cyber vulnerabilities.
Robots appear on stage during the Nvidia GTC Artificial Intelligence Conference in San Jose, California, on March 18, 2024. Justin Sullivan/Getty Images
Alfred Bui
The CEO of a charity focused on AI says each generation of the technology presents new risks that humanity will need to contend with.

Greg Sadler, CEO of Good Ancestors Policy (GAP), who testified before the Australian Parliament at an inquiry hearing in August 2024, said the current generation of AI was manageable, but that the future was harder to predict because of the way artificial intelligence can develop.

“I think it would be wrong to say that this loss of control problem is a risk to date,” he told The Epoch Times. “But there is some evidence we have today that this is a real problem that we couldn’t be too far away from.”

AI Developing AI

Sadler said AI was expanding at dramatic rates, as evidenced by the growth of companies like Nvidia.

“I saw an announcement from Nvidia’s CEO that they’re already using AI to design AI chips,” he said.

“The algorithms are getting much more efficient, so smaller and cheaper models can perform as well as previous expensive models.

“And the amount of data that’s being fed into these models is growing dramatically.”

Nvidia currently uses ChipNeMo, a custom AI model, to support its chip development process.

At the New York Times Dealbook Summit in November 2023, Nvidia CEO Jensen Huang admitted that none of the company’s chips could have been made without AI.

Another factor, according to Sadler, is “recursive self-improvement,” a process in which an AI system iteratively improves itself, which could lead to an “intelligence explosion.”

The CEO cited the example of AlphaGo, a computer program designed to play the board game Go.

Developed in 2014 by DeepMind Technologies, which was later acquired by Google, AlphaGo learned to play Go by playing against itself millions of times without human input.

China's 19-year-old Go player Ke Jie prepares to make a move during the second match against Google's artificial intelligence programme AlphaGo in Wuzhen, eastern China's Zhejiang province on May 25, 2017. STR/AFP via Getty Images

In 2017, AlphaGo defeated the world’s top-ranked Go player, Ke Jie, in three consecutive games.

“That’s sort of another way that AI can generate data to make itself more capable,” Sadler said.

“So what all of those things point to is that you could have this intelligence explosion where AI models become rapidly more capable than they are today.”

In cybersecurity, AI quickly went from being unable to hack programs to being highly adept at it.

Sadler cited a study in which a researcher gave an AI model prompts to execute cyberattacks.

Previously, models were unable to carry out the request, but last year, an AI model was able to exploit almost 90 percent of newly discovered cybersecurity vulnerabilities.

“So it went very suddenly from no relevant capability to quite incredibly dangerous capability in sort of a single generation,” Sadler said.

“So we think about these risks as being far away, but it’s also possible that just one more generation of AI progress could turn risks that currently don’t exist, into risks that are sort of really present.”

Alfred Bui
Author
Alfred Bui is an Australian reporter based in Melbourne and focuses on local and business news. He is a former small business owner and has two master’s degrees in business and business law. Contact him at [email protected].