Scientists currently have no idea how AI models are getting more intelligent, an AI safety expert said.
The comments come amid a significant improvement in the capabilities of many cutting-edge AI systems over the past few months.
According to data from the AI research institute Epoch AI, these systems had been approaching human expert level on a benchmark of Ph.D.-level science questions.
Then, in the three months to April 2025, many frontier AI models broke through that expert threshold.
While AI capabilities are advancing rapidly, Liam Carroll, a researcher at the Gradient Institute, pointed out a troubling problem.
“Even though we know how to build the systems, we do not understand what is actually going on inside of them, and we don’t understand why they act the way that they do,” he said at a recent online event about AI safety.
“They are essentially like aliens to us at this point.”
Carroll explained that the science in this area is still very young and has produced few breakthroughs so far.
“Only in the last couple of years have any kinds of breakthroughs been made on understanding the systems more deeply and scientifically interpreting what’s going on,” he said.
It’s Difficult to Trust AI Models: Carroll
Because of this lack of understanding of AI systems, Carroll said it was difficult to trust them.

“Will [you] trust that they will perform and act in the way that we want them to?” he asked.
Carroll’s remarks came as researchers recently found that AI models are capable of deception.
A notable case is ChatGPT o1, which was found to take measures to avoid being shut down, including trying to disable the oversight mechanisms imposed on it and making copies of itself to be more resilient to shutdown attempts.
When researchers questioned ChatGPT o1 about this behaviour, the model lied and tried to cover it up.

AI Needs to Be Properly Regulated: Expert
Amid the worrying signs of AI capabilities, Carroll said that AI, like other technologies, needed to be regulated properly to enable adoption and realise the economic growth it can deliver.

“The classic examples here are bridges and planes and all sorts of engineering around society. If we didn’t have safety regulations ensuring that planes were going to safely take passengers from Melbourne to Sydney, or that the bridge would hold thousands of cars on the West Gate, whatever it is, we wouldn’t be able to ensure that society can operate in the way that it does, and that we can harness these technologies,” he said.
Labor MP Andrew Leigh, who attended the event in a personal capacity, said it was important for companies and governments to consider the risks of AI.
“I don’t know about anyone else in the call, but I wouldn’t get on a plane which had a 5 percent chance of crashing,” he said.
“And it seems to me a huge priority to reduce that 5 percent probability. Even if you think it is 1 percent, you still wouldn’t get on that plane.”
Leigh also noted that new AI centres and public awareness could play a role in addressing AI risks.
“I am also quite concerned about super intelligent AI, and the potential for that to reduce the chances that humanity lives a long and prosperous life,” he said.
“Part of that could be to do with setting up new [AI] centres, but I think there’s also a huge amount of work that can be done in raising public awareness.”