“If and when the CCP [Chinese Communist Party] wakes up to AGI, we should expect extraordinary efforts on the part of the CCP to compete. And I think there’s a pretty clear path for China to be in the game: outbuild the US and steal the algorithms,” Leopold Aschenbrenner wrote.
Mr. Aschenbrenner argued that, without stringent security measures, the CCP will exfiltrate “key AGI breakthroughs” in the next few years.
“It will be the national security establishment’s single greatest regret before the decade is out,” he wrote, warning that “the preservation of the free world against the authoritarian states is on the line.”

He advocates more robust security for AI model weights—the numerical values reflecting the strength of connections between artificial neurons—and, in particular, algorithmic secrets, an area where he perceives dire shortcomings in the status quo.
“I think failing to protect algorithmic secrets is probably the most likely way in which China is able to stay competitive in the AGI race,” he wrote. “It’s hard to overstate how bad algorithmic secrets security is right now.”
Mr. Aschenbrenner also argues that AGI could give rise to superintelligence in little more than half a decade by automating AI research itself.
AI Regulation
Mr. Aschenbrenner’s “Situational Awareness” comes as lawmakers in the United States and across the world reckon with the regulation of AI.

In March 2024, the European Parliament adopted the far-reaching Artificial Intelligence Act, several months after member states reached an agreement on the proposal.
Aschenbrenner Says OpenAI’s HR Called His CCP Warnings ‘Racist’
Mr. Aschenbrenner released “Situational Awareness” a few months after his controversial departure from OpenAI, where he worked on the Superalignment team.

“For context, it was totally normal at OpenAI at the time to share safety ideas with external researchers for feedback,” he said.
But Mr. Aschenbrenner links his termination to the concerns he raised about Chinese intellectual property theft.
He told Mr. Patel he was formally reprimanded by OpenAI’s human resources department after he drafted a security memo that mentioned the CCP threat and shared it with the board of OpenAI.
“The HR person told me it was racist to worry about CCP espionage,” he said. He added that, as a result of that memo, he was later fired rather than merely warned.
The Epoch Times has reached out to OpenAI for comment on the allegations from Mr. Aschenbrenner.
Two co-leaders of the Superalignment team, Ilya Sutskever and Jan Leike, left OpenAI in mid-May.
Mr. Aschenbrenner, a German-born researcher who graduated as valedictorian from Columbia University at age 19, dedicated his “Situational Awareness” series to Mr. Sutskever, a Russian-born Israeli-Canadian computer scientist.