Fired OpenAI Researcher Warns of Urgent CCP Espionage Threat

Leopold Aschenbrenner issues a warning about the CCP exploiting AI: ‘The preservation of the free world against the authoritarian states is on the line.’
A photo shows a frame of a video generated by a new artificial intelligence tool, dubbed "Sora", unveiled by the company OpenAI, in Paris on February 16, 2024. Stefano Rellandini/AFP via Getty Images
Nathan Worcester
A researcher who was fired by OpenAI has predicted that human-like artificial general intelligence (AGI) could be achieved by 2027 and sounded the alarm on the threat of Chinese espionage in the field.

“If and when the CCP [Chinese Communist Party] wakes up to AGI, we should expect extraordinary efforts on the part of the CCP to compete. And I think there’s a pretty clear path for China to be in the game: outbuild the US and steal the algorithms,” Leopold Aschenbrenner wrote.

Mr. Aschenbrenner argued that, without stringent security measures, the CCP would exfiltrate “key AGI breakthroughs” in the next few years.

“It will be the national security establishment’s single greatest regret before the decade is out,” he wrote, warning that “the preservation of the free world against the authoritarian states is on the line.”

He advocates more robust security for AI model weights—the numerical values reflecting the strength of connections between artificial neurons—and, in particular, algorithmic secrets, an area where he perceives dire shortcomings in the status quo.

“I think failing to protect algorithmic secrets is probably the most likely way in which China is able to stay competitive in the AGI race,” he wrote. “It’s hard to overstate how bad algorithmic secrets security is right now.”

Mr. Aschenbrenner also argues that AGI could give rise to superintelligence in little more than half a decade by automating AI research itself.

Titled “Situational Awareness: The Decade Ahead,” Mr. Aschenbrenner’s series has elicited a range of responses in the tech world. Computer scientist Scott Aaronson described it as “one of the most extraordinary documents I’ve ever read,” while software engineer Grady Booch wrote on X that many elements of it are “profoundly, embarrassingly, staggeringly wrong.”
“It’s well past time that we regulate the field,” Jason Lowe-Green of the Center for AI Policy wrote in an opinion article lauding Mr. Aschenbrenner’s publication.
On X, Jason Colbourn of the Campaign for AI Safety responded to the reference to a U.S.–China AI race by advocating a “global non-proliferation treaty, starting with a bilateral treaty between the US and the CCP.”
In recent weeks, current and former AI researchers at OpenAI and Google DeepMind issued a letter cautioning that AI technology presents “serious risks,” up to and including the possibility of “human extinction.”

AI Regulation

Mr. Aschenbrenner’s “Situational Awareness” comes as lawmakers in the United States and across the world reckon with the regulation of AI.

In March 2024, the European Parliament adopted the far-reaching Artificial Intelligence Act several months after member states reached an agreement on the proposal.

The bipartisan ENFORCE Act, recently introduced in the U.S. Congress, would allow the Department of Commerce’s Bureau of Industry and Security to place export controls on AI technologies.
Rep. Mike Gallagher (R-Wis.), chair of the House Select Committee on the Chinese Communist Party, and committee ranking member Rep. Raja Krishnamoorthi (D-Ill.) at Harvard University in Cambridge, Mass., on Feb. 12, 2024. Learner Liu/The Epoch Times
In a May 10 statement on the bill, Rep. Raja Krishnamoorthi (D-Ill.), ranking member of the Select Committee on the Chinese Communist Party, warned that “under current law, our national security community does not have the authority necessary to prevent the Chinese Communist Party, its military, and the companies they directly control, from acquiring AI systems that could aid future cyberattacks against the United States.”

Aschenbrenner Says OpenAI’s HR Called His CCP Warnings ‘Racist’

Mr. Aschenbrenner released “Situational Awareness” a few months after his controversial departure from OpenAI, where he worked on the Superalignment team.
The Information reported in April that Mr. Aschenbrenner and another OpenAI employee were terminated after “allegedly leaking information.”
In a June 4 interview with Dwarkesh Patel, Mr. Aschenbrenner responded to a question about that article. He said the alleged “leak” concerned a timeline to AGI contained in a security document he had shared with three researchers unaffiliated with OpenAI.

“For context, it was totally normal at OpenAI at the time to share safety ideas with external researchers for feedback,” he said.

But Mr. Aschenbrenner links his termination to the concerns he raised about Chinese intellectual property theft.

He told Mr. Patel he was formally reprimanded by OpenAI’s human resources department after he drafted a security memo that mentioned the CCP threat and shared it with the board of OpenAI.

“The HR person told me it was racist to worry about CCP espionage,” he said. He added that when he was later fired, he was told the memo was a reason for his dismissal rather than a mere warning.

The Epoch Times has reached out to OpenAI for comment on the allegations from Mr. Aschenbrenner.

Two co-leaders of the Superalignment team, Ilya Sutskever and Jan Leike, left OpenAI in mid-May.

“Over the past years, safety culture and processes have taken a backseat to shiny products,” Mr. Leike wrote on X.
“We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can,” he added.

Mr. Aschenbrenner, a German-born researcher who graduated as valedictorian from Columbia University at age 19, dedicated his “Situational Awareness” series to Mr. Sutskever, a Russian-born Israeli-Canadian computer scientist.

Nathan Worcester covers national politics for The Epoch Times and has also focused on energy and the environment. Nathan has written about everything from fusion energy and ESG to national and international politics. He lives and works in Chicago. Nathan can be reached at [email protected].