OpenAI, Google DeepMind Employees Warn of ‘Serious Risks’ Posed by AI Technology

The employees called on ‘advanced AI companies’ not to retaliate against those ‘who publicly share risk-related confidential information’ about their systems.
Screens displaying the logos of OpenAI and ChatGPT are pictured in Toulouse, France, on Jan. 23, 2023. Lionel Bonaventure/AFP via Getty Images
Aldgra Fredly

A group of current and former employees of AI giants OpenAI and Google DeepMind warned of the “serious risks” posed by AI technology in an open letter on Tuesday and called for greater protections for whistleblowers in the industry.

The letter, published on the righttowarn.ai website, was signed by 13 past and present workers of OpenAI and Google DeepMind.

It was endorsed by AI experts Yoshua Bengio, Geoffrey Hinton, and Stuart Russell.

The employees said they believe AI technology has the potential to deliver “unprecedented benefits to humanity,” but warned that it comes with serious risks, which AI companies have also acknowledged.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter reads.

The letter cited risks ranging from “the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

“We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public,” the employees wrote.

“However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” they added.

They emphasized that AI firms have “only weak obligations” to disclose to governments “substantial non-public information” that relates to the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of the technology.

The letter calls on “advanced AI companies” to refrain from entering into or enforcing “any agreement that prohibits disparagement or criticism of the company for risk-related concerns” and not to retaliate for such criticism “by hindering any vested economic benefit.”

AI companies were also urged not to retaliate against employees “who publicly share risk-related confidential information after other processes have failed,” and to promote “a culture of open criticism.”

“Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the employees stated.

“Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry,” they added.

Last week, OpenAI announced that its board had formed a “Safety and Security Committee,” led by CEO Sam Altman and other board directors.

The committee is tasked with making recommendations to the board about critical safety and security decisions.

OpenAI CEO Sam Altman during a meeting in Paris, France, on May 26, 2023. Joel Saget/AFP via Getty Images

In a statement, OpenAI said the committee’s first task would be to evaluate and further develop the company’s “processes and safeguards” over the next 90 days.

The committee will present its recommendations to the full board once that review is complete.

Aldgra Fredly is a freelance writer covering U.S. and Asia Pacific news for The Epoch Times.