More Tech Companies Agree to Sign White House AI Safety Pledge

President Joe Biden (L) and California Gov. Gavin Newsom take part in an event discussing the opportunities and risks of Artificial Intelligence at the Fairmont Hotel in San Francisco on June 20, 2023. (Andrew Caballero-Reynolds/AFP via Getty Images)
Bryan Jung
9/12/2023
Updated: 9/12/2023

Eight more major tech firms involved in artificial intelligence (AI) development signed the White House’s AI safety pledge.

The White House announced on Sept. 12 that the firms had agreed to voluntarily follow standards for safety, security, and transparency related to their use of artificial intelligence.

Adobe, IBM, Palantir, Nvidia, Salesforce, Stability AI, Cohere, and Scale AI joined Amazon, Anthropic, Google, Inflection AI, Microsoft, and OpenAI, which signed the pledge in July.

The Biden administration launched the industry-led effort on AI safeguards with tech companies over the summer.

All of the signatories have committed to AI testing and other security measures, but the commitments are voluntary and cannot be enforced by the government.

Potential Threats Concern Washington

The rapid advancements in AI have become a major concern in Washington since OpenAI released its ChatGPT chatbot last year.

AI is facing scrutiny from lawmakers over its potential threat to certain jobs, its ability to spread disinformation, its use in creating deepfakes, and the possibility that it could develop self-awareness.

Lawmakers and regulators are increasingly debating how to handle the technology.

The White House said those firms that joined the initiative agreed to ensure that AI products were safe before making them public, put security first, and earn the public’s trust.

In addition to voluntary commitments, the Biden administration is drafting an executive order with the same goals and encouraging legislative efforts in Congress to regulate AI.

“The President has been clear: harness the benefits of AI, manage the risks, and move fast—very fast,” Chief of Staff Jeff Zients said in a statement regarding the latest pledges. “And we are doing just that by partnering with the private sector and pulling every lever we have to get this done.”

The tech companies further agreed to share information on potential dangers from the technology and to develop mechanisms to let consumers know when content is generated by AI.

“These commitments represent an important bridge to government action, and are just one part of the Biden-Harris Administration’s comprehensive approach to seizing the promise and managing the risks of AI,” the White House stated.

Congress Acts to Regulate AI

The move by the White House comes as Sen. Chuck Schumer (D-N.Y.) plans to host a number of tech companies for an AI forum on Sept. 13, Axios reported.

CEOs from a dozen of the world’s biggest tech companies, several lawmakers, labor officials, and nongovernmental organization representatives will join the senator for the event, which is expected to last six hours.

CEOs Elon Musk of X, Mark Zuckerberg of Meta, Sam Altman of OpenAI, and Sundar Pichai of Google will be among the tech executives at Mr. Schumer’s closed-door AI summit, according to The New York Times.

Several bills proposed to regulate AI are pending in Congress, including the Artificial Intelligence and Biosecurity Risk Assessment Act and the No Robot Bosses Act.

Axios reported in July that Sen. John Thune (R-S.D.) was preparing to introduce his own bill, the Artificial Intelligence Innovation and Accountability Act, which would require companies to self-certify their AI systems and inform consumers when their platforms are using generative AI.

Under the reported proposal from Mr. Thune, the Commerce Department could bring civil action against any company whose noncompliance was discovered and not appropriately remedied.

The No Robot Bosses Act, introduced by Sen. Bob Casey (D-Pa.), would ban employers from relying solely on algorithms, machine learning, and other AI tools to make employment decisions. The bipartisan Artificial Intelligence and Biosecurity Risk Assessment Act would require regulators to monitor the risks of technical advancements in AI, including how the technology could be used to develop lethal pathogens.

On Sept. 12, Microsoft President Brad Smith and Nvidia’s chief scientist William Dally testified about AI regulations in front of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, led by Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.).

Big Tech Seeks Self-Regulation

The series of pledges reflects growing momentum among Big Tech firms to set voluntary industry standards before the government acts on its own.

Although the pledges are not legally binding, the companies agreed to ensure internal and external testing before releasing future products, label AI-generated content using watermarks or similar technology, and share information with the industry and the government about potential risks, biases, and vulnerabilities in their systems.

Adobe encouraged the pledge’s signers, as well as companies that have yet to sign, to support the FAIR Act, another proposed bill that would ensure that celebrities and others retain the right to their digital likenesses.

Adobe General Counsel Dana Rao told Axios that Adobe has been working on AI responsibility efforts for the past four years and is a leader in the Content Authenticity Initiative, which identifies when content is created or edited using AI.

“I’m really excited to see the White House step in,” Mr. Rao said. “We need that momentum from the White House to really push these initiatives to where they need to be.”

Meanwhile, consumer advocacy groups and others are worried about the influential role of tech companies in discussions about AI regulation and about the companies’ self-regulatory pledges.

Merve Hickok, president of the Center for AI and Digital Policy, told The New York Times that tech firms “have outsized resources and influence policymakers in multiple ways” and that “their voices can’t be privileged over civil society.”

Bryan S. Jung is a native and resident of New York City with a background in politics and the legal industry. He graduated from Binghamton University.