The event focused on ensuring that AI companies engage in activities that are in the interests of humanity.
America’s biggest tech executives have called for strict regulation of artificial intelligence (AI), with entrepreneur Elon Musk warning that leaving the technology unchecked poses a “civilizational risk” for human beings.
The billionaire called the meeting a “service to humanity,” stating that it may “go down in history as very important to the future of civilization.” He also confirmed that during the forum, he had called artificial intelligence a “double-edged sword.”
Mr. Musk said that the consequences of AI going wrong are so severe that humanity has to be “proactive rather than reactive,” according to NBC.
“The question is really one of civilizational risk. It’s not like … one group of humans versus another. It’s like, hey, this is something that’s potentially risky for all humans everywhere,” he said.
“There is some chance that is above zero that AI will kill us all. I think it’s low. But if there’s some chance, I think we should also consider the fragility of human civilization.”
Meta CEO Mark Zuckerberg told reporters that Congress should “engage with AI to support innovation and safeguards.”
“This is an emerging technology, there are important equities to balance here, and the government is ultimately responsible for that,” he said.
It is better, he added, that “the standard is set by American companies that can work with our government to shape these models on important issues.”
In his prepared remarks at the event, Google CEO Sundar Pichai suggested that there should be “greater use of AI in government” and proposed advancing a “workforce transition agenda that benefits everyone,” according to CNBC.
Closed-Door Meeting
The AI summit drew sharp criticism from lawmakers, with members of both parties questioning the closed-door nature of the event, at which senators were not allowed to question the tech CEOs directly. Instead, they could only submit written questions.

In an interview with NBC, Sen. Elizabeth Warren (D-Mass.) said that “these tech billionaires want to lobby Congress behind closed doors with no questions asked. That’s just plain wrong.”
“They want to shape regulation so that the current tech billionaires are the ones who continue to dominate and make money … They should not have a forum to do that, especially a closed-door forum.”
Sen. Josh Hawley (R-Mo.) said he would not attend what he called a “giant cocktail party for big tech.”
“I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money and then close it to the public,” he said, according to AP.
Though the event was conducted behind closed doors, Senate Majority Leader Chuck Schumer (D-N.Y.), who convened the forum, suggested that some future meetings could be open to the public.
The AI Threat
Calls to strengthen AI regulation have grown in recent years as advances in artificial intelligence have left many experts worried about potential risks.

At the forum, Sam Altman, CEO of OpenAI, which developed ChatGPT, said that AI could greatly destabilize societies by manipulating elections and eliminating numerous jobs.
“We have tried to be very clear about the magnitude of risks here,” he said. “Given that we’re going to face an election next year … I do think some regulation would be quite wise on this topic. ... It’s one of my areas of greatest concern.”
“I think the weaponization of AI is the biggest danger,” he said. “I think that we will get into the equivalent of a nuclear arms race with AI, and if we don’t build it, the other guys are for sure going to build it, and so then it'll escalate.”
“You could imagine an AI in a combat theater, the whole thing just being fought by the computers at a speed humans can no longer intercede, and you have no ability to de-escalate.”
Meanwhile, Sens. Hawley and Richard Blumenthal (D-Conn.) earlier this month announced a bipartisan framework for artificial intelligence legislation that seeks to establish “guardrails” for AI tech.
The framework would require companies developing sophisticated general-purpose AI models to obtain licenses and register with an independent oversight body.