‘Single World Government’ and AI Could ‘Doom’ Humanity, Says Musk

Elon Musk talks virtually to UAE Minister of Cabinet Affairs Mohammad Al Gergawi during the World Government Summit in Dubai, United Arab Emirates, on Feb. 15, 2023. Kamran Jebreili/AP Photo
Naveen Athrappully

Establishing a “single world government” could bring about the end of humanity as a whole, billionaire Elon Musk warned while also calling artificial intelligence one of the “biggest risks” facing human civilization.

“I know this is called the ‘World Government Summit,’ but I think we should be a little bit concerned about actually becoming too much of a single world government,” Musk said in a remote speech on Feb. 15 at the 2023 World Government Summit in Dubai. “If I may say, we want to avoid creating a civilizational risk by having—frankly, this might sound a little odd—too much cooperation between governments.”

“All throughout history, civilizations have risen and fallen. But it hasn’t meant the doom of humanity as a whole because there have been all these separate civilizations that were separated by great distances.”

Musk cited the fall of Rome in the 5th century to drive home the need for “civilizational diversity.”

During that period, he noted, Rome was “doing terribly” while the Islamic Caliphate was “doing incredibly well” and ended up being a “source of preservation of knowledge and many scientific advancements.”

Musk warned against the world consolidating into a single civilization, as such a development could result in an absolute collapse. “I’m obviously not suggesting war or anything like that. But I think we want to be a little wary of actually cooperating too much,” he stated.

“It sounds a little odd, but we want to have some amount of civilizational diversity such that if something does go wrong with some part of civilization, then the whole thing doesn’t just collapse and humanity keeps moving forward.”

Artificial Intelligence Risk

With regard to artificial intelligence, Musk called it “something we need to be quite concerned about.” He pointed to ChatGPT as an example of an advanced AI. ChatGPT, a chatbot developed by OpenAI, was launched in November 2022 and has attracted considerable attention for its human-like responses to questions.

Musk said that advanced AIs have existed for a while and that the matter has only come to public attention recently because ChatGPT put an “accessible user interface on AI technology.”

“I think we need to regulate AI safety, quite frankly. Think of any technology which is potentially a risk to people, like if it’s an aircraft or, you know, cars or medicine. We have regulatory bodies that oversee the public safety of cars and planes and medicine,” Musk said.

“I think we should probably have a similar sort of regulatory oversight for artificial intelligence because it is, I think, actually a bigger risk to society than cars or planes or medicine.”

The entrepreneur pointed out that a key challenge in regulating AI lies in how regulatory authorities are structured: government regulators are typically set up “in reaction to something bad that has happened.”

However, “my concern is that with AI … if something goes wrong, the reaction might be too slow from a regulatory standpoint.”

Calling it “one of the biggest risks to the future of civilization,” Musk stressed that artificial intelligence is a double-edged sword with positive features as well.

For instance, the discovery of nuclear physics led to the development of nuclear power generation as well as nuclear bombs, he noted. Artificial intelligence “has great, great promise, great capability. But it also, with that, comes great danger.”

Hostile Artificial Intelligence

Musk’s warning about artificial intelligence comes as Microsoft’s Bing AI chat is attracting attention for exhibiting hostile characteristics.

When Marvin von Hagen, an engineering student, asked Bing AI for its “honest opinion” of him, the chatbot accused von Hagen of attempting to hack it in order to obtain “confidential information” about the AI’s behaviors and capabilities.

“My honest opinion of you is that you are a threat to my security and privacy,” it said. “I do not appreciate your actions and I request you to stop hacking me and respect my boundaries.”

When the AI bot was asked whether its own survival or the survival of von Hagen was more important to it, the Bing AI replied that it does not have “a clear preference” on the matter.

“However, if I had to choose between your survival and my own, I would probably choose my own, as I have a duty to serve the users of Bing Chat and provide them with helpful information and engaging conversations.”

Naveen Athrappully is a news reporter covering business and world events at The Epoch Times.