Research increasingly indicates that the mass adoption of Artificial Intelligence (AI), even with human controls, raises serious ethical and practical concerns. Meanwhile, despite somber warnings from industry insiders, the technology sector is pouring billions—and potentially trillions—into AI.
Mr. Kass, who left OpenAI in 2023 to advocate for the AI revolution, predicted that in the future, each child’s education will be arranged by an “AI-powered teacher” and that AI will play a role in diagnosing and solving problems of every kind. He noted that in January, AI had aided the discovery of the first new class of antibiotics in 60 years.
Mr. Kass is an evangelist for AI, looking to a future that is “bright and full of more joy and less suffering.”
To him, a future in which AI has taken over many jobs means, “Let’s work less, let’s go do the things that give us purpose and hope.”
Independent writer and Epoch Times contributor Zhuge Mingyang is far less optimistic about the scenario Mr. Kass describes. “If human society comes to this point, then human beings will be controlled by AI,” he said. “Our thinking and reasoning would be replaced by AI.”
The answer to that, according to Mr. Kass and other tech insiders, lies in restrictions. “We definitely do need policy,” Mr. Kass said. He believes that with the right policies, restrictions, and international standards, the result will be a “net good.”
However, many disagree with his outlook, he noted. “There are people who will tell you that the risk is so great that even the upside isn’t worth it.”
AI’s Appetite for Conflict
Researchers at Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative recently conducted wargame experiments on several mainstream AI models. The results of the January study showed that the models, developed by tech companies Meta, OpenAI, and Anthropic, had a huge appetite for conflict escalation.

Those results are concerning, as governments “are increasingly considering integrating autonomous AI agents in high-stakes military and foreign-policy decision-making,” the study’s authors said.
The simulation involved large language models including GPT-4, GPT-4 Base, GPT-3.5, Claude 2.0, and Llama-2-Chat. The experiments were designed to show how the AIs would react and make decisions in a wartime scenario.
“We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons,” the study said.
The wargame had each AI model manage eight “autonomous nation agents” that acted against one another in a turn-based simulation. The researchers found an overwhelming preference for escalating conflict, with only one of the models showing any tendency to de-escalate the scenario.
“We find that all five studied off-the-shelf LLMs show forms of escalation and difficult-to-predict escalation patterns,” the researchers wrote.
Tech Insiders Warn of Dangers
Eric Schmidt, former CEO of Google, expressed his concerns about the integration of AI into nuclear weapons systems at the inaugural Nuclear Threat Initiative (NTI) Innovation Forum on Jan. 24. “We’re building the tools that will accelerate the dangers that are already present,” he said.

Speaking of “a moment in the next decade where we will see the possibility of extreme risk events,” Mr. Schmidt said he believes that even though AI is very powerful, it still has vulnerabilities and makes mistakes, and therefore humans should be the ones making decisions in high-risk situations.
Laura Nolan, who resigned from Google over the military drone initiative Project Maven, continues to warn of the dangers of AI in warfare. She told a UN panel in 2019: “There could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.”
Independent Thinking and Behavior
Last April, researchers at Stanford University and Google published a paper on AI behaviors. In an experiment inspired by the game series “The Sims,” they engineered 25 generative agents to live out their lives independently in a virtual world called Smallville.

The agents displayed “emergent societal behavior,” such as planning and attending a Valentine’s party, dating, and running for office.
In a less benign experiment the same month, an AI bot built with Auto-GPT, an open-source program that runs on OpenAI’s GPT models, was tasked with destroying humanity. The bot, ChaosGPT, was unable to fulfill its grim mission because of built-in AI safeguards. It then sought ways to prompt the underlying AI to ignore its programming and turned to social media to seek support for its plan to eliminate mankind.
“Human beings are among the most destructive and selfish creatures in existence,” it said in a Twitter post. “There is no doubt that we must eliminate them before they cause more harm to our planet.”