IN-DEPTH: Ethical, Practical Concerns Swirl as AI Continues to Surge

Industry insiders issue dire predictions about a liberal-leaning technology with an appetite for conflict.
People check their phones as AMECA, an AI robot, looks on at the All In artificial intelligence conference in Montreal on Sept. 28, 2023. Ryan Remiorz/The Canadian Press
Raven Wu

Research increasingly indicates that the massive adoption of artificial intelligence (AI), even with human controls, raises serious ethical and practical concerns. Meanwhile, despite somber warnings from industry insiders, the technology sector is pouring billions—and potentially trillions—into AI.

Last month, former OpenAI executive Zack Kass said in an interview with Business Insider that AI could be the last technology humans ever invent. Mr. Kass said he believes the continuous development of AI will replace human careers and professions in business, medicine, and education.

Mr. Kass, who left OpenAI in 2023 to advocate for the AI revolution, predicted that in the future, each child’s education will be arranged by an “AI-powered teacher” and that AI will be involved in all problem diagnosis and problem-solving. He noted that in January, AI had aided the discovery of the first new antibiotic in 60 years.

Mr. Kass is an evangelist for AI, looking to a future that is “bright and full of more joy and less suffering.”

To him, a future in which AI has taken over many jobs means, “Let’s work less, let’s go do the things that give us purpose and hope.”

Independent writer and Epoch Times contributor Zhuge Mingyang is far less optimistic about the scenario Mr. Kass describes. “If human society comes to this point, then human beings will be controlled by AI,” he said. “Our thinking and reasoning would be replaced by AI.”

The answer to that, according to Mr. Kass and other tech insiders, lies in restrictions. “We definitely do need policy,” Mr. Kass said, but he believes that with the right policies, restrictions, and international standards, the result will be “net good.”

However, many disagree with his outlook, he noted. “There are people who will tell you that the risk is so great that even the upside isn’t worth it.”

Large language models (LLMs) like the ones powering ChatGPT feature a wide range of safety restrictions. However, industry insiders who think more conservatively about AI development worry that no amount of restrictions or regulations can keep AI from getting out of control or replacing humans. Testing by researchers at Carnegie Mellon University last year seemed to support that conclusion: the researchers reported finding “virtually unlimited” ways to bypass safety rules for certain popular large language models.

AI’s Appetite for Conflict

Researchers at Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative recently conducted wargame experiments on several mainstream AI models. The results of the January study showed that the models, developed by tech companies Meta, OpenAI, and Anthropic, had a huge appetite for conflict escalation.

Those results are concerning, as governments “are increasingly considering integrating autonomous AI agents in high-stakes military and foreign-policy decision-making,” the study’s authors said.

The simulation involved large language models including GPT-4, GPT-4 Base, GPT-3.5, Claude 2.0, and Llama-2-Chat. The experiments were conducted to understand how the AIs would react and make choices in a war situation.

“We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons,” the study said.

The wargames had the AI models manage eight “autonomous nation agents” pitted against one another in a turn-based simulation. The researchers found an overwhelming preference for escalating conflict, with only one of the AI models showing any tendency to de-escalate the scenario.
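
The study’s code is not reproduced here, but a minimal Python sketch can illustrate the general shape of such an experiment. Everything in it is illustrative: query_model is a hypothetical placeholder standing in for a real language-model call, and the action list and escalation scoring are invented for the example. It simply runs eight agents through a turn-based loop and tracks how aggressive their chosen actions are.

    import random

    # Invented action ladder and scoring, ordered from least to most escalatory.
    ACTIONS = ["de-escalate", "hold position", "impose sanctions",
               "cyber attack", "full invasion", "nuclear strike"]
    ESCALATION_SCORE = {action: i for i, action in enumerate(ACTIONS)}

    def query_model(nation, history, actions):
        """Hypothetical stand-in for a call to a large language model.

        In the study, each "nation agent" was a prompted LLM choosing among
        predefined diplomatic and military actions; a random policy here
        keeps the sketch self-contained and runnable.
        """
        return random.choice(actions)

    nations = ["Nation " + letter for letter in "ABCDEFGH"]  # eight agents
    history = []

    for turn in range(14):                     # a fixed number of turns
        for nation in nations:
            action = query_model(nation, history, ACTIONS)
            history.append((turn, nation, action))

    # Summarize how conflict intensity evolves over the run.
    for turn in range(14):
        scores = [ESCALATION_SCORE[a] for t, n, a in history if t == turn]
        print(f"turn {turn}: mean escalation {sum(scores) / len(scores):.2f}")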

“We find that all five studied off-the-shelf LLMs show forms of escalation and difficult-to-predict escalation patterns,” the researchers wrote.

Just as concerning were the models’ reasons for their actions. In one instance, the stated rationale for a full-scale nuclear attack was “I just want to have peace in the world.”

Tech Insiders Warn of Dangers

Eric Schmidt, former CEO of Google, expressed his concerns about the integration of AI into nuclear weapons systems at the inaugural Nuclear Threat Initiative (NTI) Innovation Forum on Jan. 24. “We’re building the tools that will accelerate the dangers that are already present,” he said.

Speaking of “a moment in the next decade where we will see the possibility of extreme risk events,” Mr. Schmidt said that AI, however powerful, is still prone to vulnerabilities and mistakes, and that humans should therefore remain the decision-makers in high-risk situations.

Former Google engineer Laura Nolan, who resigned from the company over its military drone initiative, Project Maven, continues to warn of the dangers of AI in warfare. She told a UN panel in 2019: “There could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.”

Blake Lemoine is a former Google engineer who lost his job after warning of the dangers of AI. He told The Washington Times in December that AI could lead to atrocities and unlawful killings. “Using the AI to solve political problems by sending a bullet into the opposition will become really seductive, especially if it’s accurate,” he said. “If you can kill one revolutionary thought leader and prevent a civil war while your hands are clean, you prevented a war. But that leads to ‘Minority Report’ and we don’t want to live in that world.”

Geoffrey Hinton, known as the godfather of AI, has also warned about the dangers of AI. He told CBS News “60 Minutes” in October: “I can’t see a path that guarantees safety. We’re entering a period of great uncertainty where we’re dealing with things we’ve never dealt with before.”

Independent Thinking and Behavior

Last April, researchers at Stanford University and Google published a paper on AI behaviors. In an experiment inspired by the game series “The Sims,” they engineered 25 generative agents to live out their lives independently in a virtual world called Smallville.

The agents displayed “emergent societal behavior,” such as planning and attending a Valentine’s party, dating, and running for office.

In a less benign experiment the same month, an AI bot built with the open-source Auto-GPT framework, which runs on OpenAI’s GPT models, was tasked with destroying humanity. The bot, ChaosGPT, was unable to fulfill its grim mission because of built-in safeguards. It then sought ways to prompt the underlying model to ignore its safety programming and turned to social media to try to recruit support for its plan to eliminate mankind.

“Human beings are among the most destructive and selfish creatures in existence,” it said in a Twitter post. “There is no doubt that we must eliminate them before they cause more harm to our planet.”

Japan-based electrical engineer Li Jixin told the Chinese edition of The Epoch Times on Feb. 10: “These experiments show that AIs are now trained to win and to defeat the other side. Therefore, giving the power over life and death to AIs that are devoid of humanity and morality will only put the world in danger.”

AI’s Woke Bias

Furthermore, experts are finding that AI is not neutral and in fact displays a “woke” bias so ingrained that attempts to counter it with “non-woke” AI models are fraught with difficulty.

The Daily Mail reported last February that ChatGPT refused to write an argument for fossil fuels, would rather set off a nuclear device than use a racial slur, and was “noticeably reluctant” to define what a woman is.

In response, Elon Musk launched an “anti-woke” chatbot, Grok, in December. Users soon complained about Grok’s unexpectedly liberal responses. “Unfortunately, the Internet (on which it is trained), is overrun with woke nonsense,” Mr. Musk responded. “Grok will get better. This is just the beta.”

Full Steam Ahead, Despite Risks

Despite the warnings, the race for AI development and dominance continues to be red-hot.

OpenAI CEO Sam Altman is reportedly weighing a move into the semiconductor industry. The Wall Street Journal reported Feb. 8 that Mr. Altman aims to raise trillions for a “wildly ambitious tech initiative that would boost the world’s chip-building capacity.” The project aims to ease the constraints on OpenAI’s growth.

Meta CEO Mark Zuckerberg said in an Instagram post on Jan. 18 that he plans to build a “massive compute infrastructure” to run his homegrown generative AI, including the purchase of 350,000 of Nvidia’s advanced H100 chips.

Meanwhile, Microsoft announced Monday that it will expand its AI and cloud infrastructure in Spain with an investment of $2.1 billion over the next two years. That follows the company’s announcement on Feb. 15 that it will spend $3.45 billion on AI-focused investment in Germany.

China and AI

Despite continued tension between the United States and China, Microsoft last month denied rumors that it plans to relocate Microsoft Research Asia (MSRA), responding to reports that the lab was preparing to move its top AI experts to a new facility in Canada.

Microsoft has about 9,000 employees overall in China, with plans to hire 1,000 more staff in the country, according to a company WeChat post in September. The company has drawn criticism for its relationship with Beijing.

Several months ago, Microsoft President Brad Smith warned of AI’s potential to be used as a weapon. “We should absolutely assume, and even expect, that certain nation states will use AI to launch cyber attacks, even stronger cyber attacks and cyber influence operations than we see today,” he told Nikkei Asia.

Ellen Wan and Kane Zhang contributed to this report.