US and China Face Risks, Ethical Challenges in the Weaponization of AI

A man takes a picture of robots during the World Artificial Intelligence Conference (WAIC) in Shanghai, China, on July 7, 2023. Wang Zhao/AFP via Getty Images
Stephen Xia
Commentary
Prior to the APEC summit in San Francisco in mid-November, President Joe Biden and Chinese leader Xi Jinping were reportedly expected to discuss a potential ban on the use of artificial intelligence (AI) in autonomous weapons, such as drones, and in the control of nuclear warheads. President Biden told reporters after the summit that the two countries would resume military-to-military contact and maintain direct communications to avoid misunderstandings and potential accidents.
Ultimately, no formal agreement was reached on limiting the military use of AI. Still, both the White House and the Chinese foreign ministry mentioned in their post-summit releases the possibility of U.S.–China negotiations on AI. President Biden told reporters, “We’re going to get our experts together to discuss risk and safety issues associated with artificial intelligence.”

Obstacles to a Mutual Ban

Neither Washington nor Beijing has disclosed which experts will participate or what issues will be discussed. The talks could cover the use of AI in lethal autonomous weapon systems, which the United Nations has raised concerns about due to the lack of human control and oversight. One example is the U.S. military’s use of “killer robots,” which are controlled by AI and can technically function independently. Such technology has come under scrutiny in congressional hearings and within the Pentagon.

However, the prospects for a consensus, let alone a binding agreement, between the United States and China are not promising. There are several reasons why a treaty or ban may not be forthcoming. The first is definitional: there is no simple, clear line between harmless AI technology widely used in everyday life and AI that could pose a serious threat to humans in the future.

AI applied to industrial management, automated machinery, systems analysis, and other data-intensive fields has advanced modern technology and transformed human productivity. Another roadblock is that no country truly wants to exclude AI from its military technology, let alone accept the competitive disadvantage of forgoing it.

Ethical Concerns

AI’s success in civilian technology can be replicated in the military, where it has long been used effectively. With the United States, China, and other countries scrambling to incorporate AI into their militaries, an arms race in the AI field is inevitable. Such competition risks sidelining the responsible use of AI in the military.

The most worrying prospect is the integration of AI into nuclear arsenals. As AI becomes an integral part of human activity, the serious implications of its military use must be weighed for the sake of humanity’s future.

Within the United States, the ethics of AI-enabled weapons systems have been a significant topic of debate. In February, the State Department launched the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, a framework of principles without legal force.
The declaration outlines several principles for using AI, built around keeping a human decision-maker in the chain of control of AI-managed weapons. The goal is to address the risks associated with AI and ensure that autonomous weapons systems do not kill indiscriminately. This principle applies to all weaponry, from drones to nuclear warheads and intercontinental ballistic missiles (ICBMs). Maintaining human control has been the basis of American military strategy on this issue.
In June, U.S. national security adviser Jake Sullivan called on other nuclear-armed states to commit to maintaining human decision-making authority over the command, control, and use of nuclear weapons. This would be a major topic to discuss if the United States and China are to negotiate on this issue.

Influence on Human Decision-Making

Historically, even the former Soviet Union, which invested enormous resources in automating its nuclear command-and-control infrastructure during the Cold War, stopped short of building a fully automatic doomsday weapons system. Its “Dead Hand” was a special nuclear weapon system designed to launch a retaliatory strike with minimal human involvement in the event of a devastating nuclear attack, yet even it still relied on humans in underground bunkers.

The increasing use of AI has led some to question how meaningful it is to keep humans in the command-and-control system, since humans make plenty of bad decisions. Many argue that the real danger is not that the decision to use nuclear weapons will be handed over to an AI but that decision-makers will come to rely on the options AI presents, letting it shape their decisions much as drivers rely on GPS.

Human decision-makers relying too heavily on AI’s advice could lead to far more serious consequences. If an AI system produces convincing but bad advice, decision-makers who depend on it may be unable to identify a better option.

The United States has the world’s second-largest nuclear arsenal after Russia, with a variety of nuclear warheads that can be launched on a few minutes’ notice. China is seeking the same capabilities by constructing new missile silos and launching new early warning satellites.

In a time of conflict, the real danger is a false alarm in the nuclear alert system, which could prompt a dangerous response from either side. In other words, that risk exists whether or not humans are in the command-and-control loop, but it does not diminish the importance of human decision-making.

A deactivated Titan II ICBM in a silo at the Titan Missile Museum in Green Valley, Ariz., on May 12, 2015. Brendan Smialowski/AFP via Getty Images

Human control of a system involving AI cannot guarantee correct decisions. Nevertheless, human decision-making should never be absent, and reducing human overreliance on AI is necessary. The fundamental purpose of keeping human decision-makers in the command-and-control system of automated weapons is to ensure that decision-making power is never left entirely in the hands of AI.

To avoid the frightening prospect of AI controlling the world’s fate, world leaders need to reach a consensus on the responsible use of such a powerful technology. For now, the outlook for a binding treaty between China, the United States, and the West on the weaponization of AI is bleak.

Whether people like it or not, world leaders are likely to rely more and more on AI as the technology continues to advance. One can only hope that decision-makers in China, Russia, the United States, and other countries will set aside their geopolitical differences and address the safety concerns surrounding AI’s increased use and development.

Michael Zhuang contributed to this commentary.
Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.
Stephen Xia
Author
Stephen Xia, a former PLA engineer, specialized in aviation equipment and engineering technology management. Since retiring from military service, he has followed developments in military equipment worldwide.