Obstacles to a Mutual Ban
Neither Washington nor Beijing has disclosed which experts will participate or what issues will be discussed. The talks could cover the use of AI in lethal autonomous weapon systems, which the United Nations has raised concerns about due to the lack of human control and oversight. One example is the U.S. military’s use of “killer robots,” which are controlled by AI and can technically function independently. Such technology has come under scrutiny in congressional hearings and within the Pentagon.

However, the prospects for a consensus and a binding agreement between the United States and China are not at all promising. There are several reasons why a treaty or ban may not be forthcoming. The lack of a simple, clear definition makes it difficult to distinguish between harmless AI technology widely used in everyday life and AI that could pose a serious threat to humans in the future.
Ethical Concerns
The success of AI in civilian technology can be replicated in the military, and AI has long been used successfully for military purposes. With the United States, China, and other countries scrambling to incorporate AI into their militaries, an arms race in the AI field seems inevitable. Such a competition may come at the expense of the responsible military use of AI.

The most worrying prospect is the integration of AI into nuclear arsenals. As AI becomes an integral part of human activity, the serious implications of its military use must be weighed for the future of humanity.
Influence on Human Decision-Making
Historically, even the former Soviet Union, which invested countless resources in automating its nuclear command-and-control infrastructure during the Cold War, did not go all the way to building an automatic doomsday weapons system. Moscow’s “Dead Hand” system, a special Soviet nuclear weapons system designed to launch a retaliatory strike with minimal human involvement after a devastating nuclear attack, still relied on humans in underground bunkers.

The increasing use of AI has led some to question how meaningful it is to keep humans in the command-and-control system, since humans make plenty of bad decisions. Many argue that the real danger is not that the decision to use nuclear weapons will be handed over to an AI, but that decision-makers will come to rely on the options AI provides, letting it shape their decisions much as drivers rely on GPS.
Human decision-makers relying too heavily on AI advice could lead to far more serious consequences. If an AI system produces convincing but bad advice, human decision-makers may be unable to identify a better option because of that reliance.
In a time of conflict, the real danger is the possibility of a false alarm in the nuclear alert system, which could prompt a dangerous response from either side. In other words, whether or not humans are in the command-and-control loop, the risk will always be there; this does not, however, diminish the importance of human decision-making.
Human control of a system in which AI is involved cannot guarantee correct decision-making. Even so, human decision-making should never be absent, and reducing human reliance on AI is necessary. The fundamental purpose of keeping human decision-makers in the command-and-control systems of autonomous weapons is to ensure that decision-making power is never left entirely in the hands of AI.
To avoid the frightening prospect of AI controlling the world’s fate, world leaders need to reach a consensus on the responsible use of such a powerful technology. Currently, the outlook for a binding treaty between China, the United States, and the West on the weaponization of AI is rather bleak.
Whether people like it or not, world leaders are likely to rely more and more on AI, and its development will continue to advance. One can only hope that decision-makers in China, Russia, the United States, and other countries will set aside their geopolitical differences and address the safety concerns surrounding the increased use and development of AI.