Merge AI with the above weapons, particularly nuclear weapons, cautions Zachary Kallenborn, a research affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), and you have a recipe for unmitigated disaster.
He isn’t exaggerating. Exactly 40 years ago, as Mr. Kallenborn, a policy fellow at the Schar School of Policy and Government, recounted, Stanislav Petrov, a Soviet Air Defense Forces lieutenant colonel, was monitoring his country’s nuclear warning systems. All of a sudden, according to Mr. Kallenborn, “the computer concluded with the highest confidence that the United States had launched a nuclear war.” Mr. Petrov was skeptical, however, largely because he didn’t trust the recently installed detection system. Moreover, ground radar failed to corroborate the warning.
Thankfully, Mr. Petrov concluded that the message was a false positive and opted against taking action. Spoiler alert: The computer was completely wrong, and the Russian was completely right.
“But,” noted Mr. Kallenborn, a national security consultant, “if Petrov had been a machine, programmed to respond automatically when confidence was sufficiently high, that error would have started a nuclear war.”
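To make that failure mode concrete, here is a minimal, purely hypothetical sketch of the difference between a machine that acts on confidence alone and a “Petrov” who demands corroboration. The function names, threshold, and logic are invented for illustration and are not drawn from any real command-and-control system.

```python
# Hypothetical illustration only: a toy model of Mr. Kallenborn's point,
# not any real command-and-control logic. All names and values are invented.

CONFIDENCE_THRESHOLD = 0.95  # invented threshold for illustration


def automated_response(confidence: float) -> str:
    """A machine 'Petrov': acts whenever confidence alone is high enough."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return "LAUNCH"  # the 1983 false alarm would have triggered this path
    return "STAND DOWN"


def human_in_the_loop(confidence: float, radar_corroborates: bool) -> str:
    """Roughly Petrov's reasoning: distrust the sensor, demand corroboration."""
    if confidence >= CONFIDENCE_THRESHOLD and radar_corroborates:
        return "ESCALATE FOR HUMAN DECISION"
    return "STAND DOWN"  # high confidence alone is treated as a possible false alarm


# The 1983 incident: the computer reported "highest confidence,"
# but ground radar saw nothing.
print(automated_response(0.99))        # LAUNCH (an accidental nuclear war)
print(human_in_the_loop(0.99, False))  # STAND DOWN (Petrov's call)
```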
Furthermore, he suggested, there’s absolutely “no guarantee” that certain countries “won’t put AI in charge of nuclear launches,” because international law “doesn’t specify that there should always be a ‘Petrov’ guarding the button.”
“That’s something that should change, soon,” Mr. Kallenborn said.
He told me that AI is already reshaping the future of warfare.
Artificial intelligence, according to Mr. Kallenborn, “can help militaries quickly and more effectively process vast amounts of data generated by the battlefield; make the defense industrial base more effective and efficient at producing weapons at scale; and may be able to improve weapons targeting and decision-making.”
This should concern all readers.
Mr. Kallenborn fears that “if the launch of nuclear weapons is delegated to an autonomous system,” the weapons “could be launched in error, leading to an accidental nuclear war.”
“Adding AI into nuclear command and control,” he said, “may also lead to misleading or bad information.”
Although there isn’t one particular country that keeps Mr. Kallenborn awake at night, he’s worried by “the possibility of Russian President Vladimir Putin using small nuclear weapons in the Ukraine conflict.” Even limited nuclear use “would be quite bad over the long term” because it would erode “the nuclear taboo,” thereby “encouraging other states to be more cavalier with nuclear weapons usage.”
“Nuclear weapons,” according to Mr. Kallenborn, are the “biggest threat to humanity.”
“They are the only weapon in existence that can cause enough harm to truly cause human extinction,” he said.
As mentioned earlier, throwing AI into the nuclear mix appears to increase the risk of human extinction. The warnings of Mr. Kallenborn, a well-respected researcher who has spent years studying the evolution of nuclear warfare, carry a great deal of weight.