AI Operations Could Lead to ‘Catastrophic’ Consequences for China and US: Expert

Visitors look at an AI (artificial intelligence) security software program on a screen at the 14th China International Exhibition on Public Safety and Security at the China International Exhibition Center in Beijing on Oct. 24, 2018. NICOLAS ASFOURI/AFP via Getty Images
Andrew Thornebrooke
The rapid evolution of artificial intelligence (AI) technologies poses a challenge to international security and could be used by third-party actors to push nuclear rivals into catastrophic conflict, according to one expert.
“AI-powered technology is rapidly becoming another capability in the toolkit of third-party actors to wage campaigns of disinformation and deception, which both sides in a competitive dyad, [such as] the U.S. and China, may have used against them,” said James Johnson, a lecturer in strategic studies at Aberdeen University.
Groups leveraging AI, whether nation-states or otherwise, would have an "outsized strategic effect" in the coming years, Johnson said during a webinar on China-U.S. AI competition hosted by the International Institute for Strategic Studies.
He expressed concern that nonstate or other third-party actors could leverage critical and emerging technologies against nuclear powers, potentially hampering their capability to conduct military operations or drawing them into an unwitting nuclear conflict.
“In theory, a nonstate actor could target nuclear command and control systems, early-warning satellites, and radars, with AI-enhanced cyber weapons without the need for any kinetic or physical attack, let alone the possession of nuclear weapons,” Johnson said.

Johnson said that AI will drastically lower the capability threshold for so-called false flag attacks, effectively allowing small groups with limited resources to engage in attacks designed to misdirect nations.

A hacking group, for example, could fool American or Chinese leaders or systems into believing that they were under attack, thereby triggering a retaliation and effectively tricking one nation into attacking another.

Johnson said that "massive increases" in the speed of machine decision-making, surpassing human comprehension, together with the widespread adoption of AI systems, could further erode the ability to control or contain such events.

“Imagine, for example, if the Cuban Missile Crisis was truncated from 13 days to a matter of hours, minutes, or even nanoseconds,” Johnson said.

Contributing to this threat, Johnson said, was the fact that China, Russia, and the United States all keep some of their nuclear forces on a so-called “launch-on-warning” posture. This means that those nations will launch retaliatory nuclear strikes upon receiving a warning that they are under nuclear attack, rather than waiting for a detonation to confirm such an attack actually took place.

If a group of hackers were to set off one of these nations' nuclear warning systems, therefore, there is a real chance that it would trigger an actual nuclear strike.

Johnson said that third parties employing such tactics "could bring two or more nuclear-armed rivals close to the brink" and would be a destabilizing force in the years to come.

“This is fast becoming a plausible scenario,” Johnson said.

Johnson also explained that operations ostensibly not connected to the nuclear warning system could still wreak havoc on military and civil leadership and operations. Deepfake videos and other disinformation operations designed to promote miscalculation and misconception among national leadership are likely to rise, he said, and could bring about the same consequences.

Such interference with the information landscape during a crisis between two nuclear powers, wherein communications are already compromised and decision-making compressed, could result in the worst-case scenario.

“The consequences of these kinds of information operations could obviously be catastrophic,” Johnson said.

To that end, Johnson said that human error and machine error would likely compound to produce uncertain and unexpected outcomes as new technologies emerge and are deployed.

“In short, absent human judgment and control, together with the inherent brittleness or lack of context or common sense of existing machine algorithms, the risk of destabilizing accidents and false alarms is set to rise,” Johnson said.

To prevent an unmitigated disaster, Johnson said that nations ought to act now and take proactive steps to safeguard against such attacks and miscalculations. He suggested that command-and-control systems be improved, redundancies be built into nuclear protocols, and new norms be developed among both allies and adversaries concerning the use and deployment of AI.

“Now is a time for positive intervention before it is too late,” Johnson said.

Andrew Thornebrooke
National Security Correspondent
Andrew Thornebrooke is a national security correspondent for The Epoch Times covering China-related issues with a focus on defense, military affairs, and national security. He holds a master's in military history from Norwich University.