US Grapples With AI Policy As China Threat Looms

Communist China is pushing forward in military AI development as the United States has yet to adopt a whole-of-government policy framework for AI.
University freshmen take part in a military education and drill session at the beginning of the new semester in Yangzhou in China's eastern Jiangsu province on Sept. 4, 2018. (STR/AFP via Getty Images)
Andrew Thornebrooke
6/28/2024
Updated: 6/28/2024

Cloistered within the laboratories of a Chinese defense university, soldiers are fighting on virtual battlefields, spurred on by the orders of a first-of-its-kind digital commander.

These are war games intended to assist the Chinese Communist Party (CCP) in conducting large-scale training exercises even when high-ranking military officers cannot be pulled away from their duties elsewhere.

What sets these war games apart from the dozens of others conducted every year is one critical distinction: In these games, supreme command authority for the Chinese forces has been granted to an artificial intelligence (AI).

Researchers designed this AI commander to mirror its human counterparts in every way. It develops thought patterns and adopts unique personalities. It can even be “forgetful,” shedding its virtual memories when a human commander would be unlikely to retain a similar amount of information.

The commander’s existence and exploits were detailed last month in the Chinese-language journal Command Control and Simulation, and first reported by the South China Morning Post.

The research, and the creation of the AI commander, is pivotal. Though the Chinese regime has worked for years to develop human-machine teaming software for its military commanders, this is the first known example of command authority being delegated to an AI during a training simulation.

In large-scale computer war games involving all branches of the CCP’s military wing, the People’s Liberation Army (PLA), the AI commander learned from numerous, evolving virtual wars. Even if such AI is never granted real-world command authority, the insights gleaned from its virtual conquests will accelerate the evolution of Chinese military strategy in unprecedented ways as PLA officers learn from its victories and defeats.

The emergence of the world’s first virtual generalissimo also marks a pivotal moment for political and military decision-making as a whole, and raises questions as to whether AI may soon influence, and possibly control, key strategic decisions throughout the world.

Such developments are increasingly on display in the United States as well.

As the CCP’s AI commander was vanquishing its virtual foes, President Joe Biden said in a May CNN interview that his own expert advisers were wholly divided on the threat posed by AI and the extent to which it would reshape American society.
President Biden acknowledged that at least one expert had advised him that AI would soon “overtake human thinking” altogether.

US Lacks Comprehensive Policy Vision for AI

Should critical military and political decision-making processes be handed over to AI, how those systems operate, and by extension how they perform, will largely be a product of the policies of the nations adopting them.

Though leading voices in the field may have their own opinions on how widespread the integration of AI and national decision-making will be, the fact remains that virtually no nation on earth has developed a robust, whole-of-government policy framework for developing and deploying AI at the state level.

In the United States, such questions have been left to various government entities to address piecemeal, with the Departments of Defense, Energy, and Transportation each building out separate operational guidelines for developing and deploying AI.

According to John Mills, former director for cybersecurity policy, strategy, and international affairs in the Office of the Secretary of Defense, this ad hoc approach misses the critical need for a whole-of-government policy framework to direct AI development and deployment.

“Policy is important but our people dismiss policy,” Mr. Mills told The Epoch Times. “But you always have to have a policy framework. [You have to ask,] ‘How are we going to use something?’”

“You don’t want policymakers reaching into the operational [side] and making operational decisions. But you don’t want operators making policy decisions either.”

To be sure, there have been some attempts by Congress to more fully comprehend the surge of AI, such as Senate Majority Leader Chuck Schumer’s proposal for Congress to research and adopt guidelines for AI.
Those efforts have made little headway, however, owing in large part to industry insiders’ radically divided beliefs about the promises and perils of AI, and to Congress’s own struggle to grasp some of the basic principles of a rapidly developing field.

Mr. Mills said that AI is in many ways the most recent “boogeyman” that Congress does not yet understand, but noted that a policy framework, or lack thereof, would nevertheless have wide-reaching consequences.

One such issue at hand is that, so long as the United States fails to implement wide-reaching AI policy and regulation, standards will be set elsewhere in the world, and the United States may be compelled to adopt them.

While the United States has made little headway in establishing government-wide AI policy, for example, European lawmakers have drafted and implemented more robust rules for AI.

Sam Kessler, a geopolitical adviser at the North Star Support Group risk advisory firm, said that the United States needs to do more to lead if it is to ensure its own values are embedded into how AI is deployed.

“The U.S. is still in the process of creating its own version of AI regulation,” Mr. Kessler told The Epoch Times in an email.

“The European Union is further ahead on this than the U.S., with the recent adoption of the EU AI Act, which established a mechanism to set up norms and application standards that can build trust and reliability by businesses, government, and general users of the technology.”

The EU AI Act, adopted earlier this year, establishes guidelines for AI at different “risk levels” and creates transparency requirements for content created with the assistance of AI.

Mr. Kessler said that such policy guidelines provide a blueprint for technological development and will ultimately shape future AI legislation, much like the first in a series of legislative dominoes.

“The manner in which this regulation is implemented will determine how AI is used in the application of systems and decision-making processes,” Mr. Kessler said.

Like most nations, the United States does have a national AI strategy, which aims at “advancing responsible AI systems that are ethical, trustworthy, and safe, and serve the public good.”

Conceptualizing that strategic goal and turning it into legal reality are two very different things, however, and to date most U.S. AI guidance has centered on voluntary pledges made by tech executives to police themselves.

That is not to say that there are no regulations. According to a report by Stanford University, there were 25 AI-related regulations enshrined in law in the United States in 2023, up from just one in 2016. Moreover, some 188 AI-related bills were proposed in 2023. Though the vast majority of them did not materialize, it is clear that a trend toward regulation is accelerating.

Where the United States has failed to meet the challenge of AI policy, however, the CCP has already begun drafting sweeping regulations. Most notable among them, perhaps, is a requirement by the regime’s internet regulator that all content generated by AI “reflect the socialist core values” espoused by the CCP.

The lack of a counterweight in the United States could spell trouble, according to Mr. Kessler.

When AI systems are integrated into U.S. decision-making processes, to whatever extent, their use or misuse could hinge on whether the United States has ensured its own AI will uphold the values of the republic.

“Misuse or misapplication of AI is a big concern given the current level of volatility we are witnessing in the international system,” Mr. Kessler said.

“This is where our fundamental beliefs as a nation will be greatly tested if we go the direction of AI usage that our competitors and adversaries are applying for their purposes.”

Ideology must therefore be a critical consideration in developing the nation’s AI policy, and it is in the realm of ideology that the United States and China’s methods diverge dramatically.

2 Visions of AI Decision-Making

To that end, President Biden said last month that how AI development is “controlled” will ultimately dictate whether it is a boon or catastrophe for humanity, and that control requires hard policy solutions.

For Mr. Kessler, it is not a matter of whether AI will influence key decision-making processes but of how soon. When that moment arrives, how resilient democratic systems prove to be will largely depend on how well U.S. policy has been defined and how well those in positions of power have been educated in its uses.

“AI will certainly have an impact on the decision-making capabilities of current leaders as well as managers of systems in general,” Mr. Kessler said.

“Incorporating AI into our management systems and policy decisions means we must stay on top of our game and keep getting smarter, savvier, and big picture-oriented, in order to have a better understanding of the growing complexities of our world, our systems, and how we maneuver in it effectively.”

Smaller in scope than the struggle between liberty and oppression, yet equally pressing, is the apparent difference in how comfortable the United States and China are with allowing AI to make decisions in what Mr. Mills calls “high consequence environments.”

“The key thing which is different is actioning independent of human intervention,” Mr. Mills said.

“The real point of contention from the policy makers’ side is the latitude to which we give independent decision-making separate from hands-on-keyboard and eyes-on-screen human control.”

To that end, the Department of Defense has opted thus far to only field lethal systems that keep a human “in the loop” on AI decision-making, effectively ensuring a human being approves any AI attempt to pull the trigger on a lethal act.

However, Chinese strategic writings on ensuring humans remain in the loop are few and far between, and Mr. Mills believes the CCP is more than willing to allow AI control of critical systems if it believes doing so will ensure some lasting advantage against the United States.

“The basic problem is China sees no boundary and has no problem interfacing AI with catastrophic outcomes, like nuclear war,” Mr. Mills said. “We are much more hesitant to do that.”

“AI in high-risk systems where you have potential for catastrophe, or loss of life or injury, is really the grave concern.”

But the integration of AI with nuclear or other systems is just one domain that the Chinese regime seeks to “dominate,” Mr. Mills said. Others include biological, cyber, and space warfare.

To that end, Mr. Mills said that the key difficulty would be to ensure AI was integrated into the proper systems to eliminate the risk of catastrophic surprise while maintaining human control over critical decisions.

“We want to use AI to ensure we don’t meet strategic surprise that could be catastrophic to the nation-state and what we want to use in our weapon systems is smart AI that makes them far more accurate and far more efficient.”

As such, Mr. Mills suggested the United States could deploy AI in a way that made military “engagements far more efficient and cost-effective” without giving up human control of decisions concerning life and death.

One method of doing this is through passive systems, which China is also developing. Mr. Mills cited a new passive system, visible in photographs of advanced Chinese warships, that appears to be an infrared camera array constantly imaging the surrounding environment and using AI to compare successive images, detecting changes in real time.

Such a system might detect an incoming hypersonic missile that radar missed, Mr. Mills said, and AI would be the only system capable of responding in time.
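The change-detection principle Mr. Mills describes can be illustrated with a minimal sketch: compare successive infrared frames and flag regions that differ. All names, thresholds, and frame sizes below are illustrative assumptions, not details of any actual shipboard system, and a real implementation would involve far more sophisticated models.

```python
import numpy as np

def detect_change(prev_frame: np.ndarray, curr_frame: np.ndarray,
                  threshold: float = 30.0, min_pixels: int = 25) -> bool:
    """Return True if enough pixels changed between two grayscale frames."""
    # Widen to a signed type so subtraction cannot wrap around.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > threshold)
    return changed >= min_pixels

# Example: a static background, then a small hot object appears.
background = np.zeros((240, 320), dtype=np.uint8)
frame_with_object = background.copy()
frame_with_object[100:110, 150:160] = 255  # simulated heat signature

print(detect_change(background, background))         # no change
print(detect_change(background, frame_with_object))  # change detected
```

The point of the sketch is the latency argument: pixel differencing like this runs in milliseconds, which is why automated systems, rather than human operators, would be the ones to notice a fast-moving threat first.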

“We’re not talking about a nuclear weapon, but we are talking about destruction and loss of the ship. And it’s in that split second that that decision must be made.”

Analogies to the CCP’s newest AI development are not difficult to draw. How fast might one have to respond to counter a missile, a ship, or a navy led by an AI commander? If the United States never allows humans to leave the loop of control, will its forces be able to respond in time?

As to how policy could help drive forward the positive uses of AI without incurring catastrophic consequences, Mr. Mills said that the world was entering uncharted waters.

While the big data used to train AI and the analytics used to inform its decision-making were understood well enough, he said, actioning those things in real time and in the real world was a different game altogether.

That actionability, he said, is “the untested environment.”

Andrew Thornebrooke is a national security correspondent for The Epoch Times covering China-related issues with a focus on defense, military affairs, and national security. He holds a master's in military history from Norwich University.