The Pentagon is updating a key guideline for how it uses artificial intelligence (AI) because of misperceptions about how the department intends to use the technology, according to one military official.
The department has updated its directive, “Autonomy in Weapon Systems,” because of “a lot of confusion” about how the Pentagon hopes to use AI, according to Deputy Assistant Defense Secretary Michael Horowitz. That ambiguity had led many outside the Pentagon to believe the military was “maybe building killer robots in the basement,” he said.
At the same time, some senior Pentagon leaders believed the document prohibited the use of fully autonomous lethal systems.
Neither is true, according to Mr. Horowitz, who said the Pentagon intends to follow international law in the development of lethal autonomous systems.
“Just to be clear about that, the directive does not prohibit the development of any systems,” Mr. Horowitz said during a Jan. 9 talk at the Center for Strategic and International Studies think tank.
Instead of banning or promoting the development of killer robots, Mr. Horowitz said the directive mandated a review process, in which “certain types of autonomous weapon systems” need to be screened by the most senior Pentagon officials.
One such effort, the Replicator initiative, seeks to enhance U.S. military capabilities by fielding thousands of cheap, autonomous, lethal drones and other capabilities to counter the numerical advantage of the Chinese military.
“Replicator itself is about a process,” Mr. Horowitz said. “It’s about figuring out how ... we can field at speed and scale key capabilities that we view as important given the national defense strategy.”
Mr. Horowitz said a key issue in developing autonomous lethal systems is the rapid pace of innovation in software as opposed to hardware. Much of the department's work, he said, is focused on the systems that weapons operate on rather than the weapons themselves.
‘Future of War’
Autonomy will play a “critical role” in the “future of war,” and the United States will need to “accelerate adoption” of AI-driven and autonomous technologies as it confronts the “pacing challenge” of China, which is developing lethal autonomous systems of its own, Mr. Horowitz said.

“I think the adoption capacity in terms of the department is improving, but we have more work to do, frankly, as we’ve been very public in stating,” he said.
“We realized that a lot of what we were doing was trying to figure out how the future force could more effectively incorporate these kinds of technologies in a safe and responsible way.”
To that end, senior military leadership has openly acknowledged its ambition to remake the armed forces around largely robotic capabilities.
“If you add robotics with artificial intelligence and precision munitions and the ability to see at range, you’ve got the mix of a real fundamental change.”
“That’s coming. Those changes, that technology ... we are looking at inside of 10 years.”
As such, Mr. Horowitz said, it’s vital that the Defense Department “make clear what is and isn’t allowed,” and uphold a “commitment to responsible behavior,” as it develops lethal autonomous systems.
“Our commitment to international humanitarian law is ironclad,” he said. “All weapons systems that we field, we believe can comply with international humanitarian law.”