The Ethical Tightrope: The Pentagon’s Challenge From AI Attack Drones

As autonomous drones become increasingly vital on the battlefield, the Pentagon faces the challenge of balancing ethical constraints with combat capabilities.
DJI Matrice 300 reconnaissance drones during test flights in Kyiv, Ukraine, on Aug. 2, 2022. Sergei Supinsky/AFP via Getty Images
Stephen Xia
Sean Tseng
Commentary

How soon will the United States deploy lethal autonomous robots alongside human soldiers? While the exact timeline is uncertain, this reality might be closer than we think.

Technological advancements have made this increasingly feasible, but they also raise significant ethical challenges, particularly the prospect of phasing humans out of lethal decision-making.

Since the Russia–Ukraine war erupted in 2022, small drones have been changing how wars are fought. In recent months, Ukraine has launched drones that intercept Russian aircraft or burn tree lines by dropping incendiaries. Ukraine is also testing drones equipped with automatic rifles and grenade launchers. No longer limited to dropping small bombs, Ukraine’s drones now handle tasks from planting mines to delivering supplies, turning the battlefield into an arena of drone warfare.

With rapid developments in artificial intelligence (AI), the latest attack and reconnaissance drones can carry out increasingly complex missions with minimal—or even no—human intervention.

On Oct. 10, American defense technology company Anduril Industries unveiled the Bolt series of portable vertical take-off and landing autonomous aerial vehicles. These drones can handle a variety of complex battlefield missions.
The basic Bolt model is designed for intelligence, surveillance, reconnaissance, and search and rescue tasks. Then there’s the “Bolt-M,” an attack variant that provides ground forces with lethal precision firepower. It can autonomously track and strike targets, offering operators four simple decision modes: where to look, what to follow, how to engage, and when to strike.

Currently, operators in the Ukrainian military and U.S. Army need specialized training before flying first-person view drones and face many operational limitations, such as having to wear virtual reality headsets or specialized immersive goggles. The AI-driven Bolt-M eliminates the need for complex training, meeting combat requirements while providing more information and functionality than existing drones.

The Bolt-M is built for rapid deployment, emphasizing ease of operation and portability. It offers options such as autonomous waypoint navigation, tracking, and engagement. With more than 40 minutes of flight time and a control range of about 12 miles, it effectively supports ground combat. It can carry a payload of up to three pounds of munitions, delivering powerful attacks on static or moving ground targets, including light vehicles, infantry, and trenches.

Anduril has secured a contract with the U.S. Navy to develop an autonomous attack drone under the Marine Corps’s “Organic Precision Fires-Light” program. The core technology lies in the AI software provided by Anduril’s Lattice platform. Operators simply draw a boundary box on a battlefield monitor and set a few rules, and the drone autonomously completes its mission.
The Lattice platform integrates information from various sensors and databases, providing autonomy throughout the mission while keeping humans in the loop.
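
To make the “human in the loop” idea concrete, here is a minimal sketch, in Python, of how such a gate might be expressed in software: the autonomy stack may propose and track targets, but a strike is permitted only when it traces back to an explicit, operator-issued order for a target still inside the operator-drawn boundary box. The names used here (BoundaryBox, EngagementOrder, may_strike) are invented for illustration; this is not Anduril’s Lattice API or any real weapon-control code.

```python
# Hypothetical human-in-the-loop engagement gate. Illustrative only; all names
# are invented for this sketch and do not come from Anduril's Lattice software.
from dataclasses import dataclass
from typing import Optional


@dataclass
class BoundaryBox:
    """Operator-drawn area of operations (latitude/longitude corners)."""
    lat_min: float
    lon_min: float
    lat_max: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)


@dataclass
class Track:
    """A target candidate produced by the onboard perception stack."""
    track_id: str
    lat: float
    lon: float
    classification: str
    confidence: float


@dataclass
class EngagementOrder:
    """A standing order issued while the operator link was still up."""
    track_id: str
    authorized_by_human: bool


def may_strike(order: Optional[EngagementOrder],
               track: Track,
               box: BoundaryBox) -> bool:
    """Permit a strike only if it traces back to an explicit human decision
    and the target is still inside the operator-approved area."""
    if order is None or not order.authorized_by_human:
        return False  # no human authorization, no strike
    if order.track_id != track.track_id:
        return False  # order applies to a different target
    if not box.contains(track.lat, track.lon):
        return False  # target has left the approved boundary box
    return True


if __name__ == "__main__":
    box = BoundaryBox(50.40, 30.40, 50.50, 30.55)
    target = Track("T-017", 50.45, 30.50, "light vehicle", 0.91)
    standing_order = EngagementOrder(track_id="T-017", authorized_by_human=True)
    print(may_strike(standing_order, target, box))  # True: human order persists
    print(may_strike(None, target, box))            # False: no human decision
```

The point of such a design is provenance: the gate does not ask how confident the onboard AI is, only whether a human authorized this specific target inside this specific area.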

Once AI identifies a target, the operator can assign a target area to the Bolt-M. The system can accurately track and aim at the target, whether it is out of sight or moving. Built-in visual and guidance algorithms ensure effective attacks even if the drone loses connection with the operator.

The Bolt-M also assists operators in understanding the battlefield: tracking, monitoring, and attacking targets as instructed. For example, a tank with added camouflage might not be recognized by the onboard computer; in that case, the system relays what it sees back to the operator for a decision. Importantly, these lethal drones can maintain control over targets and autonomously complete previously issued orders even if the link to the operator is severed.

Such autonomous attack capabilities push the boundaries of the Pentagon’s AI principles, which state that robotic weapons must always involve a human in lethal decision-making. The Pentagon upholds AI ethics guidelines requiring operators to exercise “appropriate levels of human judgment” over the use of AI weapons. Last year, the Defense Department sought to clarify what is permitted while allowing flexibility to adjust the rules as situations evolve. The updated directive makes explicit that autonomous weapon systems must be built and deployed safely, ethically, and with significant human oversight.
Marta flies a first-person view DJI drone near Kyiv, Ukraine, on May 20, 2023. Paula Bronstein/Getty Images

As drones become more effective on the battlefield, demand for autonomous attack drones is rapidly increasing. For companies such as Anduril, achieving autonomous attack capability is no longer a technical issue; the real challenge is balancing ethical constraints with lethal autonomous operations. Industry players aim to make their systems as powerful as possible within the framework of government policies, rules of engagement, regulations, and user requirements.

One key lesson from the Ukrainian battlefield is that conditions change quickly. Different countries, whether allies or adversaries, may hold different ethical standards for developing and using lethal autonomous weapons, and those standards can shift with what happens on the battlefield.

The lack of consensus is serious because although the Pentagon emphasizes AI ethics and the need to ensure a human is “in the loop” for lethal force, there is no guarantee that adversaries will accept similar constraints. This situation brings unprecedented risks for the Pentagon, and it explains why the U.S. military, government, and industry are putting so much effort into optimizing the use of AI, autonomy, and machine learning in operations and weapons development.

Recently, the U.S. Army introduced the “100-Day” AI risk assessment program to strengthen and improve AI systems under ethical constraints. These efforts underscore the importance of both human and machine capabilities.
Not only does the Pentagon require adherence to the “human-in-the-loop” principle for lethal force, but U.S. Army technology developers also recognize that advanced AI computing methods cannot replicate certain critical human traits such as morality, intuition, consciousness, and emotions. Although these attributes make up a small part of decision-making, they can be crucial during combat.

Pure technology, lacking human qualities, may incur ethical risks and prove insufficient for handling the complexities of the battlefield.

Stephen Xia, a former PLA engineer, specialized in aviation equipment and engineering technology management. Since retiring from military service, he has followed the development of military equipment worldwide.