The Perverse Rise of Killer Robots

A mock "killer robot" is pictured in central London on April 23, 2013, during the launching of the Campaign to Stop Killer Robots, which calls for the ban of lethal robot weapons that would be able to select and attack targets without any human intervention. The campaign calls for a pre-emptive and comprehensive ban on the development, production, and use of fully autonomous weapons. Carl Court/AFP/Getty Images

The development of “killer robots” is a new and original way of using human intelligence for perverse means. Humans creating machines to kill and destroy on a scale never before seen is a concept that not even George Orwell could have imagined. In the meantime, the leading world powers continue their un-merry-go-round of destruction and death—mostly of innocent civilians—without stopping to consider the consequences of their actions.

Killer robots are fully autonomous weapons that can identify, select, and engage targets without meaningful human control. Although fully developed weapons of this kind do not yet exist, world powers such as the United States, the U.K., Israel, Russia, China, and South Korea are already working on their precursors.

The U.S. Government Accountability Office reports that in 2012, 76 countries had some kind of drones, and 16 countries already possessed armed ones. The U.S. Department of Defense spends $6 billion every year on the research and development of better drones.

South Korea deploys Samsung Techwin security surveillance robots in the demilitarized zone it shares with North Korea. Although these units are currently operated by humans, they have an automatic mode that can detect body heat and fire a machine gun without human intervention.

Israel is developing an armed drone called the Harop that can select targets with a special sensor. Northrop Grumman has also developed an autonomous drone, the X-47B, which can fly a preprogrammed flight path while being monitored by a pilot on a ship; it is planned to enter active service by 2019. China is also moving rapidly in this area: in 2012 it already had 27 armed drone models, one of which is an autonomous supersonic air-to-air combat aircraft.

Killer robots follow the generation of drones and, as with drones, their potential use raises a host of human rights, legal, and ethical issues. Military officials state that this kind of hardware protects human life by taking soldiers and pilots out of harm’s way. What they don’t say, however, is that the protected lives are those of the attacking armies, not those of their targets, who are mostly civilians and whose untimely deaths are euphemistically called collateral damage.

According to Denise Garcia, an expert in international law, four branches of international law have been used to limit violence in war: the law of state responsibility, the law on the use of force, international humanitarian law, and human rights law. As currently carried out, U.S. drone strikes violate all of them.

From the ethical point of view, the use of these machines presents a moral dilemma: by allowing machines to make life-and-death decisions we remove people’s responsibility for their actions and eliminate accountability. Lack of accountability almost ensures future human rights violations. In addition, many experts believe that the proliferation of autonomous weapons would make an arms race inevitable.

As the United Nations tries to negotiate the future use of autonomous weapons, U.S. and U.K. representatives support weaker rules that would prohibit future technology but not killer robots developed during the negotiating period. That delay would allow existing semi-autonomous prototypes to continue being used.

The need for a pre-emptive ban on the development and use of this kind of weapon is urgent. As Christof Heyns, the U.N. special rapporteur on extrajudicial, summary, or arbitrary executions, stated recently, “If there is not a pre-emptive ban on the high-level autonomous weapons, then once the genie is out of the bottle it will be extremely difficult to get it back in.”

César Chelala, M.D., Ph.D., is a global public health consultant for several U.N. and other international agencies. He has carried out health-related missions in 50 countries worldwide. He lives in New York and writes extensively on human rights and foreign policy issues. He is the recipient of awards from the Overseas Press Club of America, ADEPA, and Chaski, and recently received the Cedar of Lebanon Gold Medal.