Elon Musk, Stephen Hawking, and Others Urge Ban on Killer Robots

AI researchers warn that killer robots may be as ubiquitous in the 21st century as AK-47s were in the 20th.
An autonomous military robot performs demonstrations for spectators at the Memorial Service on the Intrepid on May 28, 2012. Benjamin Chasteen/Epoch Times
Jonathan Zhou

A group of distinguished artificial intelligence researchers warned that the international community must place restrictions on the development of autonomous weapons before it’s too late to stop an “AI arms race.” In an open letter published by the Future of Life Institute (FLI) and signed by luminaries like Elon Musk, Stephen Hawking, and Steve Wozniak, the researchers called for the United Nations to place “a ban on offensive autonomous weapons beyond meaningful human control.”

“If any major military power pushes ahead with AI [artificial intelligence] weapon development, a global arms race is virtually inevitable,” the letter states. “Autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.”


The letter argues that the potential upsides of automated warfare—primarily fewer human casualties—would be overshadowed by the evils it could unleash, such as making nations more inclined to start conflicts in the first place.

“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.,” the letter states. “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations, and selectively killing a particular ethnic group.”

The letter is part of a larger campaign by FLI to raise awareness about what it perceives to be the existential risk that AI could pose to humanity. Earlier this year, the institute gave out $7 million in research grants to AI researchers to brainstorm potential safeguards against a worst-case scenario where robots rise up against man.

A large number of grants were awarded to projects that dealt with the problem of how to create a system of ethics for artificial intelligence. A number of AI researchers believe that the emergence of a smarter-than-human superintelligence is inevitable and will almost certainly occur before the end of the century. 

Not If, but When

For those researchers, the most important question is whether the emergence of a superintelligence happens within a window of time wide enough for the human race to develop the necessary countermeasures. In a hypothetical “hard takeoff,” the creation of a software program that can recursively improve itself would allow the cutting-edge AI to leap from below-human-level intelligence to above-human-level in a matter of days, too fast for humans to react.


Nick Bostrom, a researcher at the University of Oxford, has been instrumental in introducing the plausibility of such a scenario to the general public. His 2014 best seller "Superintelligence: Paths, Dangers, Strategies" argues that humanity needs to start preparing now for a "hard takeoff" scenario, not because he predicts it will happen, but because we should have a contingency plan in place.

The book has made a convert of Musk, who said in late 2014 that AI was potentially more dangerous than atomic weapons. Earlier this year, Musk donated $10 million to the FLI, which has awarded $1.5 million to Bostrom to start a “joint Oxford–Cambridge research center” to make policy recommendations for governments, industry leaders, and other organizations to minimize the risk of AI.

Musk joins a growing number of public figures clamoring for safeguards against the machine threat. The celebrated physicist Stephen Hawking, a long-standing Cassandra on AI, is holding a question-and-answer session on Reddit devoted specifically to the dangers of AI.

Jonathan Zhou
Author
Jonathan Zhou is a tech reporter who has written about drones, artificial intelligence, and space exploration.