Northrop Grumman personnel conduct pre-operational tests on an X-47B Unmanned Combat Air System demonstrator on the flight deck of the aircraft carrier USS George H.W. Bush, on May 13, 2013. U.S. Navy photo by Mass Communication Specialist 3rd Class Kevin J. Steinberg

The Scientists and Technologists Who Want To Keep AI Out of Weapons

Stephen Hawking, Steve Wozniak, and hundreds of others signed an open letter that begged leaders to stop a military robotics arms race.

Maybe we should take the warnings of RoboCop more seriously. Famous scientists, engineers, and businessmen are banding together to call for a ban on autonomous weapons development.

In an open letter published today by the Future of Life Institute—a research group concerned with making sure humanity stays in charge of our technological future—Stephen Hawking, Elon Musk, and Steve Wozniak, along with hundreds of other researchers, signed on to the idea that “starting a military AI arms race is a bad idea.”

The letter questions the wisdom of researching technology that could kill humans without anyone telling the weapon to do so. While aerial technology today already lets an operator in a shipping container outside Las Vegas kill someone in the Middle East, that is not what the institute is concerned with. The letter says its focus is not on “cruise missiles or remotely piloted drones for which humans make all targeting decisions.”

Rather, the institute is worried about easily replicable technology that could search for and kill humans based on “pre-defined criteria.”

“Unlike nuclear weapons, [autonomous weapons] require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”

(Related: The US Military Is Building Gangs of Autonomous Flying War Bots)

While current autonomous technologies are struggling to stand on their own two feet and learn defensive driving techniques, the institute says military technology that could lead to robots killing humans could be “feasible within years, not decades.”

Hawking, Musk, and Wozniak also signed a broader open letter from the institute in January about making sure research into AI is rigidly structured to safeguard against the creation of Terminators. The new letter comes ahead of the 2015 International Joint Conference on Artificial Intelligence, which begins today in Buenos Aires.

Hawking is also answering questions on Reddit this week, with the guiding topic of “making the future of technology more human.” Reddit users can submit questions now, and Hawking will begin answering them tomorrow. Some of the questions will undoubtedly hinge on the role robots and AI will play in our future.