Stephen Hawking Doesn’t Have to Fear Killer Robots

An MQ-9 Reaper sits on the flight line at Hurlburt Field, Fla., April 24, 2014.

U.S. Air Force photo/Staff Sgt. John Bainter

Autonomous weapons might instead thin the fog of war and make combat more transparent.

On Monday, a broad group of several thousand academics, legal scholars, roboticists, and scientific luminaries – including Stephen Hawking – released an open letter warning that efforts to develop autonomous weapons could unleash a Pandora's box of ills, from ethnic cleansing to pervasive arms races. Certainly, this is not a world in which any of us would like to live. But neither is it a world that arises inevitably from the development of autonomous weapons.

On the contrary, autonomous weapons could shape a better world – one in which the fog of war thins, if ever so slightly. These systems are not subject to the fatigue, combat stress, and other factors that occasionally cloud human judgment. And if responsibly developed and properly constrained, they could ensure that missions are executed strictly according to the rules of engagement and the commander’s intent. A world with autonomous weapons could be one where the principles of international humanitarian law (IHL) are not only respected but also strengthened.

The key to avoiding the catastrophes predicted in the letter is restricting what such a weapon can and can’t do. It might, for example, be allowed to operate only for a limited time or in a limited geographic area — perhaps underwater or in other areas in which civilians are not present. This would, in turn, increase a commander’s control over the weapon and strengthen accountability for the system’s use – a key concern among advocates of a ban on autonomous weapons. These machines could also be designed to target only military hardware, thereby strengthening adherence to the IHL principles of proportionality and distinction and further reducing civilian casualties.

These positive outcomes are possible only if the international community offers careful stewardship. First and most importantly, states will need to agree upon a definition of “autonomous weapons.” At the recent UN meeting on Lethal Autonomous Weapon Systems, governments and non-governmental organizations expressed widely divergent – and frequently incompatible – notions of autonomy and autonomous weapons. (Some argued that existing systems qualify as autonomous, while others, such as the United Kingdom, argued that they “do not and may never” exist.) Clarifying the definition will be critical to advancing the dialogue and to establishing responsible design requirements and operating parameters.

Second, governments must ensure that meaningful human control exists over their autonomous weapons. As Michael Horowitz and Paul Scharre have written in their primer on the subject, this should include standards that help operators make deliberate, legal, and moral decisions about the use of force.

Third, governments should implement strict testing and evaluation standards to ensure that any weapon that can autonomously select a target or initiate an engagement performs as intended, and that procedures for its use are clear to operators. To ensure that such systems continue to operate in accordance with their design features, governments should agree to periodic testing in realistic operating environments.

Finally, it will be essential for governments to further explore the ways in which accountability for the use of autonomous weapons can be established and enforced. While such an undertaking will undoubtedly be challenging, it will be critical to ensuring that the use of these systems adheres to IHL and to reducing potential risks.

Despite the somewhat breathless tone of the letter, its signatories draw attention to an important technological development that the international community would do well to address in a measured and calm way. Autonomous weapons will never remove the human dimension of warfare, but if we are careful, they just might thin the fog.
