Civilians walk amid the rubble after fighting between Iraqi security forces and Islamic State militants on the eastern side of Mosul, Iraq, Saturday, Jan. 21, 2017. AP Photo/Khalid Mohammed

Think Before You Pledge Not to Build Military AI

Like self-driving cars, which also kill, autonomous weapons should be considered in a suitably complex context.

The machine-learning revolution has arrived. Artificial intelligence is rapidly conquering complex tasks, from facial recognition to autonomous navigation. But the ethical debate over AI is only just starting to catch up. Recently, for instance, employees at Google staged a minor revolt over the company’s work with the Department of Defense, which has placed AI at the center of its “Third Offset” strategy to maintain U.S. military superiority.

Now, hundreds of companies (including Google’s DeepMind) and thousands of researchers have signed a pledge to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” That is of course their right, and the pledge points to an important moral debate over the proper uses of AI.

But there are at least two serious problems with the pledge: there has been no broad-based public debate over the morality of using AI to kill, and many AI researchers don’t have a sophisticated understanding of modern military operations.

For instance, what counts as an autonomous weapon? The Tomahawk cruise missile has for years used rudimentary image/scene matching to guide itself during its terminal phase. Modern image-recognition technology could take that process in new directions. A missile’s seeker could, for example, survey its aim point and abort the attack if it identifies children near the target. Such a missile, in effect, makes the “decision” to kill or not. Is that not a good thing?
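
To make the idea concrete, here is a minimal sketch of the kill-or-abort check such a seeker might run, written in Python purely for illustration. The Detection type, the labels, and the confidence threshold are all assumptions invented for this example, not a description of any real guidance system.

    # Purely illustrative sketch of the kind of terminal-phase abort rule
    # described above. Nothing here corresponds to a real weapon system;
    # the Detection class and its labels are hypothetical stand-ins for
    # the output of an onboard image-recognition model.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Detection:
        label: str         # e.g. "adult", "child", "vehicle"
        confidence: float  # detector confidence in [0, 1]

    def should_abort(detections: List[Detection],
                     child_threshold: float = 0.5) -> bool:
        """Abort the strike if anything in the final look resembles a child."""
        return any(d.label == "child" and d.confidence >= child_threshold
                   for d in detections)

    # A fabricated final look at the aim point.
    final_look = [Detection("vehicle", 0.92), Detection("child", 0.71)]
    print("ABORT" if should_abort(final_look) else "CONTINUE")  # prints ABORT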

Now, it’s disingenuous to craft an argument about autonomous weapons only around an idealized save-the-children scenario. AI weapons will surely focus, above all, on killing more effectively. But context matters. Consider the recent months-long battle to retake Mosul from ISIS. The block-by-block fighting essentially destroyed the city and trapped civilians in the crossfire. Iraqi and U.S. forces rolled ISIS back methodically with the support of artillery and bombs packed with hundreds of pounds of explosives. It was an ugly look at the kind of urban fighting that is poised to become more common in the 21st century.

But there may be a better way to fight for an occupied city than a Stalingrad-style battle. Imagine how a swarm of armed drones (say, modeled on the quadcopters for sale on Amazon, but given some form of collective intelligence) could have been used in Mosul. Instead of smart bombs and artillery, the swarm could search through a building, looking for rooms that contained armed men and were free of women and children. A drone might then detonate a warhead roughly equivalent in power to a large grenade, sparing the building from a devastating artillery barrage.
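
As a rough sketch of the decision rule that scenario imagines, the Python below encodes “armed men present, no women or children” as a room-level check. The category labels, data format, and function name are hypothetical, chosen only to make the logic of the argument explicit.

    # Illustrative sketch of the room-by-room engagement rule imagined above.
    # The occupant records, labels, and the rule itself are invented for
    # illustration; a real swarm would depend on a perception stack this
    # sketch does not attempt to model.
    from typing import Dict, List

    def engage_room(occupants: List[Dict]) -> bool:
        """Engage only if armed men are present and no women or children are."""
        armed_men = any(o["category"] == "adult_male" and o["armed"]
                        for o in occupants)
        protected = any(o["category"] in ("woman", "child")
                        for o in occupants)
        return armed_men and not protected

    # Two fabricated rooms from a hypothetical sweep of a building.
    rooms = {
        "room_1": [{"category": "adult_male", "armed": True}],
        "room_2": [{"category": "adult_male", "armed": True},
                   {"category": "child", "armed": False}],
    }
    for name, occupants in rooms.items():
        print(name, "engage" if engage_room(occupants) else "hold fire")
    # -> room_1 engage / room_2 hold fire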

Technology can and does fail or behave in unexpected ways. An autonomous drone will eventually attack the wrong target for reasons similar to those that send a self-driving car into a wall. Like self-driving cars, autonomous weapons should be judged on their net value, and that judgment requires a degree of knowledge about military operations that most AI researchers have not yet taken the time to acquire. That does the public a potentially tragic disservice.