Could smarter robots turn against their masters?

An AI researcher warns that the military's rush to build fully autonomous systems could have unintended consequences.

DARPA illustration robots

The Pentagon is putting a lot of chips on sophisticated autonomous systems making up a sizable chunk of the future military—in the air, on the ground, at sea and within the cyber realm. Unmanned vehicles, humanoid robots and other machines are seen as a way to account for a shrinking human force, save money on conventional development and improve safety by letting machines take risks that soldiers can then avoid.

But could those machines turn against us?

The notion might seem like a paranoid sci-fi reflex, born of countless books and movies, from “Metropolis” to “The Matrix.” But at least one scientist has a word of caution about autonomous systems, particularly about the speed at which they’re being developed.

In a recently published study in the Journal of Experimental & Theoretical Artificial Intelligence, AI researcher Steve Omohundro warns that fully autonomous systems, by design, have the potential to develop antisocial and possibly harmful behavior.

Autonomous, artificially intelligent systems are designed to be “rational,” so they can make decisions for themselves, he writes. And a rational system would include a self-preservation drive, which, if left unchecked, could lead to trouble such as a machine stealing resources, using resources in its own way or removing its own design constraints.

And the risk is exacerbated by the headlong rush toward smarter, more sophisticated machines.

“Military and economic pressures for rapid decision-making are driving the development of a wide variety of autonomous systems,” Omohundro writes. “The military wants systems which are more powerful than an adversary’s and wants to deploy them before the adversary does. This can lead to ‘arms races’ in which systems are developed on a more rapid time schedule than might otherwise be desired.”

Indeed, speed is becoming a goal for all kinds of programs. The Defense Advanced Research Projects Agency recently said the days of slow-developing projects, such as those for aircraft, are numbered, and that development times need to be faster and more flexible.

For autonomous systems, Omohundro recommends what he calls “the safe-AI scaffolding strategy,” building systems carefully, somewhat in the manner of ancient stone arches that have stood for millennia.  

In the big picture, autonomous systems are still in their nascent stage; the drones and robots the military uses now aren’t going to be in revolt. But machines that think independently might not be all that far off. The Office of Naval Research, noting the emergence of autonomous systems such as Google’s self-driving cars, recently awarded several universities a $7.5 million grant to research ways of building autonomous robots with a moral sense of right and wrong, Defense One reported.

Defense Department policy forbids lethal autonomous systems and requires that semi-autonomous systems, such as attack drones, take orders from a human operator. But DOD is developing systems that could make choices in other situations, such as robots built for disaster response or fighting fires aboard ships

Whether a scaffolding approach, or programming a sense of right and wrong into robots, or some other down-the-road effort is the best way remains to be seen. But as machines continue to rise in their importance to the future military, the question of ethics will at some point have to be addressed.