In this video, Atlas, a humanoid robot from Google subsidiary Boston Dynamics, shows off its ability to navigate rough terrain. Boston Dynamics

The International-Relations Argument Against Killer Robots

AI-assisted weapons could spark an arms race that would increase the likelihood of wars — and the slope of their escalation.

There are two levels of argument at work in last month’s open letter from the Future of Life Institute, which called for a ban on artificially intelligent weapons. One of them – that it is now technically simple to create machines that fire weapons at people – was explored by Musgrave and Roberts in Defense One. As one of the letter’s drafters and one of its 16,000 signatories – along with Stephen Hawking, Steve Wozniak and Elon Musk – I’d like to explore the other level: the danger of a new arms race.

One could, today, rig a quadcopter with sensors, software and a gun. This system would be crude, indiscriminate and not very effective against much larger weaponry or advanced counter-munitions systems. But today's crude system stands to tomorrow's as DARPA's Autonomous Land Vehicle (ALV) stands to today's Google Car – or better yet, the self-driving Tesla now under development. The ALV was a first try, and it was cumbersome, stupid and slow. But a Tesla Model S is none of these. We have had three decades to hone the technology for self-driving cars. We've had governments funding the research, we've had "Grand Challenges," and now we have giant private corporations doing much of the hard work on the problem. "Killer robots" will follow a similar trajectory.

This is the second level of the argument: technological progress and the fear of relative gains between states. The United States has for decades prided itself on having a technological edge on any potential adversary. Indeed, the drive for this superiority fueled its Cold War arms race with the former Soviet Union. Much robotics and computing technology was developed during this period, paving the way for Predator drones and other now-ubiquitous systems.

Today, Pentagon officials are intensifying their quest for technological superiority. It “is one of the most important strategic tasks and risks facing our Department,” Deputy Defense Secretary Bob Work said at the opening of the China Aerospace Studies Institute. “Because if we allow our technical superiority to erode too much, again, this will undermine our conventional deterrence. It will greatly raise the cost, the potential cost of any intervention overseas, and will contribute to crisis instability.”

The fear of relative gains here is noteworthy. In Work’s view, China, Russia or any other potential adversary cannot be allowed to achieve too much by way of technological progress. If it does, the U.S. will be unable to maneuver and project its power. Thus to deny any potential adversary the ability to degrade U.S. access and maneuver, the U.S. has to speed up the tempo, distribute its assets, and have total information domination on the battlefield.

Work expanded upon his vision of future AI-assisted war. "You'll have a high degree of human-machine collaboration, like free-style chess, in which machines, using big data analytics and advanced computing, will inform human decision makers on the battlefield to make better decisions than humans can do alone or machines can do alone," he said. "You're going to have routine manned and unmanned teaming. You're going to have increasingly capable autonomous unmanned systems. You are going to have all of this. So the future of combat, we believe, is going to be characterized by a very high degree of human-machine symbiosis, such as manned platforms controlling swarms of inexpensive unmanned systems that can be flexibly combined and fielded in greater numbers."

These types of systems, far beyond the cheap and crude ones of today, herald a qualitative shift in combat. If the U.S. begins to wage war in this way—or even gains the ability to do so—then competitor nations will want it as well. (Truth be told, they already want it.)

This is the long-term worry: the kind of damage that can result from an arms race between nations. First, arms races—as interactive competitions between rival states, where the competitors build up particular weapons technologies, capabilities or personnel over time—increase not only the probability of militarized disputes between competitors, but also the probability of escalation when those disputes erupt. Arms races make war more likely and more violent.
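To make that escalation mechanism concrete, here is a minimal sketch of Lewis Fry Richardson's classic arms-race model, the standard formalization in international-relations scholarship of exactly this kind of interactive competition. The letter does not invoke the model, and every coefficient value below is a hypothetical assumption chosen only to show the two regimes.

```python
# A minimal sketch of Richardson's arms-race model (illustrative, not from
# the open letter). Two rivals' arms stocks x and y evolve as:
#   dx/dt = a*y - m*x + g
#   dy/dt = b*x - n*y + h
# a, b: reaction to the rival's arsenal; m, n: fatigue/cost of one's own;
# g, h: standing grievance. When a*b > m*n, mutual reaction outweighs
# restraint and both stocks escalate without bound -- the runaway race the
# article warns about. All numbers here are hypothetical.

def simulate(a=0.9, b=0.8, m=0.5, n=0.5, g=1.0, h=1.0,
             x=10.0, y=10.0, dt=0.1, steps=300):
    """Integrate the coupled equations with a simple Euler step."""
    for _ in range(steps):
        x, y = (x + dt * (a * y - m * x + g),
                y + dt * (b * x - n * y + h))
    return x, y

print("unstable (a*b > m*n):", simulate())              # stocks blow up
print("stable   (a*b < m*n):", simulate(a=0.3, b=0.3))  # settles near (5, 5)
```

In the unstable regime, each side's buildup is the other's reason to build further, so spending grows geometrically; in the stable regime the same feedback loop settles at a finite equilibrium. The argument here is that cheap, fast-improving AI weapons raise the reaction coefficients while lowering the costs, pushing rivals toward the unstable regime.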

Second, the type of technology we are discussing here is not merely conventional weapons in conventional war. We are talking about creating weapons that push the boundaries of artificial intelligence. This push to create adaptive, learning and intelligent weapons platforms will ultimately require greater onboard abilities, less communication, and a system-of-systems approach to war. Delegating decision-making and information processing to machines in this way will speed up the tempo of war, and will challenge, if not eliminate, the current command-and-control structure of combat.

Moreover, unlike the nuclear stockpiling that occurred during the Cold War, the materials for these arms are not hard to come by. One can purchase a credit-card-sized Raspberry Pi computer for $30. Certainly, early machines would be more sophisticated and require more computing horsepower than that, but their programs would not run to the 24 million lines of code in the F-35 Joint Strike Fighter. Yet the logic of technological dominance means that rival nations, fearing the loss of their technological edge, will push that edge forward. The interactive competition will drive them to seek more gains in AI and to field more networked weapons that can counter and survive their adversary's capability, and the low cost of entry will speed the proliferation of the technology throughout the international system. This new arms race will produce stronger AI, even as it makes older and cruder systems more readily available to states and non-states alike.

The DoD has a policy directive on autonomy in weapons, Directive 3000.09, as do the services (other reports are publicly available as well). In short, look forward to more of it. The directive is set to expire in two years, and it contains the caveat that fully autonomous weapons can be deployed if the requisite defense undersecretaries sign off on them.

There is no doubt that humans and machines will work more and more closely together in future combat. The question is to what extent machines will be delegated lethal roles and how much they will “help” a human commander make decisions. We already have sophisticated battle management software that helps run wargames, logistics, and the like. However, if we begin outsourcing everything, then the risk is that a human “operator” may indeed push a button, yet have no meaningful control.

We are not going to wake up tomorrow to swarms of hundreds of robots descending on our towns and cities, but someone is going to wake up tomorrow and work on how to make that possible. We are at a critical juncture in how we fund and pursue weapons development, and it is imperative to hold a public discussion about the extent to which we delegate life-and-death decisions to machines.

Saying “no one wants to create a Terminator” is not an argument; it’s more like saying “no one wants to get cancer.” Yet just as one can reduce the chance of getting cancer by living a healthy lifestyle, not smoking, and eating well, one can mitigate the chances of creating weaponized and intelligent systems by preventing an AI arms race between powerful countries with large militaries, and by taking a public stand about how many decisions are delegated to machines.