In this May 7, 2010, file photo traders work on the floor of the New York Stock Exchange in New York, following a flash crash. Richard Drew/AP

When 'Killer Robots' Declare War

Militaries must ensure that the decision to go to war is made by humans—not autonomous weapons.

This week, nations from around the world will debate the future of lethal autonomous weapon systems (LAWS), or so-called “killer robots,” at the United Nations Convention on Certain Conventional Weapons in Geneva. As they do, they should remember that automated systems have long controlled operations across a variety of endeavors, including military ones, often with unexpected results. Some have been amusing, while others have been nearly catastrophic. This essay presents three historical case studies that underscore how LAWS could unexpectedly lead us to war.

Textbooks on insect genetic design are not typically bestsellers. Yet, on April 18, 2011, the Amazon.com price for The Making of a Fly topped $23.6 million. How could this be? The answer is a robotic price war. Unbeknownst to consumers, the pricing algorithms employed by two competing booksellers triggered a feedback loop. The first algorithm always set its price at 1.27059 times the next-most-expensive copy of the book. The second always set its price at 0.9983 times the price set by the first. An absurd price spiral ensued.
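The mechanics are easy to replay. Because 1.27059 × 0.9983 ≈ 1.268, each full round of mutual repricing raised both prices by roughly 27 percent, so the only possible outcome was a geometric climb. The short Python sketch below reproduces that logic; the $40 starting price and the number of rounds are illustrative assumptions, not figures from the incident.

```python
# Toy replay of the 2011 "The Making of a Fly" price spiral.
# The two multipliers come from the article; the starting price and the
# number of rounds are illustrative assumptions.

def simulate_price_war(start_price, rounds):
    """Run `rounds` of mutual repricing; return the price history."""
    seller_b = start_price                 # the cheaper copy
    history = []
    for _ in range(rounds):
        seller_a = 1.27059 * seller_b      # price above the next-most-expensive copy
        seller_b = 0.9983 * seller_a       # price just below the competitor
        history.append((seller_a, seller_b))
    return history

if __name__ == "__main__":
    for day, (a, b) in enumerate(simulate_price_war(40.0, 60), start=1):
        if day % 10 == 0:
            print(f"round {day:2d}: seller A ${a:,.2f}   seller B ${b:,.2f}")
    # Because 1.27059 * 0.9983 > 1, both prices grow about 27 percent per
    # round and pass $23 million after roughly 56 rounds.
```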

Not all machine errors are so amusing. On September 26, 1983, the Soviets’ new early-warning system, Oko, sent a “highest-confidence” alert that the U.S. had launched five nuclear-armed missiles at the Soviet Union. Lt. Col. Stanislav Petrov was manning the Oko alert feed that night. Inherently mistrustful of the new technology and certain that the U.S. would not have used only five missiles in a real attack, Petrov chose not to pass the alert to his superiors, even though he was required to; doing so could have set nuclear retaliation in motion. As it turned out, he chose wisely. Oko had malfunctioned.


The potential for a machine’s mistake to cause nuclear war is terrifying. Only moderately less terrifying is the potential for a machine’s error to crash the global financial market, which is what nearly happened on May 6, 2010. In a matter of minutes, American shares and futures indices dropped nearly 10 percent. The Dow Jones Industrial Average lost nearly 1,000 points, and major stocks like Apple, Procter & Gamble, and Accenture took severe hits.

The 2010 “flash crash” was caused by an unintended interaction between automated trading systems. First, a mutual fund started an automated program to sell off futures contracts known as “e-minis.” The program was instructed to sell the e-minis as fast as possible; rather than taking the normal five hours to sell that volume of contracts, it did so in 20 minutes. High-frequency trading firms (HFTs) reacted to the rapid sell-off by buying up the e-minis. These firms also run automated programs and make money by buying and selling in milliseconds to capitalize on momentary price changes, so as soon as the HFTs bought the e-minis, they resold them. With far more sellers than willing buyers, the e-minis’ price plummeted. The plummet spooked the rest of the market, causing formerly active buyers and sellers to withdraw, and resulting in a severe but temporary implosion of the financial market.
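A toy price-impact model captures the dynamic in miniature: one algorithm dumps contracts keyed only to volume, HFTs buy and immediately resell so the net inventory still lands on someone else, and ordinary buyers withdraw once the drawdown crosses their risk threshold. Every parameter below is an illustrative assumption, loosely tuned to produce a roughly 10 percent drop; nothing is calibrated to the actual May 6 order flow.

```python
# Toy model of a volume-keyed sell program meeting pass-through HFT flow.
# All parameters are illustrative assumptions, not data from May 6, 2010.

def flash_crash_toy(contracts_to_sell=75_000,   # order of magnitude of the reported sell program
                    price=100.0,                # index level, arbitrary units
                    impact_per_contract=0.000002,
                    withdraw_threshold=0.03):   # drawdown at which real buyers step away
    """Sell as fast as possible; fundamental buyers absorb part of the flow
    until the drawdown exceeds their threshold, then liquidity thins out."""
    start, sold, step = price, 0, 1_000
    buyers_active = True
    while sold < contracts_to_sell:
        sold += step
        # HFTs buy and resell within milliseconds: volume doubles, but the
        # fraction of net flow actually absorbed depends on real buyers.
        absorbed = 0.5 if buyers_active else 0.1
        price -= step * (1 - absorbed) * impact_per_contract * price
        if buyers_active and (start - price) / start > withdraw_threshold:
            buyers_active = False          # buyers withdraw, impact worsens
    return price, (start - price) / start

if __name__ == "__main__":
    final_price, drawdown = flash_crash_toy()
    print(f"final price {final_price:.2f}, drawdown {drawdown:.1%}")
```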

The 1983 nuclear near-miss and 2010 flash crash show that automated systems can malfunction or interact unexpectedly in ways that can yield disaster. But they also show us that keeping a human “in the loop”—in the decision-making process—can prevent bad outcomes. Oko’s malfunction did not lead to nuclear war because Petrov’s common sense told him something was amiss, and he decided not to alert his superiors. In contrast, Wall Street’s automated trading programs interacted so quickly that they effectively “decided” the course of events on their own.

After the flash crash, the Securities and Exchange Commission updated market-wide circuit breakers, which automatically halt trading in the event of rapid, major stock declines, to prevent another such crash. Keeping a human “in the loop” for autonomous systems creates a “human circuit breaker” that could similarly shut down an operation before it runs out of control. This does not mean that humans are incapable of mistakes, as the downing of Iran Air Flight 655 by the USS Vincennes attests. But keeping humans “in the loop” means that they have the opportunity to prevent a one-off tragedy from becoming a catastrophe.
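In software terms, a human circuit breaker is nothing more exotic than a gate in an otherwise automated loop. The sketch below is hypothetical; the severity threshold and the approval hook stand in for whatever review process a real system would use.

```python
# Minimal sketch of a "human circuit breaker": the automation may recommend
# actions, but anything past an escalation threshold waits for a person.
# The threshold, actions, and approval hook are hypothetical.

ESCALATION_LIMIT = 3   # severity level beyond which a human must decide

def request_human_approval(action):
    """Stand-in for a real alert queue, console prompt, or officer review."""
    answer = input(f"Approve escalatory action {action!r}? [y/N] ")
    return answer.strip().lower() == "y"

def run_autonomous_loop(proposed_actions):
    for action, severity in proposed_actions:
        if severity >= ESCALATION_LIMIT and not request_human_approval(action):
            print(f"halted: {action!r} blocked by the human operator")
            break
        print(f"executed: {action!r} (severity {severity})")

if __name__ == "__main__":
    run_autonomous_loop([
        ("jam incoming sensor", 1),
        ("disable hostile drone", 2),
        ("strike suspected launch site", 4),   # crosses the threshold
    ])
```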

One of the greatest dangers posed by LAWS is that they could cut humans out of the loop during conflict-escalation decisions. This would make it possible for a single engagement to spiral quickly out of control, especially where multiple LAWS can interact and create a feedback loop, as in the Amazon.com price war or the 2010 flash crash. Instead of a flash crash, LAWS could start a “flash war.”

The danger of autonomous systems interacting faster than humans can keep pace is especially acute in the cyber domain. Governments and corporations around the world are investing in “active cyber defenses,” which can include retaliatory “hacking back,” that is, identifying the origins of a cyber-attack and mounting a counterattack. While hacking back is neither current nor anticipated future U.S. policy, it is technically feasible, albeit extremely challenging. Furthermore, the speed of interaction in cyberspace requires that defensive and offensive weapons be highly automated. This sets the stage for a feedback loop between two countries with active cyber defenses. Suppose Country A attacks Country B’s networks in an attempt to steal information. Country B’s cyber defenses would respond autonomously, tracing the origins of the intrusion and attacking Country A’s cyber infrastructure. Country A’s active cyber defenses would then respond in kind, triggering a feedback loop.
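Stated mechanically, the loop looks like this. In the sketch below, the country labels, retaliation ratio, and redline are invented for illustration; the point is only that two automated hack-back policies, each answering an attack with a slightly larger counterattack, escalate geometrically until something external intervenes.

```python
# Toy model of two autonomous "active cyber defense" systems hacking back
# at each other. The retaliation factor and redline are illustrative
# assumptions, not a description of any real system.

def flash_war(initial_intrusion=1.0, retaliation_factor=1.5, redline=100.0):
    """Each side answers an attack of intensity x with a counterattack of
    intensity retaliation_factor * x until a redline is crossed."""
    attack = initial_intrusion
    attacker, defender = "Country A", "Country B"
    exchanges = 0
    while attack < redline:
        exchanges += 1
        print(f"exchange {exchanges:2d}: {attacker} hits {defender} "
              f"with intensity {attack:.1f}")
        attack *= retaliation_factor       # automated hack-back, slightly larger
        attacker, defender = defender, attacker
    print(f"after {exchanges} automated exchanges the intensity ({attack:.1f}) "
          f"crosses the redline of {redline:.0f}: escalation has outrun human review")

if __name__ == "__main__":
    flash_war()
```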

In this hypothetical scenario, unintentional escalation to full-blown cyber war is a real possibility. As the volume of cyber-attacks increases, it could trip automatic redlines that would cause an autonomous active cyber defense system to expand its scope of retaliation to new networks. Or this could occur organically as old networks are taken down by attacks and new ones come online to retaliate. In these ways, an isolated network intrusion could escalate into a flash war very rapidly.

All of this could happen without human input. But even if humans did intercede early in the escalation chain, they would find themselves in a prisoner’s dilemma. That is, if human operators in Country B recognize that they are in a cyber conflict with Country A, they could defuse the escalation by ceasing their hack-back operations. But, if they did that, then Country B would be at great risk of exploitation by Country A. Without trust or enforcement mechanisms for mutual de-escalation, whichever country stopped hacking back first would find itself very vulnerable.
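The dilemma can be made explicit with a stylized payoff matrix. The numbers below are invented and only their ordering matters: whatever the other side does, each country scores better by continuing to hack back, so mutual escalation is the equilibrium even though mutual restraint would leave both better off.

```python
# Stylized prisoner's-dilemma payoffs for the hack-back standoff.
# The numbers are illustrative; only their relative ordering matters.

PAYOFFS = {  # (Country B's move, Country A's move) -> (B's payoff, A's payoff)
    ("stop", "stop"): (3, 3),   # mutual de-escalation: best joint outcome
    ("stop", "hack"): (0, 5),   # B stops first and is exploited
    ("hack", "stop"): (5, 0),   # A stops first and is exploited
    ("hack", "hack"): (1, 1),   # flash war: bad for both, yet the equilibrium
}

def best_reply(opponent_move, player):
    """Return the move that maximizes `player`'s payoff (0 = B, 1 = A)
    against a fixed opponent move."""
    def payoff(my_move):
        key = (my_move, opponent_move) if player == 0 else (opponent_move, my_move)
        return PAYOFFS[key][player]
    return max(("stop", "hack"), key=payoff)

if __name__ == "__main__":
    for opp in ("stop", "hack"):
        print(f"if Country A plays {opp!r}, Country B's best reply is "
              f"{best_reply(opp, 0)!r}")
    # "hack" is the dominant strategy for both sides, so (hack, hack) is the
    # equilibrium even though (stop, stop) would leave both better off.
```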

For all their peril, LAWS also have great potential to promote strategic stability. In order to safely harness this potential, militaries that are developing LAWS should consider the following questions:

  • In what ways can a given LAWS malfunction?
  • Is a given LAWS being deployed in the same operating environment as other LAWS—friendly, neutral, or hostile?
  • Given the potential for malfunction or unintended LAWS interactions, what are the risks of deploying a given LAWS in certain operating environments?
  • And finally, what are the military advantages—and disadvantages—of keeping a person “in the loop,” if nothing else as a fail-safe “human circuit breaker” to control escalation?

As delegates debate the future of LAWS at the United Nations this week, due consideration of these questions will help ensure that LAWS continue to serve the purposes of humanity and do not unexpectedly take humans to war. 
