How Should We Treat Our Military Robots?

Boston Dynamics


Increasingly human-like automated weapons demand an honest accounting of our emotional responses to them.

The audience of venture capitalists, engineers and other tech-sector denizens chuckled as they watched a video clip of an engineer using a hockey stick to shove a box away from the Atlas robot that was trying to pick it up. Each time the humanoid robot lumbered forward, its objective moved out of reach. From my vantage point at the back of the room, the audience’s reaction to the situation began to sound uneasy, as if the engineer’s actions and their invention’s response had crossed some imaginary line.

If these tech mavens aren’t sure how to respond to increasingly life-like robots and artificial intelligence systems, I wondered, what are we in the defense community missing?

This is a pressing question. Military autonomous capabilities are no longer an abstract area of promise – and peril. “The idea of artificial intelligence and computing becoming almost human is very much what we’re working on today with some of our technologies,” said AMD President and CEO Lisa Su in a WIRED video interview with legendary film director Ridley Scott, whose Alien saga and Blade Runner films shaped how many of us visualize lifelike robots. 

You don’t have to be a dystopia-minded science fiction writer to realize that the next few years will see military and government officials make far-reaching decisions regarding the use and regulation of AI and machine learning, robotics, and autonomous cyber and kinetic weapons. These choices will alter the course of global affairs in the 21st century, and even shape the conflicts in which we’re engaged today. “Almost nowhere do I see a technology that’s current that offers as much as autonomy,” Will Roper, the head of the Defense Department’s Strategic Capabilities Office, said in a recent interview. “We’re working very hard to produce a learning system.”

As that day nears, it is worth considering our current limitations in communicating with the neural-network machines that are upending our sense of normal. The promise of these technologies overshadows their inability to “explain” their decision-making logic to their creators. “This is our first example of alien intelligence,” Stephen Wolfram, CEO of Wolfram Research and a pioneer in machine learning, told an audience at March’s Xconomy Robo Madness conference in Cambridge, Massachusetts.

Warfare will be no different. While the U.S. military insists that it will have a human in or on the decision-making “loop,” the commercial world is aggressively pushing past that threshold. Many transactions will soon simply be AI to AI, Wolfram said, which will lead to fundamental changes in notions of contract law and enforcement.

Are these developments an invitation to reconsider the rules and norms of warfare as well? Is it too soon to start thinking about Computational Laws of Armed Conflict? That is a sensible, if far-reaching, step that runs right into a fundamental question that has yet to be answered. “What do we want the AIs to do?” asked Wolfram. And how will the AIs know what we really want? There is the discomforting thought that we might invent a capability that quickly moves beyond our control, a gnawing awareness akin to the way Manhattan Project scientists must have stared over the precipice.

There are no easy answers to these questions. The defense community will have to make more room for constructive argument over the rules and norms that should govern autonomous machine conflict. It will get harder and more unpopular – yet increasingly important – to remain focused on the risks and strategic uncertainty introduced by technologies that are predicated on ideals of perfection. As National Security Advisor Lt. Gen. H.R. McMaster wrote in 2014 about the four fallacies of technology in warfare, “These fallacies persist, in large measure, because they define war as one might like it to be rather than as an uncertain and complex human competition usually aimed at achieving a political outcome.”

Whether AI and robotics with increasingly human-like qualities further perpetuate these myths of modern warfare will depend as much on scientific knowledge as it will on an honest accounting of our emotional responses to them.
