An X-47B Unmanned Combat Air System Demonstrator (UCAS-D) completes its first flight at Edwards Air Force Base, Calif., Feb. 4, 2011. DoD photo courtesy of Northrop Grumman

‘Autonomy’: A Smart Overview of the Pentagon’s Robotic Plans

But the long-awaited Defense Science Board study tiptoes around the question of armed autonomous systems.

Today, the Defense Science Board (DSB) released a long-awaited study, simply titled Autonomy. Since the late 1950s, the DSB has consistently been at the forefront of investigating and providing policy guidance for cutting-edge scientific, technological, and manufacturing issues. Many of these reports are available in full online and are worth reading.

The Autonomy study task force, led by Dr. Ruth David and Maj. Gen. Paul Nielsen (ret.), was directed in 2014 “to identify the science, engineering, and policy problems that must be addressed to facilitate greater operational use of autonomy across all warfighting domains.” The study begins with a practical overview of what it means for a system to be autonomous and how autonomy has enhanced operations in the private sector and throughout the military. According to the authors: “To be autonomous, a system must have the capability to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself, and the situation.” They also present four categories for characterizing the technology and engineering required for autonomous systems: Sense (sensors), Think/Decide (artificial intelligence and computing power), Act (actuators and mobility), and Team (human-machine collaboration).
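
The study’s four categories are conceptual rather than computational, but they map naturally onto the classic sense-decide-act control loop used in robotics, with the Team category appearing as a human check on the machine’s choices. The sketch below is a minimal, hypothetical illustration of that mapping; the class, function, and field names are invented for this example and are not drawn from the study.

```python
# Minimal, hypothetical sketch of the Sense / Think-Decide / Act / Team
# categories as a control loop. All names and thresholds are illustrative only.
from dataclasses import dataclass


@dataclass
class WorldState:
    obstacle_ahead: bool       # derived from sensing
    operator_approval: bool    # Team: human-machine collaboration signal


def sense(raw_sensor_feed: dict) -> WorldState:
    """Sense: build a model of the world from sensor data."""
    return WorldState(
        obstacle_ahead=raw_sensor_feed.get("range_m", 999) < 10,
        operator_approval=raw_sensor_feed.get("operator_ok", False),
    )


def decide(state: WorldState) -> str:
    """Think/Decide: compose and select among courses of action."""
    if state.obstacle_ahead:
        return "reroute"
    # Team: defer to the human operator before proceeding
    return "proceed" if state.operator_approval else "hold"


def act(action: str) -> None:
    """Act: drive actuators and mobility based on the selected action."""
    print(f"executing: {action}")


if __name__ == "__main__":
    simulated_feed = {"range_m": 250, "operator_ok": True}
    act(decide(sense(simulated_feed)))
```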

The last of these categories, human interaction with autonomous systems, is a recurring theme throughout the study. The authors emphasize that continuously educating and training human users will matter more than developing the software and hardware for autonomous systems. Such systems are already widespread in the private sector, where there are few intelligent adversaries looking to corrupt or defeat them. The more complex challenge will be ensuring that policymakers and military operators can trust that more autonomous weapons platforms and networks “perform effectively in their intended use and that such use will not result in high-regret, unintended consequences,” as the authors put it. They propose extensive early red teaming, along with modeling and simulation, to identify and overcome inevitable vulnerabilities.

While technologists and futurists often propose far broader adoption of autonomous systems in U.S. defense planning, the study warns of several adversarial uses of autonomy (many of which red teamers warned of regarding network-centric warfare fifteen years ago):

“The potential exploitations the U.S. could face include low observability throughout the entire spectrum from sound to visual light, the ability to swarm with large numbers of low-cost vehicles to overwhelm sensors and exhaust the supply of effectors, and maintaining both endurance and persistence through autonomous or remotely piloted vehicles….

The U.S. will face a wide spectrum of threats with varying kinds of autonomous capabilities across every physical domain—land, sea, undersea, air, and space—and in the virtual domain of cyberspace as well.”

The study also identifies ten projects that could be started immediately to investigate near-term benefits of autonomy. The most notable is “Project #6: Automated cyber-response,” which reflects the shift from trying (and failing) to secure computer networks against past adversarial attacks toward developing defensible networks that can sense, characterize, and thwart attacks in real time. U.S. Cyber Command is tasked with leading this project, budgeted at $50 million over the next two to three years, which includes the ambitious goal of developing “a global clandestine infrastructure that will enable the deployment of the defensive option to thwart an attack.”
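
The phrase “sense, characterize, and thwart attacks in real time” describes a closed defensive loop rather than any particular tool. The sketch below is a minimal, hypothetical rendering of such a loop; the event fields, thresholds, and response actions are invented for illustration and do not come from the study or from any Cyber Command system.

```python
# Hypothetical sketch of a "sense, characterize, thwart" defensive loop.
# Event fields, thresholds, and responses are illustrative only.
from typing import Iterable, Iterator


def sense(event_stream: Iterable[dict]) -> Iterator[dict]:
    """Sense: surface anomalous events from network telemetry."""
    for event in event_stream:
        if event.get("failed_logins", 0) > 5 or event.get("bytes_out", 0) > 1_000_000:
            yield event


def characterize(event: dict) -> str:
    """Characterize: label the likely attack class in real time."""
    if event.get("failed_logins", 0) > 5:
        return "credential_stuffing"
    return "data_exfiltration"


def thwart(event: dict, attack_class: str) -> str:
    """Thwart: choose an automated defensive response."""
    responses = {
        "credential_stuffing": f"lock account {event.get('user', 'unknown')}",
        "data_exfiltration": f"quarantine host {event.get('host', 'unknown')}",
    }
    return responses[attack_class]


if __name__ == "__main__":
    telemetry = [
        {"user": "alice", "failed_logins": 9, "bytes_out": 1_200},
        {"host": "srv-12", "failed_logins": 0, "bytes_out": 5_000_000},
    ]
    for suspicious in sense(telemetry):
        print(thwart(suspicious, characterize(suspicious)))
```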

The study addresses, but does not make specific recommendations on, the well-publicized and controversial issue of fully autonomous lethal systems used for offensive operations. However, it does call for “autonomous ISR analysis, interpretation, option generation, and resource allocation” to reduce the human effort and time (currently twenty-four hours) needed to process and distribute the air tasking order that directs combat units to strike. Like a growing number of other military missions, this is another example of how machines can make warfare easier, faster, and safer for U.S. servicemembers. Whether, and how often, those wars occur remains up to elected civilian leaders. Read the Defense Science Board Autonomy study for a smart, policy-relevant overview of where the Pentagon plans to head on autonomous systems.

This post appears courtesy of CFR.org.