Robert Work at CNAS annual conference 2013. Flickr image via CNAS

These Are the Decisions the Pentagon Wants to Leave to Robots

The U.S. military believes its battlefield edge will increasingly depend on automation and artificial intelligence.

It’s a “wonderful time” to be a military scientist, one in which the next big leaps will depend on rapid advances in machine learning, artificial intelligence, or AI, and computer science. But the Pentagon needs to catch up to private industry, says Deputy Defense Secretary Robert Work.

“The commercial world has already made this leap. The Department of Defense is a follower,” Work told the audience at a national security forum co-hosted by the Center for a New American Security and Defense One. The Pentagon will spend $12 billion to $15 billion to develop new AI tools.

The development of new autonomy capabilities is key to the Pentagon’s third offset strategy, a research effort intended to give the United States a strategic advantage over its adversaries. In his talk, Work cleared up several of the mysteries surrounding the strategy, including: just what does the military want robots to take over for humans? Here are a few areas he highlighted.

Cuing intelligence analysts about what to pay attention to. Machine-learning systems that sift through large volumes of data for weak signals of social change could provide early indicators of danger or unrest that deserve analyst attention. “The AI guys say that what’s happening in the grey zone with the little green men is nothing more than a big data analytics problem,” Work said. He pointed to a National Geospatial-Intelligence Agency program called Coherence Out of Chaos, which could “cue human analysts” to take a look at different situations as those situations evolve on the ground. “It will do so in situations that require faster than human reaction,” he said.
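As a rough illustration of the weak-signal cueing Work describes, here is a minimal sketch, assuming a simple per-region feature set and using scikit-learn’s IsolationForest as the anomaly detector. The features, numbers, and thresholds are invented for illustration; this is not the Coherence Out of Chaos system.

```python
# Minimal sketch: flag regions whose daily activity deviates from the
# historical norm so a human analyst can take a closer look.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical daily feature vectors per region:
# [social-media volume, vehicle movements, comms traffic]
baseline = rng.normal(loc=100.0, scale=10.0, size=(500, 3))
today = np.array([
    [102.0, 98.0, 105.0],   # looks like every other day
    [160.0, 140.0, 30.0],   # unusual pattern worth a human look
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

for region_id, flag in enumerate(model.predict(today)):
    if flag == -1:  # IsolationForest labels anomalies as -1
        print(f"Region {region_id}: anomalous activity, cue an analyst")
```

The point of the design is the hand-off: the machine narrows millions of observations down to a short queue, and the judgment call stays with the analyst.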

Conducting cyber defensive operations, electronic warfare, and over-the-horizon targeting. “You cannot have a human operator operating at human speed fighting back at determined cyber tech,” Work said. “You are going to need to have a learning machine that does that.” He did not say whether the Pentagon is pursuing the autonomous or automatic deployment of offensive cyber capabilities, a controversial idea to be sure. He also highlighted a number of ways that artificial intelligence could help identify new waveforms to improve electronic warfare.
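A toy sketch of the machine-speed defense Work describes, with humans kept on the loop for ambiguous cases: high-confidence attacks are blocked automatically, mid-confidence events are escalated to a person. The scoring scale, thresholds, and names are all hypothetical.

```python
# Toy triage loop: act at machine speed on clear attacks, escalate
# ambiguous events to a human analyst. All values are invented.
from dataclasses import dataclass

@dataclass
class NetworkEvent:
    source_ip: str
    threat_score: float  # 0.0 (benign) .. 1.0 (certain attack)

BLOCK_THRESHOLD = 0.9   # confident enough to act without a human
REVIEW_THRESHOLD = 0.5  # too uncertain to act, too odd to ignore

def triage(event: NetworkEvent) -> str:
    if event.threat_score >= BLOCK_THRESHOLD:
        return f"BLOCK {event.source_ip} (automated, machine speed)"
    if event.threat_score >= REVIEW_THRESHOLD:
        return f"ESCALATE {event.source_ip} to a human analyst"
    return f"LOG {event.source_ip}"

for e in (NetworkEvent("203.0.113.7", 0.97),
          NetworkEvent("198.51.100.4", 0.62),
          NetworkEvent("192.0.2.10", 0.12)):
    print(triage(e))
```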

Semi-autonomous weapons could act when humans might be too slow to react, such as in air defense, or when communications are degraded. “We believe strongly that humans should be the only ones to decide when to use lethal force. But when you’re under attack, especially at machine speeds, we want to have a machine that can protect us.” He cited the Israeli Iron Dome air-defense system as an example.
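The distinction Work draws, humans deciding on lethal force while machines defend at machine speed, corresponds roughly to a pre-authorized defensive envelope. A toy sketch of that rule follows; it is illustrative only, not a description of Iron Dome’s actual logic.

```python
# Sketch of a human-on-the-loop rule: the machine may engage only
# inside a defensive envelope a human pre-authorized. Illustrative only.
from dataclasses import dataclass

@dataclass
class Track:
    kind: str                 # e.g. "incoming_rocket", "aircraft"
    seconds_to_impact: float

HUMAN_REACTION_TIME = 8.0  # hypothetical floor on human decision speed

def engagement_decision(track: Track, defense_preauthorized: bool) -> str:
    is_defensive = track.kind == "incoming_rocket"
    too_fast_for_humans = track.seconds_to_impact < HUMAN_REACTION_TIME
    if is_defensive and defense_preauthorized and too_fast_for_humans:
        return "intercept (pre-authorized defensive engagement)"
    return "hold; refer to human operator"

print(engagement_decision(Track("incoming_rocket", 4.0), True))  # intercept
print(engagement_decision(Track("aircraft", 4.0), True))         # hold
```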

Telling F-35 pilots what to point and shoot at... “Human-machine collaboration and decision-making” is another core component of the offset strategy, as exemplified by the Rockwell Collins F-35 helmet, which provides a dynamic display to the pilot in real time. “360 degrees of information is being crunched by the machine and being displayed in an advanced way,” he said. “It can simplify the speed of operations by allowing the humans to make better decisions faster.”

…and how to fly and land. Assisted human operations is another key component of the AI-heavy offset strategy. Work cited the Aircrew Labor In-Cockpit Automation System, or ALIAS, which takes some of the decision-making away from the pilot. It was, he said, “a system designed to reduce the number of crew in the cockpit at any time.”

Flying drones and driving boats. Work defined human-machine combat teaming as a human working with an unmanned aerial vehicle, or UAV, to conduct operations. “The Army’s Apache and Gray Eagle UAV are designed to operate together. The P-8 [Poseidon] and [MQ-4C] Triton UAV work together,” he noted. “We are looking at a large number of very, very advanced things.” He cited recent programs that would deploy cascades of small drones from larger drone “motherships” (a DARPA program called Gremlins); swarming boats (an Office of Naval Research program); and efforts to let a single human operator direct many drones, rather than today’s arrangement of several crewmembers operating one drone (part of an Air Force effort called the Vigilant Spirit Control Station).
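The single-operator-to-many-drones shift is, in software terms, a fan-out control pattern: the operator issues one high-level task and each vehicle derives its own share. The sketch below is a toy illustration; the class and method names are invented and bear no relation to the actual Vigilant Spirit interface.

```python
# Toy fan-out controller: one operator, one task, many drones.
from dataclasses import dataclass, field

@dataclass
class Drone:
    drone_id: str
    waypoint: tuple | None = None

@dataclass
class OperatorStation:
    fleet: list[Drone] = field(default_factory=list)

    def task_search_area(self, sectors: list[tuple]) -> None:
        # One high-level command fans out as one sector per drone.
        for drone, sector in zip(self.fleet, sectors):
            drone.waypoint = sector
            print(f"{drone.drone_id} -> search sector at {sector}")

station = OperatorStation([Drone(f"uav-{n}") for n in range(4)])
station.task_search_area([(0, 0), (0, 1), (1, 0), (1, 1)])
```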

Many of these AI aids and capabilities would be backed by a fast-learning system that collects, processes, and disseminates information to help commanders make better decisions. He described it as a “learning network.” It would not replace humans but vastly accelerate the collection and dissemination of relevant data and commands to more machines and humans on the battlefield.

“If we launch seven missiles at a surface action group and one missile goes high, and is looking at all the different things that the battle group is doing to defend itself, and it sees something new that’s not in its library, it will immediately report back on the learning network, which will go back to a learning machine, which will create ‘here’s something you should do’ and pass it over to human-machine collaboration, so the mission commander can make an adjustment on the next salvo and then make a command change inside the software of the missile, so that the next seven missiles launched will be that much more effective,” he said.
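Stripped of the hardware, the loop Work sketches is: observe an unknown countermeasure, report it over the network, let a learning machine propose an adjustment, and push the update to the next salvo once the commander approves. A toy version, with the signature library and “learning” logic entirely invented:

```python
# Toy version of the salvo feedback loop: observe -> report ->
# learn -> human approves -> update the next salvo's software.
known_signatures = {"chaff", "flare"}
missile_config = {"seeker_mode": "standard"}

def observe(signature: str) -> str | None:
    """The 'high' missile reports anything not in its library."""
    return signature if signature not in known_signatures else None

def propose_adjustment(new_signature: str) -> dict:
    """Learning machine suggests a config change (invented logic)."""
    return {"seeker_mode": f"filter_{new_signature}"}

report = observe("towed_decoy")          # something new, not in the library
if report:
    proposal = propose_adjustment(report)
    commander_approves = True            # the human-machine collaboration step
    if commander_approves:
        known_signatures.add(report)
        missile_config.update(proposal)
        print("Next salvo will launch with:", missile_config)
```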

There’s “a lot of skepticism right now inside the Department of Defense that we will be able to perfect and protect such a network,” Work said. “But if you do the smart design up front, coupled with learning defenses, it is not only possible but it is a requirement.”
