Defense Science Board Recommends “Immediate Action” to Counter Enemy AI

Pentagon scientists worry that the U.S. could be on the losing side of an AI arms race.

The Defense Science Board’s much-anticipated “Autonomy” study sees promise and peril in the years ahead. The good news: autonomy, artificial intelligence, and machine learning could revolutionize the way the military spies on enemies, defends its troops, or speeds its supplies to the front lines. The bad news: AI in commercial and academic settings is moving faster than the military can keep up with. Among the most startling recommendations in the study: the United States should take “immediate action” to figure out how to defeat new AI-enabled operations.

In issuing this warning, the study harks back to military missteps in cyber and electronic warfare. While the Pentagon was busy developing offensive weapons, techniques, plans, and tricks to use against enemies, it ignored U.S. equipment’s own vulnerabilities.

“For years, it has been clear that certain countries could, and most likely would, develop the technology and expertise to use cyber and electronic warfare against U.S. forces,” the study’s authors wrote. “Yet most of the U.S. effort focused on developing offensive cyber capabilities without commensurate attention to hardening U.S. systems against attacks from others. Unfortunately, in both domains, that neglect has resulted in DoD spending large sums of money today to ‘patch’ systems against potential attacks.”

That cycle could repeat itself in the field of AI, says the study.

To counter the threat, the study says, the undersecretary of defense for intelligence should “raise the priority of collection and analysis of foreign autonomous systems.” Take that to mean figuring out what China, Russia, and others can do and will soon be able to do with artificial intelligence.

Meanwhile, the Pentagon’s office of acquisition, technology, and logistics should gather a community of researchers to run tests and scenarios to discover “counter-autonomy technologies, surrogates, and solutions” — in other words, to practice fighting enemy AI systems. This community should have wide discretion in conducting research into commercial drones, software, and machine learning.

“Such a community would not only explore new uses for autonomy, counter-autonomy, and countering potential adversary autonomy, but also more realistically inform what the tactical advantages and vulnerabilities would be to both the U.S. and adversaries in adopting or adapting commercially available technology,” the study says.

Just as over-reliance on information technology has created new weaknesses, autonomy is no silver bullet. The study names a handful of “opportunities to limit or defeat the use of autonomy against U.S. forces.”

They include “using deception to confound rules-based logic” or simply overwhelming the AI’s sensor inputs. In most settings, the human brain can differentiate signal from noise far more capably than any human-written program.
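To make the idea concrete, here is a minimal, hypothetical sketch (not drawn from the study itself) of how fixed, rules-based logic can be beaten in both ways the report describes: by shaping a signature to slip just past hard-coded thresholds, and by flooding the system with cheap inputs that match its rules. The classifier, its thresholds, and the field names are all invented for illustration.

```python
# Hypothetical illustration: deceiving and saturating a rules-based filter.
import random

def rules_based_threat_filter(contact):
    """Toy classifier: flags a contact as a threat using hard-coded thresholds."""
    return contact["speed_kts"] > 400 and 1.0 <= contact["rcs_m2"] <= 15.0

# Deception: an adversary that knows the thresholds shapes its signature to sit
# just outside them, so the rule quietly passes it through.
deceptive_contact = {"speed_kts": 395, "rcs_m2": 0.9}
print(rules_based_threat_filter(deceptive_contact))  # False -> the threat slips by

# Saturation: cheap decoys tuned to match the rule swamp operators with
# false positives, burying any real contact in noise.
decoys = [{"speed_kts": random.uniform(401, 600),
           "rcs_m2": random.uniform(1.0, 15.0)} for _ in range(1000)]
flagged = sum(rules_based_threat_filter(d) for d in decoys)
print(f"{flagged} of {len(decoys)} decoys flagged as threats")
```

A human operator looking at the same picture would likely discount the decoys or notice the near-miss signature; the fixed rules cannot, which is the vulnerability the study highlights.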

The study reiterates the importance of human decision-making, but notes that the greatest potential for autonomy lies in software that learns or adapts on its own, with little to no human guidance. When, if ever, is it safe to put an autonomous learning system like that in charge of a howitzer? The study says the Defense Department doesn’t yet have the means even to ask the question.

“Current testing methods and processes are inadequate for testing software that learns and adapts,” it reads.  Better testing procedures, particularly in virtual environments, will be key to getting the most out of next-generation artificial intelligence.
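One way to read that recommendation is that software which changes its own behavior cannot be certified with a fixed set of input/output checks; it has to be exercised repeatedly in simulation while invariants are verified on every run. The sketch below is a hypothetical illustration of that idea, not the Pentagon's actual test process: the adaptive system, its drifting threshold, and the "never fire without human approval" invariant are all assumptions made up for the example.

```python
# Hypothetical illustration: invariant checking of adaptive software in a
# virtual environment, rather than one-time input/output testing.
import random

class AdaptiveTargeter:
    """Stand-in for software that adapts: it retunes its threshold as it runs."""
    def __init__(self):
        self.threshold = 0.5

    def decide(self, confidence, human_approved):
        fire = confidence > self.threshold and human_approved
        # The system adapts between decisions, so yesterday's test result
        # says little about how it will behave tomorrow.
        self.threshold = max(0.3, min(0.9, self.threshold + random.uniform(-0.05, 0.05)))
        return fire

def run_episode(system, steps=100):
    """One simulated engagement; returns any safety violations observed."""
    violations = []
    for _ in range(steps):
        confidence = random.random()
        human_approved = random.random() > 0.5
        fired = system.decide(confidence, human_approved)
        # Invariant: the system never fires without human approval.
        if fired and not human_approved:
            violations.append((confidence, human_approved))
    return violations

# Because behavior drifts as the software adapts, the invariant is re-checked
# across many randomized virtual episodes instead of a single fixed test.
failures = sum(len(run_episode(AdaptiveTargeter())) for _ in range(500))
print(f"safety violations across 500 simulated episodes: {failures}")
```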

The United States faces a special ethical burden in how it develops and uses autonomy. The military faces pressure – both internally and from outside groups – to limit the use of autonomy in weapons. That’s less true in China and Russia, the latter of which boasts that it has tested lethal autonomous ground robots as guards for missile sites and is developing a crewless version of the Armata T-14 tank.

“While many policy and political issues surround U.S. use of autonomy, it is certainly likely that many potential adversaries will have less restrictive policies and [concepts of operation] governing their own use of autonomy, particularly in the employment of lethal autonomy. Thus, expecting a mirror image of U.S. employment of autonomy will not fully capture the adversary potential,” notes the study.
