Report: Weapons AI Increasingly Replacing, Not Augmenting, Human Decision Making

The Navy's experimental unmanned aircraft, the X-47B, taxis to its launch position on the flight deck of the nuclear-powered aircraft carrier USS Theodore Roosevelt, off the Virginia coast, Sunday, Nov. 10, 2013.

AP / STEVE HELBER

A new survey of existing and planned smart weapons finds that AI is increasingly used to replace humans, not help them.

The Pentagon’s oft-repeated line on artificial intelligence is this: we need much more of it, and quickly, in order to help humans and machines work better alongside one another. But a survey of existing weapons finds that the U.S. military more commonly uses AI not to help but to replace human operators, and, increasingly, human decision making.

The report from the Elon Musk-funded Future of Life Institute does not forecast Terminators capable of high-level reasoning. At their smartest, our most advanced artificially intelligent weapons are still operating at the level of insects … armed with very real and dangerous stingers.

So where does AI exist most commonly on military weapons? The study, which looked at weapons in military arsenals around the world, found 284 current systems that include some degree of it, primarily standoff weapons that can find their own way to a target from miles away. Another example would be Aegis warships that can automatically fire defensive missiles at incoming threats.

“This matches the overall theme – autonomy is currently not being developed to fight alongside humans on the battlefield, but to displace them. This trend, especially for UAVs [unmanned aerial vehicles], gets stronger when examining the weapons in development. Thus despite calls for ‘centaur warfighting,’ or human-machine teaming, by the US Defense Department, what we see in weapons systems is that if the capability is present, the system is fielded in the stead of [i.e., instead of] humans rather than with them,” notes Heather M. Roff, the author of the report.

Roff found that the most common AI feature on a weapon was homing: “the capability of a weapons system to direct itself to follow an identified target, whether indicated by an outside agent or by the weapons system itself.” It’s been around for decades; many more recent AI capabilities spring from it.
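In its simplest form, homing is a feedback loop: measure where the target sits relative to the weapon, steer to shrink the error, repeat. Below is a minimal 2-D sketch of one classic textbook guidance law, proportional navigation; the function, its parameters, and the gain are illustrative assumptions, not code from any fielded system.

```python
import math

def pn_step(m_pos, m_vel, t_pos, t_vel, nav_gain=3.0, dt=0.1):
    """One timestep of 2-D proportional navigation, a classic homing law:
    command a lateral acceleration proportional to how fast the line of
    sight to the target is rotating. Purely illustrative; names, gain,
    and timestep are made-up values, not any real system's guidance code.
    """
    rx, ry = t_pos[0] - m_pos[0], t_pos[1] - m_pos[1]   # relative position
    vx, vy = t_vel[0] - m_vel[0], t_vel[1] - m_vel[1]   # relative velocity
    r2 = rx * rx + ry * ry
    los_rate = (rx * vy - ry * vx) / r2                 # line-of-sight turn rate
    closing_speed = -(rx * vx + ry * vy) / math.sqrt(r2)
    a_cmd = nav_gain * closing_speed * los_rate         # commanded lateral accel
    speed = math.hypot(m_vel[0], m_vel[1])
    # apply the acceleration perpendicular to the weapon's velocity
    ax, ay = -a_cmd * m_vel[1] / speed, a_cmd * m_vel[0] / speed
    new_vel = (m_vel[0] + ax * dt, m_vel[1] + ay * dt)
    new_pos = (m_pos[0] + new_vel[0] * dt, m_pos[1] + new_vel[1] * dt)
    return new_pos, new_vel
```

The design point, and the reason homing counts as AI at all, is that no human appears anywhere in the loop: once launched, the law converges on the target by itself.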

On the other end of the technology spectrum is certain drones’ ability to loiter over an area, compare objects on the ground against a database of images, and mark a target when a match comes up — all without human guidance.

Roff writes that these capabilities, which she calls autonomous loitering and target image and signal discrimination, represent “a new frontier of autonomy, where the weapon does not have a specific target but a set of potential targets in an image library or target library (for certain signatures like radar), and it waits in the engagement zone until an appropriate target is detected. This technology is on a low number of deployed systems, but is a heavy component of systems in development.”
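Stripped to its decision logic, the behavior Roff describes is a matching loop: sense objects in the engagement zone, score each against the stored library, and act only on a sufficiently confident match. The sketch below is schematic only; the cosine-similarity scoring, the names, and the 0.9 threshold are all assumptions made for illustration.

```python
import math

def match_score(obj_features, sig_features):
    """Cosine similarity between two feature vectors, an illustrative
    stand-in for whatever image or signal matching a real system uses."""
    dot = sum(a * b for a, b in zip(obj_features, sig_features))
    norm = math.hypot(*obj_features) * math.hypot(*sig_features)
    return dot / norm if norm else 0.0

def discriminate(sensed_objects, target_library, threshold=0.9):
    """Score each sensed object against a library of target signatures
    and return the first sufficiently close match, else None (keep
    loitering). Hypothetical names and threshold, purely schematic."""
    for obj in sensed_objects:
        if any(match_score(obj, sig) >= threshold for sig in target_library):
            return obj   # candidate target; a fielded system would act here
    return None          # no appropriate target yet: continue loitering
```

The notable shift is in the return value: the loop hands back a target the system chose itself, rather than one a human designated.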

For an indication of where AI on drones is headed, look to cutting-edge experimental machines like Dassault’s nEUROn, BAE’s Taranis, and Northrop Grumman’s X-47B. Unlike General Atomics’ Predator and Reaper drones, which the military armed to take out terrorist targets in places like Afghanistan, these more advanced drones are designed for war with countries that can actually shoot back. The so-called anti-access/area-denial challenge, or A2AD, requires aircraft that use stealth to slip in under enemy radar and then operate on their own over enemy territory. It’s the key thing pushing autonomy in weapons to the next level.

“This is primarily due to the type of task the stealth combat UAV is designed to achieve: defeating integrated enemy air defense systems. In those scenarios, a UAV will likely be without communications and in a contested and denied environment. The system will need to be able to communications share with other deployed systems in the area opportunistically, as well as engage and replan when necessary,” Roff writes.

At the recent Air Force Association conference outside Washington, D.C., Deputy Defense Secretary Bob Work called greater autonomy essential to U.S. military technological dominance. Citing a report from the Defense Science Board, he said, “There is one thing that will improve the performance of the battle network more than any other. And you must win the competition because you are in it whether you like it or not. And that is exploiting advances in artificial intelligence and autonomy. That will allow the joint force to assemble and operate human machine battle networks of even greater power.”

But even if the U.S. military “wins the competition” by producing the best autonomous systems, other nations may yet put AI to unexpected and even destabilizing effect. “It should be noted that the technological incorporation of autonomy will not necessarily come only from the world’s strongest powers, and the balancing effect that may have will not likely be stabilizing. Regional powers with greater abilities in autonomous weapons development, such as Israel, may destabilize a region through their use or through their export to other nations,” says Roff.

That’s not the only reason more smarts on more weapons could be destabilizing. Machines make decisions faster than humans. On the battlefield of the future, the fastest machines, those that make the best decisions with the least human input, will offer the largest advantage.

Today, the United States continues to affirm that it isn’t interested in removing the human decision-maker from “the loop” in offensive operations like drone strikes (at least not completely). That moral stand might begin to look like a strategic disadvantage against an adversary that can fire faster, conduct more operations, and hit more targets in less time by removing the human from the loop.

The observe, orient, decide, and act cycle, sometimes called the OODA loop, remains in human hands when it comes to warfare. But in other areas of human activity, like high-frequency trading, it has already moved to the machines. William Roper, the head of the Pentagon’s Strategic Capabilities Office, discussed his concerns about that acceleration at the recent Defense One Technology Summit.
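In schematic form, the cycle itself is trivial; the contested quantity is who runs it and how fast. A bare sketch, with each stage a placeholder callable, purely for illustration:

```python
import time

def ooda_loop(observe, orient, decide, act, cycle_time):
    """The observe-orient-decide-act cycle as a bare loop. Each stage is
    a placeholder function supplied by the caller; what matters for the
    argument above is cycle_time, which might be minutes for a human
    staff and microseconds for a machine. Illustrative only."""
    while True:
        data = observe()            # collect sensor (or market) data
        picture = orient(data)      # fuse it into a situational picture
        choice = decide(picture)    # pick a course of action
        act(choice)                 # execute, then loop again
        time.sleep(cycle_time)      # the contested quantity: loop speed
```

Whoever completes the loop faster acts inside the opponent's decision cycle, which is exactly the dynamic that moved trading from floor brokers to algorithms.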

“When you think about the day-trading world of stock markets, where it’s really machines that are doing it, what happens when that goes to warfare?” Roper asked. “It’s a whole level of conflict that hasn’t existed. It’s one that’s scary to think about what other countries might do that don’t have the same level of scruples as the U.S.”

It’s also scary to think about what the United States might do if its leaders woke up in a war where they were losing to those countries.
