US Drone Pilots Are as Skeptical of Autonomy as Stephen Hawking and Elon Musk

A Predator drone at the Creech Air Force base in Nevada

DEFENSE ONE / PATRICK TUCKER

There are many reasons to be cautious about greater autonomy in weapons like drones, according to the men and women at the joystick.

For the sake of humanity, a letter published Monday by Stephen Hawking, Elon Musk, more than 7,000 tech watchers and luminaries, and 1,000 artificial intelligence researchers urged the world’s militaries to stop pursuing ever-more-autonomous robotic weapons. It’s unlikely the scientists will win their plea, despite a strong and surprising set of allies: U.S. drone pilots.

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” the signatories write in an open letter posted to the site of the Future of Life Institute. The post has virtually no signatories from the upper levels of the U.S. defense establishment, but many of America’s drone pilots share the group’s deep suspicion of greater levels of weaponized drone autonomy — albeit for very different reasons.

First: Is the letter’s basic premise credible? Are autonomous killer robots the cheap automatic rifles of tomorrow? The United States military maintains a rigid public stance on robot weapons. It’s enshrined in a 2012 DOD policy directive that says that autonomous weapons “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

But the military keeps working steadfastly at increasing the level of autonomy in drones, boats, and a variety of other weapons and vehicles. The Air Force Human Effectiveness Directorate is working on a software and hardware package called the Vigilant Spirit Control Station, which is designed to allow a single drone crew, composed primarily of a drone operator and a sensor operator, to control up to seven UAVs by allowing the UAVs to mostly steer themselves. Last year, the Office of Naval Research completed a historic autonomy demonstration with 13 small boats. The Defense Advanced Research Projects Agency, or DARPA, last year announced a program to develop semi-autonomous systems that “collaborate to find, track, identify and engage targets.” Greater autonomy is a key potential game-changer.

One of the reasons for the big push: there are not enough drone pilots to fight America’s wars. That’s strained the drone force and America’s pilots. But Pentagon leaders and drone operators disagree about the future, even as military leaders throw money at research to develop smart drones that make far more of their own decisions, including research aimed at enabling a single human operator to preside over a constellation of drones at once.

In June, several drone pilots who spoke to Defense One expressed skepticism about greater levels of autonomy in weaponized remote-controlled robots.

“I don’t know if I would look so much for more autonomous systems because, as the pilot, I want to have control over what’s happening with my aircraft because if something bad happens, I want to know that there’s an input I can fix, that I can deal with, not like, ‘God, where did the coding go?’ Something more user friendly for aviation. We were taught to be aviators and then we were put in a computer,” said Capt. Kristi, a drone operator at Creech Air Force Base, in Nevada, whom the military made available to speak to reporters but identified by her rank and first name only.

If it’s going to be a strike that my name is on, I sure as hell want to know. I’ll have all my attention on it. I don’t want to halfway that.
Capt. Kristi, a drone operator at Creech Air Force Base, in Nevada

Kristi, like the signatories of the letter, wanted big restrictions on what sorts of drones the military incorporates into its autonomy plans. “Are we talking about [signals intelligence] capabilities? Are we talking about image capabilities? Are we talking about predominantly strike? Those are all different variables that come into play … If it’s going to be a strike that my name is on, I sure as hell want to know. I’ll have all my attention on it. I don’t want to halfway that.”

The Defense Department began struggling with issues surrounding autonomy in drones almost as soon as it began to rely on them more during the campaigns in Afghanistan and Iraq. Consider this 2009 slide show by former Lt. Gen. David Deptula, dean of the Mitchell Institute of Aerospace Power Studies, who once ran the Air Force’s drone program. The slideshow mentions a program called multi-aircraft control, or MAC, whose objective was to enable one pilot to fly four aerial vehicles by 2012 and autonomous flying not long after. Thanks to higher levels of autonomy, what took 570 pilots to do in 2009 — fly 50 round-the-clock combat air patrols — would take just 150 pilots in the not-too-distant future. But the project never really took off, according to Col. Jim Cluff, who commands the 432nd Wing and the 432nd Air Expeditionary Wing at Creech.

“The challenge we discovered was that the technology wasn’t there yet to allow those four airplanes to all be dynamic at the same time,” Cluff said. “When I say dynamic, what I mean is, when it’s just orbiting in a circle for eight hours, I can just monitor it. But if you want me to shoot a weapon off of it, you want me to move it and follow a target, do more things than just let it circle, you still need a pilot to do that. We didn’t gain any savings with MAC because I still needed pilots available in case these missions went dynamic. The autonomy challenge is how do I go autonomous and still allow that dynamic human interface to happen in real time. That’s the leap in technology.”

“What you don’t want is a crew monitoring something here and then something happens here and you’re bouncing back and forth. That does not make good sense from a pilot, aircrew perspective from how do you direct your efforts on a target piece, especially if you’re going to go kinetic,” said Air Force Combat Command spokesperson Ben Newell.

Many AI researchers, such as Steven K. Rogers, senior scientist for automatic target recognition and sensor fusion at the Air Force Research Laboratory, argue that the current state of technology is so far from allowing full autonomy in tasks like target recognition as to render the debate meaningless. But others signed the letter, among them longtime Google research director Peter Norvig and Stephen Omohundro, an advocate for designing moral reasoning into future artificial intelligence.

A stern letter from some of the world’s biggest minds in technology may be enough to make headlines, but it might not be enough to stop the tide of history.

“Regardless of what the United States does, other actors, from rising powers such as China to non-state actors, may pursue autonomy in weapon systems if they believe those systems will give them a military advantage,” said Michael Horowitz, an associate professor of political science at the University of Pennsylvania. Horowitz says that while increasing autonomy is very different from creating autonomous weapons, “greater autonomy in military systems is likely inevitable.”
