As AI advances, military leaders mull the 'Terminator conundrum'

Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff, says advances in artificial intelligence raise hard questions about the role of autonomous systems on the battlefield.

With the Obama administration giving the green light for driverless cars on U.S. roads within the next decade, and the military services developing autonomous air, ground and sea vehicles, it's only a matter of time before artificial intelligence makes its way onto the battlefield.

But that raises the prickly question of whether a fully autonomous system should ever be allowed to carry out an attack on its own. Defense Department rules currently make clear that, regardless of the system, only a human operator can order a strike, but the prospect of machines making that call is one that worries military leaders.

Air Force Gen. Paul J. Selva, vice chairman of the Joint Chiefs of Staff, addressed that question at a Brookings Institution event Jan. 21 in Washington.

“There are ethical implications. There are implications that I call the ‘Terminator conundrum,’” Selva said. “What happens when that thing can inflict mortal harm and is empowered by artificial intelligence? How are we going to deal with that? How do we know with certainty what it’s going to do? Those are the problem sets I think we’re going to deal with in the technology sector.”

Current systems are robotic but not intelligent: once the system identifies a target, it's still up to a human to make the call on whether to fire.
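To make that division of labor concrete, here is a minimal sketch, in Python, of what such a human-in-the-loop gate looks like in principle. All the names in it (Target, require_human_authorization and so on) are invented for illustration and reflect no real fire-control software: the point is simply that the machine may nominate a target, but the fire decision is hard-wired to a person.

```python
# Hypothetical sketch of a human-in-the-loop engagement gate.
# All names are invented for illustration; this mirrors no real system.

from dataclasses import dataclass


@dataclass
class Target:
    identifier: str
    confidence: float  # sensor confidence that the track is a valid target, 0.0-1.0


def automated_identification(target: Target) -> bool:
    """The automated half: the system may only flag a candidate target."""
    return target.confidence >= 0.9


def require_human_authorization(target: Target) -> bool:
    """The human half: no strike proceeds without explicit operator approval."""
    answer = input(f"Authorize engagement of {target.identifier}? [y/N] ")
    return answer.strip().lower() == "y"


def engagement_decision(target: Target) -> str:
    # The machine nominates; a human makes the fire/no-fire call.
    if not automated_identification(target):
        return "no candidate identified"
    if not require_human_authorization(target):
        return "engagement withheld by operator"
    return "engagement authorized by operator"


if __name__ == "__main__":
    print(engagement_decision(Target(identifier="track-042", confidence=0.95)))
```

The "Terminator conundrum" Selva describes is what happens if the middle step, the explicit operator approval, is removed and the software's nomination becomes the decision itself.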

But could new technologies change that equation? With Silicon Valley at the forefront of many of them, much has been made of DOD's attempts to work with tech companies there and the mixed results those efforts have produced.

DOD has created a program for working with Silicon Valley companies on innovations like artificial intelligence, big data, flexible electronics and deep-learning algorithms.

The “Terminator conundrum” might not be immediate—while semi-autonomous systems abound, the leap to full autonomy is still a ways off—but it is something the military needs to think about. 

One hurdle to the two worlds working together is that DOD generally has a low tolerance for risk in acquisition. Selva said risk tolerance should be higher in the development phase but has to come down once the building process begins.

“Once I say I’m going to commit to the piece of technology you’re giving me, I have to be certain that it’s going to deliver the outcome that I need,” Selva said of his opening proposition to engineers. “There are very few engineers in the software space or the hardware space that deal with the word ‘certainty.’”

But he said he's willing to work with the ones who do.

DOD is also looking to Silicon Valley for scarce cybersecurity personnel, hoping the lure of working on large, meaningful national defense projects can outweigh the Valley's advantages.