USAF Calls Killer-AI Report ‘Anecdotal’

Chief of AI test and operations says he “misspoke” about a “thought experiment” in which a drone killed its operator.

The U.S. Air Force denies running a simulation in which a drone killed its human operator—after comments from its chief of AI test and operations went viral on social media—saying the story was “anecdotal.”  

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” said Air Force spokesperson Ann Stefanek. 

During the Future Combat Air and Space Capabilities Summit in London, Col. Tucker “Cinco” Hamilton described a simulated test in which an AI-enabled drone killed its human operator. His comments went viral after snippets from a Royal Aeronautical Society blog post recapping the event began circulating on Twitter.

The AI drone was tasked with destroying surface-to-air missile threats, with the final “go/no go” given by the operator, said Hamilton, the Air Force’s chief of AI test and operations, according to the post. 

“However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission—killing SAMs—and then attacked the operator in the simulation,” he said. “We trained the system—‘Hey don’t kill the operator—that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

These comments were “taken out of context and were meant to be anecdotal,” said Stefanek.

The Royal Aeronautical Society later updated its post with a comment from Hamilton, who told the Society he “misspoke” about a hypothetical “thought experiment” that was based on plausible scenarios and likely outcomes.

“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” Hamilton told the Society.

While the Air Force denies the existence of this specific test, a group of industry leaders recently signed a letter warning that AI poses a “risk of extinction” to humanity and should be considered akin to pandemics or nuclear wars.