Experts Say AI Could Raise the Risks of Nuclear War

The mushroom cloud of the first test of a hydrogen bomb, "Ivy Mike", as photographed on Enewetak, an atoll in the Pacific Ocean, in 1952, by a member of the United States Air Force's Lookout Mountain 1352d Photographic Squadron.

1352d Photographic Squadron / US Air Force

A new RAND report says ideas like mutually assured destruction and minimum deterrence offer a lot less assurance in the age of intelligent software.

Artificial intelligence could destabilize the delicate balance of nuclear deterrence, inching the world closer to catastrophe, according to a working group of experts convened by RAND. Smarter, faster intelligence analysis from AI agents, combined with growing volumes of sensor and open-source data, could convince countries that their nuclear capability is increasingly vulnerable. That may prompt them to take more drastic steps to keep up with the U.S. Another worrying scenario: commanders could decide to launch strikes based on advice from AI assistants that have been fed wrong information.

Last May and June, RAND convened a series of workshops, bringing together experts from nuclear security, artificial intelligence, government, and industry. The workshops produced a report, released on Tuesday, that underlines how AI promises to rapidly improve Country A’s ability to target Country B’s nuclear weapons. And that may lead Country B to radically rethink the risks and rewards of acquiring more nuclear weapons, or even launching a first strike. “Even if AI only modestly improves the ability to integrate data about the disposition of enemy missiles, it might substantially undermine a state’s sense of security and undermine crisis stability,” the report said.

North Korea, China, and Russia use mobile launchers (even elaborate tunnel networks) to position ICBMs rapidly for strike. The U.S. would have less than 15 minutes of warning before a North Korean launch, Joint Chiefs of Staff Vice Chairman Gen. Paul Selva told reporters in January.

If U.S. analysts could harness big data and AI to better predict the location of those launchers, North Korea might conclude that it needs more of them. Or Russia might decide that it needs nuclear weapons that are harder to detect, such as the autonomous Status-6 torpedo.

“It is extremely technically challenging for a state to develop the ability to locate and target all enemy nuclear-weapon launchers, but such an ability also yields an immense strategic advantage,” the report said. “The tracking and targeting system needs only to be perceived as capable to be destabilizing. A capability that is nearly effective might be even more dangerous than one that already works.”

Such a capability might employ drones with next-generation sensors, which “could enable the development of strategically destabilizing threats to the survivability of mobile ICBM launchers but also offer some hope that arms control could help forestall threats.”

The workshop also explored how commanders might use artificially intelligent decision aids when making judgment calls about nuclear strikes. Such aids might help commanders make much better-informed decisions — or, if penetrated and fed malicious data by an adversary, catastrophically wrong ones.

Absent some means to better verify the validity of data inputs — an ongoing project at the Defense Advanced Research Projects Agency and a key concern of the CIA — and a better understanding of enemy intent, adversaries could turn the vast U.S. intelligence collection and digestion tools against the United States, especially as those tools work faster and more efficiently. In other words, fake news, combined with AI, just might bring about World War III.
