Sailors and civilian mariners launch a Wave Glider Unmanned Surface Vehicle from the fantail of USNS Burlington as a part of the UNITAS U.S. Naval Forces Southern Command/U.S. 4th Fleet Unmanned Integration Campaign. U.S. Navy / Mass Communication Specialist 2nd Class Conner Foy

The Pentagon is already testing tomorrow’s AI-powered swarm drones, ships

DOD pulled off unmanned amphibious landings, self-coding drones, and more just in the last year. What's next?

Autonomous weapons are coming. Recent Pentagon breakthroughs in experimental aerial and naval craft are paving the way for low-cost attack drones and new tactics that feature AI in key roles. Navy and Air Force experiments also highlighted how the U.S. military might employ autonomous weapons differently than China or Russia. 

The Navy, for example, brought swarms of air and sea drones to the annual Unitas exercise, where they collected and shared reconnaissance data that helped the multinational fleet detect, identify, and take out enemy craft more quickly.

“We had an unmanned surface vessel and unmanned air vessel informing each other and then we actually had an international partner’s missiles on board, and we were able to shoot six high-speed patrol boats coming at us. And we were six for six,” said Rear Adm. James Aiken, 4th Fleet commander, sharing new details about the July exercise at the Navy Surface Warfare symposium in Virginia recently.

Navy

The 4th Fleet, along with the 5th Fleet halfway around the world, leads the Navy’s work on emerging AI concepts. Then-CNO Adm. Michael Gilday pushed for experiments in operational waters, which he said might become critical for dealing with grey-zone operations, smuggling, and other threats.

Aiken said unmanned and AI systems could help detect and thwart hostile attempts to interfere with international shipping, in part by scouring video footage and other sensor data. He added that such systems might also make it easier to share information and work with partners, from shipping companies to other governments. 

“We actually use a human-machine interface to make better watchstanders, to better inform the fleet and to move forward,” he said. “How can we...use them in different ways to inform distributed maritime [operations]? To get us a better sight picture of what's going on? And then share that with some of our key stakeholders around the globe?”
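
To make the concept concrete, here is a minimal sketch of the kind of machine-assisted watchstanding Aiken describes: software scores sensor tracks and surfaces only the anomalous ones for human review. Every name, field, weight, and threshold below is a hypothetical illustration; a fielded system would run trained models over video and radar data rather than a toy heuristic.

```python
# Hedged sketch of a human-machine watchstanding loop: software scores
# surface contacts and flags only the anomalous ones for a human
# watchstander. All fields, weights, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    speed_kts: float            # current speed over ground
    heading_change_deg: float   # recent course deviation

def anomaly_score(t: Track) -> float:
    """Toy heuristic: fast, erratically maneuvering contacts score high.
    A fielded system would use trained models over video and sensor data."""
    return 0.02 * t.speed_kts + 0.01 * abs(t.heading_change_deg)

def triage(tracks: list[Track], threshold: float = 0.9) -> list[Track]:
    """The machine filters; the human watchstander reviews what is left."""
    return [t for t in tracks if anomaly_score(t) >= threshold]

contacts = [
    Track("merchant-01", speed_kts=14, heading_change_deg=2),
    Track("contact-07", speed_kts=42, heading_change_deg=35),
]
for t in triage(contacts):
    print(f"flag for watchstander review: {t.track_id}")
```

The design point is the division of labor: the software never decides anything on its own here; it only narrows what the human must look at.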

The United States isn’t the only country making new uses of autonomy. While the one-way attack drones that Houthi forces are firing at ships in the Red Sea are crude, earlier this month they launched what U.S. officials called a “complex” attack of more than 20 drones at once. Iran reportedly has plans to build jet versions of its one-way attack drones, weapons that could show up anywhere from Ukraine to the Red Sea.

That highlights the urgent need for cheaper interception technologies, but it also validates the Pentagon’s five-month-old Replicator plan to increase production of cheap attack drones, much as both sides have done in the Ukraine war and as Iran has done to arm the Houthis.

U.S. Navy Secretary Carlos Del Toro said that the Navy is contributing.

“These concepts have been brought to fruition in terms of all the advances that we've made in unmanned, whether it be on the surface, whether it be in the air, whether it be underneath the surface,” Del Toro said. “The concepts that we've put forward to Replicator have been very well embraced.”

Air Force

The Air Force last year also demonstrated new capabilities in autonomy and AI, Col. Tucker Hamilton, operations commander of the Air Force’s 96th Test Wing, said last week.

“We are testing things like the XQ-58 high-performance drone that is uncrewed and has AI-enabled functionality which is really cool. We actually, for the first time in the history of aviation, had an AI agent and AI algorithm fly a high-performance drone” last July at Eglin Air Force Base, Florida, he said during a Defense News broadcast. “I had the fortune of flying on the wings of this thing. When the AI agent turned on for the first time, I was in an F-15 and it was awesome.”

Hamilton said previous “autonomous” drones have generally followed simple instructions, say, for returning to a predetermined location after losing contact with their operators. There’s little room for improvisation. The directions are rote, he said: the drone “will fly at this throttle setting at this airspeed. You will turn it 30 degrees… and it's all like very deterministic software.”

But new experiments, such as with the XQ-58, have allowed a more sophisticated form of autonomy.

“This is where we give it an objective, but it decides what throttle setting, what bank angle, what altitude, what dive angle it's going to do to meet that objective, right? So that's the AI-enabled autonomy that we're talking about. When that turned on, it is great to see,” Hamilton said.
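
The distinction Hamilton draws can be sketched in a few lines of Python. The first function is the old, deterministic style, in which every setting is scripted in advance; the second is objective-driven, where the software is handed a goal and picks its own airspeed and bank angle. All names, numbers, and the trivial stand-in “policy” are illustrative assumptions, not anything drawn from the XQ-58 program.

```python
# Contrast sketch: scripted autonomy vs. objective-driven autonomy.
# All names, numbers, and physics are illustrative assumptions.

from dataclasses import dataclass, replace

@dataclass
class State:
    altitude_ft: float
    airspeed_kts: float
    bank_deg: float

def lost_link_script(state: State) -> State:
    """Old style: a fixed recovery profile. The exact airspeed and turn
    are spelled out in advance -- deterministic software."""
    return replace(state, airspeed_kts=250.0, bank_deg=30.0)

def pursue_objective(state: State, target_altitude_ft: float) -> State:
    """New style: the agent gets only an objective (reach an altitude)
    and chooses its own settings. A trivial controller stands in here
    for the learned policy."""
    error = target_altitude_ft - state.altitude_ft
    return replace(
        state,
        airspeed_kts=max(200.0, 300.0 - 0.005 * error),  # trade speed for climb
        bank_deg=0.0,                                    # wings level to climb
        altitude_ft=state.altitude_ft + min(error, 500.0),
    )

if __name__ == "__main__":
    s = State(altitude_ft=10_000, airspeed_kts=280, bank_deg=15)
    print(lost_link_script(s))   # scripted: same output every time
    print(pursue_objective(s, target_altitude_ft=15_000))
```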

The results are sometimes surprising. The XQ-58, for instance, makes extremely rapid or “crisp” rolls compared to an aircraft with a human pilot. 

“A computer-controlled aircraft…may do things differently than a human. And we need to recognize there's a huge benefit there,” he said.

To realize that benefit, he said, AI systems need a learning space where they can make decisions in a safe way. 

“We have, in simulation, allowed it to rewrite code a little bit to optimize its performance to do that objective. And then we surround that AI algorithm with autonomy code so if, at any point, that AI agent that is flying the XQ-58 asks for—I'm just making up numbers—but say it asks for like a universal bank, but we didn't want it to be able to ask for 80 degrees of bank; we only wanted the maximum to be 70 degrees of bank, it would automatically turn off if it asked for more.”
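
That wrapper is a form of what flight-test engineers often call runtime assurance: deterministic guard code sits between the AI agent and the flight controls and disengages the agent the moment a command leaves the approved envelope. Here is a hedged sketch using Hamilton’s 70-degree bank example; the class and function names are invented for illustration.

```python
# Sketch of a runtime-assurance guard of the sort Hamilton describes:
# if the AI agent commands more bank than the approved envelope allows,
# it is automatically disengaged. The 70-degree limit comes from the
# quote; everything else is a hypothetical illustration.

MAX_BANK_DEG = 70.0

class AgentDisengaged(Exception):
    """Raised when the AI agent is cut out of the control loop."""

def guard_bank_command(requested_bank_deg: float) -> float:
    """Pass commands inside the envelope; disengage the agent otherwise."""
    if abs(requested_bank_deg) > MAX_BANK_DEG:
        raise AgentDisengaged(
            f"agent requested {requested_bank_deg:.0f} deg of bank; "
            f"limit is {MAX_BANK_DEG:.0f} deg"
        )
    return requested_bank_deg

# Usage: wrap every agent output before it reaches the flight controls.
try:
    guard_bank_command(80.0)  # the agent asks for 80 degrees of bank...
except AgentDisengaged as err:
    print("reverting to baseline autonomy:", err)  # ...and is shut off
```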

That human-and-computer collaboration sets U.S. military autonomy research apart from similar research elsewhere.

The Ukraine conflict marks the long-feared arrival of autonomous weapons in combat. Michael Horowitz, deputy assistant defense secretary for Force Development and Emerging Capabilities, told an audience last week that because jammers can sever communications between operators and drones, militaries are building AI-powered ones that don’t need to communicate to execute their missions.

“If you look at the context of Ukraine and in a lot of … articles you see out there about jamming, about electronic warfare, about all the different kinds of cat-and-mouse games that Ukraine and Russia are constantly playing with each other, autonomy is one of the ways that a military might seek to address some of those challenges,” Horowitz said.

A year ago, the Defense Department revised its policy on autonomous weapons to clarify when they would be allowed to shoot.

“We had ended up in a situation where, outside the department, [some] thought that DOD was maybe building killer robots in the basement. And inside the department, there was a lot of confusion over what the directive actually said, with some actually thinking the directive prohibited the development of autonomous weapon systems, with maybe particular characteristics or even in general. So [what] we wanted to do with the revision to the directive is make clear what is and isn't allowed in the context of DOD policy surrounding autonomy and weapon systems,” Horowitz said.

“That directive does not prohibit the development of any systems. All it requires is that for autonomous weapons systems, unless they fit a specific exempted category, like, say, protecting a U.S. base from lots of simultaneous missile strikes, that it has to go through a senior review process, where senior leaders in the department take an extra look. In addition to all of the other testing and evaluation…and other requirements that we have.”

That may sound like the Defense Department giving itself permission to build whatever killer robot it wants, so long as that permission comes via a “senior review process”—the sort of self-review that Russia or China might undertake to justify building Terminator knock-offs. 

Horowitz said the policy actually shows how the Pentagon’s development of autonomous weapons (should it undertake it) would fundamentally differ from that of Russia and China.

The U.S. is also looking to set international norms for the responsible military development of AI and to bring in European partner nations, many of whose citizens are highly cautious about AI in military settings. Last November, the United States launched a Political Declaration on Responsible Military Use of AI that already has 51 signatories, Horowitz said.

“We're proud of the fact it's not just the usual suspects…if you look at the pattern of countries that have endorsed the political declaration,” he said. 

U.S. officials hope that such a large international consensus will compel Russia and China to adhere to some norms on AI development. 

“Because, again, we think of this as good governance so countries can develop and deploy AI-enabled military systems safely, which is in everybody's interest. Nobody wants, you know, systems that increase the risk of miscalculation or that behave in ways that you can't predict,” he said. “I think trends are heading in the right direction.”

That highlights the importance of strong multinational alliances and institutions in keeping the world safe from new weapons. It also suggests that we are only as safe as those alliances and institutions themselves.