The Army’s Big Convention Was Full of Armed Robots
For now, Western militaries remain reluctant to buy ground weapons that choose their own targets. But that may change.
The U.S. Army may still be working out rules and concepts for using armed unmanned vehicles, but a wide variety were already on sale at this week’s Association of the U.S. Army show.
Germany’s Rheinmetall, for example, debuted its Mission Master-CXT, a mid-sized, wheeled unmanned vehicle that can be rigged to haul weapons, launch them by remote control—or autonomously find targets and destroy them.
“The restriction on the weapon system is a client decision, not what we're providing,” said Alain Tremblay, the company’s vice president of business development and innovation. “If a client says, ‘I'm comfortable from a rule-of-engagement, a law-of-armed-conflict and Geneva-Convention [angle], to have this thing finding a target and engaging a target’—it can do that.”
More conventionally, Tremblay said, “You can bring your reticle on a tablet and do a tele-operated engagement, but you're not restricted in the level of autonomy.”
The company has sold about three dozen unmanned trucks in four years, but expects the CXT, whose hybrid diesel-and-electric powertrain can haul up to a metric ton, to become its flagship.
“There is no limitation,” he said. “It's all depending on what the client's tactical requirements are, in order to do the engagement.”
So far, Western countries in particular are somewhat reluctant to buy and deploy fully autonomous mobile weapons, Tremblay said.
“Everybody is talking about keeping a man in the loop. That's what they want today. What are they gonna want in two years, or in three years?” he said.
That’s why Rheinmetall is building its unmanned ground vehicles so they can be retrofitted as a client’s desires change, Tremblay said.
The U.S. Army’s robotic combat vehicle program could be one of the biggest clients for future armed ground vehicles. But the program still has a lot of work to do before it starts placing large orders. In theory, Rheinmetall’s approach should match up well with the Army, whose unmanned-vehicle program managers stress that they need open-architecture vehicles that can accept upgrades as technology improves—particularly fast-moving artificial intelligence tools.
The service wants to “decouple hardware from software development,” which will enable the Army to build the brains of future robot combat vehicles separately from the platform, said Col. Jeffery Jurand, a U.S. Army program manager for maneuver combat systems.
The Army is in the best position to do that software development because of the important role that soldiers will play in the development of future ground combat robots.
Brig. Gen. Geoffrey Norman, director of the Army’s Next-Generation Combat Vehicle Cross Functional Team, described recent experiments at Fort Hood, Texas.
“We learned a lot from soldiers about capabilities that they desire in robotic combat vehicles, things that they would want autonomy to do for them, and then other things that they would prefer to do themselves and they may not need to have AI do for them,” Norman said. “In particular, soldiers are very excited about capabilities [robotic combat vehicles] provide to help detect enemy vehicles…to detect adversaries and to detect anomalies in the environment. Soldiers still want to be in the loop for deciding and assessing the identification of those targets for those anomalies.”
That follows other experiments in which soldiers tested how well robots fire with human operators in the background, signing off on the targeting decisions.
In June 2021, the Army began a series of live-fire experiments with a test ground robot dubbed Project Origin. In a video produced that month, Todd Willert, the Project Origin program manager, described two versions of the experiments. In the first, the robot fired while the human operator did the targeting, using input from the robot and from an overhead drone.
The second version of the experiment introduced AI targeting software via the drone platform. The software, Willert said, “automatically determines where that location of the target is. And now we'll just do a script to basically say, ‘it's this far away.’ But it was incredible. I mean, we were able to get rounds right on target within eight rounds, and then provide immediate suppression. And had that been an enemy, they wouldn't even know where the rounds were coming from.”
The U.S. military has a longstanding doctrine that humans should be “on the loop” in targeting decisions. But that policy allows exceptions, so it’s more of a preference than a hard rule. The Pentagon has also published a long list of ethics principles to guide not only decisions about what robots can fire on but also how to design and test them to make sure that they behave as predicted. Other sorts of autonomy, such as self-navigation, are certainly allowable.
In July 2021, Gen. John Murray, then head of Army Futures Command, which leads the development of future robotic combat vehicles, recalled his own days training on tanks. What qualified as excellent human performance wouldn’t pass the smell test for a machine, he told NPR’s On Point.
“They’d say you had to have a 90% success rate, which is pretty good on the flashcards to qualify to sit in that gunner seat. But with the right training data and the right training, I mean, I can't imagine not being able to get a system, an algorithm, to be able to do better than 90% in terms of saying ‘this is the type of vehicle that you're looking at,’ and then allowing that human to make that decision on whether the machine is right or not, and to pull the trigger.”
That says a lot about the future role of human operators on the robotic battlefield and how human limitations will one day be a bigger factor than the limitations of AI.
Elizabeth Howe contributed to this post.