Red Cross Calls for More Limits on Autonomous Weapons
Experts said the group’s unique stature might get governments to the negotiating table at last.
The International Committee of the Red Cross is calling for new international rules on how governments use autonomous weapons, warning that such weapons will pose new challenges for international humanitarian law and bring “significant risks of harm to civilians and combatants alike,” ICRC President Peter Maurer said in a speech Wednesday.
It's a move that will reshape the international discussion on autonomous weapons, experts told Defense One.
Wednesday’s call from the ICRC is significant because the group has unique standing among governments as a humanitarian organization that mitigates the effects of conflict. It refers to itself as the “guardians” of international humanitarian law.
Autonomous weapons are those that “select and apply force to targets without human intervention...on the basis of a generalized ‘target profile’ and sensors,” Maurer said in his speech. “What sets them apart from other weapons is that, after being activated by a person, they fire themselves when triggered by their environment, not by the user. This means that the user does not choose the specific target.”
Such weapons exist on the battlefield today only in extremely limited circumstances, such as the Phalanx weapon system for ships, which automatically detects and fires on incoming missile threats. But interest in these types of weapons is growing, Maurer said, raising the prospect that they will be used in fights involving humans in coming years.
“Current technology and military developments are fueling interest in the use of autonomous weapons that attack a wider range of targets, over greater areas and longer durations, and even in urban areas—complex and dynamic environments,” he said.
There is some indication that future is closer than you might think. In January, Gen. John Murray, the head of U.S. Army Futures Command, said that in future engagements, where communications are heavily contested, human operators may not be able to be “on the loop” to supervise autonomous weapons like drone swarms.
“The policy of a human on-the-loop, when you’re defending against a drone swarm, a human may be required to make that first decision but I’m just not sure any human can keep up with a drone swarm, so that’s an area where I think, in the U.S., we can have some conversations going forward in terms of how much human involvement do you actually need when you’re talking about non-lethal decisions from a human standpoint,” he said.
The ICRC is recommending new limits on such weapons under international humanitarian law. Specifically, the group wants to mandate that autonomous weapons cannot attack humans; to limit the duration and geography in which they can be used (for instance, to situations where civilians aren’t present); and to set requirements for how humans interact with them, to ensure “timely intervention and deactivation” when something goes wrong. “Our view is that unpredictable autonomous weapons should be ruled out, notably because of their indiscriminate effects, and that this would be best achieved through a prohibition of unpredictable autonomous weapons,” Maurer said.
Paul Scharre, vice president and director of studies at the Center for a New American Security, or CNAS, said that “many states are likely to take their cues from the ICRC in adopting their positions on autonomous weapons.” He described the ICRC position as a nuanced approach, incorporating perspectives from advocacy groups (some of which have sought a total ban) and the U.S. government, which has pushed for compliance with existing laws of war.
“If some of their recommendations gain traction with states, expect the debate to pivot to the meaning of various terms, such as what constitutes an ‘unpredictable autonomous weapon’ or what kinds of human-machine interaction are required,” he told Defense One.
Frank Sauer, a senior research fellow at the University of the Federal Armed Forces in Munich and the head of research at the Metis Institute for Strategy and Foresight, called the announcement “potentially game-changing,” especially for the discussion about the use of lethal autonomous weapons in the United Nations.
“The ICRC's word carries a lot of weight. Past arms control and disarmament processes—such as the prohibitions on blinding lasers or anti-personnel mines—were greatly affected by the ICRC's positioning. It stands to reason that the unambiguous recommendation of new, binding regulation will set the course for future negotiations on weapon autonomy,” he told Defense One.
“After years of outreach and active engagement by academics, civil society representatives, and members of the scientific and technical communities with diplomats and military professionals at the United Nations, it is very encouraging to see that the process is now taking this crucial step forward,” he said.