If a human soldier commits a war crime, he has to face the consequences (at least in theory). The same goes for human operators of drones (again, in theory). But if a fully autonomous war machine with no human operator goes rogue and kills a whole bunch of innocent people, who would be responsible? Its programmers? The manufacturers? Finding someone to blame would be hard enough, and proving it in a court of law would be nearly impossible.
It’s a scenario we’ve seen played out in countless sci-fi movies. But this is not RoboCop or Skynet; this is real life, and these machines are startlingly close to being realized. That’s why Harvard Law School and Human Rights Watch want to ban “killer robots” before they can become a reality. In a new report to the United Nations, they argue that there are serious moral and legal concerns surrounding fully autonomous weapons—and that they must be outlawed.
In an age when so much of modern warfare is carried out by pilotless flying killing machines, it’s not at all far-fetched to say that within a matter of years we’ll have battlefield robots that make their own decisions. Already, Israel’s Iron Dome defense system, for instance, is pre-programmed to intercept and neutralize rockets and other projectiles coming into Israel. (The US military employs a similar system.)
“Many people question whether the decision to kill a human being should be left to a machine,” the report says. “There are also grave doubts that fully autonomous weapons would ever be able to replicate human judgment and comply with the legal requirement to distinguish civilian from military targets.”
The report was released a few days ahead of an April 13 UN meeting in Geneva that will weigh the costs and benefits of autonomous weapons. Delegates will consider adding “killer robots” to the Inhumane Weapons Convention (formally the Convention on Certain Conventional Weapons), which currently outlaws blinding laser weapons and restricts certain uses of incendiary weapons such as flamethrowers, among other weapons of an especially heinous nature.