The Association of the United States Army’s 2021 annual conference introduced some lethal autonomous weapon systems, or LAWS. You might call them killer robots.
One system resembled Spot, the familiar dog-shaped multipurpose robot, but carried a Special Purpose Unmanned Rifle, or SPUR, mounted on top.
Human Rights Watch calls for a preemptive ban on the development, production, and use of fully autonomous weapons, including LAWS. The International Committee of the Red Cross wants nations to ban killer robots. United Nations Secretary-General António Guterres has called LAWS “morally repugnant and politically unacceptable.” Pope Francis has also warned against them.
But are there laws in place?
Humanitarians may disapprove, but are there any actual laws forbidding or even regulating the use of robots in wars?
Not in the United States. The U.S. has also blocked international treaties intended to ban the use of LAWS, calling such resolutions “premature.” Some 30 countries have called for a ban, but the U.S., Russia, South Korea, China, Israel, and the United Kingdom are all developing LAWS.
The Geneva Conventions of 1949, which serve as the internationally recognized laws of war, did not foresee killer robots. Their rules assume that human beings will make decisions guided by principles such as proportionality and precaution. Robots cannot apply those principles, nor can they judge the context of an action the way humans can. These limitations are what drive many leaders to call for a ban on LAWS.
The nations standing against bans naturally include those that have invested in the technology, though they usually favor some form of regulation. There is no consensus, however, on what that regulation should look like or when it should take effect.
“You simply can’t trust an algorithm – no matter how smart – to seek out, identify and kill the correct target, especially in the complexity of war,” says Noel Sharkey, chairman of the International Committee for Robot Arms Control.
Certainly, it has become clear that the machine learning models and algorithms that drive robot actions are susceptible to human frailties, inheriting the biases and errors of the people who build and train them.
U.S. official position
The U.S. State Department, however, has cautioned that too little information is available at this point for sensible rules to be made. “We must not be anti-technology and must be cautious not to make hasty judgments about emerging or future technologies especially given how ‘smart’ precision-guided weapons have allowed responsible militaries to reduce risks to civilians in military operations,” the statement read.
The official U.S. Statement on LAWS suggests that “killer robots” could actually reduce the number of casualties in wars.
Should there be laws limiting the development of LAWS before it takes place, or should regulations come only after it’s clear what the technology could do?