Discussion of AI in warfare has long been abstract. With the Russian invasion of Ukraine, however, the conversation has become concrete and immediate. Wired has written about Russia’s drones, which use AI to identify their targets.
“Advances in AI have made it easier to incorporate autonomy into weapons systems, and have raised the prospect that more capable systems could eventually decide for themselves who to kill,” the article points out.
U.S. drones don’t make those decisions
The U.S. has drones of its own, and may provide them to Ukraine, but these drones don’t make decisions about which targets to attack. When drone makers describe their systems as autonomous, they often mean only that a drone can adjust its trajectory to reach a target programmed by a human being.
The U.S. does not support a ban on truly autonomous drones, however.
Back in Ukraine
Russia also has unmanned ground vehicles. Fortune points out that AI has the capacity to analyze the “crowd-sourced intelligence” coming out of Ukraine (via TikTok, for example) and to coordinate operations involving both ground and air weapons.
It’s clear that decisions about the agreements that can or should govern military applications of AI are no longer the stuff of science fiction.