The war in Ukraine accelerated the development of killer robots
By both Russia and NATO
A new directive from the US Department of Defense calls for faster development of autonomous weapons systems and is the first in a decade to focus on the artificial intelligence of these systems. NATO had issued a similar directive in October 2022, aiming to "maintain the alliance's technological superiority."
Both announcements come after important lessons learned by militaries in the Ukraine and Nagorno-Karabakh wars, positioning artificial intelligence as "the future of military conflict."
Currently, most autonomous weapons in Ukraine require human intervention for important decisions. But militaries see the advantage of fully autonomous weapons: robots that can decide on their own to hunt down and engage targets, with no need for human supervision.

Russia, for its part, has announced plans to produce an autonomous version of its Marker reconnaissance robot that will be able to hunt down and engage Leopard and Abrams tanks on its own.
Ukraine's Minister of Digital Transformation emphasized that fully autonomous weapons are the "logical and inevitable next step of war," noting that soldiers will begin to see them on the battlefield within the next six months.
Proponents of fully autonomous weapons systems argue that the technology will benefit soldiers by keeping them off the battlefield, while allowing military decisions to be made at superhuman speed, drastically improving defense capabilities.
Opponents of these systems have been fighting for a decade to ban the research and development of autonomous weapons systems, especially those that target people rather than vehicles or infrastructure. They argue that in war, the decision to take a human life should remain in human hands; leaving it to an algorithm would amount to the ultimate form of digital dehumanization. Along with human rights groups, the Campaign to Stop Killer Robots stresses that autonomous weapons systems lack the human judgment required to distinguish a civilian from a military target.
Until now, humans have been responsible for protecting civilians and limiting combat casualties, ensuring that the use of force is proportionate to mission objectives. But once AI weapons take to the battlefield, who will be held responsible for the needless deaths of civilians?