AI-controlled drones can make autonomous decisions about whether to kill human targets, The New York Times reported.
Critics say the use of these so-called "killer robots" is a deeply disturbing development, as it hands life-and-death battlefield decisions to machines with no human input.
Several governments are lobbying the UN for a binding resolution restricting the use of AI killer drones, but the US, Russia, Australia, and Israel favor a non-binding resolution.
“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, told The Times. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue, and an ethical issue.”
According to a notice published earlier this year, the Pentagon is working toward deploying swarms of thousands of AI-enabled drones.
Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.
“Individual decisions versus not doing individual decisions is the difference between winning and losing — and you’re not going to lose,” he said.
“I don’t think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves.”
New Scientist reported in October that Ukraine has already deployed AI-controlled drones on the battlefield in its fight against the Russian invasion, though it's unclear whether any have taken actions resulting in human casualties.
It sounds like an incredibly bad idea, but an inevitable one.