
War ethics: Are drones in Ukraine a step toward robots that kill?

By indianadmin

Apr 29, 2022

Amid the bewildering array of brutality in Ukraine, military ethicists have been keeping a close eye on whether the war could also become a proving ground for drones that use artificial intelligence, or AI, to decide whom to hurt. 

Early in the war, Moscow was rumored to be employing “kamikaze” drones as “hunter-killer robots.” The Russian company that created the weapon boasted of its AI skills – the kind that could potentially enable a machine rather than a human to choose its targets. But the consensus among defense analysts has been that these claims were more marketing hype than credible capability.

Why We Wrote This

At some point, militaries will likely allow artificial intelligence to decide when to pull the trigger – and on whom. Ukraine is showing just how close that moment might be.

Yet there’s no doubt that the demand for AI in drones has been voracious and growing. For now, the drones on display in Ukraine all have a human pulling the trigger, so to speak. “I don’t think we see any significant evidence that AI or machine learning is being employed in Ukraine in any significant way for the time being,” says Paul Scharre, an expert who formerly worked on autonomous systems policy at the Pentagon. “But I don’t think that will last over time,” he adds.

The question of whether a weapon is ethical is answered in large part, adds security expert Gregory Allen, on whether it’s in the hands of a military “that has any intention of behaving ethically.” 

The reason such human control seems unlikely to last: before the war, drones were seen as a useful counterterrorism tool against adversaries without air power, but not as particularly effective against big state actors, which could easily shoot them down. The current conflict is proving otherwise.

Wars have a way, too, of driving technological leaps. This one could teach combatants – and interested observers – lessons that bring the world closer to AI making “kill” decisions on the battlefield, analysts say. “I think of the Ukrainian war as almost a gateway drug to that,” says Joshua Schwartz, a Grand Strategy, Security, and Statecraft Fellow at Harvard University.

“Scary” – but so is all war

In March 2021, a great fear among AI ethicists was realized: A United Nations panel of experts warned that Turkey had deployed a weapon to Libya – the Kargu-2 quadcopter drone – that could hunt down retreating troops and kill them without “data connectivity” between the human operator and the weapon. 

Amid the flood of commentary decrying the use of so-called terminator weapons, a report from the U.S. Military Academy at West Point was more circumspect. It argued that the ethical debate surrounding the Kargu-2 should concentrate not on whether it had killed autonomously, but on whether, in doing so, it had complied with the laws of armed conflict.

“The focus of humanitarian concerns should be the drone’s ability to distinguish legitimate military targets from protected civilians and to direct its attacks against the former in a discriminate manner,” wrote the paper’s author, Hitoshi Nasu, a professor of law at West Point. 

Unlike the first iteration of autonomous weapons – land mines, for example – today’s AI systems are typically designed to avoid civilian casualties. For this reason, people should be more inclined to “embrace them,” Mr. Nasu posited in an interview with the Monitor.

But many critics can’t shake sci-fi-f
