The Ukraine-Russia war has highlighted a relatively subtle but deeply disturbing trend in warfare: a growing reliance on artificial intelligence in conducting military operations. In defending their country, Ukrainians have used drones that rely on a mixture of human and AI guidance to bomb their targets. Such technology may mark another step toward a world in which fully autonomous drones are allowed to kill people without any human oversight.
The drones currently being used by Ukraine are the Polish-made Warmate and the U.S.-made Switchblade 600. These drones are guided by a human operator, who selects a target for bombing. After a target is selected, the drone can be left hovering over it for several minutes. Once the drone’s AI determines that it has a clear shot at the target, it attacks without the operator having to make the final decision on whether to do so.
Although not wholly independent of human control, the Warmate and Switchblade point toward future lethal technology untethered from direct human oversight. The Ukrainians are also using a U.S.-made system that is not meant to kill people but to knock out enemy drones; this anti-drone system is already fully autonomous. Meanwhile, Israel has a drone, designed to destroy radar installations, that can hover over its target for up to nine hours. One can easily imagine the same type of AI now used to automatically destroy drones or radar installations being used to destroy human beings in the same way.
In fact, the CEO of AeroVironment, the manufacturer of the Switchblade 600, has said, “The technology to achieve a fully autonomous mission with Switchblade pretty much exists today.” He predicts such missions will be carried out in a few years.
AI that can independently carry out lethal military missions has been a serious pursuit of military research for some time. Heidi Shyu, the U.S. undersecretary of defense for research and engineering, has identified as a U.S. military goal the deployment of unmanned vehicles that can carry out missions, including attacks, with limited human guidance. “I believe that we need trusted AI and trusted autonomy to be able to operate without GPS,” Shyu has commented.
AI might already have carried out lethal military missions. A United Nations investigation into Libya’s civil war reported an episode in 2020 in which one faction used “lethal autonomous weapons systems… programmed to attack targets without requiring data connectivity between the operator and the munition.” The device used may have been the Turkish Kargu-2 drone, which can fire various types of ammunition and has some capacity for autonomy.
Any new military technology poses the danger of initiating an arms race as various nations seek to acquire their own versions of the technology. The arms race leads to the technology becoming more common, cheaper to make or pirate, and more easily accessible to terrorists and criminals. These dangers apply to the development of drone weaponry as well.
Autonomous drone weaponry poses another, distinctive danger, however. If national militaries deploy drones or other technology programmed to seek out, identify, and kill people designated “enemies,” then life-or-death decisions will be dependent on the quirks of a computer program. Civilians or others who should not be targeted in military operations will almost certainly be killed by mistake.
Law professor Hitoshi Nasu has pointed out some of the problems presented by autonomous weapons. If the AI is programmed to target people with guns, will it kill farmers wielding rifles to protect their livestock? If the AI is programmed to target soldiers wearing uniforms, can it discern when a soldier is trying to surrender? Also, could the AI become confused about which uniforms are specifically military, and kill mail carriers or hospital orderlies instead?
Further, all these problems become even more worrisome if (as has been U.S. policy for over 20 years) drones continue to be used to target alleged terrorists, who presumably would wear civilian clothes and live in civilian settings. Even setting aside all the other legitimate objections to targeted killing, the question arises: how would an AI reliably discern who is the correct intended target?
Granted, having drones or other weapons operated by humans clearly has not guaranteed civilian safety. However, the presence of a human operator offers at least some oversight of operations, however minimal. A human being can reconsider in light of new information (or pangs of conscience) and stop a mission. Would AI do so?
Rather than continuing to develop autonomous weapons, nations should work together to limit the development and spread of these weapons through an international arms control agreement. People should not be exposed to the new danger of automatic killing machines.