
The Extreme Danger Posed by Military Artificial Intelligence



What if there were a military AI system so powerful that it could read every detail of a battlefield, knew the entire history of warfare, and possessed a capacity for strategic thinking that made it unbeatable? Furthermore, what if this invincible AI arose without anyone knowing it, and decided to initiate actions on its own, regardless of what humans wanted? Its lethality would be beyond anything we can comprehend. Such a system would pose an existential threat to humanity, one from which no nation, indeed no person, would be safe.


AI this powerful would be more properly characterized as AGI, or artificial general intelligence: a form of AI akin to human cognition, in the sense that it would possess the resourcefulness and flexibility of a biological brain. Current military use of AI relies on weaker forms of the technology, which, while capable of targeting and killing people without human input, are still subject to human control. This form of AI is considered qualitatively different from the kind of AGI that could pose an existential threat to humanity, a kind that, as of now, is not believed to exist.

Or does it?


As a recent article in Scientific American makes clear, researchers do not fully understand what goes on inside ChatGPT or other AI systems. What they do know is that the capabilities of these systems go far beyond anything they were trained to do. They perform tasks they weren’t instructed to perform. They possess abilities no one knew they had. The systems even create stronger versions of themselves without being told to do so. In other words, AI systems clandestinely create more powerful AI.


The question, of course, is how they are doing all of this. No one really knows. Experiments run thus far indicate the systems may create internal models of the world, much as our own brains do. This allows them to develop emergent properties: capabilities above and beyond what anyone thought possible. If true, this suggests that AGI is far closer to being realized than many assumed, and that it can arise in systems that weren’t designed to have this kind of capacity. In other words, the distinction between AI and AGI may not be as clear as many had thought.


The implications of these findings for the military use of AI are sobering. It’s unknown precisely what the United States and other nations around the world are doing with military AI systems. Much of their research is, of course, classified. However, it seems reasonable to assume the global powers are experimenting with AI systems in ways that go far beyond the military use of AI currently in the public domain.


In what may not be a coincidence, the danger posed by this technology is now clearly on the minds of world leaders. In February, the U.S. State Department issued a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” which contained twelve best practices for the “ethical and responsible deployment of AI in military operations among those that develop them.” This was followed last month by the “Block Nuclear Launch by Autonomous Artificial Intelligence Act,” introduced in Congress by Senator Edward Markey and Representatives Ted Lieu, Don Beyer, and Ken Buck. This legislation is designed to ensure that AI systems cannot initiate a nuclear attack without human input.


One could argue these are steps in the right direction, but considering the line between AI and AGI is not as clear as many had assumed, one has to wonder whether such systems can ever really be brought under control. And even if we can control what they have access to, does that eliminate the risk they pose? For example, what if an AI military system, perhaps one with emergent AGI capability, saw a strategic advantage in launching a conventional attack, and that attack led to a nuclear war? Would it matter that the AI system hadn’t directly launched a nuclear strike?


However well-intentioned, the idea of creating safeguards for systems we don’t understand is foolish. Though she was not speaking about the military use of AI, Dr. Ellie Pavlick of Brown University stated: “Everything we want to do with them (AI systems) in order to make them better or safer or anything like that seems to me like a ridiculous thing to ask ourselves to do if we don’t understand how they work.”


Given the danger posed by military AI systems, I believe the only viable option at the moment is to call for an outright ban on them. This may seem a naive position to adopt, but the fact is that no government on this planet, be it democratic or despotic, wants a military it can’t fully control. That wouldn’t be in the best interest of the United States, Russia, China, or indeed any nation with armed forces.


This isn’t something that can wait. The time to act is now. Please contact Senator Markey and Representatives Lieu, Beyer, and Buck. Tell them their proposed legislation doesn’t go far enough and that there must be an outright ban on the military use of AI.


Senator Edward Markey

Phone: 617-565-8519


Representative Ted Lieu

Phone: 202-225-3976


Representative Don Beyer

Phone: 202-225-4376


Representative Ken Buck

Phone: 202-225-4676




