
Image: "Predator Drone" by Doctress Neutopia, licensed under CC BY-NC-SA 2.0 (https://creativecommons.org/licenses/by-nc-sa/2.0/).

AI-Integrated Weapon Systems - A Double-Edged Sword?

When discussing Artificial Intelligence (AI)-integrated weapons, images of terminators and killer robots quickly come to mind. It is true that autonomous machines and weapons are becoming increasingly common in warfare, but the reality of AI is still a far cry from what Hollywood portrays as “the future machines of war”. In practice, it often comes down to improved data collection and processing, and to machines or devices that suggest the best course of action while accounting for potential changes in their environment.

...the reality of AI is still a far cry from what Hollywood portrays as "the future machines of war".

There are various merits to AI-enabled weapons, which include anti-submarine sensor networks, autonomous drones, electronic warfare capabilities and more. AI applications can make autonomous drone surveillance more accurate and improve existing intelligence and reconnaissance efforts. Backed by machine learning, automatic target recognition could help decrease casualties among non-combatants, as sketched below. More generally, AI applications in early warning systems could help compress decision-making time frames, allowing for quicker responses in tactical operations. However, the benefits of using AI in weapon systems come with various risks, stemming from technological barriers as well as geopolitical pressures.
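To make the target-recognition point concrete, the sketch below shows one safeguard pattern often discussed in this context: the model only proposes an engagement when its confidence is high and no protected objects are in the scene, and defers everything else to a human operator. This is a minimal, hypothetical illustration, not a description of any fielded system; the labels, threshold and `recommend_action` function are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical output of an automatic target recognition (ATR) model."""
    label: str         # e.g. "armored_vehicle", "civilian_vehicle", "person"
    confidence: float  # model confidence in [0, 1]

PROTECTED_LABELS = {"civilian_vehicle", "person"}  # never auto-engage these
CONFIDENCE_THRESHOLD = 0.95  # illustrative; real thresholds are a policy choice

def recommend_action(detections: list[Detection]) -> str:
    """Recommend, never decide: anything ambiguous is deferred to a human."""
    if not detections:
        return "DEFER: nothing recognized, human review required"
    if any(d.label in PROTECTED_LABELS for d in detections):
        return "HOLD: protected object in scene, human review required"
    if all(d.confidence >= CONFIDENCE_THRESHOLD for d in detections):
        return "PROPOSE: engagement suggested, awaiting human authorization"
    return "DEFER: low confidence, human review required"

scene = [Detection("armored_vehicle", 0.97), Detection("person", 0.88)]
print(recommend_action(scene))  # -> HOLD: protected object in scene, ...
```

Note that the sketch never issues a fire command on its own; this "propose, don't decide" pattern is exactly the human supervision the next section argues is still indispensable.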

Technological barriers and geopolitical pressure

While AI systems seem promising for future warfare, at present their main use is to support their human counterparts, and for good reason. At its current stage, AI cannot operate fully autonomously in weapon systems without human supervision, because there is a gap between an AI’s ability to read data and its ability to reason with that data. In simpler terms, AI has not matured enough to work on its own. To illustrate, without human supervision autonomous strike drones may still engage in indiscriminate target selection, or cause collateral damage because they cannot reason with the data they are processing. This is especially dangerous with weapons designed to target human beings, such as anti-personnel weapons. If such an incident happens, the compressed decision-making cycles of autonomous weapons make it increasingly difficult for humans to take corrective action in a timely fashion, since the machine is simply faster than its human counterpart, as the back-of-the-envelope calculation below illustrates.
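All numbers in this toy calculation are invented, but they show why a human corrective action arrives too late once the machine's decision cycle is much shorter than human reaction time.

```python
# Toy illustration of compressed decision cycles; all numbers are invented.
MACHINE_CYCLE_S = 0.05   # hypothetical sense-decide-act loop of the weapon (50 ms)
HUMAN_REACTION_S = 1.5   # hypothetical time for an operator to notice the error
ABORT_TRANSMIT_S = 0.5   # hypothetical time for an abort command to reach the system

# Number of autonomous decision cycles completed before a human abort can land:
cycles_before_abort = (HUMAN_REACTION_S + ABORT_TRANSMIT_S) / MACHINE_CYCLE_S
print(f"Cycles completed before the abort arrives: {cycles_before_abort:.0f}")  # 40
```

Forty machine decisions per human correction is an invented figure, but the asymmetry it illustrates is the core of the argument: supervision only works if the machine's tempo leaves room for it.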

Furthermore, the use of immature AI in weapons becomes a more realistic concern once we consider geopolitical pressures, which could spark a race among states for supremacy in AI-enabled weapons. According to Maas, the strategic advantage obtained with AI is substantial enough for states to want to stay ahead of adversaries and competitors. Geopolitical pressures may also push states into using AI-enabled autonomous weapons prematurely. This goes hand in hand with the perception that an adversary’s AI capabilities are operational, which could push states to pursue preemptive strikes in which safety regulations, operational limitations and verification standards are deliberately ignored. The false belief that an adversary’s AI is operational can be as detrimental as AI that actually works. While the last race for weapon supremacy, the nuclear arms race, was eventually regulated by international law, AI applications are too diverse for a clear-cut ban, meaning that regulation is a challenge in itself.

What if AI-enabled weapons are functioning?

In the event that AI becomes fully operational with minimal human supervision, there are still risks to keep in mind. According to Johnson, developments such as compressed decision-making can allow for quicker responses, but they may also mean that an AI accelerates escalation, for instance by launching missiles. After all, an AI seeks to solve a problem, whether the action it takes is a reasonable one or not. A more subtle but realistic risk is that AI sensor technology could make it easier to detect an adversary’s arsenal or nuclear weapons. This could create a “use it or lose it” situation: an adversary that knows the whereabouts of your ‘trump card’ cripples its efficacy. Were this to come to pass, it could trigger a nuclear crisis for the international community at large.
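The "use it or lose it" logic can be made concrete with a toy expected-payoff comparison. All probabilities and payoffs below are invented; the point is only to show that as AI-assisted sensing raises the chance of locating the opponent's arsenal, striking first starts to look better than waiting.

```python
# Toy "use it or lose it" comparison; all probabilities and payoffs are invented.
PAYOFF_WIN = 1.0           # first strike succeeds, adversary disarmed
PAYOFF_RETALIATED = -10.0  # first strike misses, adversary retaliates
PAYOFF_DISARMED = -5.0     # waited, and the adversary located our arsenal first

def expected_payoffs(p_detect: float) -> tuple[float, float]:
    """Expected value of striking first vs waiting, given detection probability."""
    strike = p_detect * PAYOFF_WIN + (1 - p_detect) * PAYOFF_RETALIATED
    wait = p_detect * PAYOFF_DISARMED  # symmetric sensing: they may find us too
    return strike, wait

for p in (0.3, 0.6, 0.9):
    strike, wait = expected_payoffs(p)
    print(f"p_detect={p:.1f}: strike first={strike:+.1f}, wait={wait:+.1f}")
# At p_detect=0.9, striking first (-0.1) beats waiting (-4.5): better sensors
# create the very first-strike incentive the text describes.
```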

One such scenario would be a nuclear conflict between Pakistan and India. Pakistan has stated that it would not use nuclear weapons unless its conventional military force could not stop an invasion by India. Theoretically, through the use of AI sensor technology, India might then be able to locate Pakistan’s nuclear arsenal, taking away Pakistan’s nuclear card and clearing the way for a military invasion. In the same vein, AI technology could allow either Pakistan’s or India’s military to trump the other’s conventional forces, making the use of nuclear weapons as a last resort more tempting.

The verdict: a curse or a blessing?

AI can bring a myriad of benefits to conventional warfare, from more precise drone strikes and reconnaissance to compressed decision-making in automated weapon systems. However, in its current state, AI does not have the operational qualities required to forgo human supervision, and the mere idea of operational military AI can incite uncertainty and provoke tension between states. While terminators are not exactly around the corner, at present the drawbacks of using AI may outweigh the benefits; one can only hope that a full-scale AI war, with or without T-800s, can be avoided.

Recommended readings

Johnson, James. (2019). Artificial intelligence & future warfare: Implications for international security. Defense & Security Analysis, 35(2), 147-169.

Kibria, Mirza Golam, Nguyen, Kien, Villardi, Gabriel Porto, Zhao, Ou, Ishizu, Kentaro, & Kojima, Fumihide. (2018). Big data analytics, machine learning, and artificial intelligence in next-generation wireless networks. IEEE Access, 6, 32328-32338.

Maas, Matthijs M. (2019). How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons. Contemporary Security Policy, 40(3), 285-311.

Scharre, Paul. (2016). Autonomous weapons and operational risk. Ethical Autonomy Project. Center for a New American Security.

Yan, Guilong. (2020). The impact of Artificial Intelligence on hybrid warfare. Small Wars & Insurgencies, 31(4), 898-917.