The Evolving Role of AI in Warfare

02 May 2024 / 5 min

AI is taking up an increasingly important place in people's daily lives.

First imagined in the 1950s, Artificial Intelligence (AI) only really took off some 60 years later. The line between reality and fiction is sometimes a fine one, and the failure to regulate weapons equipped with this technology, in particular, could have serious consequences.

According to the Council of Europe, the creation of AI can be attributed to John von Neumann and Alan Turing, both of whom can be seen as the founding fathers of the technology behind it. The very concept of AI, meanwhile, is generally credited to the academic John McCarthy of the Massachusetts Institute of Technology (MIT).

In the summer of 1956, the Rockefeller Foundation funded a conference for the modest sum of 7,500 US dollars: the Dartmouth Conference on Artificial Intelligence. It was at this conference, attended by just twenty researchers (only six of whom attended the whole event), that the field of AI itself was created. The idea quickly became a success but soon ran out of steam, mostly because the technological advances of the time were not sufficient to support AI and make it a practical reality.

Since then, the technology has undergone roughly three major periods of development, or even four if we consider the arrival of ChatGPT as a new turning point. The first period ran from the 1940s to the 1960s, the second from the 1980s to the 1990s, and it was around 2010 that the technology took another leap forward.

But what exactly is Artificial Intelligence?

According to the Council of Europe, AI is a discipline that brings together sciences, theories and techniques. According to IBM, one of the world’s leading IT companies, AI is a technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. In other words, computers are programmed to solve problems that would otherwise require human intelligence. In a certain sense, AI is able to replace the cognitive capacities of a human being in many areas.

AI is taking up an increasingly important place in people’s daily lives. While the degree of exposure to AI varies according to an individual’s social status, there are now more and more devices in which AI is integrated: Siri, Alexa or Google Home, for example, as well as chatbots on websites – programmes that interact with users via live chat software – and of course ChatGPT. Overall, AI is for some a time-saver, and in a world where time is a precious commodity, it has become a valuable tool.

Although the “I” in AI stands for intelligence, the use of the term is contested, as researchers consider it important to use precise terminology. And in the case of AI, even though the results in some areas are more than extraordinary, they still fall well short of human capabilities.

How does Artificial Intelligence work?

In simple terms, AI embraces what we call “machine learning” and “deep learning”. For this to work, algorithms are needed that are themselves inspired by the decision-making processes of the human brain. The difference between AI and a conventional computer programme is that AI does not receive an explicit instruction for how to obtain a result; it simply reproduces a cognitive model – a simplified representation designed to model psychological or intellectual processes.
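To make that distinction concrete, here is a minimal, purely illustrative sketch in Python of what “learning” means in this context: rather than being given the rule, the programme infers it from examples. The data, learning rate and number of steps below are arbitrary assumptions chosen for the illustration, not drawn from any real system.

```python
# A minimal sketch of "machine learning": instead of being told the rule,
# the programme infers it from examples. Here it learns the slope of a
# simple linear relationship (y = 2x) by gradient descent.

examples = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, expected output) pairs

weight = 0.0          # the model's single learnable parameter
learning_rate = 0.01  # how strongly each error nudges the parameter

for step in range(1000):
    for x, target in examples:
        prediction = weight * x
        error = prediction - target
        # Nudge the weight in the direction that reduces the error
        weight -= learning_rate * error * x

print(f"Learned weight: {weight:.2f}")  # converges towards 2.0
```

A conventional programme would have been handed the rule “multiply by 2” directly; here the rule emerges from the data, which is the essence of the approach that deep learning scales up to far more complex tasks.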

There are three types of AI based on capabilities. Artificial Narrow Intelligence (ANI), also known as weak AI, can be trained to solve specific tasks that a human could solve, but in less time; Alexa, Siri and similar assistants are examples of ANI, and it is the only type of AI that has gone beyond the theoretical stage. The second type is Artificial General Intelligence (AGI), also known as strong AI. In this case, machines would have an intelligence equal to that of humans – in other words, a mind with consciousness and self-awareness, able to solve problems, learn and even plan for the future. Artificial Super Intelligence (ASI), the third type, is strictly theoretical. The idea is that, if achieved, such an AI would be more “human” than a human being, capable of reasoning, learning and making judgements on its own. In other words, ASI would amount to a kind of superhuman.

What about the use of Artificial Intelligence in the military field?

AI is already well established in the military and security sectors, whether in vehicles or weapons. Each country can use military AI in different ways, depending on its priorities and, above all, on its available financial and technological resources. Artificial Intelligence is currently used for, among other things, surveillance and reconnaissance, attack prevention and command assistance.

Wars are notorious for accelerating the development of weapons and technology, and the war raging in Ukraine is no exception. The soldiers involved in this war are experiencing situations that make technological development of military resources a necessity. For example, the drones already used in warfare have had to be improved to meet new demands. 

Different kinds of weapons can be transformed into autonomous systems controlled by AI algorithms. But what does “autonomous” mean? Stuart Russell – a professor at the University of California, Berkeley – explains that autonomous means these weapons have the capacity to “locate, select and attack human targets without the intervention of a human being”. At first glance, this type of weapon seems to have a number of advantages, including proven effectiveness, low-cost mass production and a relatively attractive ratio of casualties to military gains, since it reduces the attacker’s own losses and limits human error. Such weapons can also be used in dangerous missions where putting soldiers on the ground would pose an extreme risk.

Yet despite the apparent benefits of this type of weapon, the drawbacks outweigh them. More precisely, these new weapons raise a number of ethical problems.

According to Russell, since these interventions do not require human supervision, we can launch as many attacks as we like, “and therefore potentially destroy an entire town or ethnic group in one go”. For the professor, war is a relatively simple application for AI: “The technical capacity of a system to find a human being and kill him is much easier than developing a self-driving car. This is a graduate student project.”

But what if these weapons were to break down, malfunction or become unresponsive? If a tragedy of this scale were to occur, who would be responsible: the weapons producer, the country that used the weapons, or the weapons themselves? It should also not be forgotten that these devices can be hacked, which can lead to potentially disastrous outcomes. What is more, increased use of this type of technology amplifies the risk of dependence on it, which could lead to a loss of human skills and judgement. It is therefore important to control and regulate this area in order to avoid serious incidents, whether intentional or not.

Part of the problem is that there is no universally accepted definition of such “Lethal Autonomous Weapons” (LAWs). This makes it harder for states to cooperate and reach a consensus that satisfies as many of them as possible. In the meantime, the absence of an agreed definition leaves users free to interpret the term as they see fit, resulting in legal uncertainty and a continued risk for civilians.

This is why the UN has taken a stand, and why researchers are actively campaigning for this technology to be effectively controlled. According to Egypt’s representative to the UN, “An algorithm must not be in full control of decisions that involve killing or harming humans. The principle of human responsibility and accountability for any use of lethal force must be preserved, regardless of the type of weapons system involved.” For the Secretary-General, António Guterres, a ban on weapons operating without human supervision should be in place by 2026. Alongside the United Nations, the International Committee of the Red Cross (ICRC) has urged “states to establish internationally agreed limits on autonomous weapon systems to ensure civilian protection, compliance with international humanitarian law, and ethical acceptability.”

Against this background, multilateral coordination for the creation of a universal framework regulating LAWs should be at the forefront of international policy efforts, in order to counter the potentially disastrous effects of this technology in a world of rising tensions.

Léa Thyssens is a master’s student in International Relations and Editor-in-Chief of Eyes on Europe.

(Edited by Luka Krauss)
