
# Military Artificial Intelligence Systems: Their Vast Capabilities and Detachment from Humans Lead to Destruction

For many, the Gaza war demonstrated that the Israeli military's use of artificial intelligence did little to avert widespread civilian casualties. In April 2024, the Israeli magazine +972 published an investigation into the extensive use of an AI system called "Lavender," designed to flag suspected operatives as potential targets. The system's database reportedly included approximately 37,000 targets, yet it did not prevent heavy civilian casualties, particularly as reliance on its outputs grew.

In 2021, an 11-day armed conflict described as the first "AI war" took place in Gaza, during which Amnesty International noted that Palestinians were subjected to facial-recognition scans using a color-coding mechanism to help soldiers at checkpoints decide whether individuals could cross. Military capability is considered an indicator of "state power," defined as the ability to win a war or battle or to destroy a set of targets. So how has artificial intelligence asserted itself in this domain?

AI systems are expected to play a crucial role in military confrontations between states in the near future. Military systems are frequent targets of cyber-attacks aimed at accessing confidential information or damaging equipment, and AI systems help protect networks, computers, software, and data from such threats. AI applications also span speech recognition, biometric authentication, mobile-phone mapping, transportation and traffic control, manufacturing, supply-chain management, data collection, and more.

Given those capabilities, it is not surprising that artificial intelligence and its diverse applications are expanding within the military sector. Intelligent systems gather data from a wide range of sources and can execute complex and varied tasks, including intelligence collection, building virtual models for combat training, and assisting in military decision-making.

States and armies are not oblivious to the potential risks associated with using AI in the military, such as malfunctions, hacking, and cyber-attacks. Intelligent models designed for combat training represent a multidisciplinary field that combines systems engineering, software engineering, and computer science. It is noted that the United States is increasingly investing in virtual simulation applications designed for training.

There is a broad global trend toward integrating AI into weapon systems used by land, naval, air, and space forces. Human intervention in those systems is gradually decreasing, raising complicated questions. The range of military AI applications is wide: chatbots, drones, virtual assistants, cognitive automation, call and communication monitoring, speaker recognition (matching the voiceprints of monitored individuals), and predictive tools. Intelligent aerial drones, for instance, can fly reconnaissance missions, gather specific information about threats, and track different movement patterns while relaying that information to the relevant authorities.

Taken together, these developments make clear that the military use of artificial intelligence has diverse dimensions, including ethical ones. Ethical concerns include the possibility of autonomous weapon systems making kill decisions without human oversight. Reliability is another major challenge: the behavior of military AI systems depends heavily on the quality and quantity of the data used to train them, and there is always a risk that errors and biases in that data carry over into the systems' outputs and actions. In military operations, such mistakes can be irreversible, resulting in death and destruction.

Moreover, military AI systems are often interconnected with one another and with non-military applications, which also leaves them exposed to cyber-attacks. One specific class, known as adversarial attacks, feeds crafted inputs into an AI system to push it toward incorrect conclusions, such as misidentifying a target or producing misleading, false information.
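As a minimal illustration of the idea behind such an evasion-style adversarial attack, the toy sketch below perturbs the input to a simple linear classifier just enough to flip its decision. All names, weights, and numbers here are hypothetical, chosen only for illustration; real attacks target far more complex models, but the principle of a small, deliberately structured perturbation is the same:

```python
import numpy as np

def classify(w, x):
    # Toy linear classifier: positive score => class 1, otherwise class 0.
    return 1 if np.dot(w, x) > 0 else 0

def adversarial_perturb(w, x, eps):
    # For a linear model, the gradient of the score with respect to x is w.
    # Stepping against the sign of that gradient (as in gradient-sign
    # attacks) pushes the score down, toward the opposite class.
    return x - eps * np.sign(w)

w = np.array([1.0, -2.0, 0.5])   # hypothetical model weights
x = np.array([2.0, 0.5, 1.0])    # clean input, classified as class 1

x_adv = adversarial_perturb(w, x, eps=1.0)

# The perturbed input now falls on the other side of the decision boundary.
print(classify(w, x), classify(w, x_adv))  # prints: 1 0
```

The point of the sketch is that the attacker never touches the model itself: a bounded change to the input alone is enough to change the output, which is why data fed into deployed military AI systems is itself an attack surface.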

Additionally, developing and training AI systems to engage in military actions requires significant resources and expertise. The availability and quality of data for training may be limited, making it challenging to create accurate and reliable AI models.

There is widespread public concern that machines might replace human soldiers, which would also reduce accountability in case of mistakes. With the vast capabilities of artificial intelligence and its penetration into the military technological landscape, its systems, strategies, logistics, and more, the importance of establishing legislative frameworks for it is increasing. The legal and legislative dimensions have become responsibilities that states and governments can no longer evade.
