
FROM JUDGEMENT TO ALGORITHM: THE LEGAL VACUUM IN AI-DRIVEN WARFARE

  • Writer: BJIL

Muhammad Mustafa Arif is a final-year LL.B. (Hons) candidate at Pakistan College of Law (University of London). He is a regular columnist for The News International and The Friday Times, and has contributed legal commentary and opinion pieces to LEAP Pakistan. Demonstrating a strong commitment to legal research and writing, he has interned at leading law firms including Awais Law DRT, Bhandari Naqvi Riaz, and Raja Muhammad Akram & Co.

Photo By Emad El Byed

The rapid development and deployment of autonomous weapons powered by Artificial Intelligence (AI) marks a profound shift in contemporary warfare. As described by the United States Department of Defense, these weapons can, once activated, identify and attack targets independently, without further human intervention. Often pictured as drones or robots, they are no longer the stuff of speculation. Neil Davison, a senior scientific and policy adviser at the International Committee of the Red Cross (ICRC), has warned: “Autonomous weapons are not a product of science fiction in a far-off dystopian future. They are an immediate humanitarian concern and need an urgent, international political response.”

 

This problem is particularly salient in Gaza, where the Israel Defense Forces (IDF) have deployed AI targeting systems, raising grave legal, ethical, and humanitarian issues. The deployment of systems such as “Lavender” and “Gospel” has underlined the need for universal international regulation to protect civilians and ensure compliance with International Humanitarian Law (IHL).

 

Deployment of AI in Gaza

 

AI-powered autonomous weaponry has become central to modern warfare, raising serious ethical concerns. Israel has incorporated AI extensively into its military operations, often using the Gaza Strip to develop and refine advanced military technologies before exporting them globally.

 

Israel has developed AI-based programs called “Lavender,” “Gospel,” and “Where’s Daddy?,” which have played a critical role in identifying assassination targets.

 

Lavender is the most prominent AI-powered targeting system used by the IDF. It is designed to identify individuals allegedly associated with militant activity. In the initial stages of the conflict, Lavender reportedly labeled as many as 37,000 Palestinians as suspected militants, and further reporting indicates that its use has contributed to thousands of civilian casualties.

 

Gospel is used primarily to identify buildings and infrastructure believed to be linked to militant groups. It assists the IDF in selecting airstrike targets by analyzing intelligence inputs, satellite imagery, and intercepted communications. Reports suggest Gospel has been involved in numerous strikes and has contributed to significant casualties, though the exact toll remains uncertain.

 

“Where’s Daddy?” assists the IDF in monitoring individuals designated as high-priority targets. The system tracks their movements and notifies forces when they return home, increasing the likelihood of a strike at that moment. According to multiple reports, the tool has facilitated targeted killings that have frequently claimed the lives of family members and other non-combatants in the vicinity, resulting in thousands of deaths.

 

In the ongoing conflict, Israel has relied heavily on AI to inform life-or-death targeting decisions with minimal human involvement. Strictly speaking, AI does not itself make the decisions or cause the devastation unfolding in Gaza; rather, it heavily shapes human decision-making through cognitive bias. A key example is “automation bias,” the tendency to over-trust machine-generated outputs and to skip questioning or verifying them, especially under time pressure. Just as ChatGPT cautions users that the system can make mistakes and that its outputs should be double-checked, such caution is all the more crucial in armed conflict: the cardinal IHL principle of distinction requires parties to an armed conflict to distinguish between civilian and military objects at all times. Over-reliance on AI tools in warfare without critical human oversight can lead to disastrous outcomes, including the unintended loss of civilian life. This misplaced trust magnifies risk and blurs the line between human judgment and machine error in critical situations.

 

Ethical Dilemmas:

 

AI has the potential to enhance nearly all aspects of military operations, from strategic planning and troop deployment to personnel training. AI can optimize a range of warfare systems, including weapons, sensors, navigation, aviation support, and surveillance, by increasing operational efficiency and reducing reliance on human intervention. However, these systems must be designed and used in line with best practices and the specific functions they are intended to serve.

 

Productivity: The United States, the global leader in AI development, recognizes that war places immense physical and mental strain on soldiers, leading to fatigue and conditions such as Post-Traumatic Stress Disorder (PTSD) that can hinder focus, performance, and decision-making. This fatigue increases the risk of human error, potentially jeopardizing mission success and resulting in injuries or defeat.

 

AI, however, offers a solution by allowing soldiers to conserve their energy and allocate their time more efficiently, reducing the burden on them and improving overall effectiveness in the field. AI enhances decision-making by enabling more accurate data analysis, which improves targeting and reduces errors. Automated systems allow for quick decisions in combat, thereby minimizing mistakes. Drones and AI help identify and communicate potential threats, offering greater precision in detecting distant objects and providing a strategic advantage in preparing for attacks.

 

Over-Reliance: Lavender and Gospel use machine learning to differentiate between military targets, civilians, and civilian structures. However, if decision-makers act on these outputs without adequate scrutiny or supplementary information, an issue that has been reported, the result can be attacks that harm the civilian population present. Typically, there is a “human in the loop” who reviews and approves or rejects AI recommendations. However, Israeli soldiers often treat these AI outputs as if they were human decisions, spending as little as twenty seconds reviewing a target before launching a strike. Army leadership reportedly encourages automatic approval of Lavender’s kill list, assuming its accuracy, even though Lavender has a minimum estimated error rate of 10%.
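To put that error rate in perspective, here is a rough back-of-the-envelope illustration that simply combines the two figures reported above; it is not an official estimate:

\[
37{,}000 \ \text{flagged individuals} \times 10\% \ \text{minimum error rate} \approx 3{,}700 \ \text{people potentially misidentified}
\]

Even on the system’s own reported performance, in other words, thousands of those placed on the list may have been wrongly identified, before any question of harm to bystanders is even considered.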

 

Guidelines for preventing civilian casualties emphasize patience in observation, as deliberate analysis leads to more informed decisions on lethal and non-lethal actions. Slowing decision-making is crucial in complex, high-stakes environments where rushed judgments obscure critical nuances. Military planning gives commanders time to assess the operational landscape, considering enemy forces, allies, civilians, and potential risks. As General Dwight D. Eisenhower observed, “plans are useless, but planning is indispensable,” emphasizing the value of thorough preparation in navigating unpredictable conflict scenarios.

 

Intersection of AI and International Humanitarian Law:

 

In addition to these AI targeting systems, Israel has also deployed Lethal Autonomous Weapon Systems (LAWS) and Semi-Autonomous Weapons (Semi-LAWS) in the ongoing conflict. There is currently no universally agreed definition of LAWS under international law, though some progress has been made toward establishing one. Broadly, LAWS are weapons that, once deployed, can independently identify and engage targets without human input. The IDF has deployed remote-controlled quadcopters, equipped with machine guns and missiles, to “surveil, terrorize, and kill” civilians sheltering in tents, schools, hospitals, and residential areas. Residents of the Nuseirat Refugee Camp in Gaza have reported that Israeli drones often broadcast the sounds of crying babies and women to deceive residents and draw them out into areas where they can more easily be targeted.

 

Under IHL, the IDF is obligated to adhere to the jus in bello principles of distinction, proportionality, and precautions in attack. The principle of distinction requires belligerents to distinguish between military objectives and civilian objects, and between combatants and civilians (Article 48 of Additional Protocol (AP) I to the Geneva Conventions). The principle of proportionality prohibits attacks expected to cause civilian harm that would be excessive in relation to the anticipated military advantage (Article 51(5)(b) of AP I). The principle of precautions in attack (Article 57 of AP I) requires armed forces to verify that a target is a legitimate military objective and to cancel or suspend an attack if it becomes apparent that it would violate IHL. These requirements are binding as customary international law whether decisions are made by human commanders or with the assistance of AI-based systems, because responsibility under IHL always rests with human operators and states.

 

Since October 7th, military operations conducted with the assistance of AI-based decision-support systems have claimed the lives of approximately 37,000 Palestinians, many of them civilians and children. The ICRC has warned that targeting decisions taken with the assistance of AI could result in the misidentification of civilians as military objectives, in violation of the rule of distinction. Michael Schmitt and Jeffrey Thurnher discuss the challenges decision-makers face in complying with IHL when deploying autonomous weapon systems. They note that reliance on AI for decision-making undermines proportionality, because current AI frameworks cannot replicate the informed human judgment required to weigh military gain against civilian harm. Failure to mitigate these dangers would render such AI-aided attacks unlawful under IHL.

 

To clarify the potential dangers of AI-based military operations, the regulation of autonomous vehicles offers a useful analogy. Most jurisdictions forbid fully autonomous driving systems because an accident rate on the order of 10% is deemed too high to ensure public safety. In the same vein, AI targeting systems carry a roughly 10% error rate, which raises legitimate doubts about their ability to distinguish between military and civilian targets. Such an error rate significantly undermines respect for the principle of proportionality as formulated in international law. In armed conflict, where decisions have long-term implications and the stakes are matters of life and death, such errors can amount to war crimes as defined in the Rome Statute of the International Criminal Court.

 

Autonomous weapons systems (AWS) are not directly regulated by current IHL treaties, although their employment remains subject to the general principles of IHL, notably through the Martens Clause. Codified in Article 1(2) of AP I and the preamble to AP II, the Clause provides that in situations not regulated by specific law, civilians and combatants remain under the protection of “the principles of humanity and the dictates of public conscience.” These principles include, for example, the obligation to protect civilians from direct attack, the prohibition of weapons causing unnecessary suffering, and the requirement to respect human dignity, which is challenged when machines are delegated autonomous decision-making without human oversight. Its purpose, confirmed by the International Court of Justice (ICJ) in its Legality of the Threat or Use of Nuclear Weapons Advisory Opinion (1996), is to establish a normative floor so that the absence of specific rules does not leave a legal vacuum.

 

This norm applies to AWS, which, unlike human combatants, lack the capacity for moral judgment, situational empathy, or discretion in applying rules such as proportionality and distinction. Autonomous weapons can potentially violate the Martens Clause precisely because they struggle to comply with the principles of distinction and proportionality. The ICRC Commentary points out that the Martens Clause appeals to “elementary considerations of humanity” when treaty law is ambiguous. These are precisely the considerations threatened by AWS: their reliance on algorithms trained on incomplete or biased data risks arbitrary targeting, civilian casualties, and decision-making untethered from ethical accountability. Thus, whether AWS are lawful under IHL is not a question of whether they are automatically forbidden, but of whether their use undermines the humanitarian safeguards IHL is intended to uphold. The Clause acts as a normative anchor, ensuring that even where specific rules are absent, the conduct of warfare remains bound by fundamental principles of humanity and the dictates of public conscience.

 

Most legal and moral frameworks assume that humans should make decisions involving the taking of life or the imposition of severe consequences. The Hague Convention (IV) of 1907 stipulates that combatants must be “commanded by a person,” underscoring the necessity of human control in warfare. Removing human decision-makers from lethal targeting processes undermines adherence to IHL and international human rights law. The position that “removing humans from the loop risks removing humanity from the loop” finds support in state practice and opinio juris. In the absence of meaningful human control, AI-driven targeting platforms can violate the right to life, since they lack the decision-making capacity needed to assess proportionality and necessity.

 

Conclusion:

 

The integration of AI-driven technologies into modern warfare, particularly as witnessed in Gaza, presents profound legal, ethical, and humanitarian dilemmas that challenge the foundational principles of IHL. Given these legal deficiencies, regulatory measures must be implemented to ensure compliance with IHL. The United Nations Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems has proposed a legally binding instrument to govern the use of AI in warfare, emphasizing the necessity of human control over critical targeting decisions. Such an instrument, together with authoritative interpretations of existing IHL principles, could explicitly address the use of AI in armed conflict. States must also incorporate AI-specific legal safeguards into their military doctrines to prevent violations of distinction, proportionality, and precautions in attack.

 

AI-driven military systems pose unprecedented challenges to the principles of IHL. The evidence suggests that AI’s current error rates and inherent limitations in assessing proportionality render its use in lethal targeting operations highly problematic under existing legal frameworks. Without binding regulation and meaningful human control, the unchecked militarization of AI threatens not only to erode the protective shield of IHL but also to normalize a future in which accountability for war crimes becomes diffuse and the value of human life is left to the fallibility of machines.

 
