AI 'Kill Lists' and Face-Scanning at Checkpoints: Gaza Targeting Raises Alarm Under International Humanitarian Law
Reports that Israeli forces have used facial recognition and AI decision‑support systems to generate thousands of targeting recommendations in Gaza — some with high error rates and minimal human vetting — have prompted urgent questions about civilian risk, coercive biometric collection and compliance with the laws of war.
Since the October 2023 outbreak of hostilities, an expanding body of reporting and expert analysis has painted a bleak picture of how emerging surveillance and artificial‑intelligence tools have been used inside Gaza — and what that practice means for civilians trapped in the fighting.
Investigations by journalists, academic researchers and forensic analysts have found that the Israel Defence Forces deployed at least two AI decision‑support systems (AI‑DSS), nicknamed Lavender and Where’s Daddy, alongside a large‑scale facial‑recognition program at checkpoints and in the field. According to published reports cited in a recent legal analysis, Lavender produced tens of thousands of recommended targets in the weeks after October 7, 2023; one estimate put its output at roughly 37,000 recommendations in the first six weeks, with an alleged error rate of about 10 percent. If accurate, that rate would mean on the order of 3,700 mistaken recommendations in that period alone.
The systems’ reported operation raises two linked threats. The first is the technical flaws and statistical limits of facial‑recognition and pattern‑matching algorithms: poor image quality, crowds, low light and biased training data all increase false positives, experts warn. The second is the way human decision‑making has reportedly been reorganised around those outputs, privileging speed and volume over the careful, context‑sensitive verification that international humanitarian law (IHL) requires.
Lavender, according to public reporting, assigns each civilian a score intended to represent the probability that they are affiliated with militant groups. Where’s Daddy is said to have been used to track flagged individuals and to identify when they returned to family homes, information that in some accounts was used to time strikes. Forensic Architecture and other investigators have documented the use of biometric cameras at permanent and ad‑hoc checkpoints across Gaza; whistleblower testimony and media reporting say Palestinians were often required to submit to scans and identity checks as they moved through displacement corridors.
These practices have alarmed lawyers, humanitarian agencies and rights groups. At the heart of the concern is the IHL principle of distinction — the obligation to distinguish at all times between civilians and combatants — and the related rule of proportionality. Automated or semi‑automated identifications, critics argue, cannot substitute for the qualitative, contextual judgments that proportionality and distinction demand. Reported shortcuts in vetting — including testimony that some targets received only cursory review, or that the definition of a “Hamas operative” was broadened — deepen concern that civilians were misidentified and killed or injured.
Beyond mistaken identifications, experts have flagged the legality and ethics of the mass, involuntary collection of biometric data. The UN Office for the Coordination of Humanitarian Affairs and organisations such as Human Rights Watch and the International Committee of the Red Cross have raised concerns about coerced data collection during displacement operations. Legal scholars point to Geneva Convention protections against coercion and to obligations to respect the person and dignity of people under occupation — concerns that become acute when biometric systems are used at scale and under duress.
Technical and cognitive dynamics compound legal risks. Researchers note that confidence scores and algorithmic outputs can create automation bias: commanders and operators may be tempted to “off‑load” judgment to a machine, or to overweight a high‑confidence flag even when contextual evidence is lacking. When rapid throughput becomes the priority, the safeguards that are supposed to prevent unlawful attacks — human verification, intelligence corroboration, proportionality assessments — risk being eroded.
The human cost behind the technocratic terms is stark. UN reporting cited in recent analyses places the death toll in Gaza at tens of thousands since October 2023, including many children; forensic researchers and local testimony have recounted cases where family homes were struck while relatives were present. Whether and how specific strikes were influenced by AI‑generated recommendations remains a matter for independent investigation — but the pattern of use described in public reporting has already prompted calls for urgent oversight.
Accountability advocates and many scholars propose immediate policy responses: suspend or tightly limit the use of AI‑DSS in environments where civilians are present; require independent external audits of datasets and system performance; guarantee meaningful human‑in‑the‑loop verification before any lethal act; and open independent investigations into alleged unlawful targeting and coerced biometric collection. Humanitarian and legal bodies also stress the need for transparency about how databases are compiled, who controls them and how errors are redressed.
As warfare becomes ever more data‑driven, Gaza has emerged as a case study of the dangers that follow when novel surveillance tools meet high‑tempo military operations. The crucial questions now are not only technological — about algorithmic accuracy and bias — but about law, ethics and political will: who will ensure that machines do not become a new instrument for targeting civilians, and who will hold to account the states and units that deploy them?