Critics have expressed growing concern over recent reports on the Israel Defense Forces’ (IDF) use of the Lavender data processing system in the Gaza Strip. According to Stop Killer Robots, an advocacy group focused on the legal, moral, and humanitarian implications of autonomous weapons, the use of such systems raises critical issues, including the risk of digital dehumanization and the potential loss of meaningful human control in military operations.
The Lavender system, like the Habsora/Gospel system, is not an autonomous weapon. Its function as an AI-based target recommendation tool, however, has drawn scrutiny. The system reportedly analyzes behavioral “features” such as communication patterns, social media connections, and frequent address changes to rate individuals as potential human targets. A source quoted in +972 Magazine described the concerning nature of this process: “An individual with several incriminating features attains a high rating, automatically becoming a potential target for assassination.”
While the final decision to strike a target identified by Lavender is made by humans, not machines, the system’s role in target selection raises significant concerns about digital dehumanization and the erosion of meaningful human control in military decision-making.
International Humanitarian Law (IHL) emphasizes the protection of civilians, requiring that individuals be presumed civilians where there is doubt about their status. Using Lavender to rank people in Gaza on the basis of behavioral profiles, which reportedly misidentify targets in roughly 10% of cases, effectively reduces individuals to data points. This raises serious questions about compliance with IHL, the potential violation of human dignity, and the ethical implications of such technology.
Further reports indicate that the IDF broadly authorized officers to act on Lavender’s recommendations without thoroughly reviewing the underlying intelligence data or the rationale behind the system’s selections. This absence of in-depth human engagement in the decision-making process points to a troubling trend toward reliance on automated systems in military operations.
Stop Killer Robots advocates for the development and use of technology to foster peace, justice, human rights, and equality, rather than for autonomous killing or perpetuating inequality and oppression. The organization’s stance is echoed by calls from the UN Secretary-General, the International Committee of the Red Cross, and over 100 countries for a legal framework governing autonomous weapons systems.
The increasing deployment of technology with significant levels of autonomy in warfare underscores the urgent need for clear prohibitions and regulations on autonomous weapons in international law, in order to address the complex ethical, legal, and humanitarian challenges these systems present.