In Gaza, we are watching the world's first AI-assisted genocide
Throughout history, marking people for death with a number or a tattoo has been a hallmark of oppressive regimes. It is a process of bureaucratic dehumanisation and humiliation, as well as ruthless efficiency.
And now it's happening to Palestinians.
Palestinians in Gaza are given an AI-generated score of 1-100, based on how likely they are to be linked to a Hamas operative. The closer their score is to 100, the more likely they are to be killed. The key difference is that most Palestinians killed in Israel's AI-assisted genocide will never know their score.
The Israeli army has been using an AI-based program called 'Lavender' to mark tens of thousands of Palestinians as targets for assassination in the ongoing Gaza war, according to an investigation by +972 Magazine and Local Call.
Combined with loosened rules of engagement and minimal human oversight, it has resulted in the deaths of thousands of Palestinian civilians in Israeli airstrikes throughout the conflict, including entire families - one of the highest civilian death tolls in any conflict this century.
"Palestinians have their humanity stripped away and their fate determined by a few data points like gender, address, age, and social media activity. These all form a 'model'. Using AI, Israel's military can generate thousands of targets in seconds"
Technologies of death and dehumanisation
The advent of the Fourth Industrial Revolution, or 4IR, is changing how wars are fought. New technology brings with it new ways of killing. The industrialisation prior to World War I saw the mass production of shells and machine guns and the development of chemical weapons.
Technological innovation combined with industrialisation has consistently increased the efficiency and creativity of killing.
Now, digital technology, automation, and AI are increasing the possibilities of remote killing, such as drone warfare, and adding to the dehumanisation of conflict. Victims are reduced to blips on a screen.
As French philosopher Grégoire Chamayou writes, "One is never spattered by the adversary's blood. No doubt the absence of any physical soiling corresponds to less of a sense of moral soiling…".
Describing the use of Israel's Lavender system in Gaza, one operator said, "The machine did it coldly. And that made it easier".
Gaza as a laboratory
In the age of 4IR and artificial intelligence (AI), victims who might once have been blips on a screen are further dehumanised into mere numbers in a spreadsheet: the output of a machine learning model that has decided, on the basis of past data and probability, that this person deserves to die.
In Gaza, targets are selected based on hundreds of thousands of data points (or 'features'). These include, for example, "being in a WhatsApp group with a known militant, changing cell phone every few months, and changing addresses frequently," according to the investigation.
Palestinians have their humanity stripped away and their fate determined by a few data points like gender, address, age, and social media activity. These all form a 'model'. Using AI, Israel’s military can generate thousands of targets in seconds.
Israel claims this model has a 90% accuracy rate, but this is highly misleading. It is 90% accurate only according to Israel's own definition of who counts as a target. The 90% does not include 'collateral' damage.
The operators themselves stated that even for low-level militants it was acceptable for 15-20 civilians to be killed. If you kill 20 civilians for every one militant using this system, the effective 'accuracy' is only around 5% - one intended target out of 21 people killed.
If this ratio were extrapolated to the more than 33,000 Gazans killed since 7 October, it would mean that around 31,350 (95%) were civilians. The Gaza health ministry says that most of those killed were women and children, not militants. In reality, this would put the system's 'accuracy' at around 25%, not 90%.
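For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. It uses only the figures already cited in this article, not any official dataset.

# Back-of-the-envelope arithmetic using only the figures cited above.
civilians_per_strike = 20                       # civilians deemed 'acceptable' per low-level militant
militants_per_strike = 1
accuracy = militants_per_strike / (militants_per_strike + civilians_per_strike)
print(round(accuracy, 3))                       # 0.048, i.e. roughly 5%

total_killed = 33_000                           # Gazans killed since 7 October, as cited above
implied_civilians = total_killed * (1 - 0.05)   # using the rounded 5% figure
print(int(implied_civilians))                   # 31,350 implied civilian deaths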
Israel has also been famously liberal in whom it defines as Hamas. For example, the Israeli army has previously claimed that 50% of UNRWA's 30,000 employees are first-degree relatives of a Hamas operative. Will this make them more likely to be bombed? If you factor in second-degree relations, then almost everyone will be indirectly linked to a Hamas operative.
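To give a sense of scale for that last point, here is a deliberately crude, hypothetical calculation. The family-size and operative figures are invented purely for illustration; only Gaza's approximate population is a real figure. It simply shows why 'second-degree links' quickly sweep in most of the population.

# Entirely hypothetical numbers, chosen only to illustrate how fast 'links' accumulate.
population = 2_300_000            # approximate population of Gaza
labelled_operatives = 30_000      # hypothetical figure in the 'tens of thousands' range cited above
relatives_per_person = 10         # hypothetical number of first-degree relatives per person

first_degree_links = labelled_operatives * relatives_per_person    # up to ~300,000 people
second_degree_links = first_degree_links * relatives_per_person    # up to ~3,000,000 'links'
print(first_degree_links, second_degree_links, population)
# Even allowing for heavy overlap between families, second-degree links
# rapidly approach or exceed the entire population.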
"Operators reported how they were under pressure to find more targets, and would thus lower the threshold of who they considered to be a valid target"
The line between Hamas and non-Hamas is not based on science but on Israel's politics. Indeed, operators reported that they were under pressure to find more targets, and would thus lower the threshold of who they considered a valid target.
While on one day a score of 80 out of 100 might be enough to warrant targeting someone, the political pressure to kill more Palestinians might mean the score is lowered to 70 the next day.
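A toy illustration of what such a threshold change does in practice: the score distribution below is entirely invented and has nothing to do with the real system; the point is only the mechanics of a cut-off.

import random

random.seed(0)
# 100,000 synthetic scores between 1 and 100, drawn from an arbitrary bell curve.
scores = [min(100, max(1, random.gauss(40, 20))) for _ in range(100_000)]

flagged_at_80 = sum(s >= 80 for s in scores)
flagged_at_70 = sum(s >= 70 for s in scores)
print(flagged_at_80, flagged_at_70)
# With these made-up numbers, dropping the cut-off from 80 to 70 roughly
# triples the number of people flagged (about 2,300 vs about 6,700).

The exact numbers are meaningless; what matters is that a small, politically driven shift in the cut-off can multiply the number of people marked for death without any change in the underlying intelligence.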
Operators were acting like corporate employees for whom killing Palestinians formed part of their KPIs (key performance indicators) - or, in this case, killed-Palestinian indicators. Operators reported that, under this pressure, they were also handed lower-value people to target, such as 'civil defence' workers - people who might help Hamas but do not endanger soldiers.
So while human operators nominally approved the strikes, in practice they were increasingly rubber-stamping decisions to kill under political pressure.
The myth of the neutrality defence: Double dehumanisation
The defence of such AI systems is often the same: that they remove human bias. This could not be further from the truth. AI learns from humans. If society has a history of racism, classism, or sexism, then without sufficient controls those biases will be reflected in the output. If the data is 'bad', the output will be bad.
In non-conflict settings, AI systems are known to discriminate based on race, gender, education, and background. In the UK, a grading algorithm gave disadvantaged pupils worse grades than wealthier children; in the US, people of colour are more likely to be victims of biased facial recognition and predictive policing algorithms; and in the Netherlands, thousands of people were wrongly accused of benefits fraud by an automated system.
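A toy example of how this laundering of bias works (synthetic data, invented group names, no real system involved): a model trained to imitate historically biased decisions will reproduce that bias and present it back as an 'objective' score.

import random

random.seed(1)
# Synthetic 'historical' decisions: identical behaviour in both groups, but past
# decision-makers flagged group B twice as often - the bias we want to expose.
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    biased_flag_rate = 0.10 if group == "A" else 0.20
    history.append((group, random.random() < biased_flag_rate))

# The 'model': predict each group's historical flag rate, which is exactly what
# an accuracy-maximising learner converges to on data like this.
learned = {
    g: sum(flag for grp, flag in history if grp == g) / sum(grp == g for grp, _ in history)
    for g in ("A", "B")
}
print(learned)   # roughly {'A': 0.10, 'B': 0.20} - yesterday's prejudice, now automated

Nothing about the two groups' behaviour differs in this sketch; the disparity exists only in the training labels, yet the 'neutral' model inherits it wholesale.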
So what happens when an AI system is designed and trained by an occupying power and an apartheid state that already dehumanises Palestinians? Such questions become particularly alarming in light of statements by ex-Shin Bet head Ami Ayalon, who has said that most Israelis believe that "all Palestinians are Hamas or supporters of Hamas".
Not only does the use of the technology represent a dehumanisation in and of itself, but the data fed to that system is already based on a dehumanised interpretation of Palestinians built up over decades of Israeli occupation. Hence, a double dehumanisation.
This is also why Israel’s claims of respecting ‘proportionality’ are immaterial. Twenty civilian casualties for one low-level militant might be considered proportional, but only if you have dehumanised Palestinians to the extent that their lives are worthless.
No accountability
It is also telling that Israel is trying to avoid steps towards accountable use of AI. Last year, the United States initiated a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.
The agreement states that the military application of AI must adhere to international law, especially humanitarian law, and should be "ethical, responsible, and enhance international security".
It also calls for a considered approach to risks and benefits, and states that military AI "should also minimise unintended bias and accidents". However, Israel, along with Russia, China, and a few other countries, has not endorsed this declaration. Even if Israel had endorsed the agreement, it is not legally binding.
"Not only does the use of the technology represent a dehumanisation in and of itself, but the data fed to that system is already based on a dehumanised interpretation of Palestinians built up over decades of Israeli occupation"
Although AI carries great potential for positive change, it is easy to get lost in the effusive and often uncritical narratives about its power. But it is important to remember that technology is neither good nor evil. Dictators, autocrats, and criminals will weaponise technology for whatever purposes suit them.
As we are seeing in Gaza, the Israeli military is deploying AI to increase the efficiency with which it targets Palestinians. Unfortunately, this provides the false moral cover of objectivity and impartiality to a process that is neither.
This is perhaps the first AI-assisted genocide in history, and it is happening not because of the technology itself, but because the technology is being used by a state that has, for decades, learned and subsequently taught AI to dehumanise Palestinians.
Marc Owen Jones is an Assistant Professor of Middle East Studies at HBKU and a Senior Non-Resident Fellow at Democracy for the Arab World Now and the Middle East Council for Global Affairs