In his short story Songs of the Birds, Palestinian author Haddad tells the story of Aya, a Palestinian girl from the future who lives in the shadow of her brother, Ziyad, who committed suicide the year prior.
Aya sees Ziyad in her dreams every day and tells him that Palestine has been liberated and justice fulfilled. But as the story builds towards a terrifying revelation, Ziyad tells Aya that her perception of a liberated Palestine is nothing but a computer-generated simulation, and that her memories are algorithms designed by Israel.
Haddad visualises a future where Israel uses artificial intelligence (AI) to extend its control of Palestinians beyond physical reality, well into their subconscious. What is most disturbing about the story is not its radical dystopianism, but how close it is to today's reality.
Israel increasingly perceives AI as the new paradigmatic extension of its security mindset. Over the past two decades, it has already restructured much of its economy around cyber security and other cyber defence technologies.
And today, it is one of the world's powerhouses for manufacturing and developing military drones. Most of the hi-tech innovations emerged from the heart of the army institution, particularly the Army Intelligence Unit 8200, or were externally contracted to tech companies to serve the needs of the army and intelligence apparatuses.
In May this year, Israel's deputy army chief Eyal Zamir said Israel was poised to parlay its technological knowledge to become an "AI superpower", focusing mainly on autonomous warfare and streamlined combat decision-making.
Zamir was hardly speculating. Already during the 2021 onslaught on Gaza, Israel used AI to thwart rocket attacks by Hamas and other Palestinian resistance groups. The system was reportedly able to identify leaders of Hamas rocket units from a large pool of known fighters and eliminate them.
In what the Israeli army described as the first artificial intelligence war, it allegedly gathered and analysed data from human intelligence and 3D geographical models, enhanced by satellite imagery, to determine the suitability of weapons and deploy so-called precision attacks inside the crowded Strip.
The claims of AI-powered precision strikes, nonetheless, neither explain the high death toll among Gaza's civilians nor the targeting of mostly civilian infrastructure, such as residential towers and media centres. They also do not take into account that Palestinian groups managed to outmanoeuvre the AI-driven Iron Dome and deliver painful blows to the Israeli state.
That said, the operationalisation of Israelâs AI may not be the critical question at this point. What is more critical is that it projects a certain pattern for future warfare in general.
In the Ukraine conflict, for instance, Russia has been using AI-powered drones against Ukrainian troops. The Ukrainian military, meanwhile, has resorted to AI to assist with intelligence gathering and air defence. Ukraine has also used AI-based imaging and facial recognition to identify dead Russian troops through their social media profiles.
A few companies in the developed world have crossed a threshold by developing autonomous aircraft with the capability to fly, detect, monitor, and attack enemy targets with little or no human involvement. One of these companies is Boeing.
In all these scenarios, human intervention is variably reduced, decision-making de-emotionalised and reactivity desensitised. That poses serious questions about accountability, particularly when autonomous weapons malfunction or have to independently choose whom to target and when.
We have seen the potential of such "detachment" after Israel "disengaged" from Gaza in 2005. The occupation changed from boots on the ground to remote-controlled aggression.
Israel nowadays controls and monitors Gaza without having to be physically there, making it much easier to reduce the Strip's 2.3 million people into a set of algorithms on the Ministry of Defence's computer screens.
With that comes so-called "digital dehumanisation", increased separation between actions and accountability, between crime and guilt. The 16 relentless military operations against the Strip since 2006 attest to this pattern.
Yet, what sets Israel apart from the crowd is its employment of AI to enhance an already tight matrix of control over Palestinians.
Since 1948, Palestinians have been subjected to multiple layers of surveillance by the Israeli state. The system has been likened to the panopticon, a mechanism of psychological, physical, and social control proposed by English philosopher Jeremy Bentham 200 years ago.
Others see an uncanny similarity between Israel's grip on Palestinians and the Orwellian notion of draconian control depicted in the novel 1984.
In the digital age, the occupation authorities have extensively used hi-tech to track Palestinian movements and activities, even their private lives, virtually or physically. This is partly facilitated by Israel's full control over the information and communications technology (ICT) infrastructure in the Palestinian territories and, inevitably, its restrictions on Palestinian access to advanced technology.
Thousands of CCTV cameras have been installed in Jerusalem and the West Bank, some of which are connected to servers that can analyse data. In Gaza, the Israeli army has been using surveillance balloons and a fleet of drones to sweep the narrow strip around the clock from one end to the other.
At the centre of this AI-powered intrusion is an experimental facial recognition system, which has been categorised as part of "automated apartheid" and described as "biometric apartheid".
Palestinians passing through checkpoints, especially in Hebron, are scanned and their faces are added to a vast database without their consent. When a Palestinian is detained at a checkpoint, their so-called "biomarker" is added to the database.
Meanwhile, the soldiers at the checkpoints are instructed to take photos and details of at least 50 Palestinians each shift, and then add the info to the database. Soldiers who fail to meet the quota are ordered to remain on duty until they do.
Israeli AI-powered surveillance and combat management have emerged within a context where prejudice and oppression are the norm. Therefore, the algorithmic infrastructure, and the data that feed into it, are biased and only serve to heighten and legitimise the prejudiced reality.
Because the system is already building on an environment of extreme oppression, ethical and moral concerns in the virtual realm may be seen as simply an extension of the status quo. The intensification of the dystopia may not be as readily visible.
Israel's cruel new measures, in this case, would be implemented undetected and eventually normalised. This is what is particularly draconian about automating apartheid in Palestine.
Dr Emad Moussa is a Palestinian-British researcher and writer specialising in the political psychology of intergroup and conflict dynamics, focusing on MENA with a special interest in Israel/Palestine. He has a background in human rights and journalism, and is currently a frequent contributor to multiple academic and media outlets, in addition to being a consultant for a US-based think tank.
Have questions or comments? Email us at: editorial-english@newarab.com
Opinions expressed in this article remain those of the author and do not necessarily represent those of The New Arab, its editorial board or staff.