The artificial intelligence war? Israel is fighting it already

Israel aims to become an artificial intelligence “superpower”, the Defence Ministry director said in May. (Reuters)

While the global debate about using artificial intelligence in warfare heats up, Israel has already been deploying it against the Palestinians. The Israeli army is using an advanced AI model called Fire Factory to select targets for airstrikes and handle other military logistics.

AI deployment is a significant shift in warfare and brings incredible new risks for civilian life. Perhaps most concerning is that Israel’s use of AI is developing beyond international or state-level regulations. The future of AI warfare is taking shape right now, and few have a say in how it develops.

According to Israeli officials, the AI programs in operation use large data sets to make decisions about targets, equipment, munition loads, and schedules. While these items might seem mundane, we must consider how Israel collects this information and the military’s track record in protecting civilian populations.

Israel has administered a total military occupation over Palestinian populations in the West Bank and Gaza since 1967. Every aspect of Palestinian life in these territories is overseen by the Israeli military, down to the number of calories Gazans consume. Through its complex occupation infrastructure, Israel has compiled vast amounts of data on Palestinians. This data has been vital fuel for the rise of Israel’s vaunted technology sector, as many of the country’s leading tech executives learned their craft in military intelligence units that put this data to use.

The military and defense contractors have created a hugely profitable AI warfare sector using the West Bank and Gaza as weapons testing laboratories. Across the Palestinian territories, the military collects and analyses data from drones, CCTV footage, satellite imagery, electronic signals, online communications, and other platforms. It’s even rumored that the idea for Waze — the mapping software developed by graduates of Israel’s military intelligence sector and sold to Google for $1.1 billion in 2013 — was derived from mapping software designed to track Palestinians in the West Bank.

It’s abundantly clear that Israel has plenty of data that could be fed into AI models designed to maintain the occupation. Indeed, the Israeli military argues that its AI models are overseen by soldiers who vet and approve targets and air raid plans. The military has also implicitly argued that, given the sheer amount of data Israel collects, its programs could surpass human analytical capabilities and minimize casualties. Analysts are concerned that these semi-autonomous AI systems could quickly become fully autonomous, with no oversight. At that point, computer programs will decide matters of Palestinian life and death.

There are other elephants in the room. Israel’s AI war technology is not subject to international or state-level regulation. The Israeli public has little direct knowledge of these systems or say over how they should be used. One could imagine the international outcry if Iran or Syria deployed such a system.

While the exact nature of Israel’s AI programs remains secret, the military has boasted about its use of AI, calling its 11-day assault on the Gaza Strip in 2021 the world’s first “AI war.” Given the profoundly controversial nature of AI warfare and the unresolved ethical concerns about these platforms, it’s troubling, but hardly surprising, that the Israeli military is so flippant about its use of these programs. After all, Israel has seldom followed international law regarding warfare and its understanding of defense.

There are other challenges regarding Israel’s deployment of these weapons. Israel has a terrible track record when it comes to the protection of Palestinian life. While the country’s public relations officials go to great lengths to say that the military operates morally and protects civilians, the fact is that even the most “enlightened” military occupation is antithetical to the notion of human rights. In the social media age, even Israel’s most ardent supporters question how the country sometimes behaves toward Palestinians.

Perhaps the most universal concern these programs raise is that Palestinians haven’t consented to giving their data to Israel and its AI platforms. There is a morbid parable here for how society at large hasn’t really consented to its data being used to create many types of AI programs. Of course, there are terms and conditions that we agree to for services such as Gmail, but we have no viable way to opt out unless we forgo the internet altogether.

For Palestinians, the situation is obviously much more grave. Every aspect of their lives, from when they go to work to how much food they consume, is funneled to Israeli data centers and used to determine military operations. Is this extreme future waiting for more societies around the world? The direction of travel and the development of these systems beyond regulation doesn’t bode well.

  • Joseph Dana is a writer based in South Africa and the Middle East. Twitter: @ibnezra © Syndication Bureau

Disclaimer: Views expressed by writers in this section are their own and do not necessarily reflect Arab News’ point of view