
Opinion – When the Algorithm Becomes the Alibi


In September 2024, The New York Times published an op-ed by Raj M. Shah and Christopher M. Kirchhoff. They argued that the United States urgently needed to adopt AI-powered weapons systems to keep pace in a so-called civilizational race with China, and that the military needed to overhaul its technological capabilities for AI-driven warfare. Since then, that vision has left the opinion pages and become a living reality. The United States launched Operation Epic Fury on February 28, 2026, striking over 1,000 targets within the first 24 hours, a tempo CENTCOM attributes in part to AI-assisted systems.

The argument put forward by Shah, Kirchhoff, and the chorus of Western commentators who follow them rests on three interconnected assumptions: that AI makes war more precise, that American technological leadership is a moral imperative in a world endangered by authoritarian competitors, and that a human in the loop offers sufficient ethical protection against atrocity. These assertions are not merely contestable; they are refuted by documented evidence from battlefields where this technology has already been used against population centres.

Start with the most alluring claim: that AI will make killing more humane. Tech corporations and their representatives project images of sterile, clean, bloodless violence. Their demonstrations never feature civilians or the demolition of civilian infrastructure. In short, algorithmic warfare is presented as a clean and efficient enterprise. The actual practice of AI targeting systems, however, tells a radically different story, written in the blood of Palestinians, Iranians, and others who were never given a column in The New York Times.

Under the Lavender system, the Israeli military in Gaza maintained a list of up to 37,000 Palestinians algorithmically linked to Hamas. In the first weeks of the war, officers were permitted to kill up to 20 civilians alongside each junior target. This is not precision. It is industrial-scale probabilistic killing laundered through machine learning. The army knew that human supervision was minimal and that personnel would not detect errors; given the scale of the task, errors were treated statistically. The machine's decisions were presumed statistically correct even when no one knew whether any individual decision was right. Officers rubber-stamped targets in an average of twenty seconds before approving a Lavender-marked strike.

Records from a secret Israeli military database showed that of the 53,000 Palestinians killed in Gaza, only 17 percent were combatants, a proportion of civilian deaths rare even in contemporary warfare, and one that has fuelled international accusations of genocide. These are not exceptions. They are the foreseeable outcomes of deploying AI systems with little human supervision under a political calculus that treats non-Western lives as mere data points.

In Iran, the strike on the Shajareh Tayyebeh girls' school in Minab, which killed 168 people, mostly schoolchildren, prompted urgent inquiries from US Senate Democrats demanding to know whether AI was involved in target selection. No accountability has followed. This is exactly what AI delivers at a systemic scale: not accuracy, but plausible deniability. A human in the loop whose only role is to accept a machine's recommendations is not a safeguard but a design flaw. What remains is the semblance of oversight, not its reality.

The appeal to a civilizational race against China, Shah and Kirchhoff's last line of defence for discarding guardrails, is not novel. The framing runs thinly veiled through the broader discourse and is most explicit in arguments like that of Anduril founder Palmer Luckey, who told a TED audience that "if the United States doesn't lead in this space, authoritarian regimes will — and they won't be concerned with our ethical norms." Viewed from the Global South, this is the oldest page in the colonial playbook. It casts Western violence as a civilizational necessity and a monopoly on lethal force as a form of moral stewardship, and it implies that questioning that monopoly is naive or even treasonous.

Asia, Africa, Latin America, and the Middle East have heard it all before. They heard it when Britain turned the Maxim gun on Sudanese tribesmen, the very historical analogy Shah and Kirchhoff themselves invoked to glorify the transformative power of AI warfare. They heard it through decades of drone strikes in Pakistan, Somalia, Yemen, and Libya, where algorithmic "signature strikes" killed men not for who they were but because a machine flagged their patterns of behaviour. Civilian deaths from errant strikes and loose criteria for who may be targeted are not a hypothetical future; they are a long-running record, written by this century's supposedly more precise systems.

What the Global South sees more clearly than those inside Western strategic culture is a troubling contradiction. The very nations that claim to champion responsible AI and ethical frameworks are weakening the international legal architecture that could hold them accountable. The Pentagon has drastically reduced efforts to test and evaluate major weapons systems, and has scaled back attempts to assess the civilian-casualty risks those systems pose. The AI-risk regulations the Biden administration proposed were weak to begin with and have since been further undermined. Meanwhile, efforts to govern military AI remain highly disjointed.

There is still no binding international convention governing these technologies. The United States is leading no serious negotiations on verifiable restrictions for autonomous targeting and has taken no meaningful steps to advance them. Whenever China or Russia present proposals for international conventions on autonomous weapons at the United Nations, Western delegations respond cautiously: they water down the proposals, postpone discussions, or derail the process altogether. The lesson the Global South should draw is not that the West is perfecting the ethics of AI warfare in good faith. It is that Western powers are determined to maintain their monopoly on algorithmic lethality, and to ensure that any future treaty remains too weak to meaningfully bind them.

The Shah-Kirchhoff argument has a dimension that Western editors seldom bring to light: the financial interests embedded in the urgency it prescribes. Raj Shah now runs a venture capital firm that invests in defence-tech startups, another turn of the revolving door between the Pentagon and the weapons industry. Their essay reads less like a serious consideration of these technologies' dangers than a sales pitch for the military-industrial complex. Venture capital firms have poured tens of billions of dollars into defence-tech startups, and the AI-driven systems industry has hired dozens of former military officials to make its case in Washington. What appears as strategic analysis is, in material terms, a pitch for investor returns.

The infrastructure of AI warfare is costly to develop, with the US Stargate Program alone projected to cost $500 billion, further widening what analysts have termed the "intelligence divide." A world in which only the Global North and its Silicon Valley contractors can build, deploy, and interpret AI targeting systems is a world where Western military hegemony is hard-coded into the software of violence itself. The Global South is not a passive spectator to this process. It is its main target theatre.

Perhaps the most perilous notion in this discussion is that algorithmic systems can adhere to international humanitarian law at speeds no human can effectively oversee. The most consequential effect of AI-driven war is the compression of time. Although human control may remain a formal requirement, the context in which each decision is made is increasingly pre-structured by machines. The legal framework established at Geneva, with its requirements of distinction between combatants and civilians, proportionality, precaution in attack, and individual criminal responsibility, assumes a human decision-maker who can be identified, interrogated, and prosecuted. AI targeting does not simply complicate this framework; it systematically disintegrates it. UN Special Rapporteur Ben Saul has said that if the reports of Israel's use of AI in Gaza are accurate, a significant number of Israeli strikes would constitute war crimes. There has been no prosecution. There is no accountability mechanism.

The Global South's response to the AI warfare doctrine is not naive pacifism, nor a rejection of states' legitimate security interests. It is a demand for consistency. If AI warfare is indeed the precision tool its proponents claim, they should not object to binding international verification regimes, independent casualty investigations, and universal treaty commitments, the same standards Western governments have imposed on others for chemical and biological weapons. The refusal to accept such constraints is itself the answer to whether this technology is being used responsibly. The algorithm is not unbiased. The kill chain has owners. And the bodies it generates are not statistics; they are undeniable testimony that what is marketed as precision is in reality a new and more efficient architecture of impunity.

Muhammad Saad is a Researcher at the Centre for Aerospace and Security Studies (CASS) in Islamabad.


© E-International Relations