When No One Really Pressed the Button: AI in War
In recent weeks, a growing number of reports have pointed to the expanding role of artificial intelligence in modern warfare. From dynamic targeting in Iran to the rapid generation of strike options in Gaza, AI systems are increasingly embedded in how militaries identify and act on threats. What stands out in these accounts is not just technological sophistication, but speed: the ability to process vast amounts of data and translate it into operational decisions in seconds rather than hours or days.
This shift is not unique to Israel. The United States has been moving in a similar direction for years. Programs like Project Maven, launched in 2017, were designed to integrate AI into military intelligence and targeting processes, with the explicit goal of accelerating decision-making on the battlefield. More recently, reports suggest that advanced AI systems have been used to compress the entire “kill chain” – from identifying a target to approving a strike – into minutes or even seconds.
In that sense, what we are witnessing is not a local development but a broader transformation – the gradual reconfiguration of warfare around data, algorithms, and speed.
Israel offers one of the clearest case studies of this shift. According to multiple analyses, systems such as “Lavender” and “The Gospel” process intelligence from surveillance, communications, and other sources to generate targeting recommendations at scale. These tools can identify thousands of potential targets and dramatically increase the pace at which military options are produced, sometimes at a rate that human analysts alone could never sustain.
Crucially, these systems are not described as autonomous weapons. Militaries, including the IDF and the U.S. Department of Defense, emphasize that human decision-makers remain “in the loop”. The algorithm suggests; the human decides.
But that description, while formally accurate, no longer captures the full picture.
As AI systems become more capable, they do not simply assist decisions; they reshape them. When a system generates hundreds of potential targets a day, the human role inevitably changes. Decision-makers are no longer starting from scratch; they are selecting from a pre-structured set of options. And when timeframes shrink from hours to minutes, or even seconds, the space for independent human judgment narrows accordingly.
This is not necessarily a failure. It is, in many ways, the logical response to a battlefield defined by complexity and information overload. No modern military can operate effectively without computational tools that process data at scale. The integration of AI is therefore not an anomaly – it is an adaptation.
Yet this adaptation creates a quieter, less visible problem, one that is legal rather than technological. Modern legal frameworks governing armed conflict are built around a simple premise: that decisions are made by humans. Responsibility can therefore be traced back to an identifiable decision-maker, someone whose intent, knowledge, and judgment can be evaluated. This is what allows legal systems to function: the ability to ask, after the fact, who decided and on what basis.
But AI-assisted decision-making complicates that premise.
When a targeting decision emerges from a chain that includes data collection, algorithmic processing, model design, and rapid human approval, it becomes harder to isolate a single point of decision. Responsibility does not disappear, but it becomes distributed. It stretches across engineers, analysts, commanders, and systems that are often opaque even to those who operate them.
Recent debates in the United States reflect this growing concern. As private technology companies become more deeply embedded in defense systems, questions are emerging not only about state responsibility, but also about the role of developers and contractors in shaping battlefield outcomes. At the same time, analysts warn that AI systems can produce outputs faster than they can be meaningfully verified, further complicating oversight.
Israel’s experience highlights these tensions in a particularly visible way, but Israel is far from alone in facing them. This is why the central question is not whether AI should be used in warfare. That question, in practice, has already been answered. The strategic incentives are too strong, and the technological trajectory too clear. The real question is whether existing legal frameworks are equipped to handle a world in which decisions are no longer made at a single point in time, by a single actor.
Because as the speed and scale of decision-making increase, the structure of responsibility begins to shift. And that shift matters. Not because it eliminates accountability, but because it makes accountability harder to locate. A system designed around identifiable human judgment begins to strain when judgment itself becomes embedded in processes that are collective, probabilistic, and partially opaque.
In that sense, the most important transformation is not the rise of AI on the battlefield, but the quiet gap opening between how decisions are made and how responsibility is assigned. And in a legal system that ultimately depends on answering one question – who decided – that gap is only likely to grow.