
The Clash Over the Use of AI in Military Decision-Making


Anthropic opposed military use of its AI for mass surveillance and autonomous lethal decisions.

The Pentagon argues that military AI should follow only U.S. law, not private company ethics.

The U.S. military prefers "unrestricted lawful use" of AI, but very few actual laws or guardrails exist.

Research on human decision-making suggests AI should be trained to act with human-like ethical responsibility.

Historically, only humans made life-or-death decisions about using force and lethal weapons. But now, society is beginning a new age in which machines with artificial intelligence may be equipped to make autonomous lethal decisions without humans in the decision-making chain. Does creating autonomous robots for lethal military use suggest that we have entered a brave new world—or a virtual minefield?

Recent reports suggest that the Pentagon was unhappy with the AI company Anthropic because the company's leadership refused the Pentagon's request to abandon certain guidelines governing the AI model it supplies to the military.1 Anthropic has a long track record of its AI being used by the military in classified cloud environments and other intelligence and defense applications. The company was clearly comfortable monetizing its AI for most military purposes, but it wanted to limit the military's use of its models in two distinct ways:

Mass surveillance. Anthropic argues that AI-driven mass surveillance presents serious risks to fundamental liberties. Anthropic did not wish to allow its AI model to be used for any large-scale domestic surveillance in the U.S.

Fully autonomous weapons. Anthropic wished to restrict the use of its AI model for fully autonomous weapons that take humans out of the decision-making loop entirely and that automate engaging targets with lethal action.

Let’s examine each of these issues and the serious questions raised.

Mass Surveillance

It sounds Orwellian, but the government has the capacity to watch us. There has been an explosion of video cameras stationed throughout society, and, as a result of digitization, much of our personal data now resides in the cloud. Using facial recognition software and profiling algorithms, AI models can comb this vast universe of video and data, a potent mix of tools that could be turned to wide-scale surveillance of the citizenry.

Video cameras serve many useful functions, including giving law enforcement the ability to access video for preventing and solving crimes. Databases are likewise critical components of many organizations, and our personal data—from photos and documents to our data consumption patterns—details our identity, habits, travel, spending, and phone calls. Discovering online patterns may be great for preventing crimes or marketing products, but these vast volumes of personal data could potentially be misused to violate privacy, steal identities, facilitate censorship, or worse.

We appear to have crossed a threshold at which society is grappling in earnest with government surveillance and privacy. Anthropic balked when the military refused to accept limits on how it could use the company's AI models. Pentagon officials argued that the military should be bound only by U.S. law, not by a private company's ethical policies.

Fully Autonomous Weapons

A second key area in which Anthropic sought restrictions on the military's use of its AI model was in the use of autonomous AI military drones for making lethal decisions. Psychology has studied how humans make critical decisions under duress, from athletics to law enforcement to military contexts.2 For example, risk-sensitivity theory offers a framework for understanding the complex cognitive processes of humans making life-or-death decisions.

Can we be certain that AI is capable of making life-or-death decisions, balancing all the variables? Research on human decision-making suggests that AI should be trained not merely to aim for the highest reward (e.g., a greater number of “kills”) but to actively avoid catastrophic outcomes and manage uncertainties with human-like ethical responsibility.
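To make that distinction concrete, here is a minimal sketch, in Python, of the difference between a purely reward-maximizing objective and a risk-sensitive one that heavily penalizes any chance of catastrophe. Every name and number below is a hypothetical illustration of the idea, not part of any real targeting system or published model.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    probability: float   # estimated chance this outcome occurs
    reward: float        # mission value if it occurs
    catastrophic: bool   # e.g., harm to non-combatants

def expected_reward(outcomes):
    """Naive objective: maximize expected reward alone."""
    return sum(o.probability * o.reward for o in outcomes)

def risk_sensitive_score(outcomes, catastrophe_penalty=1000.0):
    """Risk-sensitive objective: any chance of catastrophe dominates the score."""
    score = 0.0
    for o in outcomes:
        score += o.probability * o.reward
        if o.catastrophic:
            score -= o.probability * catastrophe_penalty
    return score

# A strike with high expected reward but a 5% chance of catastrophe,
# versus simply holding fire.
strike = [Outcome(0.95, 10.0, False), Outcome(0.05, 0.0, True)]
hold = [Outcome(1.0, 0.0, False)]

print(expected_reward(strike), expected_reward(hold))            # 9.5  0.0
print(risk_sensitive_score(strike), risk_sensitive_score(hold))  # -40.5  0.0
```

Under the naive objective, the strike looks attractive; under the risk-sensitive one, the small chance of catastrophe outweighs the mission value entirely. That is the kind of weighting the decision-making research suggests an AI would need before it could be trusted near lethal choices.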

Anthropic did not want its AI model used for lethal autonomous decision-making without a human in the final decision-making command chain. The company's position was that only humans, not AI, are ethically fit to make the lethal decision to "pull the trigger."
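The architectural point can be pictured as a simple gate in code: a minimal sketch, again with hypothetical names, in which the AI may screen and recommend, but lethal action requires explicit human authorization.

```python
from enum import Enum, auto
from typing import Callable

class Action(Enum):
    HOLD = auto()
    ENGAGE = auto()

def decide_engagement(ai_confidence: float,
                      human_authorize: Callable[[], bool]) -> Action:
    """The AI screens and recommends; a human makes the final lethal call."""
    if ai_confidence < 0.99:   # AI filters out low-confidence identifications
        return Action.HOLD
    # The trigger is never pulled on the model's output alone:
    # an operator must explicitly confirm the recommendation.
    return Action.ENGAGE if human_authorize() else Action.HOLD

# Example: a console prompt stands in for the human operator.
if __name__ == "__main__":
    confirm = lambda: input("Authorize engagement? (y/n) ").strip().lower() == "y"
    print(decide_engagement(ai_confidence=0.995, human_authorize=confirm))
```

Deleting the human_authorize check is exactly the design change, full autonomy, that Anthropic objected to.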

The Pentagon did not agree to the limitations and dropped Anthropic as its AI provider. Other companies soon offered their AI models to the Pentagon instead, but the debate over whether private companies that develop AI models can define their own guardrails on the military's use of their software will continue. The U.S. Congress has not yet created laws, or even clear rules, to guide AI companies or the Pentagon on the ethical limits of military AI.

The debate is complicated by the possibility that competitors from other nations may not be limited by ethical guidelines. In the global race to militarize AI, and with the rapid development of military drones and robots, it is ever more likely that machines will be built to make autonomous targeting and killing decisions. Will the U.S. military forgo such tools in its arsenal while adversaries, unrestrained by ethics, develop autonomous lethal drones? Will the U.S. unilaterally commit to restrictions on the use of AI for lethal military actions?

The consequences of any such decisions are concerning. As with many technologies, the genie is probably already out of the bottle. Autonomous lethal military drones are already being developed and deployed on battlefields around the world. AI may be intelligent enough to distinguish an adversary tank from a friendly one, but the question is, should the AI decide to pull the trigger?

Within the fog of war, mistakes will happen, and fatal errors can occur. Are we ready to allow robots and drones equipped with AI to decide friend versus foe, who to terminate and who to spare? Ultimately, psychology can study how humans make critical decisions. Yet we must decide as a society how to train and manage AI's decision-making, including how it is used to sift through our personal data and whether it is allowed to equip lethal autonomous drones and robots to make life-or-death decisions.

1. https://www.anthropic.com/news/statement-department-of-war; https://www.npr.org/2026/02/27/nx-s1-5727656/what-to-know-about-the-showdown-between-ai-company-anthropic-and-the-pentagon

2. Mishra, S. (2014). Decision-making under risk: Integrating perspectives from biology, economics, and psychology. Personality and Social Psychology Review, 18(3). https://doi.org/10.1177/1088868314530517
