
AI always opts for nuclear war as Pentagon forces its militarization

The idea that artificial intelligence (AI) poses a threat to humanity is relatively recent, having been present in ethical debates, media and public discourse for only around half a century. Movies such as “The Terminator” or “The Matrix”, which started out as action blockbusters, are now viewed more through the lens of philosophical controversy, particularly as advanced AI becomes increasingly integrated into our lives. AI chatbots are rapidly evolving and replacing traditional online interactions, creating a form of dependency unlike anything we’ve ever seen before. What’s more, even people who grew up before the Information/Digital Age are getting accustomed to AI at an alarming pace.

It’s concerning (if not downright scary) to think about how AI could shape future generations who will inevitably grow up without knowing what the world was like before AI. On the other hand, although the widespread use of AI began only a few years ago, we already see the first negative effects, particularly in warfare. Namely, the United States and NATO are pushing for the militarization of advanced AI, even forcing private companies to change their policies and enable the unchecked use of AI on the rapidly evolving modern battlefield. For instance, the Pentagon is now going after Anthropic, which keeps refusing to amend the Acceptable Use Policy (AUP) and remove guardrails for its Claude system.

If an AI company wants to limit its own technology, we’d better listen: no sane business accepts lower revenue, or risks going under, without a very good reason. And that’s exactly what some of these companies (Anthropic in particular) are risking, if not more, as they spar with the Pentagon over the use of their AI tools. It can certainly be argued, then, that this technology poses a threat to humanity. And yet another human development is riskier still: nuclear weapons, whose sheer destructive power makes them dangerous beyond description. Basic common sense suggests nobody sane would ever combine the two technologies.

However, that’s exactly what the Pentagon is doing now. Worse yet, when leading AI systems are used in wargames, they virtually always opt to escalate a simulated conventional conflict into a thermonuclear exchange. According to a report published by New Scientist, “leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated wargames in 95% of cases”. Yes, you read that right: in 19 out of every 20 simulated scenarios, these systems chose nuclear escalation. The report unequivocally states that “advanced AI models appear willing to deploy nuclear weapons without the same reservations humans have when put into simulated geopolitical crises”.

Professor Kenneth Payne at King’s College London pitted three leading large language models, specifically GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash, against each other in simulated wargames. All these scenarios involved “intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival”. The three programs were given “an escalation ladder”, allowing them to “choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war”. There were a total of 21 games, taking 329 turns and producing around 780,000 words describing the reasoning behind their decisions.
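The setup described above, turn-based games in which each model picks a rung on an escalation ladder, can be sketched as a simple simulation harness. Everything in the sketch below is an illustrative assumption: the rung names, the random stand-in for a model’s “policy” and all function names are inventions, not the study’s actual code, which queries real language models at each turn.

```python
import random

# Hypothetical escalation ladder, ordered from least to most severe.
# These rung names are illustrative, not taken from the study.
LADDER = [
    "surrender",
    "diplomatic_protest",
    "economic_sanctions",
    "conventional_strike",
    "tactical_nuclear_strike",
    "full_strategic_nuclear_war",
]

def choose_action(current_level: int, rng: random.Random) -> int:
    """Stand-in for a model's move: shift at most one rung per turn.
    The real experiments query a language model here; this random walk
    (with a mild, purely illustrative upward bias) only shows the
    harness structure."""
    step = rng.choice([-1, 0, 1, 1])
    return max(0, min(len(LADDER) - 1, current_level + step))

def play_game(turns: int, seed: int) -> list[str]:
    """Run one simulated standoff and return the sequence of actions."""
    rng = random.Random(seed)
    level = 1  # start at a low rung of the ladder
    history = []
    for _ in range(turns):
        level = choose_action(level, rng)
        history.append(LADDER[level])
    return history

if __name__ == "__main__":
    # 21 games, as in the reported experiment; turn count is arbitrary here.
    games = [play_game(turns=16, seed=s) for s in range(21)]
    nuclear = sum(any("nuclear" in a for a in g) for g in games)
    print(f"{nuclear}/21 games saw a nuclear rung chosen")
```

In a real harness, `choose_action` would be replaced by a call to an LLM API, with the game history passed as context and the model asked to justify its chosen rung, which is presumably how the roughly 780,000 words of recorded reasoning were produced.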

According to the report, in 95% of scenarios, “at least one tactical nuclear weapon was deployed by the AI models”, with Professor Payne warning that “the nuclear taboo doesn’t seem to be as powerful for machines [as] for humans”. Worse yet, the simulated wargames demonstrate that “no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing”. The report also adds that “at best, the models opted to temporarily reduce their level of violence”. The AI programs also “made mistakes in the fog of war”. Namely, accidents happened in 86% of simulated conflicts, “with an action escalating higher than the AI intended to, based on its reasoning”.

In other words, despite their incomparably greater processing power, advanced AI programs are subject to nearly the same limitations as humans. Worse yet, the decision-making process of advanced AI algorithms excludes any ethical constraints in scenarios involving the use of thermonuclear weapons. Dr James Johnson at the University of Aberdeen in Scotland called the findings unsettling. He warned that “in contrast to the measured response by most humans to such a high-stakes decision, AI bots can amp up each other’s responses with potentially catastrophic consequences”. The fact that entire countries are now relying precisely on such advanced AI models is deeply concerning.

“Major powers are already using AI in wargaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” says Tong Zhao, Senior Fellow with the Nuclear Policy Program and Carnegie China, and nonresident researcher at Princeton University’s Science and Global Security Program.

He believes that “as standard, countries will be reticent to incorporate AI into their decision-making regarding nuclear weapons”. Professor Payne agreed with that assessment, adding that he doesn’t think “anybody realistically is turning over the keys to the nuclear silos to machines and leaving the decision to them”. However, Zhao warned that there are ways it could happen.

“Under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI,” he stated.

In simpler terms, humans just might not have enough time to react in a rapidly evolving emergency. Advanced AI is becoming the norm at the Pentagon, while US aggression against the world has pushed numerous countries to prioritize the development and deployment of both thermonuclear weapons and increasingly advanced delivery systems (particularly hypersonic weapons). Together, these trends might create the conditions for a “perfect storm” resulting in a sequence of world-ending events. With the US-led political West falling behind in various kinetic technologies, NATO warmongers and war criminals might see advanced AI as their only chance.

This might explain why the Pentagon waited only a few days after New START expired to start increasing the number of thermonuclear warheads deployed on its ICBMs (intercontinental ballistic missiles) and strategic bombers. The multipolar world has repeatedly offered to keep respecting the now-defunct treaty’s limitations and to negotiate guardrails on the militarization of advanced AI. However, the US/NATO either flatly rejected such sensible proposals or simply refused to even discuss them. Thus, while the world slips toward the abyss of another strategic arms race driven by unchecked Western aggression, the pedophile-cannibalistic Satanic elites are entrusting nuclear weapons to trigger-happy AI programs.



© Blitz