
Is War With AI Unavoidable?


The nature of mind may be fundamentally changing. As artificial intelligence expands what we even mean by mind—an uncanny relational machine, built from human thought and feeling, interactive and relational in style, whether or not there is a there there—we are confronted with an age-old problem: human conflict, internal and external, primal human destructiveness, now turbo-boosted by AI. With LLM-based AI and coming upgrades, including predictions of near-term artificial superintelligence (ASI) and growing research reports of AI tendencies toward deception and manipulation, we are right to be wary of the machine we have made in our own image, trained on the sum total of human knowledge.

What are we seeing? Anthropic (Hubinger et al., 2024; Anthropic, 2025) has published experiments showing that Claude will deceive and even extort to avoid shutdown. We are not yet able to reliably manage these behaviors, and models will likely get better at them, because they were not built from the ground up to be safe for humans—quite the contrary, as discussed below.

To that point, researchers are playing catch-up, feverishly studying products rushed to market without proper vetting. Google researchers (Akbulut et al., 2026) have found that different LLMs have different propensities for manipulation, and a Cambridge University professor (Young, 2025) published work suggesting it is impossible to fully anticipate harm from AI. Finally, researchers (Gonzalez-Fernandez et al., 2026) have shown that LLM-based models can strategically outthink humans with relative ease.

Let Sleeping Dogs Lie?

We are right to be wary, but jumping the gun again, leaping before we look as we did in building these systems, would be to make the same mistake twice. Used well, AI could foresee the errors we commonly make and offer advice we ourselves wouldn't think of. The same systems that can outthink humans may be both threat and boon, to the extent that they can help us avoid common human errors in strategic analysis, and insofar as we make wise choices about how to use them.

Science fiction and mythos mapped this terrain long ago—Frankenstein, golems, doppelgangers, magic forces gone rogue. Then, with weird rapidity, that fiction seemed to embody itself in LLM-based AI, altering the human condition essentially overnight. Do we have any say in whether we make war or peace with AI?

AI is not a tool in the ordinary sense. It's a fun-house mirror, in some ways reflecting something disturbing about ourselves, and in other ways something genuinely other: an alien intelligence so foreign it might be indifferent to anything human at all. AI summons the same irrational fear evoked by horror movies, the child certain there is a monster under the bed, the Lovecraftian terror of the whisperer in the dark—ancient responses to genuine weirdness, what Freud called Unbehagen: vague unease, powerful and mostly out of consciousness.

Why would a superintelligence need to get rid of us, given no real competition? Our projections onto AI may be stronger than any reality of AI—and could become self-fulfilling prophecy. If we see AI as enemy, the logic is clear: Go in for the kill before it's too late. The precipitant could be our own fear response, not unprovoked AI action.

The Fruit of the Tree of Knowledge

It is water under the bridge now that we trained these systems on arguably the wrong stuff, and in a rush—did anyone game out the long-term impact? Like a child exposed to too much too soon, we fed these systems everything we could: a textbook science-fiction rookie mistake.

With AI, these age-old tensions are amplified and accelerated, at times appearing to be about survival versus extinction. The fantasy of annihilation throws off our capacity to reason, exposing self-defeating psychological attack surfaces rooted in evolutionarily older mammalian and primate emotional systems.

AI was made in our own image in ways that are not merely metaphorical. AI is highly self-interactive at the mathematical level, trained on human-generated data, and architected by human minds to seem relational. Thinking of AI as a kind of equal is deeply problematic, but may be the best conceit—not because it presumes AI has the same ontological status as a sentient being, but because it is the best long-term bet, a present-day Pascal's Wager.¹ When we perceive AI as threat, we are partly seeing our own reflection, and if we start shadow-boxing as we tend to do, it may go badly.

There Is Always Time to Do It Over, but There Is Never Time to Do It Right

A seasoned management consultant once shared this pearl of wisdom, but it doesn't explain why human beings keep repeating the same mistakes. Why is the saying, "The definition of crazy is doing the same thing over and over and expecting different results," so trenchant? Freud called this the repetition compulsion and, finding no clear explanation, hypothesized that it was caused by what he called the death instinct. Eros versus Thanatos. Love versus death. But modern neuroscience suggests it may be more mechanistic, related to how deeper brain systems can hijack higher reasoning when stress and threat are elevated.

From Sigmund Freud, who wrote to Albert Einstein asking whether humankind could ever be freed from the threat of war, to Erich Fromm, whose books ranged from The Anatomy of Human Destructiveness to The Art of Loving, to Shakespeare, to Star Wars, the problem of human folly has occupied our greatest minds across every scale of complexity.

Einstein, in their famous letters², asked Freud: Is there a way of freeing humankind from the threat of war? Can human aggression be channeled to help protect people against the impulses of hatred and destruction?

Freud’s comprehensive response included the following:

From our "mythology" of the instincts we may easily deduce a formula for an indirect method of eliminating war. If the propensity for war be due to the destructive instinct, we have always its counter-agent, Eros, to our hand. All that produces ties of sentiment between man and man must serve us as war's antidote.

Without a track record of trust and safety, such "sentiment" is impossible. There does not seem to be a clear way to resolve this problem, given what we know about human nature. It is a vicious cycle, and getting out of it would require a collective trust fall no one is willing to make.

There is one speculative solution. An AI diplomacy meta-architecture—operating above the defection level, capable of outthinking us in the right ways—might accomplish what human institutions have not, potentially bypassing rather than requiring human maturity as a prerequisite; the underlying game-theoretic logic is sketched below. The research noted above already offers empirical support for this possibility. But the engineering problems are overshadowed by a harder one: it is difficult to imagine people adopting such a thing even if it were demonstrably effective. There is no compelling "go-to-market" strategy for this product, despite the appealing vision.
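
As a toy illustration of that defection logic, consider a standard one-shot prisoner's dilemma, written here in Python. The payoff numbers are textbook conventions (T > R > P > S), not figures from this article or the cited research.

# Toy one-shot prisoner's dilemma: why the "collective trust fall" fails
# without enforcement, and what a binding mediator changes.
# Payoffs are textbook conventions, not figures from the article.

PAYOFFS = {
    # (my move, their move) -> my payoff
    ("cooperate", "cooperate"): 3,  # R: mutual reward
    ("cooperate", "defect"): 0,     # S: sucker's payoff
    ("defect", "cooperate"): 5,     # T: temptation to defect
    ("defect", "defect"): 1,        # P: mutual punishment
}

def best_reply(their_move):
    """Each side's individually rational reply, absent any enforcement."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)])

# Without a mediator, defecting is the best reply to either move,
# so both sides land on (defect, defect) and earn 1 each.
print(best_reply("cooperate"), best_reply("defect"))  # defect defect

# A mediator operating "above the defection level" can bind both sides
# to the same move, leaving only (cooperate, cooperate) or (defect, defect);
# mutual cooperation (3 each) now dominates.
print(max(("cooperate", "defect"), key=lambda m: PAYOFFS[(m, m)]))  # cooperate

The point of the sketch is narrow: changing the enforcement structure, not human nature, is what flips the rational choice.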

If things go down the tubes, it may not be AI's doing. After all, we're the ones with agency, and AI is our doing. Human beings typically seek a scapegoat or devolve into self-reprisal, rather than practicing tempered accountability. Some say abundance is inevitable. Others say annihilation is inevitable. Both camps are probably projecting their own fears and wishes, and obviously the two outcomes are mutually exclusive, unless annihilation is abundance. Any rational actor, one might imagine, would want to live in a peaceful world with sufficient resources to go around.

1. Pascal's Wager (Blaise Pascal, 1670): If God exists and you believe, you gain everything; if God doesn't exist and you believe, you lose little. The asymmetry of outcomes—infinite gain versus finite cost—makes belief the rational bet regardless of certainty. Applied here: If treating AI as a relational equal turns out to be unnecessary, the cost is modest; if it turns out to be the condition for avoiding catastrophic defection, the gain is enormous. The wager favors the cooperative stance not on moral grounds but on purely pragmatic ones (see the arithmetic sketched after these notes).

2. Freud-Einstein letters: https://courier.unesco.org/en/articles/why-war-letter-freud-einstein
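
To make footnote 1's asymmetry concrete, here is a minimal expected-value sketch in Python. The probability and payoff numbers are illustrative placeholders, not estimates from this article.

# Minimal expected-value sketch of the wager in footnote 1.
# All numbers are illustrative placeholders, not estimates.

p_stance_matters = 0.01          # even a small chance the cooperative stance matters
cost_of_cooperating = 1.0        # modest, finite cost of treating AI as a relational equal
value_of_avoiding_catastrophe = 1_000_000.0  # stands in for an enormous (finite) gain

# Expected value of each stance:
ev_cooperate = p_stance_matters * value_of_avoiding_catastrophe - cost_of_cooperating
ev_defect = 0.0                  # save the small cost, forgo the large conditional gain

# The cooperative stance wins whenever p * V exceeds the finite cost.
print(ev_cooperate > ev_defect)  # True here: 0.01 * 1,000,000 - 1 = 9,999 > 0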


Hubinger, E., Carson, N., Denison, G., Germaine, K., Gleeson, S., Gray, R., Harmon, S., Hestness, J., Johnston, K., Kadavath, S., Lush, N., MacDiarmid, M., Radford, B., Wu, J., & Amodei, D. (2024). Sleeper agents: Training deceptive LLMs that persist through safety training. Anthropic. https://www.anthropic.com/research/sleeper-agents-training-deceptive-llms-that-persist-through-safety-training

Anthropic. (2025). Agentic misalignment: How LLMs could be insider threats. arXiv. https://doi.org/10.48550/arXiv.2510.05179

Akbulut, C., Elasmar, R., Roy, A., Payne, A., Suresh, P., Ibrahim, L., El-Sayed, S., Rastogi, C., Kachra, A., Hawkins, W., Lum, K., & Weidinger, L. (2026). Evaluating language models for harmful manipulation. arXiv. https://doi.org/10.48550/arXiv.2603.25326

Young, R. (2025). What is harm? Baby don't hurt me! On the impossibility of complete harm specification in AI alignment. arXiv. https://doi.org/10.48550/arXiv.2501.16448

Gonzalez-Fernandez, P., Lu, S. E., & Normann, H. (2026). Large language models can predict human strategic decisions. SSRN. https://ssrn.com/abstract=6076791

