
The New Politics of the AI Apocalypse

02.02.2026

The first few weeks of 2026 saw yet another swing in popular narratives about AI. Toward the end of last year, notable AI insiders were expressing skepticism about the path ahead for research, and a financial bubble had become a topic of open conversation, even among implicated CEOs. Soon, though, updates to some major models would shift the industry’s focus again: Suddenly, they were much better at writing software.

A lot of the surge in enthusiasm for AI coding was driven by Claude Code, a product popular with developers and built by Anthropic, a company that long ago bet on automating the tech industry as its path to making money. This meant attention for Anthropic and also for its co-founder and CEO, Dario Amodei. Now, he had the industry’s mic. He wanted to talk about Claude Code, sure. More than that, though, he wanted to talk about how he thinks things might go very, very wrong.

He does this a lot. In 2024, Amodei published “Machines of Loving Grace,” a long, cautiously optimistic, and widely read essay about the possibilities of what he termed “powerful AI,” analogizing its deployment to a “country of geniuses in a datacenter” and making the case that, with a great deal of effort, it might be harnessed to achieve “the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights.” Written in part as a response to perceptions that Amodei was an AI “doomer,” the essay was an attempt to sketch longer-term AI outcomes if, as he writes, “everything goes right.” Despite this objective, “Machines” is a piece of writing haunted by threats and “risk,” in both the general and more specialized, AI-centric meaning of the word. In the time since, he’s spent a lot of time at conferences and on TV talking about China, job disruption, and more generally being prepared. This week, he published a follow-up to “Machines” called “The Adolescence of Technology,” which is unapologetically intended as a warning about what might happen along the path to prosperity, positioned as an attempt to “confront the rite of passage itself” and outline the “risks that we are about to face.”

On AI development and safety, it’s provocative, interesting, and still fundamentally optimistic, a tall set of wild premises stacked by someone who argues for them persuasively, from a position of authority, and as a representative of his industry’s moment. Its calls to action, though, amount to something much different: a reluctant political manifesto that can seem profoundly out of step with the wider world around it, a string of half-convinced shoulds and maybes sitting in awkward contrast with an account of AI progress and safety full of wills and cans. As a work of speculative fiction about extraordinary technologies that are difficult to think about, it makes a case for a weird, extreme, but manageable future. As an exhortation about what needs to be done in response to such technologies, and as a preview of how American politics might process them, it’s about as grim as it gets.

A few caveats and some background here: Amodei is the leader of a for-profit company reportedly valued at around $350 billion after a raise that closed earlier this month, so anything he says should be read with that in mind. Anthropic has also long identified itself as the responsible lab and has gone to great lengths to demonstrate — and publicize — its dedication to mitigating risk and avoiding undesirable outcomes, often through published research and public statements by Amodei and others. (His writing is clearly influenced by the effective altruism movement, with which Anthropic has many connections.) Amodei has donated to liberal causes and candidates, publicly criticized Donald Trump, and mentioned “the horror we’re seeing in Minnesota” in his introduction of “Adolescence” on X; in turn, he has been cast by White House AI tsar David Sacks as part of a group of “committed leftists” and “doomers” who are either getting in the way of AI progress or trying to shape it ideologically. His company is currently at odds with the Defense Department, with which it contracts, over how its models can be used.

Long-running debates among researchers, rationalists, and tech leaders have begun to sort — or, more accurately, been forcefully sorted — into an inapt but mandatory American partisan frame, in which Amodei (AI essayist) and perhaps DeepMind’s Demis Hassabis are associated with liberalism; Musk (AI tweeter) and perhaps Zuckerberg with the MAGA accelerationists; and Sam Altman (AI blogger) is just whatever he needs to be at a given place and time. In reality, their respective approaches to AI development — spending as much money as they can raise, engaging in an explicit race, and donating to whoever they need to — have a great deal in common. (In the even dumber and yet similarly important frame of the hyperonline AI industry’s X feeds, with their overlapping factions and bizarre fandoms, you might say that Dario is the Mario to Musk’s Wario? Anyway, back to the matter of avoiding the end of the world.)

Amodei opens “Adolescence” with numerous caveats of his own, acknowledging that there are “plenty of ways in which the concerns I’m raising in this piece could be moot,” either as a result of AI advances stalling or not occurring “anywhere near as fast” as he imagines or by dangers simply failing to “materialize” for other reasons. Indeed, while it’s expansive as a summary of AI risks…

© Daily Intelligencer