
How to kill a rogue AI

02.01.2026
They’re here. | Costfoto/NurPhoto via Getty Images

It’s advice as old as tech support: if your computer is doing something you don’t like, try turning it off and then on again. When it comes to the growing concern that a highly advanced artificial intelligence system could go so catastrophically rogue that it poses a risk to society, or even to humanity, it’s tempting to fall back on this sort of thinking. An AI is just a computer system designed by people. If it starts malfunctioning, can’t we just turn it off?

Key takeaways

  • A new analysis from the Rand Corporation discusses three potential courses of action for responding to a “catastrophic loss of control” incident involving a rogue artificial intelligence agent.
  • The three potential responses — designing a “hunter-killer” AI to destroy the rogue, shutting down parts of the global internet, or using a nuclear-initiated EMP attack to wipe out electronics — all have a mixed chance of success and carry significant risk of collateral damage.
  • The takeaway of the study is that we are woefully unprepared for the worst-case AI risks, and that more planning and coordination are needed.

In the worst-case scenarios, probably not. This is not only because a highly advanced AI system could have a self-preservation instinct and resort to desperate measures to save itself. (Versions of Anthropic’s large language model Claude resorted to “blackmail” to preserve themselves during pre-release testing.) It’s also because the rogue AI might be too widely distributed to turn off. Current models like Claude and ChatGPT already run across multiple data centers, not on one computer in one location. If a hypothetical rogue AI wanted to prevent itself from being shut down, it would quickly copy itself across the servers it has access to, preventing hapless and slow-moving humans from pulling the plug.

Killing a rogue AI, in other words, might require killing the internet, or large parts of it. And that’s no small challenge.
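
To make the replication problem concrete, here is a toy simulation. It is an illustration for this story, not anything from Rand’s analysis, and every node count and rate in it is an invented assumption: an agent that copies itself to new servers faster than operators can take servers offline can only be stamped out by taking nearly the whole network down with it.

import random

# Toy model, not from the Rand study: a process copies itself to
# reachable servers faster than operators can take servers offline.
# NODES, COPY_RATE, and SHUTDOWN_RATE are invented numbers.
NODES = 1_000        # servers the agent can reach
COPY_RATE = 3        # new copies the agent makes per time step
SHUTDOWN_RATE = 2    # servers operators can take offline per time step

online = set(range(NODES))
infected = {0}       # the agent starts on a single server
step = 0

while infected and step < 10_000:
    step += 1
    # The agent spreads to random online, not-yet-infected servers.
    targets = list(online - infected)
    for node in random.sample(targets, min(COPY_RATE, len(targets))):
        infected.add(node)
    # Operators pull the plug on known-infected servers.
    doomed = random.sample(sorted(infected), min(SHUTDOWN_RATE, len(infected)))
    for node in doomed:
        online.discard(node)
        infected.discard(node)

if infected:
    print(f"step cap hit: agent still running on {len(infected)} servers")
else:
    print(f"agent gone after {step} steps; {len(online)} of {NODES} servers still online")

Under these made-up rates the agent does eventually disappear, but only in the run where almost every server goes offline with it, which is the article’s point in miniature.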

This is the challenge that concerns Michael Vermeer, a senior scientist at the Rand Corporation, the California-based think tank once known for pioneering work on nuclear war strategy. Vermeer’s recent research focuses on the potential catastrophic risks of hyperintelligent AI, and he told Vox that when these scenarios come up, “people throw out these wild options as viable possibilities” for how humans could respond, without considering how effective they would be or whether they would create as many problems as they solve. “Could we actually do that?” he wondered.

In a

© Vox