The World Is Not Prepared for an AI Emergency
Picture waking up to find the internet flickering, card payments failing, ambulances heading to the wrong address, and emergency broadcasts you are no longer sure you can trust. Whether caused by a model malfunction, criminal use, or an escalating cyber shock, an AI-driven crisis could move across borders quickly.
In many cases, the first signs of an AI emergency would likely look like a generic outage or security failure. Only later, if at all, would it become clear that AI systems had played a material role.
Some governments and companies have begun to build guardrails to manage the risks of such an emergency. The European Union's AI Act, the United States National Institute of Standards and Technology's AI Risk Management Framework, the G7 Hiroshima AI Process, and international technical standards all aim to prevent harm. Cybersecurity agencies and infrastructure operators also have runbooks for hacking attempts, outages, and routine system failures. What is missing is not the technical playbook for patching servers or restoring networks. It is the plan for preventing social panic and a breakdown in trust, diplomacy, and basic communication if AI sits at the center of a fast-moving crisis.
Preventing an AI emergency is only half the job; the other half, preparedness and response, is still missing from AI governance. Who decides that an AI incident has become an international emergency? Who speaks to the public when false messages are flooding their feeds? Who keeps channels open between governments if normal lines are compromised?
Governments can, and must, establish AI emergency response plans.