
What is the worst-case scenario for AI? California lawmakers want to know.

12.09.2025

When it comes to AI, as California goes, so goes the nation. The biggest state in the US by population is also the central hub of AI innovation for the entire globe, home to 32 of the world’s top 50 AI companies. That size and influence have given the Golden State the weight to become a regulatory trailblazer, setting the tone for the rest of the country on environmental, labor, and consumer protection regulations — and more recently, AI as well. Now, following the dramatic defeat of a proposed federal moratorium on states regulating AI in July, California policymakers see a limited window of opportunity to set the stage for the rest of the country’s AI laws.

This week, the California State Assembly is set to vote on SB 53, a bill that would require transparency reports from the developers of highly powerful, “frontier” AI models. The models targeted represent the cutting edge of AI: extremely adept generative systems that require massive amounts of data and computing power, like OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, and Anthropic’s Claude. The bill, which has already passed the state Senate, must clear the Assembly before it goes to the governor to be signed into law or vetoed.

AI can offer tremendous benefits, but, as the bill is meant to address, it is not without risks. And while there is no shortage of existing harms, from job displacement to bias, SB 53 focuses on possible “catastrophic risks” from AI. Such risks include AI-enabled biological weapons attacks and rogue systems carrying out cyberattacks or other criminal activity that could conceivably bring down critical infrastructure. These are widespread disasters that could plausibly threaten human civilization at the local, national, and global levels, the kind of AI-driven catastrophes that have not yet occurred, rather than already-realized, more personal harms like AI deepfakes.

Exactly what constitutes a catastrophic risk is up for debate, but SB 53 defines it as a “foreseeable and material risk” of an event in which a frontier model plays a meaningful role in causing more than 50 casualties or over $1 billion in damages. How fault would be determined in practice would be left to the courts to interpret. It is hard to define catastrophic risk in law when the definition is far from settled, but doing so can help protect against both near- and long-term consequences.

By itself, a single state bill focused on increased transparency will probably not be enough to prevent devastating cyberattacks and AI-enabled chemical, biological, radiological, and nuclear weapons. But the bill represents an effort to regulate this fast-moving technology before it outpaces our efforts at oversight.

SB 53, explained

SB 53 is the third state-level bill to specifically target AI’s catastrophic risks, after California’s SB 1047, which passed the legislature only to be vetoed by the governor, and New York’s Responsible AI Safety and Education (RAISE) Act, which recently passed the New York legislature and is now awaiting Gov. Kathy Hochul’s approval.

SB 53, which was introduced by state Sen. Scott Wiener in February, requires frontier AI companies to develop safety frameworks that specifically detail how they approach catastrophic risk reduction. Before deploying their models, companies would…

© Vox