Can California show the way forward on AI safety?
Last week, California state Senator Scott Wiener (D-San Francisco) introduced a landmark new piece of AI legislation aimed at “establishing clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.”
It’s a well-written, politically astute approach to regulating AI, narrowly focused on the companies building the largest-scale models and the possibility that those massive efforts could cause mass harm.
As it has in fields from car emissions to climate change, California’s legislation could provide a model for national regulation, which looks likely to take much longer. But whether or not Wiener’s bill makes it through the statehouse in its current form, its existence signals that politicians are starting to take tech leaders seriously when they claim to be building radical, world-transforming technologies that pose significant safety risks, and ceasing to take them seriously when they claim, as some do, that they should be able to do so with absolutely no oversight.
What the California AI bill gets right
One challenge of regulating powerful AI systems is defining just what you mean by “powerful AI systems.” We’re in the middle of an AI hype cycle, and every company in Silicon Valley claims to be using AI, whether that means building customer service chatbots, day-trading algorithms, general intelligences capable of convincingly mimicking humans, or even literal killer robots.
Defining the question is vital, because AI has enormous economic potential, and clumsy, excessively stringent regulations that crack down on beneficial systems could do enormous economic damage while doing surprisingly little about the genuinely dangerous systems the bill aims to address.