One quote about AI I think about a lot is something that Jack Clark, a co-founder of the artificial intelligence company Anthropic, told me last year: “It’s a real weird thing that this is not a government project.”
Clark’s point was that the staff at Anthropic, and much of the staff at major competitors like OpenAI and Google DeepMind, genuinely believe that AI is not just a major innovation but a huge shift in human history, effectively the creation of a new species that will eventually surpass human intelligence and have the power to determine our fate. This isn’t an ordinary product that a company can sell to willing customers without bothering anybody else too much. It’s something very different.
Maybe you think this viewpoint is reasonable; maybe you think it's grandiose, self-important, and delusional. I honestly think it's too early to say. In 2050, we might look back at these dire AI warnings as technologists getting high on their own products, or we might look around at a society governed by ubiquitous AIs and think, "They had a point." But the case for governments to take a more active role now, precisely in case the latter scenario comes true, is pretty strong.
I've written a bit about what form that government role could take, and to date most of the proposals involve mandating that sufficiently large AI models be tested for certain dangers: bias against certain groups, security vulnerabilities, the ability to be used for dangerous purposes like building weapons, and "agentic" properties indicating that they pursue goals other than the ones we humans intentionally give them. Regulating for these risks would require building out major new government institutions, and would ask a lot of them, not least that they not become captured by the AI companies they need to regulate. (Notably, lobbying by AI-related companies increased 185 percent in 2023 compared to the year before, according to data gathered by OpenSecrets for CNBC.)
As regulatory efforts go, this one is high...