Can an A.I. Company Ever Be Good?
Mr. Ford is an essayist and a technologist.
Artificial intelligence can be wondrous, but the technology underneath is more than a little monstrous. It eats up all the words in the world, from blogs to books, often without permission. It burns whole forests’ worth of energy, digesting that raw material into its models, and gulps billions of gallons of water to cool down. These are the same qualities we perceive in Godzilla, but distributed. Is it any wonder that the Japanese word “kaiju,” or strange beast, has “AI” smack in the middle?
Mere greed didn’t get us here. In fact, ethics did. The big A.I. labs’ starry-eyed founders believed that the only way to stop the looming threat of a superintelligence that might kill us all was to create an aligned A.I. that would remain fond of humans. A friendly Godzilla could stop bad Godzillas before they got to Tokyo Bay. Sam Altman, Elon Musk and others came together to build the world’s defense squad, which they called OpenAI. They built safety teams on which employees spent their days poking at the Godzilla eggs (testing chatbots) to make sure they wouldn’t kill everyone when hatched. (One of the heads of those OpenAI safety teams was Dario Amodei, who left in 2020 to help found an even more aligned company, Anthropic.)
Companies are companies. They will, eventually, be expected to turn a profit. Humanistic goals will be subsumed by data-driven metrics. The idea of doing good brings everyone together — but somehow, “good” ends up a contested border, with angry people on either side. An A.I. company may want to do good, but it cannot do so on its own. It needs to be guided by rules and regulations.
A good portion of the earlier crop of A.I. thinking came out of the effective altruism movement, which calls for maximizing the good you can do by pursuing research-driven philanthropy. (One prominent practitioner is the incarcerated crypto entrepreneur Sam Bankman-Fried.) It’s not a simple credo, but a big part of the ideology is oriented around preventing long-term threats to humanity. For instance, if you believe that A.I. can become self-aware enough to modify itself and get smarter, then you might imagine it would modify itself into a state of total control of all the world’s digital resources.
What can you, a bright, nerdy person, do to stop this? Only one thing: Build your superintelligence first and make it good, like you. Whatever your methods, they will seem valid — spinning up crypto schemes, possibly breaking copyright laws so you can feed your model every pirated text on the internet, blowing a hole in the labor market or raising the earth’s temperature. Preventing an A.I. calamity is just that important.
Ten years ago, this was normal Silicon Valley conversation — solipsistic and nerdy, a big, expensive thought experiment in which people like Mr. Musk and Stephen Hawking would opine about how A.I. must be built to serve humans.
