The 2,000-year-old debate that reveals AI’s biggest problem
Almost 2,000 years before ChatGPT was invented, two men had a debate that can teach us a lot about AI’s future. Their names were Eliezer and Yoshua.
No, I’m not talking about Eliezer Yudkowsky, who recently published a bestselling book claiming that AI is going to kill everyone, or Yoshua Bengio, the “godfather of AI” and the most cited living scientist in the world — though I did discuss the 2,000-year-old debate with both of them. I’m talking about Rabbi Eliezer and Rabbi Yoshua, two ancient sages from the first century.
According to a famous story in the Talmud, the central text of Jewish law, Rabbi Eliezer was adamant that he was right about a certain legal question, but the other sages disagreed. So Rabbi Eliezer performed a bunch of miraculous feats intended to prove that God was on his side. He made a carob tree uproot itself and scurry away. He made a stream run backward. He made the walls of the study hall begin to cave in. Finally, he declared: If I’m right, a voice from the heavens will prove it!
What do you know? A heavenly voice came booming down to announce that Rabbi Eliezer was right. Still, the sages were unimpressed. Rabbi Yoshua insisted: “The Torah is not in heaven!” In other words, when it comes to the law, it doesn’t matter what any divine voice says — only what humans decide. Since a majority of sages disagreed with Rabbi Eliezer, he was overruled.
Key takeaways

Experts talk about aligning AI with human values. But “solving alignment” doesn’t mean much if it yields AI that leads to the loss of human agency.

Fast-forward 2,000 years and we’re having essentially the same debate — just replace “divine voice” with “AI god.”
Today, the AI industry’s biggest players aren’t just trying to build a helpful chatbot, but a “superintelligence” that is vastly smarter than humans and unimaginably powerful. This shifts the goalposts from building a handy tool to building a god. When OpenAI CEO Sam Altman says he’s making “magic intelligence in the sky,” he doesn’t just have in mind ChatGPT as we know it today; he envisions “nearly-limitless intelligence” that can achieve “the discovery of all of physics” and then some. Some AI researchers hypothesize that superintelligence would end up making major decisions for humans — either acting autonomously or through humans who feel compelled to defer to its superior judgment.
As we work toward superintelligence, AI companies acknowledge, we’ll need to solve the “alignment problem” — how to get AI systems to reliably do what humans really want them to do, aligning them with human values. But their commitment to solving that problem obscures a bigger issue.
Yes, we want companies to stop AIs from acting in harmful, biased, or deceitful ways. But treating alignment as a technical problem isn’t enough, especially as the industry’s ambition shifts to building a god. That ambition requires us to ask: Even if we can somehow build an all-knowing, supremely powerful machine, and even if we can somehow align it with moral values so that it’s also deeply good… should we? Or is it just a bad idea to build an AI god — no matter how perfectly aligned it is on the technical level — because it would squeeze out space for human choice and thus render human life meaningless?
I asked Eliezer Yudkowsky and Yoshua Bengio whether they agree with their ancient namesakes. But before I…