AI Has No Soul — Unless We Demand It
Recently, the Defense Department tried to strong-arm Anthropic, the maker of the AI system Claude, into removing ethical constraints on its technology. Anthropic had drawn clear lines: no domestic mass surveillance, no fully autonomous lethal weapons. The administration wanted those lines erased, insisting on what it called an “any lawful use” standard — meaning Claude should do whatever the law technically permits, with no additional ethical constraints. When Anthropic refused, the company was threatened with being designated a “supply chain risk,” a move that could effectively destroy it. The administration then signed a new AI contract with OpenAI, maker of ChatGPT, whose leadership has cultivated warmer ties with the White House.
Meanwhile, reports emerged that Claude had already been used in military operations against Iran, even after such use had reportedly been barred. Reportedly among the possible early targets: an Iranian elementary school where at least 175 people, mostly children, were killed.
After reading that Anthropic had refused the Pentagon’s demands, I switched from ChatGPT to Claude — which I sometimes use as a research tool, including, in full disclosure, for this article. Yet within days came the reports from Iran. Which left me with a question I haven’t resolved: did I make the more ethical choice — or merely the less bad one? Or perhaps not even that?
Anthropic’s ethical commitments appear genuine — but they exist only because its CEO, Dario Amodei, decided they should, and will last exactly as long as he decides they’re worth keeping. The administration is in one sense more accountable — it was elected. But democratic accountability requires democratic institutions that function, and those are precisely what the current administration has been undermining and dismantling. Neither actor can be trusted to act first and foremost in the public interest. And the dangers of AI extend far beyond military use and surveillance: AI raises profound concerns about labor exploitation, environmental damage, intellectual property theft, disinformation, and the concentration of unprecedented power in a handful of private hands.
Given these concerns, some advocate refraining from using AI, or even banning the technology altogether. But unplugging the machines — or at least unplugging ourselves from them — is no longer a practical option for most of us. The technology is already deeply embedded in modern life. Realistically, the question is no longer whether we should use AI but how to use it as responsibly as possible.
Jewish legend offers an instructive precedent. The Maharal of Prague created a creature of extraordinary power called a golem to protect his community from violence. Wise enough to know that even a protective creation could become dangerous, he built in a mechanism of reversal: on the golem’s forehead he inscribed emet — truth, one of the names of God. Erase the aleph, the first letter, and emet becomes met — death. The golem falls. We, however, were not so wise. We have animated a golem of our own — powerful, useful, and yet also dangerous — without building in any mechanism of reversal or even determining who would have the authority to use one. The question before us now is not how to unmake what we have made, but how to live with it without being destroyed by it. Jewish tradition offers two kinds of guidance.
The first is the concept of mesaye’ah yedei ovrei aveirah — a prohibition on strengthening the hands of wrongdoers (Bavli Avodah Zarah 55b; Mishneh Torah, Laws of Murder 12:14). A seller who abandons ethical constraints gains a competitive advantage; if buyers reward it, every competitor faces pressure to follow — which is why we are forbidden from patronizing such sellers even when our individual choice won’t change their behavior. The race to the bottom becomes self-fulfilling.
Critically, mesaye’ah does not require proof of wrongdoing — only reasonable presumption from what is publicly known. Before adopting an AI tool, this principle would require one to investigate: What does the company permit its technology to be used for? What are its policies on issues from military use to labor practices, environmental impact, and data privacy? When tested, has the company held its ethical lines — or abandoned them? Where the answers give grounds for concern, mesaye’ah says: find another seller. Even in the absence of a morally pure choice, we can still refuse to participate in a race to the bottom.
The second kind of guidance addresses what mesaye’ah cannot — a situation in which the powerful have captured the rules of the marketplace itself. The biblical book of Nehemiah speaks to that. When Nehemiah discovered that wealthy elites were exploiting their neighbors during the very crisis in which the community was rebuilding Jerusalem’s walls — seizing fields, charging predatory interest, taking children as debt collateral — he didn’t appeal to individual conscience. He convened a public assembly, named what was happening, and bound the powerful with a public oath (Nehemiah 5:1-13). A handful of powerful actors exploiting a collective crisis for private gain: the parallel to our moment is not subtle.
We should call for legislation establishing what companies may build, what the military may deploy, and what protections citizens are guaranteed — with enforcement mechanisms that survive changes in administration. And since this technology transcends borders, we need the kind of internationally agreed-upon framework we once built around nuclear weapons. These demands may not pass. But Nehemiah did not wait for favorable conditions. He convened the assembly anyway.
I will, in all likelihood, keep using Claude — still uncertain whether that is the ethical choice or merely the least bad one. But I am striving to make my choices about AI with open eyes: investigating before I use, asking the questions mesaye’ah requires, and refusing to treat personal choices as a substitute for the harder collective action this moment calls for. That is the work Jewish tradition demands of us — and I fear we have not yet begun it seriously.