Can machines suffer?
‘Seal Pup on Beach.’ That’s what the makeshift sign read, on a wooden post along Ireland’s eastern coastline. At the water’s edge, I discovered a pair of volunteers watching over a small, sentient being. It lay hairless, blinking slowly as if struggling to see the world around it. Its pale, almost translucent skin blended with the white rocks on the stony beach. The volunteers told me that the pups’ fur is not waterproof in their first few weeks of life, so they cannot enter the sea. This leaves them entirely vulnerable to predation (often from roaming dogs) as they wait for their mothers to return with food.
And so, the volunteers were standing guard, protecting a soft, fragile being capable of distress, fear and suffering. This may seem an obvious thing to do. But not long ago, the greatest threat to seal pups was humans themselves. Across northern Europe and Canada during the 19th and 20th centuries, workers roamed coastlines and pack ice, beating infant seals to death with clubs. White ice was smeared red. Hundreds of thousands of seals were killed each year, slaughtered to supply raw industrial material for clothing, oil and meat. Even if their suffering were real, many considered it inconsequential. Seals, like many other creatures, were seen as little more than tools or resources.
However, views of animal suffering and pain soon shifted. In 1881, Henry Wood Elliott reported on the ‘wholesale destruction’ of seals in Alaska, leading to international outcry and the establishment of new treaties. And in ‘The White Seal’ (1893), Rudyard Kipling told a sympathetic story from the perspective of the hunted animals themselves. If someone killed a seal pup today, they would likely be charged in a court of law.
This widening of the so-called moral circle – the slow extension of empathy and rights beyond the boundaries we once took for granted – has changed many of our relationships with other species. It is why factory-farmed animals, who often lead short and painful lives, are killed in ways that tend to minimise suffering. It is why cosmetic companies offer ‘cruelty free’ products, and why vegetarianism and veganism have become more popular in many countries. However, it is not always clear what truly counts as ‘suffering’, and therefore not always clear how far the moral circle should extend. Are there limits to whom, or what, our empathy should extend?
In the 2020s, the widening moral circle faces challenges that take us beyond biological life. ‘Thinking machines’ have arrived and, with them, new problems that ask us to consider what suffering might look like when it is entirely disconnected from biology.
In 1780, Jeremy Bentham framed the moral status of animals around a simple criterion: ‘The question is not, Can they reason? nor, Can they talk? but, Can they suffer?’ Today, as we encounter the possibility of AI ‘others’, that question must be stretched further: can they suffer without a body? If they can, we may need to radically rethink our relationships with artificial minds.
To avoid repeating past moral errors, we must first understand how humans have misjudged sentience before. Historically, such error has often stemmed from confident ignorance – from assuming that the absence of proof was proof of absence.
In the 17th century, René Descartes advanced a new form of rationalism that became known as Cartesianism, and with it a view of animals as ‘automata’: complex machines with bodies, but without consciousness, which were therefore incapable of genuine feeling or pain. This mechanistic picture aligned neatly with the scientific revolution’s fascination with physical laws and mechanical analogies, but it had ethical consequences. If an animal’s cries were no more morally significant than the creaking of a hinge, then cruelty could be reframed as nothing more than a manipulation of matter. For centuries, this idea provided an intellectual alibi for practices that today would be recognised as causing severe suffering.
This pattern – the denial of moral status to embodied beings on the grounds of an assumed absence of inner life – has repeated across human history. In the context of slavery, many societies maintained that enslaved peoples lacked the same moral worth, autonomy or mental capacities as their free counterparts. These claims were not only false; they were often actively maintained against mounting counterevidence, serving as justifications for continuing exploitation. It took social movements, political struggle, and an accumulation of lived testimony to force a recognition that should have been morally obvious from the beginning.
The 20th century saw a new extension of this moral conversation: the question of rights for nonhuman animals. Extending Bentham’s ideas, the philosopher Peter Singer argues in Animal Liberation (1975) that the relevant criterion for moral consideration is the capacity to suffer, not membership of a particular species. Singer’s utilitarian framing challenged readers to confront the parallels between past moral exclusions and the routine suffering of animals in industrial farming, laboratory testing and other human enterprises.
Singer’s work built on a long philosophical lineage, but it gave public voice to a principle that had been largely absent from mainstream discourse: if a being can suffer, its suffering matters morally. This history frames the lens through which we now might consider the moral status of AI.
The moral through-line across …