What’s more likely to be sentient: an ant or ChatGPT?
Please don’t overthink this one.
Sentience is hot these days. Partly because of the development of impressive new AI systems, everyone seems to be asking: How do we know if something is sentient?
While consciousness means simply having a subjective point of view on the world — a feeling of what it’s like to be you — sentience is the capacity to have conscious experiences that are valenced, meaning they feel bad (pain) or good (pleasure). It matters for ethics, because a lot of people think that if an entity is sentient, it deserves to be in our moral circle: the imaginary boundary we draw around those we consider worthy of moral consideration.
While our moral circle has expanded over the centuries to include more people and more nonhuman animals, there are some edge cases we’re collectively unsure about. Should insects have moral rights? What about future AI systems that could potentially become sentient?
The philosopher Jeff Sebo is an expert on this; he literally wrote a book called The Moral Circle. He argues that it’s helpful to investigate all potentially sentient beings — from bugs to future AIs — in broadly similar ways. So, after receiving a lot of reader questions about bugs and AIs, and responding to both in recent installments of my Your Mileage May Vary advice column, I reached out to him to talk about how we assess sentience, whether it’s hypocritical to worry about AI welfare while killing insects without a second thought, and why he developed a thought experiment called “the rebugnant conclusion.” Our conversation, edited for length and clarity, follows.
What if absolutely everything is conscious?
How can we go about assessing whether some creature — say, an insect — is sentient?
Our understanding of insect sentience is still limited, in part because we still lack a settled theory of sentience. But we can make progress through “the marker method.”
The basic idea [for this method] is that we can look for features in animals that correlate with feelings in humans. For example, behaviorally, we can ask: Do other animals nurse their wounds? Do they respond to analgesics like we do? And anatomically, we can ask: Do they have systems for detecting harmful stimuli and carrying that information to the brain?
This method is imperfect — the presence of these features is not proof of sentience, and the absence is not proof of non-sentience. But when we find many of these features together, it can count as evidence.
What do we find when we look for these features in insects? At least some insects have systems for detecting harmful stimuli, pathways for carrying that information to the brain, and brain regions for integrating information and making flexible decisions. For example, some insects become more sensitive after an injury, and they also weigh the avoidance of harm against the pursuit of other goals. Some insects even engage in play behaviors — you can find cute videos of bumblebees playing with wooden balls — suggesting that they may be able to experience positive states like joy. Again, none of this is proof of sentience. None of it establishes certainty. But it does count as evidence.
You’ve said that you think insects are about 20 to 40 percent likely to be sentient. How do you personally deal with bugs that come into your home?
For me, taking insect welfare seriously means reducing harm to insects…