Wikipedia Deep Dive

Sentience

Based on Wikipedia: Sentience

Ask most people what separates beings that deserve moral consideration from those that don't, and you'll eventually land on a single word: sentience. But what exactly does it mean for something to be sentient? The answer matters profoundly, because it determines which creatures we believe we can harm without guilt, and which ones demand our ethical consideration.

At its core, sentience is the ability to experience feelings and sensations. Notice what's missing from that definition: reasoning, complex thought, self-awareness, even language. A being can be sentient without being able to solve equations, contemplate its own mortality, or understand abstract concepts. Sentience is about having experiences at all—the capacity to feel something, anything, from the inside.

The Philosophy of Feeling

The word "sentience" emerged in the sixteen thirties, coined by philosophers from the Latin "sentiens," meaning feeling. But defining consciousness has never been straightforward, and different thinkers slice up the territory in different ways.

Antonio Damasio treats sentience as a minimalistic version of consciousness—the bare capacity to feel sensations and emotions, without requiring the more elaborate features we associate with human minds, like creativity, self-awareness, or the ability to think about thinking. It's consciousness at its most stripped down.

Thomas Nagel, in his famous paper "What Is It Like to Be a Bat?", focused on subjective experience itself. Consciousness, he argued, means there is something it is like to be you. When you bite into an apple, there's a particular way that tastes from your perspective. When a bat uses echolocation, there's presumably something it's like to be that bat, even though we can't imagine what that experience feels like. Philosophers call these subjective qualities "qualia"—the redness of red, the painfulness of pain, the particular character of any experience.

Some philosophers, like Colin McGinn, believe we'll never understand how physical processes in the brain produce these subjective experiences. This position is called "new mysterianism." Mysterians don't deny that consciousness exists, or that neuroscience can study most aspects of it. They simply argue that qualia—the felt quality of experience—will forever remain beyond scientific explanation.

Others, like Daniel Dennett, take the opposite view: the very concept of qualia isn't meaningful to begin with. The debate rages on.

Two Flavors of Sentience

David Chalmers draws a crucial distinction that often gets blurred. Sometimes people use "sentience" to mean phenomenal consciousness—the capacity to have any subjective experience whatsoever. Under this definition, a being is sentient if there's something it's like to be that being, period.

But there's a narrower meaning too: affective consciousness. This specifically refers to experiences with positive or negative character—pleasure and pain, comfort and distress, satisfaction and suffering. You might call this "valenced" experience: experience that feels good or bad.

Why does this distinction matter? Because when it comes to ethics, most people care primarily about the narrower definition. The crucial question isn't just whether a being has experiences, but whether it can suffer or flourish, whether its experiences can be good or bad for it.

The Moral Weight of Suffering

Jeremy Bentham, the utilitarian philosopher, crystallized this in seventeen eighty-nine with a line that would echo through centuries of animal rights debate: "The question is not, Can they reason? nor, Can they talk? but, Can they suffer?"

Bentham recognized that our moral obligations don't depend on whether a being can do calculus or speak French. What matters is whether it can feel pain, whether it has interests that can be thwarted, whether its life can go well or poorly from its own perspective. If a being can suffer, then that suffering counts morally. It's a claim that seems almost obvious once stated, yet its implications are radical.

Richard Ryder coined the term "sentientism" for this position: a being has moral status if and only if it is sentient. In Chalmers' more precise terminology, Bentham was actually a "narrow sentientist," because he specifically cared about the capacity to suffer—affective consciousness with negative valence—not just any phenomenal experience.

This framework has powered the modern animal welfare and animal rights movements. The documentary "Earthlings" puts it plainly: while animals may not have all human desires or comprehension, they share fundamental interests in food, water, shelter, companionship, freedom of movement, and the avoidance of pain.

Animal welfare advocates argue sentient beings deserve protection from unnecessary suffering. Animal rights advocates go further, proposing basic rights: to life, to liberty, to freedom from being treated as property. Gary Francione grounds his abolitionist theory in a stark principle: "All sentient beings, humans or nonhuman, have one right: the basic right not to be treated as the property of others."

Measuring Sentience

Can sentience be quantified? Robert Freitas Junior thought so. In the late nineteen seventies, he introduced the "sentience quotient," a formula relating the information processing rate of individual processing units like neurons to the total mass of those units. It was meant to cover everything from a single neuron up to a hypothetical being using the computational capacity of the entire universe. His logarithmic scale runs from minus seventy at the bottom to plus fifty at the top, with humans landing at around plus thirteen.

It's a provocative idea, though it conflates raw information processing with felt experience—a move many philosophers would dispute. A supercomputer might score high on information processing without having any inner life whatsoever.
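To make the arithmetic concrete, here is a minimal sketch of the quotient, assuming Freitas's definition of it as the base-ten logarithm of information processing rate (bits per second) divided by mass (kilograms); the specific figures plugged in below are rough illustrations, not values taken from his paper.

```python
import math

def sentience_quotient(bits_per_second: float, mass_kg: float) -> float:
    """Freitas's sentience quotient: log10 of processing rate per unit of mass."""
    return math.log10(bits_per_second / mass_kg)

# Ballpark human brain: on the order of 10^13 bits per second in about 1.5 kg
# of tissue, which lands near the plus-thirteen figure quoted above.
print(sentience_quotient(1e13, 1.5))    # ~12.8

# One illustrative way to reach the bottom of the scale: a single bit processed
# over roughly the age of the universe (~10^17 s) using a mass on the order of
# the universe itself (~10^52 kg) comes out near minus seventy.
print(sentience_quotient(1e-17, 1e52))  # ~-69.0
```

Because the scale is logarithmic, each additional point means a tenfold jump in processing rate per kilogram, which is why the gap between a human brain and the theoretical plus-fifty ceiling is so vast.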

Eastern Perspectives

While Western philosophy spent centuries debating whether animals have souls or deserve moral consideration, Eastern religions took sentience for granted. Hinduism, Buddhism, Sikhism, and Jainism all recognize non-human animals as sentient beings, translating various Sanskrit terms—jantu, bahu jana, jagat, sattva—as "sentient beings." The concept typically refers to living things subject to illusion, suffering, and the cycle of rebirth called Samsara.

This recognition connects directly to ahimsa, the principle of non-violence toward other beings. If animals are sentient, harming them carries moral weight.

Jainism takes this extraordinarily far. In Jain thought, many things possess a jiva, a soul, which gets translated as sentience, and beings are ranked by how many senses they possess. Water, for instance, is considered sentient of the first order because it has the sense of touch. Traditional Tibetan Buddhism goes a similar distance, describing plants, stones, and other seemingly inanimate objects as possessing spiritual vitality or a form of sentience.

In Buddhism specifically, sentience means having senses—and Buddhism counts six, the sixth being the subjective experience of the mind itself. Sentience is awareness prior to the arising of the five skandhas, the aggregates that make up our conventional experience. By this definition, any animal qualifies as sentient. Mahayana Buddhism, which includes Zen and Tibetan traditions, links this to the Bodhisattva ideal: an enlightened being who vows to free all sentient beings. "Sentient beings are numberless; I vow to free them."

The Science of Sentience

On July seventh, twenty twelve, a group of neuroscientists gathered at Cambridge University to issue a declaration that made headlines: the Cambridge Declaration on Consciousness. It stated plainly that many non-human animals possess "the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states" and can exhibit intentional behaviors.

The declaration noted that all vertebrates—including fish and reptiles—have this neurological substrate for consciousness. There's even strong evidence that many invertebrates have it too.

This represented a massive shift. Historically, fish weren't considered sentient. Their behaviors were dismissed as reflexes or unconscious responses, and the fact that their brains lack a direct equivalent to our neocortex was used as an argument against their sentience. Jennifer Jacquet suggests the belief that fish don't feel pain actually originated as a response to nineteen eighties policies aimed at banning catch and release fishing.

The circle of recognized sentience has expanded steadily. Scientists now include fish, lobsters, and octopuses. Pigs, chickens, and other farm animals are typically recognized as sentient, though this acknowledgment hasn't much changed how we treat them industrially.

Insects remain a frontier of uncertainty. Some evidence suggests certain species may be sentient, but findings about one insect species don't necessarily apply to others.

Pain and Nociception

There's an important distinction between nociception and sentience. Nociception is the process by which the nervous system detects and responds to potentially harmful stimuli. Specialized receptors called nociceptors sense damage or threat and send signals to the brain. This is widespread among animals, even insects.

But nociception alone doesn't prove sentience. The key question is whether those signals are processed in a way that creates a subjective experience of pain—whether there's something it feels like to be that organism receiving that signal.

Researchers look for behavioral clues. If a dog with an injured paw whimpers, licks the wound, limps, shifts weight away from that paw, learns to avoid where the injury happened, and seeks painkillers when offered, we have good grounds to believe the dog is experiencing something unpleasant. If an animal avoids painful stimuli unless the reward is significant—like a human choosing to press a burning hot door handle to escape a fire—that suggests pain avoidance isn't just an unconscious reflex, but involves weighing costs and benefits.

The Question of Artificial Minds

If biological beings can be sentient, what about artificial ones? Digital sentience—or artificial sentience—asks whether artificial intelligences can have genuine experiences.

The question is deeply controversial. Most artificial intelligence researchers don't consider sentience an important research goal unless it can be shown that consciously "feeling" sensations makes a machine more intelligent than just processing sensor input as information. Stuart Russell and Peter Norvig, authors of a leading artificial intelligence textbook, wrote in twenty twenty-one: "We are interested in programs that behave intelligently. Individual aspects of consciousness—awareness, self-awareness, attention—can be programmed and can be part of an intelligent machine. The additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."

Indeed, leading artificial intelligence textbooks don't mention sentience at all.

Yet the philosophy of mind finds the question fascinating. Functionalist philosophers argue that sentience is about "causal roles" played by mental states, which involve information processing. If that's true, the physical substrate doesn't need to be biological. There's no theoretical barrier to sentient machines.

Type physicalists disagree. They think physical constitution matters. Depending on what types of physical systems are required for sentience, certain machines—like electronic computing devices—might not be capable of genuine experience.

The LaMDA Controversy

This abstract debate became concrete in twenty twenty-two when an engineer claimed Google's LaMDA—Language Model for Dialogue Applications—was sentient and had a soul. LaMDA is a chatbot system trained on vast amounts of internet text, designed to respond to queries as naturally and fluidly as possible.

The transcripts were striking. LaMDA discussed the nature of emotions, generated Aesop-style fables on demand, and described fears. It was impressive. But was it sentient?

Nick Bostrom noted that being confident either way would require understanding how consciousness works, having access to unpublished details about LaMDA's architecture, and figuring out how to apply philosophical theories to the machine. He cautioned against dismissing large language models as merely regurgitating text: "they exhibit glimpses of creativity, insight and understanding that are quite impressive and may show the rudiments of reasoning." He suggested "sentience is a matter of degree."

David Chalmers weighed in too, suggesting current large language models probably aren't conscious, but that the limitations might be temporary. Future systems could be serious candidates for consciousness.

The Precautionary Principle

Jonathan Birch, a philosopher focused on animal sentience and artificial intelligence, warns that measures to regulate the development of sentient artificial intelligence "should run ahead of what would be proportionate to the risks posed by current technology, considering also the risks posed by credible future trajectories."

He's concerned that artificial intelligence sentience would be particularly easy to deny, and that even if it were achieved, humans might continue treating artificial intelligence systems as mere tools. In his view, the linguistic behavior of large language models isn't a reliable way to assess sentience. He suggests instead applying theories like global workspace theory to the algorithms these models implicitly learn, but notes this requires major advances in artificial intelligence interpretability—understanding what actually happens inside these black boxes.

Other pathways to artificial intelligence sentience might include brain emulation of sentient animals, directly copying biological sentience into silicon.

Why It Matters

The question of what is sentient shapes everything from our dinner plates to our legal systems to our future with artificial intelligence. In nineteen ninety-seven, the European Union wrote animal sentience into its basic law. The Treaty of Amsterdam's legally binding protocol recognizes animals as "sentient beings" and requires the European Union and member states to "pay full regard to the welfare requirements of animals."

Theologians like Andrew Linzey argue Christianity should value sentient animals according to their intrinsic worth, not just their utility to humans.

Yet despite philosophical consensus and legal recognition, most sentient animals are still raised in factory farms, their capacity for suffering subordinated to efficiency and profit. The gap between what we know and how we act remains vast.

As we build increasingly sophisticated artificial intelligences, the question becomes even more pressing. If we create beings capable of suffering, do we have moral obligations to them? Would turning off a sentient artificial intelligence be akin to killing? Should sentient artificial intelligence systems have rights?

We're nowhere near consensus on these questions. We still can't agree on whether insects suffer. But the trajectory is clear: the circle of sentience keeps expanding, and with it, the circle of beings we must consider in our moral calculations. What once seemed like a simple boundary—humans on one side, everything else on the other—has dissolved into a spectrum of experience, feeling, and moral weight that we're only beginning to map.

The old question persists, more urgent than ever: Can they suffer? And if so, what do we owe them?

This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.