Utilitarianism
Based on Wikipedia: Utilitarianism
In 1860s St. Petersburg, a young man murders an elderly pawnbroker with an axe. He believes he's done the math: one worthless old woman's life weighed against the good he could do with her money—helping his impoverished mother, finishing his education, perhaps saving dozens of lives in his future career. The greatest good for the greatest number. It's a calculation that seems almost reasonable, until you watch Raskolnikov unravel across Dostoevsky's pages, haunted not by police but by something philosophy struggles to name.
This is the problem that utilitarianism—perhaps the most influential moral philosophy of the modern age—must confront. And it's a problem the philosophy's founders thought they had solved.
The Radical Simplicity of an Idea
Utilitarianism makes a bold claim: the right action is simply the one that produces the most happiness for the most people. That's it. No divine commands to interpret, no categorical imperatives to puzzle through, no virtue to cultivate. Just add up the pleasure, subtract the pain, and do whatever maximizes the difference.
The appeal is obvious. In a world riven by religious wars and philosophical disputes, here was a moral system that seemed as objective as arithmetic. Jeremy Bentham, the philosophy's founder, thought you could literally calculate ethical decisions using what he called the "hedonic calculus"—measuring each potential action by the intensity of pleasure it would produce, its duration, how certain you were it would occur, how soon it would arrive, whether it would lead to more pleasures, and how many people it would affect.
It sounds almost comically mechanical. And yet something like utilitarian thinking shapes nearly every major policy debate today. When governments weigh the economic costs of environmental regulations against the benefits of cleaner air, when hospitals decide how to allocate scarce organs, when tech companies argue their platforms connect billions despite some harms—they're all doing utilitarian math, whether they call it that or not.
Two Sovereign Masters
Bentham opens his most famous work with a declaration that reads like a manifesto:
Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do.
This is more radical than it first appears. Bentham isn't just saying that humans happen to seek pleasure and avoid pain—he's saying that pain and pleasure are the only things that matter, morally speaking. Everything else we think we value—justice, rights, virtue, duty—is either reducible to pleasure and pain or it's an illusion, a superstition left over from less enlightened ages.
The implications ripple outward. If an action maximizes overall happiness, it's right—regardless of whether it involves lying, breaking promises, or violating what we'd normally call rights. If torturing one innocent person would genuinely prevent the torture of a thousand, the utilitarian math seems clear.
This is where most people's intuitions rebel. Surely some things are just wrong, regardless of consequences?
The Long Prehistory
Bentham didn't invent the idea that actions should be judged by their results. That intuition is as old as philosophy itself.
In ancient China, the philosopher Mozi—born around 470 BCE, roughly contemporary with Socrates—developed a comprehensive theory arguing that actions should maximize benefit and eliminate harm. His system was remarkably sophisticated, advocating for political stability, population growth, and collective welfare. But Mozi wasn't concerned with individual happiness the way later utilitarians would be. His was a communitarian vision: the good of the state, the prosperity of the realm.
In ancient Greece, Aristippus of Cyrene and later Epicurus argued that pleasure was the highest good. But they conceived of this largely as advice for individual living—how to achieve tranquility, how to avoid unnecessary suffering—rather than as a system for making collective decisions.
In medieval India, the Buddhist philosopher Śāntideva wrote in the eighth century that humanity ought "to stop all the present and future pain and suffering of all sentient beings, and to bring about all present and future pleasure and happiness." This sounds strikingly utilitarian, though Śāntideva embedded it in a religious framework of karma and enlightenment that Bentham would have dismissed as mysticism.
The pieces were scattered across centuries and continents. It took the peculiar conditions of eighteenth-century Britain to assemble them into a system.
The Scottish Prelude
Before Bentham, there was Francis Hutcheson.
In 1725, Hutcheson published a treatise that introduced a phrase that would echo through centuries of moral philosophy: "the greatest happiness for the greatest numbers." He even devised mathematical formulas for calculating the morality of actions, though he later removed them because, he admitted, they "appear'd useless, and were disagreeable to some readers."
There was also John Gay, who in 1731 argued that all human action ultimately aims at happiness, and that this isn't just a description of how we behave but a foundation for how we should. Gay gave this a theological spin: God, being infinitely happy and infinitely good, could only have created humanity for the purpose of our happiness. Therefore, God wills our happiness, and therefore whatever produces happiness is what God wants.
David Hume, the great Scottish skeptic, approached the matter more empirically. In 1751, he wrote that in all moral questions, "public utility is ever principally in view." When we argue about what's right, Hume observed, we're really arguing about consequences—about what actually benefits people.
But it was William Paley who made utilitarian ideas mainstream. His 1785 textbook, "The Principles of Moral and Political Philosophy," became required reading at Cambridge, and it was as familiar in American colleges as spelling primers were in elementary schools. Paley isn't much remembered today, but in 1874, a book comparing major utilitarian thinkers was titled "Modern Utilitarianism: or the Systems of Paley, Bentham and Mill Examined and Compared." Paley came first.
Bentham's System
Jeremy Bentham was not a modest man. He designed a prison called the Panopticon, proposed embalming himself to be displayed at dinner parties (his "auto-icon" still sits in a cabinet at University College London), and believed he had discovered the key to reforming all human institutions.
His book "An Introduction to the Principles of Morals and Legislation" was printed in 1780 but not published until 1789—possibly because Bentham was waiting to see how Paley's similar work was received. When it finally appeared, it wasn't an immediate sensation. But Bentham's ideas spread anyway, partly through a French translation that was later retranslated back into English, creating a strange echo chamber of utilitarian thought bouncing between languages.
Bentham's innovation wasn't the basic idea—maximize happiness—but his insistence on systematic, quantifiable application. The hedonic calculus was meant to be used, not just contemplated. When considering any action, you should evaluate the pleasures and pains it would produce according to seven dimensions:
- Intensity — How strong is the pleasure or pain?
- Duration — How long will it last?
- Certainty — How likely is it to actually occur?
- Propinquity — How soon will it arrive?
- Fecundity — Will it lead to further pleasures?
- Purity — Is it likely to be followed by pains?
- Extent — How many people will it affect?
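Rendered as literal code, the calculus looks something like the sketch below. This is a modern illustration, not anything Bentham wrote: the Outcome fields mirror his seven dimensions, but the scoring rule itself (probability-weighted, delay-discounted value, scaled by the number of people affected) is an assumption made for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One anticipated pleasure (positive) or pain (negative)."""
    intensity: float    # signed strength: pleasure > 0, pain < 0
    duration: float     # how long it lasts (arbitrary time units)
    certainty: float    # probability it actually occurs, 0..1
    propinquity: float  # delay before it arrives (0 = immediate)
    fecundity: float    # expected value of further pleasures it leads to
    purity: float       # expected value of pains that follow it (<= 0)
    extent: int         # number of people affected

def hedonic_score(outcomes: list[Outcome], delay_discount: float = 0.05) -> float:
    """Collapse Bentham's seven dimensions into one number (illustrative only)."""
    total = 0.0
    for o in outcomes:
        immediate = o.intensity * o.duration
        downstream = o.fecundity + o.purity
        discount = 1.0 / (1.0 + delay_discount * o.propinquity)
        total += o.certainty * discount * (immediate + downstream) * o.extent
    return total

# Compare two candidate actions: pick whichever scores higher.
act_a = [Outcome(intensity=5, duration=2, certainty=0.9,
                 propinquity=0, fecundity=1, purity=-0.5, extent=3)]
act_b = [Outcome(intensity=8, duration=1, certainty=0.5,
                 propinquity=4, fecundity=0, purity=-2, extent=1)]
print(hedonic_score(act_a) > hedonic_score(act_b))  # True: choose A
```

Every input in that example is invented, of course, and nothing in the calculus says where real values would come from.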
Run any moral dilemma through this algorithm, Bentham suggested, and the right answer will emerge.
Except, of course, it doesn't. The variables are impossible to measure precisely. How do you compare the intensity of your pleasure in eating cake to my pleasure in reading poetry? How do you quantify the duration of grief? Bentham's calculus promised scientific objectivity but delivered only the appearance of it.
The Problem of Criminals and Beggars
Bentham wasn't naive about the difficulties. He knew that a simple-minded application of utilitarian math could justify things that seem obviously wrong.
Consider his example of the hungry beggar who steals a loaf of bread from a rich man. The immediate calculation seems to favor the theft: the beggar's gain (survival) vastly outweighs the rich man's loss (one loaf among many). By first-order effects alone, the theft maximizes happiness.
But Bentham distinguishes between "evils of the first order" and "evils of the second order." The second-order effects spread through society: if theft is tolerated, everyone becomes insecure in their property. Alarm spreads. Trust erodes. The social fabric frays. These diffuse, widespread harms outweigh the beggar's immediate benefit.
This is how Bentham justifies laws that seem to punish people for actions that, considered in isolation, produce more good than harm. It's the precedent that matters, the rule that's being broken, the signal that's being sent.
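To see how the second-order term does the work, here is a toy version of the beggar case. Every number is an invented assumption; the point is only that a small per-person "alarm" cost, multiplied across a large society, reverses a verdict that the first-order figures settle decisively the other way.

```python
# Bentham's beggar, in toy numbers (all values are invented assumptions).
beggar_gain = 100.0       # first-order: survival matters enormously
rich_mans_loss = -1.0     # first-order: one loaf among many

first_order = beggar_gain + rich_mans_loss         # +99: theft looks right

alarm_per_person = -0.01  # second-order: slight insecurity felt by each citizen
society_size = 50_000

second_order = alarm_per_person * society_size     # -500: diffuse social harm

print(first_order)                 # 99.0
print(first_order + second_order)  # -401.0: verdict reversed
```

Notice that the verdict depends entirely on two unmeasurable quantities, alarm_per_person and society_size, which can be tuned to deliver whichever conclusion you prefer.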
Critics have noted that this reasoning can justify almost anything. Want to punish an innocent person to prevent a riot? Second-order effects. Want to maintain an unjust social hierarchy? Second-order effects. The concept is so flexible it can be stretched to cover whatever conclusion you wanted to reach anyway.
Mill's Reformation
John Stuart Mill was raised to be a utilitarian. His father, James Mill, was one of Bentham's closest disciples, and young John was educated according to strict Benthamite principles—learning Greek at three, Latin at eight, reading advanced philosophy by his early teens. He was, essentially, a designed human being, engineered to carry forward the utilitarian cause.
At twenty, he had a nervous breakdown.
Mill later wrote that he fell into a profound depression when he asked himself whether he would be happy if all his utilitarian goals were achieved—and found the answer was no. The philosophy that was supposed to explain all of human motivation couldn't explain his own.
He recovered, eventually, but his utilitarianism was never quite the same as Bentham's. In "Utilitarianism," first published serially in 1861, Mill introduced a crucial modification:
It is quite compatible with the principle of utility to recognize the fact, that some kinds of pleasure are more desirable and more valuable than others. It would be absurd that while, in estimating all other things, quality is considered as well as quantity, the estimation of pleasures should be supposed to depend on quantity alone.
This seems like a minor adjustment, but it's revolutionary. Bentham had been committed to the view often summarized as "pushpin is as good as poetry": the childish game of pushpin, if it produced equal pleasure, was morally equivalent to reading Shakespeare. A pleasure is a pleasure is a pleasure.
Mill disagreed. Some pleasures are higher than others. The pleasures of the intellect, of imagination, of moral sentiment, are more valuable than mere bodily satisfactions. "It is better to be Socrates dissatisfied than a fool satisfied."
But how do you know which pleasures are higher? Mill's answer is that anyone who has experienced both kinds will prefer the higher ones. The person who has known both intellectual and merely physical pleasures will choose intellectual pleasures, even if they come with more difficulty and less intensity.
This is either a profound insight or a convenient prejudice. Mill was a Victorian intellectual; of course he thought intellectual pleasures were superior. A hedonist might reply that Mill simply hadn't encountered the right physical pleasures, or that his nervous breakdown proved his intellectual life wasn't actually making him happy.
The Splitting of the Tradition
After Mill, utilitarianism fragmented into competing schools. The basic question they disagreed about: Should you calculate utility for each individual action, or should you follow rules that generally maximize utility?
Act utilitarianism says: evaluate each action on its own merits. Every time you face a decision, add up the pleasures and pains that each option would produce, and choose the one that maximizes happiness. If lying in this particular case would produce more good than telling the truth, lie.
Rule utilitarianism says: follow rules that, if everyone followed them, would maximize happiness. Even if lying in this particular case might produce more good, you should tell the truth because a rule against lying produces better outcomes overall than a rule permitting case-by-case lying.
The distinction matters practically. An act utilitarian might conclude that it's right to frame an innocent person if doing so would prevent a riot that would kill many. A rule utilitarian would say that a rule permitting the framing of innocents would produce terrible consequences—nobody would trust the justice system, everyone would live in fear—so even in this case, framing the innocent is wrong.
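The structural difference between the two schools is easy to state in code. The sketch below is an illustration with invented utilities, not a standard formalization: the act utilitarian ranks actions by their own outcomes, while the rule utilitarian ranks them by the utility of the general rule each action instantiates, ignoring the particular case.

```python
from typing import Callable

Utility = float

def act_utilitarian_choice(actions: dict[str, Utility]) -> str:
    """Pick the action whose own outcome maximizes utility."""
    return max(actions, key=actions.get)

def rule_utilitarian_choice(actions: dict[str, Utility],
                            rule_utility: Callable[[str], Utility]) -> str:
    """Pick the action licensed by the best general rule,
    ignoring the utility of this particular case."""
    return max(actions, key=rule_utility)

# The framing-an-innocent case, with invented numbers:
actions = {"frame_innocent": +50.0,  # riot prevented, this time
           "tell_truth": -50.0}      # riot happens, this time

def rule_utility(action: str) -> Utility:
    # A society that permits framing loses trust in its courts entirely.
    return -1000.0 if action == "frame_innocent" else 0.0

print(act_utilitarian_choice(actions))                 # frame_innocent
print(rule_utilitarian_choice(actions, rule_utility))  # tell_truth
```

Note that rule_utility is itself just a utility calculation one level up, which is exactly the collapse worry raised next.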
But rule utilitarianism has its own problems. If you follow the rule because it maximizes utility, aren't you just doing act utilitarianism with extra steps? And what happens when the rules conflict? What level of specificity should the rules have?
Total Versus Average
Here's a puzzle that might seem academic but has enormous implications: Should we maximize total happiness or average happiness?
Imagine two possible futures. In Future A, there are one billion people, each with a happiness level of one hundred units. In Future B, there are ten billion people, each with a happiness level of fifty units.
Total utilitarianism says Future B is better: ten billion times fifty equals five hundred billion units of happiness, versus one hundred billion in Future A.
Average utilitarianism says Future A is better: an average of one hundred units per person beats an average of fifty.
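The same arithmetic, spelled out in code:

```python
futures = {"A": (1_000_000_000, 100.0),   # (population, happiness per person)
           "B": (10_000_000_000, 50.0)}

for name, (population, per_person) in futures.items():
    total = population * per_person
    average = per_person  # everyone is equally happy within each future
    print(name, f"total={total:.3e}", f"average={average}")

# A total=1.000e+11 average=100.0   <- average utilitarianism prefers A
# B total=5.000e+11 average=50.0    <- total utilitarianism prefers B
```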
This isn't just a thought experiment. It affects how we think about population policy, immigration, poverty reduction, and existential risk. Should we bring more people into existence if their lives would be worth living? Should we prioritize making existing people happier over creating new happy people?
The philosopher Derek Parfit explored these questions exhaustively and found that every consistent position leads to conclusions that seem deeply counterintuitive. Total utilitarianism implies that a world of billions of people barely worth living is better than a world of millions flourishing—what Parfit called "the repugnant conclusion." Average utilitarianism implies that we should avoid having children if their happiness would be below the current average, even if their lives would be very good.
There may be no satisfying answer.
Who Counts?
Bentham was radical in one crucial respect: he believed that the capacity to suffer was the only relevant criterion for moral consideration. Not rationality, not language, not membership in a particular species.
The question is not, Can they reason? nor, Can they talk? but, Can they suffer?
This was written in 1789, in a footnote about animal welfare. Bentham argued that the day would come when humanity would extend moral consideration to all sentient creatures, that the "number of legs, the villosity of the skin, or the termination of the os sacrum" were as irrelevant to moral status as skin color was increasingly recognized to be.
The contemporary philosopher Peter Singer has made this insight central to his influential version of utilitarianism. If suffering is what matters, Singer argues, then the suffering of a chicken in a factory farm matters in the same way as the suffering of a human. Not necessarily equally—Singer doesn't claim that a chicken's suffering is identical to a human's—but it can't be dismissed simply because the sufferer isn't human.
This leads to conclusions that many find uncomfortable. The global meat industry, which produces immense suffering for billions of animals, might be one of the greatest moral catastrophes in history. The suffering prevented by saving one human life might be vastly outweighed by the suffering caused by that person's meat consumption over their lifetime.
Or consider this: if what matters is the capacity to suffer, might future artificial intelligences deserve moral consideration? If a sophisticated AI could genuinely suffer—not just simulate suffering, but actually experience it—wouldn't utilitarian math require us to factor its suffering into our calculations?
The Demandingness Problem
Utilitarianism, taken seriously, is exhausting.
If you should always maximize happiness, then every moment you spend on something other than maximizing happiness is a moral failing. Reading a novel for pleasure? You could be working and donating the money to prevent malaria deaths. Spending time with your children? Perhaps that time would be better spent on activities with larger positive effects.
Peter Singer has argued that affluent people in wealthy countries are morally obligated to give away most of their income until they reach the point where giving more would cause them more suffering than it would alleviate. If you can prevent a child from dying of a preventable disease by donating a hundred dollars, and if failing to do so isn't meaningfully different from walking past a drowning child without helping, then spending that hundred dollars on anything else is morally equivalent to letting a child drown.
This is logical. It's also impossible to live by. Humans aren't calculating machines. We have relationships, loyalties, personal projects, moments of leisure that make life worth living. A moral system that condemns all of these as failures seems to have lost contact with what morality is supposed to be for.
Utilitarians have various responses. Some bite the bullet: yes, we're all failing morally, all the time, and the appropriate response is to do better. Others argue for satisficing rather than maximizing—doing "enough" good rather than the most possible good. Still others distinguish between the criteria of right action and the decision procedures we should use—perhaps utilitarian math tells us what's objectively right, but trying to calculate constantly would itself produce bad outcomes, so we should follow simpler rules in daily life.
The Return to Raskolnikov
Let's return to that young man with the axe in St. Petersburg.
Raskolnikov's crime wasn't a failure of utilitarian calculation—his math was plausible enough. One elderly pawnbroker against the good he could do, the lives he could save. The numbers might actually add up.
What Dostoevsky captures is something the hedonic calculus can't measure: the psychological reality of killing another human being. Raskolnikov thought he could perform a calculation, execute an action, and move on. Instead, he finds that the act has changed him in ways he didn't anticipate and can't undo. He hasn't just killed someone; he's become a killer. And that transformation can't be reversed by any subsequent good deeds.
This might be where utilitarianism reaches its limits. The philosophy treats actions as discrete events with measurable consequences. But humans aren't discrete-event processors. We're beings with continuity, with identities that are shaped by what we do. Some actions don't just produce consequences—they transform the person who performs them.
A utilitarian might respond that these transformative effects are just another form of consequence, to be factored into the calculation. The psychological harm to Raskolnikov is a "pain" to be weighed against the supposed benefits. But this seems to miss something. It's not just that Raskolnikov suffered after the crime. It's that the crime revealed something about what kind of person he could become—and that revelation is terrifying in a way that "pain" doesn't capture.
The Living Legacy
Despite these criticisms, utilitarian thinking pervades modern life. When economists conduct cost-benefit analyses, they're doing utilitarian math. When public health officials calculate quality-adjusted life years, they're using Bentham's framework with more sophisticated statistics. When effective altruists try to determine which charities produce the most good per dollar, they're carrying forward the utilitarian project.
Climate change policy debates are fundamentally utilitarian arguments: how do we weigh the welfare of current generations against future ones? How do we compare certain costs today against uncertain catastrophic risks tomorrow? What discount rate should we apply to future suffering?
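That last question has a precise formal shape. Under standard exponential discounting, a harm of size H occurring t years from now is weighted as H / (1 + r)^t today, and the choice of the rate r dominates everything else. The sketch below uses invented figures to make the point:

```python
def present_value(future_harm: float, years: float, rate: float) -> float:
    """Discounted present weight of a harm `years` from now at rate `rate`."""
    return future_harm / (1.0 + rate) ** years

# The same catastrophe, 100 years out, under two discount rates:
harm = 1_000_000.0
print(present_value(harm, 100, 0.001))  # ~905,000: near-full weight
print(present_value(harm, 100, 0.05))   # ~7,600: almost ignored
```

Move the rate from 0.1 percent to 5 percent and the same future catastrophe shrinks by two orders of magnitude, which is why so much of the climate debate turns on the discount rate itself rather than on the utility arithmetic.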
Even critics of utilitarianism often find themselves using utilitarian arguments. When human rights advocates argue against torture, they frequently point to its ineffectiveness, its tendency to produce false information, its corrosive effects on the institutions that practice it. These are consequentialist arguments, utilitarian in structure if not in explicit allegiance.
Perhaps this is the philosophy's greatest achievement: not providing final answers, but providing a framework for asking certain questions. Before utilitarianism, moral philosophy was often about interpreting texts, understanding divine will, or cultivating personal virtue. After utilitarianism, it became possible to ask: What actually happens when we do this? Who is affected, and how much? Is there a better way?
Those questions don't have easy answers. But they might be the right questions to ask.
The Unresolved Questions
Two and a half centuries after Bentham, the fundamental problems of utilitarianism remain unsolved.
How do we measure happiness? How do we compare one person's pleasure to another's pain? How do we weigh certain small benefits against uncertain large harms? How do we account for beings who don't yet exist, or who might exist depending on our choices? How do we prevent the tyranny of the majority, where maximizing aggregate happiness means sacrificing minorities? How do we preserve space for individual rights, personal projects, and human dignity within a framework focused entirely on outcomes?
These aren't merely academic puzzles. They're the questions that emerge every time we try to make policy, allocate resources, or decide what we owe each other. Utilitarianism doesn't answer them definitively. But it forces us to confront them honestly, to count what can be counted, and to remember that our abstract principles have concrete effects on real creatures capable of pleasure and pain.
That may be enough. Or it may be the beginning of a longer conversation that humanity is still learning how to have.