Consequentialism
Based on Wikipedia: Consequentialism
The Philosophy That Judges You By Your Results
Here's a thought experiment that might keep you up at night: imagine you could save five lives by pushing one person in front of a trolley. The math seems simple—five is greater than one. But something feels deeply wrong about it, doesn't it?
Welcome to consequentialism, the philosophical tradition that says the only thing that ultimately matters is what happens as a result of your actions. Not your intentions. Not whether you followed the rules. Just outcomes.
It sounds almost too simple to be a serious ethical theory. And yet some of history's greatest minds have argued that this is exactly how we should think about right and wrong. They might be onto something—or they might be missing something crucial about what it means to be human.
What Consequentialism Actually Claims
At its core, consequentialism makes a deceptively straightforward claim: an action is morally right if and only if it produces the best possible consequences. That's it. Everything else—your motivations, whether you broke a promise, whether you told the truth—matters only insofar as it affects outcomes.
Think about what this means in practice. If lying saves a life, the lie was the right thing to do. If keeping a promise leads to disaster, you should have broken it. The moral worth of everything you do is determined entirely by what happens afterward.
This puts consequentialism in direct conflict with how most of us naturally think about ethics. We tend to believe some actions are just wrong, period—regardless of consequences. Torturing innocent people, breaking solemn promises, betraying trust. These feel inherently bad, not just instrumentally problematic.
But consequentialists push back: isn't that just squeamishness? If you could prevent a catastrophe by doing something that feels wrong, shouldn't preventing the catastrophe take priority?
The Main Rivals: Duty and Character
To understand consequentialism properly, you need to understand what it's arguing against. The two major alternative approaches to ethics couldn't be more different in their emphasis.
Deontological ethics—from the Greek word for duty—says that certain actions are right or wrong in themselves, regardless of consequences. The German philosopher Immanuel Kant famously argued you should never lie, even to a murderer asking where your friend is hiding. For Kant, the wrongness of lying doesn't depend on what happens afterward; it's built into the very nature of deception.
Virtue ethics takes yet another approach. Instead of asking "what should I do?" it asks "what kind of person should I be?" A virtue ethicist focuses on developing character traits like courage, honesty, and compassion. The right action is whatever a virtuous person would do in your situation—not because of the consequences, but because that's what good character demands.
These three approaches often reach different conclusions. A consequentialist might say it's right to steal medicine to save a dying child. A deontologist might say stealing is simply wrong. A virtue ethicist might ask what a compassionate but honest person would do—perhaps find another way to help without compromising their integrity.
The Unexpected History of a Word
Here's something surprising: the word "consequentialism" is relatively new. It was coined in 1958 by Elizabeth Anscombe, a British philosopher, in her influential essay "Modern Moral Philosophy."
What's even more surprising is that Anscombe meant something different by it than we do today. She classified John Stuart Mill—whom we now consider the quintessential consequentialist—as a non-consequentialist, and W.D. Ross—whom we now place firmly in the deontological camp—as a consequentialist.
This isn't because anyone changed their minds about what Mill and Ross believed. The meaning of the word itself shifted over time. It's a useful reminder that philosophical terms aren't fixed stars; they're more like ships at sea, drifting with intellectual currents.
The ideas behind consequentialism, however, are much older than the word. They trace back at least to ancient China, where they took forms quite different from the Western versions we're most familiar with.
The First Consequentialists Weren't Western
Most discussions of consequentialism start with Jeremy Bentham in 18th-century England. But there's a strong case that the world's first consequentialists lived in China more than two thousand years earlier.
Mozi, a philosopher who lived in the 5th century BCE, developed what scholars now call Mohist consequentialism. His followers, the Mohists, judged actions by their contribution to three basic goods: social order, material wealth, and population growth.
This might sound strange to modern ears. Why would population growth be a moral good? But remember the context: Mozi lived during the Warring States period, a time of almost constant warfare and famine. More people meant more workers, more production, more resilience against disaster. It was practical ethics for desperate times.
What makes Mohist consequentialism fascinating is how different it is from the Western versions. Bentham and Mill focused on individual pleasure and pain. Mozi focused on collective welfare—the good of the state and society, not the happiness of particular people.
"It is the business of the benevolent man to seek to promote what is beneficial to the world and to eliminate what is harmful," the Mohist texts declare. "What benefits he will carry out; what does not benefit men he will leave alone."
There's no concern here with maximizing pleasure or satisfying preferences. The goal is a stable, prosperous society where people have their basic needs met. Individual happiness is assumed to follow naturally from collective flourishing.
Pain, Pleasure, and the Utilitarian Turn
The version of consequentialism most people encounter first is utilitarianism, developed by Jeremy Bentham in the late 1700s. Bentham opened his major work with a claim that still provokes debate:
"Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do."
This is hedonism applied to ethics. Bentham argued that pleasure is the only intrinsic good—the only thing valuable for its own sake—and pain the only intrinsic bad. Everything else matters only as a means to pleasure or a cause of pain.
The ethical implications are radical. If pleasure is all that matters, then the right action in any situation is whatever produces the greatest total happiness. Not your happiness specifically, but everyone's happiness added together. Bentham called this the "greatest happiness principle."
Notice what this does to traditional morality. There are no absolute rules against lying, stealing, or even killing. There's only the calculation: does this action increase or decrease the total amount of pleasure in the world?
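The "greatest happiness" test can be made concrete with a toy sketch. Everything here is invented for illustration — the scenario, the people, and especially the utility numbers; Bentham's actual "felicific calculus" weighed intensity, duration, certainty, and several other dimensions that this crude sum ignores.

```python
# A toy act-utilitarian comparison. The utility scores are made up
# purely to show the shape of the calculation, not to settle the ethics.

def total_happiness(outcome):
    """Bentham-style aggregation: sum everyone's pleasure minus pain."""
    return sum(outcome.values())

# Hypothetical outcomes of two actions, scored per person affected.
tell_truth = {"you": -1, "friend": -5}   # the truth hurts your friend
tell_lie = {"you": -2, "friend": +4}     # the lie costs you a little, spares them

# The "right" action, on this view, is whichever maximizes the total.
best = max([tell_truth, tell_lie], key=total_happiness)
print(total_happiness(tell_truth), total_happiness(tell_lie))
print(best is tell_lie)
```

On these numbers the lie wins, which is exactly the point made above: no rule against lying survives the arithmetic. The entire moral question gets smuggled into how the scores are assigned — which is where most objections to the calculus begin.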
John Stuart Mill refined this framework in the 19th century, adding a crucial modification. Not all pleasures are equal, Mill argued. The pleasures of intellectual achievement, artistic appreciation, and deep friendship are qualitatively superior to mere bodily pleasures. "It is better to be Socrates dissatisfied than a fool satisfied."
This creates obvious problems—who decides which pleasures are "higher"?—but it was Mill's attempt to save utilitarianism from what he saw as vulgar conclusions. Without the hierarchy, a society of people hooked up to pleasure machines might be the moral ideal. Mill found that prospect disturbing.
The Preference Revolution
Modern utilitarians often avoid the whole pleasure question by focusing on preferences instead. Preference utilitarianism, associated with philosophers like Peter Singer, says we should maximize the satisfaction of preferences rather than the experience of pleasure.
This subtle shift has significant implications. Your preferences might include things you'll never directly experience—like wanting your children to flourish after you're dead, or wanting justice to be done even when you won't know about it. Preference satisfaction captures these concerns in a way that pure hedonism doesn't.
It also sidesteps the thorny question of comparing pleasures across different people. How do you know your pleasure from eating chocolate is more or less intense than mine? With preferences, you simply try to satisfy as many strongly held preferences as possible, without needing to measure subjective experiences.
But preference utilitarianism has its own problems. What about preferences that are themselves harmful or irrational? Should we count the preferences of sadists to see others suffer? Of addicts for their next fix? Of bigots for discrimination? Once you start excluding some preferences as illegitimate, you need a theory of which ones count—and that theory might end up doing all the real moral work.
The Rules Problem
One persistent challenge for consequentialism is that direct application of its principles can lead to conclusions that seem monstrous. If killing one innocent person would save five, straightforward consequentialist math says to do it. If framing someone for a crime would prevent a riot that would kill dozens, frame away.
Rule consequentialism is an attempt to escape this trap. Instead of asking "what action produces the best consequences right now?" it asks "what rules, if generally followed, would produce the best consequences?"
This might lead to very different answers. A world where people feel free to kill innocents whenever they calculate it would help more people is probably worse, overall, than a world with a strong prohibition against killing. Even if breaking the rule would help in this particular case, having the rule helps more in general.
The philosopher Brad Hooker, in his book "Ideal Code, Real World," developed perhaps the most sophisticated version of rule consequentialism. Derek Parfit, one of the most important moral philosophers of the 20th century, called it "the best statement and defence, so far, of one of the most important moral theories."
But rule consequentialism faces a devastating objection: isn't it incoherent? The whole point of consequentialism is that consequences are what matter. If you know that breaking a rule would produce better consequences than following it, how can a consequentialist justify following the rule anyway?
Hooker's response is clever but controversial. He argues that the best case for rule consequentialism isn't that it maximizes good consequences—it's that it does the best job of capturing our moral intuitions while helping us navigate disagreement. This is a significant retreat from pure consequentialism, and some critics argue it's effectively conceding defeat.
The Egoist Challenge
There's a version of consequentialism that most people find even more disturbing than utilitarianism: ethical egoism. This view says that the right action is whatever produces the best consequences for you specifically—not for everyone, just for you.
Most philosophers reject ethical egoism as a serious moral theory. It seems to license any behavior, no matter how harmful to others, as long as you personally benefit. But it's worth understanding why some have defended it.
Henry Sidgwick, a 19th-century philosopher, argued that a certain degree of egoism actually promotes general welfare. People know their own needs and desires better than anyone else does. If everyone focused primarily on their own flourishing, the overall result might be better than if everyone tried to be perfectly altruistic.
This argument has some modern echoes in economics, where the idea of self-interested actors producing good outcomes through market mechanisms has been influential. Adam Smith's "invisible hand" is, in a sense, a consequentialist argument for egoism—though Smith himself was far more subtle than many of his followers.
The Two-Level Solution
How should a consequentialist actually live? You can't spend every moment calculating the global impact of every possible action. That would be paralyzing—and probably counterproductive, since all that deliberation time could be spent doing actual good.
The philosopher R.M. Hare proposed an elegant solution: two-level thinking. In everyday life, you follow moral rules that generally produce good outcomes. Don't lie. Keep promises. Help people in need. You don't stop to calculate each time; you just follow the rules.
But when you're facing a genuine moral dilemma—when the rules conflict, or when the stakes are unusually high, or when you have time to reflect carefully—you step back to the critical level. Here you think like a pure consequentialist, weighing all the factors and calculating the best outcome.
Peter Singer, perhaps the most famous living utilitarian, endorses this two-level approach. It's a way of being consequentialist in theory while maintaining the psychological stability and social trust that come from rule-following in practice.
Critics argue this is an unstable compromise. If you know the rules are just useful heuristics, won't you be constantly tempted to override them whenever you think you can do better? And won't that undermine the whole point of having rules?
The Observer Problem
Here's a puzzle that any consequentialist must answer: from whose perspective are consequences evaluated?
Your action might be good for you but bad for others. Good in the short term but bad in the long run. Good for people alive today but bad for future generations. Good for humans but bad for animals. Which consequences count? And who counts them?
One common answer is to invoke an "ideal observer"—an imaginary being who can see all consequences and weigh them impartially. The right action is whatever this observer would approve of.
But this creates obvious problems. No one is an ideal observer. We don't know all the consequences of our actions. We're biased toward our own interests and the interests of people close to us. How can a theory that requires an omniscient perspective be action-guiding for limited beings like us?
Some consequentialists respond by relativizing to what agents can reasonably be expected to know. You're morally responsible for the foreseeable consequences of your actions, not the unforeseeable ones. If you give money to charity and the charity turns out to be corrupt, you haven't necessarily done wrong—as long as you did reasonable due diligence.
But this raises new questions. How much investigation is enough? Aren't there some consequences you should have foreseen but didn't because you weren't paying attention? Is reckless ignorance an excuse?
Acts and Omissions
One of the most contested implications of consequentialism concerns the distinction—or lack thereof—between acting and failing to act.
For a pure consequentialist, there's no fundamental difference. If you could save someone's life by donating money and you don't, the person dies just as surely as if you'd killed them directly. The consequences are the same, so the moral status should be the same.
This denial of the "acts and omissions doctrine" leads to uncomfortable conclusions. If letting someone die is morally equivalent to killing them, then every time you spend money on luxuries while people are dying from poverty, you're doing something comparable to murder. Peter Singer has pressed this argument relentlessly, using it to argue that most of us in wealthy countries are living profoundly immoral lives.
Many people resist this conclusion. There seems to be something different about actively doing harm versus merely failing to prevent it. Traditional medical ethics, many religious traditions, and common moral intuition all maintain the distinction.
But consequentialists push back: isn't this just psychological squeamishness? You feel worse about pulling a trigger than about not donating to famine relief, but should feelings determine morality? The person who dies from your inaction is just as dead as the person who dies from your action.
The Theory That Won't Stay Pure
Here's something curious about consequentialism: in practice, it keeps generating theories that look less and less consequentialist.
Rule consequentialism adds rules that constrain direct consequence-maximization. Motive consequentialism judges actions by the motives that produced them, not just outcomes. Two-level thinking separates everyday morality from pure calculation. Robert Nozick proposed "side-constraints"—absolute limits on what you can do even if breaking them would produce better consequences.
What's going on? Perhaps pure consequentialism is too demanding or too counterintuitive to be livable. Perhaps there's something important about rules, motives, and constraints that pure outcome-focus misses. Or perhaps these modifications represent consequentialism evolving into something more sophisticated—not abandoning its core insight but refining it.
The philosopher Derek Parfit spent much of his career arguing for convergence: that properly understood, consequentialism, Kantian deontology, and contractualism all end up prescribing the same actions. If he's right, the great debate between ethical theories might be less a disagreement about substance than a disagreement about emphasis and explanation.
Living With Consequences
So what should you make of consequentialism? Is it the right way to think about ethics?
One response is to take the theory seriously as a corrective. We naturally pay too much attention to dramatic single actions and too little to systemic effects. We care more about identifiable victims than statistical ones. We weight nearby suffering more than distant suffering. Consequentialist thinking can correct these biases, pushing us to consider the actual impact of our choices.
Another response is to treat it as one voice among several. Maybe consequences matter a lot—but so do rights, duties, relationships, and character. Maybe pure consequentialism is too simple for the complexity of moral life, but its central insight—that what happens because of our actions matters—deserves a place in any adequate ethical theory.
And perhaps the deepest response is to see consequentialism as raising questions rather than answering them. What does make life good? Whose wellbeing counts, and how much? Can we really compare different people's interests? What do we owe to future generations? These questions don't go away just because you reject consequentialism. They're the questions any serious ethical thinking must eventually face.
The trolley problem remains unsolved. Five lives versus one. The math is simple. The morality isn't. And maybe that's exactly the point.