Mere addition paradox
Based on Wikipedia: Mere addition paradox
The Paradox That Makes Philosophers Lose Sleep
Imagine you could design the future of humanity. You have two choices. Behind door number one: a small population of extraordinarily happy people, each living lives of profound fulfillment, deep relationships, and meaningful work. Behind door number two: a vastly larger population where everyone's life is just barely worth living—not miserable, but containing only the slightest sliver of positive experience above the threshold of "I'm glad I exist."
Which future is better?
Most people instinctively reach for door number one. Obviously a world of deeply flourishing humans beats a world of billions barely scraping by, right?
Here's the problem: through a series of seemingly reasonable steps, each one almost impossible to reject, the philosopher Derek Parfit showed in 1984 that we're logically committed to choosing door number two. He called this the "repugnant conclusion"—and the name stuck because that's exactly how it feels.
The Trap Parfit Set
Parfit's argument works like a logic puzzle, the kind where each step seems obviously true until you realize where you've ended up. Let me walk you through it.
Picture four different possible populations. We'll call them A, A+, B−, and B. Think of each group of people as a bar on a graph, where the width of the bar represents how many people there are, and the height represents how happy each person is.
Population A is simple: a group of very happy people. Everyone's flourishing. Life is good.
Now consider A+. It contains all the same people from A, still just as happy. But we've added a second group—people whose lives are less wonderful but still genuinely positive. They're glad to be alive. Their existence doesn't affect the original group at all.
Here comes the first seemingly innocent step: Is A+ worse than A?
Think carefully. The original people are exactly as happy as before. We've simply added more people whose lives are worth living. How could adding happy people—even if they're less happy than the originals—make the world worse? That seems almost mathematically impossible. So A+ is at least as good as A, maybe better.
Now look at B−. This is another complex population with two groups, each the same size as a group in A+, but both now sitting at the same, intermediate level of happiness. Here's the key: B− has more total happiness and more average happiness than A+, and that happiness is spread evenly. The inequality in A+ (very happy people alongside merely happy people) seems unfair compared to the egalitarian B−.
Most ethical frameworks would say B− beats A+. More happiness overall? Check. More equality? Check. Surely B− is better.
Finally, population B. This is just B− with the artificial line between the two groups erased. Same people, same happiness levels—we've simply stopped dividing them into separate categories. Since nothing actually changed about anyone's welfare, B must be exactly as good as B−.
Do you see the trap closing?
The Walls Close In
Let's connect the dots:
- A+ is no worse than A (we just added happy people)
- B− is better than A+ (more happiness, more equality)
- B is exactly as good as B− (same people, same happiness)
Therefore, B is better than A.
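To see how the chain works with actual numbers, here is a minimal sketch in Python. The group sizes and welfare levels are invented for illustration (they are not Parfit's figures), and total and average welfare are simply two convenient ways of scoring what the prose describes.

```python
# Illustrative numbers only: each population is a list of (group_size, welfare) pairs,
# mirroring the bar-chart picture (width = how many people, height = how happy).

def total(pop):
    """Sum of welfare across everyone in the population."""
    return sum(size * welfare for size, welfare in pop)

def average(pop):
    """Average welfare per person."""
    return total(pop) / sum(size for size, _ in pop)

A       = [(1_000, 100)]                 # small group, very happy
A_plus  = [(1_000, 100), (1_000, 40)]    # same people as A, plus extra lives worth living
B_minus = [(1_000, 75), (1_000, 75)]     # two equal groups at an intermediate level
B       = [(2_000, 75)]                  # B-minus with the dividing line erased

# Step 1: A+ is no worse than A (nobody in A is worse off; the newcomers are glad to exist).
assert total(A_plus) >= total(A)

# Step 2: B- beats A+ on both total and average welfare, and is perfectly equal.
assert total(B_minus) > total(A_plus) and average(B_minus) > average(A_plus)

# Step 3: B is exactly as good as B- (same people, same welfare, one group instead of two).
assert total(B) == total(B_minus) and average(B) == average(B_minus)

print(total(A), total(B))      # 100000 150000
print(average(A), average(B))  # 100.0 75.0
```

With these numbers every step checks out, and B ends up scoring higher than A on total welfare even though each person in B is worse off than each person in A.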
But wait. Look at what B actually is compared to A. Population A was our small group of extremely happy people. Population B is a larger group where everyone is less happy. We've somehow concluded that replacing deep flourishing with mediocre-but-positive existence is an improvement.
And here's where Parfit twists the knife: this process can repeat. From B, we can construct B+, then C−, then C. Each step is as reasonable as the last. We keep adding more people with slightly lower but still positive welfare, and each time we can't find the logical flaw.
Eventually, we arrive at population Z: an almost unimaginably vast number of people, each living a life that's just barely worth living. By the same logic that got us from A to B, we must conclude that Z is better than A.
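The repetition can be sketched as a loop. The 0.5 and 0.8 factors below are arbitrary assumptions, chosen so that every round first adds lives worth living and then levels everyone to an equal welfare with a slightly higher total and average, mirroring the A to A+ to B− to B move; this is an illustration, not Parfit's own construction.

```python
# Illustrative loop: each round mirrors the A -> A+ -> B- -> B move.
# The 0.5 and 0.8 factors are arbitrary choices that keep total AND average
# welfare rising at every round while per-person welfare shrinks toward zero.

size, welfare = 1_000, 100.0        # population A: small and very happy
total = size * welfare

for _ in range(25):
    # "Mere addition": a same-sized group whose welfare is lower but still positive.
    added_welfare = 0.5 * welfare
    combined_total = total + size * added_welfare   # the A+ analogue
    # "Leveling": merge everyone at an equal level slightly above the combined
    # average (0.8*w versus 0.75*w), so total and average both rise, as in B-.
    size = 2 * size
    welfare = 0.8 * welfare
    new_total = size * welfare                      # the B analogue
    assert new_total > combined_total >= total
    total = new_total

print(size, round(welfare, 3), round(total))
# roughly 34 billion people, each near welfare 0.38, with a total around 1.3e10,
# dwarfing A's 100,000.
```

Run long enough, the loop yields a Z-like population: vastly many people, each barely above zero, yet with a score no single step of the argument lets us dismiss.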
That's the repugnant conclusion. A philosophical checkmate that says we should prefer trillions of barely-satisfied people over a smaller population living rich, meaningful lives.
Why Can't We Just Reject It?
The obvious response is: clearly something's wrong with this argument. Let's just reject the conclusion and move on.
But here's why philosophers have been wrestling with this since 1984: which step do you reject? Each individual step seems rock-solid.
Can adding genuinely happy people to the world make it worse? That seems cruel—we'd be saying some lives aren't worth creating even though the people living them are glad to exist.
Can we reject the idea that more total happiness with more equality is better? That throws out most of our moral intuitions about fairness and wellbeing.
Can we say that erasing an arbitrary line between groups changes the moral calculation? That seems absurd—we haven't changed anyone's actual experience.
Every escape route seems to lead somewhere even more troubling than the repugnant conclusion itself.
The Attempted Escapes
Philosophers have spent decades looking for the exit. Here are the main attempts:
Reject Transitivity
Some philosophers, including Larry Temkin and Stuart Rachels, have proposed a radical solution: maybe "better than" doesn't work the way we assume.
In mathematics, transitivity is a basic property. If A is greater than B, and B is greater than C, then A must be greater than C. We assume moral comparisons work the same way. If world A is better than world B, and B is better than C, then A should be better than C.
But what if that's wrong? What if moral value is more like rock-paper-scissors than a number line? Perhaps A+ beats A, and B− beats A+, but somehow A beats B− when we compare them directly. This would dissolve the paradox—but at the cost of making moral reasoning much stranger and less intuitive than we thought.
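A tiny sketch shows why this is such a strange bullet to bite: a cyclic set of judgments, like rock-paper-scissors, cannot be reproduced by any single scale of value. The three cyclic judgments below are hypothetical, not a formalization of Temkin's or Rachels's views.

```python
# Minimal illustration: a cyclic "better than" relation cannot be captured by any
# single numerical score, because scores are always transitive.
from itertools import permutations

# Hypothetical cyclic judgments: A+ beats A, B- beats A+, yet A beats B- head-to-head.
beats = {("A+", "A"), ("B-", "A+"), ("A", "B-")}
worlds = ["A", "A+", "B-"]

def consistent(scores):
    """True if a higher score always matches the 'beats' relation."""
    return all(scores[winner] > scores[loser] for winner, loser in beats)

# Try every strict ranking of the three worlds: none reproduces the cycle.
found = any(consistent(dict(zip(order, (3, 2, 1)))) for order in permutations(worlds))
print(found)  # False: no ranking on a single value scale is compatible with the cycle
```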
Switch to Average Utilitarianism
Classical utilitarianism says we should maximize total happiness—add up everyone's welfare and make that number as big as possible. Under this view, more happy people always means more value, which is exactly what creates Parfit's trap.
Average utilitarianism offers an alternative: we should maximize average happiness per person, not total happiness. Under this framework, A+ is actually worse than A because those additional people drag down the average.
Problem solved?
Not quite. Parfit pointed out that average utilitarianism has its own absurd implications. Imagine a world of ten people living wonderful lives. Now suppose we could add a new person whose life would be merely good—positive, but below the current average. Average utilitarianism says we shouldn't create this person. Their existence would be a net negative because it lowers the average.
That seems just as troubling. We'd be saying that perfectly good lives shouldn't exist because other people are doing better. It implies that in a world of millionaires, a happy middle-class person is somehow a bad thing.
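Both points can be checked with made-up welfare numbers; the specific values below are assumptions for illustration, not figures from Parfit.

```python
# Made-up welfare values, just to contrast the two accounting rules.

def total_welfare(welfares):
    return sum(welfares)

def average_welfare(welfares):
    return sum(welfares) / len(welfares)

A      = [100] * 1_000                 # small, very happy population
A_plus = [100] * 1_000 + [40] * 1_000  # same people plus extra lives worth living

# Total utilitarianism: adding positive lives always helps, which opens Parfit's trap.
print(total_welfare(A_plus) > total_welfare(A))      # True
# Average utilitarianism: the newcomers drag the average down, so A+ counts as worse.
print(average_welfare(A_plus) < average_welfare(A))  # True

# The counterexample to the average view: ten wonderful lives, plus one merely
# good life (positive, but below the current average).
ten_wonderful = [90] * 10
with_one_good = ten_wonderful + [60]

# The average view says creating this perfectly good life makes the world worse.
print(average_welfare(with_one_good) < average_welfare(ten_wonderful))  # True
```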
Embrace the Conclusion
Some philosophers, including Torbjörn Tännsjö and Michael Huemer, have argued that we should stop resisting and accept the repugnant conclusion. Maybe it's not actually repugnant—maybe our intuition is simply wrong.
Tännsjö's argument runs like this: yes, each individual in population Z is worse off than individuals in population A. But there are so many more of them. The collective value—the sum of all those lives being lived, all those moments of mild satisfaction—adds up to more than the intense flourishing of a smaller group.
We might compare it to asking whether you'd rather have one magnificent diamond or a mountain of decent-quality gems. Your first instinct might favor the single stunning stone, but when you actually do the math...
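Doing that math with invented figures makes the point stark:

```python
# Invented figures: a small population of rich lives versus a vast population
# of lives that are only barely worth living.
flourishing = 10_000_000 * 100.0          # 10 million people at welfare 100
barely_good = 10_000_000_000_000 * 0.01   # 10 trillion people at welfare 0.01

print(flourishing, barely_good)  # 1e9 versus 1e11: the vast mediocre total wins
```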
This approach requires us to trust logic over intuition. Our gut reaction against population Z might be a cognitive bias, not a moral truth.
Abandon the Search
The most pessimistic response comes from Gustaf Arrhenius, who proved something remarkable: no theory of population ethics can satisfy all of our reasonable intuitions simultaneously. We're not just failing to find the right answer—there may be no right answer to find.
Arrhenius showed that any ethical framework for comparing populations must violate at least one of several extremely plausible principles. It's not that we haven't been clever enough. The problem is mathematically impossible to solve in a way that satisfies our full set of intuitions.
This suggests that population ethics might be a domain where human moral intuition simply breaks down, the way our physical intuitions break down in quantum mechanics or at relativistic speeds.
The Very Repugnant Conclusion
As if the original paradox weren't troubling enough, philosophers have discovered an even darker variant.
The very repugnant conclusion shows that some ethical theories imply something worse than population Z. According to these frameworks, for any population of flourishing people, there exists a "better" population consisting of two groups: a substantial number of people living in genuine misery, plus an even larger number of people with lives barely worth living.
In other words, it's not just that we should prefer billions of mediocre lives to millions of excellent ones. We should potentially prefer a world with significant suffering, as long as there are enough barely-positive lives to "outweigh" it.
This extension pushes our intuitions even harder. Most people find it nearly impossible to accept that adding suffering could be offset by adding sufficient quantities of minimal satisfaction.
A Statistical Twist
In 2010, philosopher Nicole Hassoun identified a completely different version of the mere addition paradox—one that shows up in economics and statistics rather than ethics.
Consider this scenario: a village of 100 people collectively controls $100 in resources. The average wealth per capita is $1. Now a millionaire moves to the village, bringing $1,000,000.
What's the new average wealth per capita? The village now has 101 people controlling $1,000,100. That's an average of just over $9,900 per person.
Measures built on that average suggest the village has experienced a dramatic increase in prosperity; by such measures, poverty would appear to have been virtually eliminated. But nothing has actually changed for the original 100 villagers. They're still living on their same $100.
This is "mere addition" in a different sense: merely adding a rich person to a population can make poverty statistics look better without improving anyone's actual situation. Hassoun proposed that any sensible measure of poverty should satisfy a "no mere addition" axiom—adding a wealthy person shouldn't, by itself, be counted as reducing poverty.
While this version lacks the philosophical depth of Parfit's puzzle, it has practical implications for how we measure and address economic inequality.
Why This Matters Beyond Philosophy Departments
The mere addition paradox might seem like an abstract game for academics, but it has real implications for how we think about some of humanity's biggest questions.
Consider climate change and future generations. We're making decisions today that will affect how many people can live on Earth and how well they'll live. Should we prioritize creating a smaller population with high quality of life, or is there value in maximizing the total number of future lives, even at some cost to individual welfare?
Consider reproductive ethics. If adding people with worthwhile lives is always at least neutral (as step one of Parfit's argument suggests), does that mean we have obligations to create more people? Some philosophers have argued exactly this—that there's something wrong with choosing not to bring happy people into existence.
Consider existential risk. If the repugnant conclusion is correct, then preventing human extinction becomes extraordinarily important. A future with vast numbers of people living barely-positive lives would be better than no future at all—which means we should be willing to accept significant costs now to ensure humanity's long-term survival.
These aren't just thought experiments. They connect to live debates about population policy, environmental ethics, and how much we should sacrifice for future generations.
Where Does This Leave Us?
After forty years, the mere addition paradox remains unsolved. Philosophers have proposed dozens of approaches, each with its own uncomfortable implications. The unsettling truth may be that our moral intuitions about population ethics are simply inconsistent: there is no coherent way to rank possible futures that matches all of our gut feelings about what matters.
Perhaps the real lesson is humility. When our best reasoning leads to conclusions that feel deeply wrong, we face a choice: trust the reasoning or trust the feeling. Neither option is obviously correct.
Derek Parfit himself never found an answer he was satisfied with. He continued working on population ethics until his death in 2017, convinced that the question mattered enormously but uncertain of the solution.
Some problems in philosophy get resolved. Others get dissolved—we realize we were asking the wrong question. But some, like the mere addition paradox, may simply be hard in a way that resists resolution. They're not puzzles waiting to be solved but genuine dilemmas at the edge of what moral reasoning can handle.
The next time someone asks you whether you'd prefer a world of deep flourishing or vast mediocrity, you'll know the answer is far from obvious. And that uncertainty itself might be the most honest response philosophy can offer.