Survivorship bias
Based on Wikipedia: Survivorship bias
The bullet holes were in all the wrong places.
During World War II, the American military faced a critical problem: too many bombers weren't coming back from missions over Europe. The obvious solution was to add armor plating, but armor is heavy. Add too much and the plane becomes sluggish, burns more fuel, carries fewer bombs. The question wasn't whether to add protection—it was where.
So the military did what seemed reasonable. They examined bombers returning from combat, carefully cataloging where the bullet holes clustered. The fuselage and wings showed the most damage. The engines, relatively few hits. The conclusion seemed obvious: reinforce the fuselage and wings.
Abraham Wald, a mathematician working with Columbia University's Statistical Research Group, saw it differently. The planes they were examining, he pointed out, were the ones that made it back. The bullet holes showed where a bomber could absorb damage and still fly home. The areas with no damage? Those were precisely where hits were fatal. The planes struck in those spots were at the bottom of the English Channel or scattered across French farmland. They weren't in the sample at all.
Reinforce the engines, Wald recommended. Armor the places that look fine on the survivors, because the planes hit there didn't survive to be studied.
This is survivorship bias: the logical error of drawing conclusions only from things that made it through a selection process, while ignoring everything that didn't. It's one of the most pervasive mistakes in human reasoning, and once you learn to see it, you'll find it everywhere.
The Telepaths Who Weren't
In the 1930s, a respected researcher named Joseph Banks Rhine believed he had discovered people with genuine psychic abilities. His method seemed scientifically rigorous. He would run through a deck of Zener cards—cards printed with simple symbols like circles, squares, and wavy lines—keeping each card hidden from the subject, who had to guess which symbol was showing. Pure chance would yield correct guesses about twenty percent of the time. Rhine found subjects who performed significantly better.
The catch? He started with hundreds of potential subjects.
Think about what happens when you test hundreds of people on a guessing task. Most will score around what probability predicts. But some, through pure luck, will guess better than expected in their first session. Rhine kept testing those lucky few while dismissing the rest as lacking "strong telepathic ability." He then repeated the process, keeping only those who continued to perform well.
What Rhine had actually discovered was a statistical certainty dressed up as a paranormal phenomenon. Given enough initial subjects, someone will always get lucky. Given enough rounds of testing, the luckiest of the lucky will accumulate an impressive track record—impressive, that is, until you remember all the people eliminated along the way.
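The mechanics are easy to reproduce with a simulation. In the minimal sketch below, the pool size, session length, number of rounds, and cut-off are invented for illustration (this is not Rhine's actual protocol), and every single guess is pure chance. Someone still walks away with a "gifted" track record.

```python
import random

SYMBOLS = 5              # a Zener deck uses five symbols, so chance is 1 in 5
TRIALS_PER_SESSION = 25  # assumed session length
SUBJECTS = 500           # assumed starting pool
KEEP_FRACTION = 0.2      # assumed cut-off: keep the top-scoring fifth each round
ROUNDS = 4

def session_score() -> int:
    """Correct guesses in one session when guessing completely at random."""
    return sum(random.randrange(SYMBOLS) == random.randrange(SYMBOLS)
               for _ in range(TRIALS_PER_SESSION))

pool = list(range(SUBJECTS))
history = {subject: [] for subject in pool}

for _ in range(ROUNDS):
    for subject in pool:
        history[subject].append(session_score())
    # Keep only this round's best scorers, as Rhine kept his "strong" subjects.
    pool.sort(key=lambda subject: history[subject][-1], reverse=True)
    pool = pool[: max(1, int(len(pool) * KEEP_FRACTION))]

survivor = pool[0]
print(f"Chance expectation per session: {TRIALS_PER_SESSION / SYMBOLS:.1f} correct")
print(f"Last survivor's scores, round by round: {history[survivor]}")
```

Run it a few times: the last survivor typically beats the chance expectation round after round, for no reason other than the filter itself.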
Science writer Martin Gardner illustrated this beautifully. Imagine one hundred psychology professors each decide to test Rhine's claims. Each runs their own experiments. Ninety-nine find nothing interesting—their subjects guess at chance levels—and those ninety-nine professors shrug and move on. But one professor, through sheer probability, gets a subject who keeps guessing correctly.
Neither the professor nor the subject knows about the other ninety-nine failed experiments. From their perspective, something remarkable is happening. The professor writes an enthusiastic paper. The subject becomes convinced they have a gift. The paper gets published, the readers are amazed, and no one ever hears about the ninety-nine experiments where nothing happened.
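Gardner's hundred professors are a multiple-comparisons problem in disguise. As a rough back-of-the-envelope sketch (the five-percent false-positive rate below is an assumed conventional threshold, not part of Gardner's story), the odds that at least one of a hundred null experiments produces something "remarkable" are close to certainty.

```python
# Chance that at least one of 100 independent experiments on a nonexistent
# effect clears a conventional 5% significance threshold by luck alone.
experiments = 100
false_positive_rate = 0.05   # assumed per-experiment threshold
p_any_fluke = 1 - (1 - false_positive_rate) ** experiments
print(f"{p_any_fluke:.1%}")  # roughly 99%
```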
This isn't fraud. It's something more insidious: a structural feature of how we filter information that systematically misleads us.
The Graveyard of Failed Funds
Wall Street has built an entire industry on survivorship bias.
When you look at the performance of mutual funds, you're typically looking at funds that still exist. This seems obvious enough to be almost tautological—how can you measure the performance of something that's gone? But the implications are profound.
Funds that perform poorly don't just underperform. They disappear. They get quietly merged into better-performing funds, their dismal track records absorbed and hidden. They close entirely, their histories erased from the databases that researchers and investors use to make decisions. The funds that remain are, by definition, the ones that did well enough to survive.
In 1996, researchers Edwin Elton, Martin Gruber, and Christopher Blake tried to quantify this effect. They found that survivorship bias inflated the apparent performance of the mutual fund industry by about 0.9 percent per year. That might sound small, but compounded over decades, it's the difference between a comfortable retirement and running out of money.
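To get a feel for what a 0.9 percent annual gap does over a working lifetime, here is a quick compounding sketch. The starting amount, the headline return, and the thirty-year horizon are illustrative assumptions, not figures from the study.

```python
# Grow a one-time investment at a "reported" return and at a return
# 0.9 percentage points lower, roughly the survivorship-bias gap estimated
# by Elton, Gruber, and Blake. The other numbers are assumptions.
principal = 100_000
years = 30
reported_rate = 0.07                  # assumed headline annual return
corrected_rate = reported_rate - 0.009

reported_value = principal * (1 + reported_rate) ** years
corrected_value = principal * (1 + corrected_rate) ** years

print(f"At the reported rate:  ${reported_value:,.0f}")
print(f"At the corrected rate: ${corrected_value:,.0f}")
print(f"Shortfall:             ${reported_value - corrected_value:,.0f}")
```

Under these assumed numbers, the inflated figure overstates the final balance by more than a fifth.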
The bias hits smaller funds hardest. Small funds have a higher probability of folding, so the survivors represent an even more elite sample. When you hear about the spectacular returns of a small-cap fund, you're hearing about the winner of a lottery where most tickets were quietly incinerated.
Here's a statistic that should make you deeply skeptical of fund marketing: in theory, seventy percent of currently existing funds could truthfully claim performance in the top quarter of their peer group—if that peer group includes the funds that have since closed. The bottom of the original ranking is filled with funds that no longer exist, so nearly every fund that remains can honestly place itself near the top. And investors, seeing these impressive numbers, pour money into funds that may simply have been lucky.
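That claim is easier to believe after playing with a toy model. In the sketch below, the fund count, the performance distribution, and the closure rule are all invented; the only point is the mechanism: once the worst performers disappear, most of what remains ranks near the top of the original peer group.

```python
import random

random.seed(1)

# Toy model of fund survivorship. Nothing here comes from a real fund
# database; the numbers exist only to show the mechanism.
N_FUNDS = 1000
performance = [random.gauss(0.0, 1.0) for _ in range(N_FUNDS)]

# Rank every fund within the ORIGINAL peer group (0.0 = best, ~1.0 = worst).
order = sorted(range(N_FUNDS), key=lambda i: performance[i], reverse=True)
rank_fraction = {fund: position / N_FUNDS for position, fund in enumerate(order)}
top_quartile = set(order[: N_FUNDS // 4])

# Assumed closure rule: the worse a fund ranks, the likelier it is to close.
survivors = [fund for fund in range(N_FUNDS)
             if random.random() < (1 - rank_fraction[fund]) ** 3]

share = sum(fund in top_quartile for fund in survivors) / len(survivors)
print(f"Surviving funds: {len(survivors)} of {N_FUNDS}")
print(f"Share of survivors in the original top quartile: {share:.0%}")
```

The exact share depends entirely on how aggressively poor performers are assumed to close, but even a moderately steep closure rule pushes it far above the twenty-five percent you would naively expect.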
The Dropout Billionaire Illusion
For every Mark Zuckerberg, there are thousands of college dropouts whose startups no one ever heard of.
This observation, from writer Alec Liu, captures something essential about how we misunderstand success. Popular culture loves the story of the determined individual who bucks convention and beats the odds. Steve Jobs dropped out of Reed College. Bill Gates left Harvard. The implication seems clear: formal education is optional, maybe even an obstacle, on the path to greatness.
But this is survivorship bias in its purest form. We hear about the dropouts who became billionaires because they became billionaires. We don't hear about the dropouts who became broke, who struggled to find jobs, who spent their lives regretting the decision—because there's no magazine cover for "Person Who Made Statistically Unwise Decision and Got Statistically Predictable Result."
Economist Larry Smith of the University of Waterloo has pointed out that the entire advice industry runs on survivorship bias. The people giving advice on how to succeed are, definitionally, people who succeeded. They attribute their success to their methods—waking up at 5 AM, meditating, reading fifty books a year, whatever their particular formula is—without any way of knowing how many people followed the same methods and failed.
Journalist David McRaney put it starkly: the advice business is a monopoly run by survivors. When something becomes a non-survivor, its voice is muted to zero. The dead don't write memoirs.
Silent Evidence
The philosopher Nassim Nicholas Taleb, in his book "The Black Swan," coined a term for this phenomenon: silent evidence. It's the data that doesn't make it into our analysis because it has, in some sense, ceased to exist.
Taleb traced this insight to an ancient source. The Greek philosopher Diagoras of Melos was shown paintings of people who had survived shipwrecks after praying to the gods—clear evidence, his hosts argued, of divine providence. Diagoras was unimpressed. "Why," he asked, "are there no paintings of those who drowned despite their prayers?"
The survivors could commission paintings. The dead could not. The evidence we have is systematically biased toward those who were around to create it.
This affects even how we understand history. Historian Susan Mumm has noted that researchers tend to study organizations that still exist—groups like the Women's Institute, with accessible archives and ongoing institutional memory—while neglecting smaller charitable organizations that may have done equally important work but didn't survive to tell their story. Our understanding of the past is shaped not by what was most significant, but by what left traces we can still find.
High-Altitude Cats and Hidden Casualties
In 1987, a veterinary study made a surprising claim: cats who fell from heights of six stories or less were more likely to be severely injured than cats who fell from greater heights. The proposed explanation was elegant. Cats reach terminal velocity—the speed at which air resistance balances gravity—after about five stories of falling. Once they hit this speed and stop accelerating, they relax their bodies, which supposedly allows them to absorb impact better.
It's a nice theory. It's also probably wrong.
The problem, as "The Straight Dope" newspaper column pointed out in 1996, is survivorship bias. The study was based on cats brought to veterinarians. But not all cats who fall from buildings get brought to vets. The ones that die instantly from high falls are likely to be left where they land or buried in the backyard, not rushed to an animal hospital.
So the study wasn't comparing all cats who fell from low heights to all cats who fell from high heights. It was comparing cats who fell from low heights and survived long enough to see a vet to cats who fell from high heights and survived long enough to see a vet. The cats killed by high falls weren't in the sample. They were silent evidence.
This might mean that high falls are actually more dangerous, not less. We just don't have data on the cats who didn't make it.
The Immortal Time Problem
Survivorship bias can be subtle enough to fool peer-reviewed journals.
In 2001, a study published in the Annals of Internal Medicine claimed that Academy Award-winning actors and actresses lived almost four years longer than their non-winning peers. The implication seemed to be that success itself conferred health benefits—perhaps the reduced stress of financial security, or the psychological benefits of recognition and accomplishment.
But the statistical method was flawed in a sneaky way. Winners were given credit for all their years of life, including the years before they won. A performer who won an Oscar at age 40 and died at 80 contributed a full 80-year lifespan to the winners' column, even though for the first 40 of those years they could not possibly have died as a winner; anyone who dies before winning is, by definition, not a winner.
Meanwhile, non-winners who died before they had a chance to win anything were counted entirely in the losing column. The winners had an unfair advantage: they were required to survive at least until the moment of winning. The non-winners had no such requirement.
When researchers reanalyzed the data using methods that avoided this "immortal time bias"—a variant of survivorship bias—the survival advantage shrank to about one year and was no longer statistically significant. The Oscar didn't add years to lives. The analysis was just biased toward survivors.
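Immortal time bias is easy to conjure out of nothing. The sketch below builds a toy cohort in which winning has no effect whatsoever on lifespan (the lifespan distribution, the award rate, and the award ages are all invented assumptions), and then runs the naive comparison.

```python
import random
from statistics import mean

random.seed(7)

# Toy cohort with NO real longevity effect: every lifespan comes from the
# same distribution. Ages and probabilities are assumptions for illustration.
N = 100_000
winner_lifespans, other_lifespans = [], []

for _ in range(N):
    lifespan = random.gauss(76, 12)      # same distribution for everyone
    award_age = random.uniform(30, 70)   # the age at which an award could arrive
    # A performer can only win if they are still alive at the award age.
    wins = random.random() < 0.05 and lifespan > award_age
    (winner_lifespans if wins else other_lifespans).append(lifespan)

print(f"Mean lifespan, winners:     {mean(winner_lifespans):.1f} years")
print(f"Mean lifespan, non-winners: {mean(other_lifespans):.1f} years")
```

The naive comparison reports winners outliving non-winners even though, by construction, there is nothing to find; dying young simply disqualifies you from the winners' column.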
What Survivorship Bias Is Not
To understand survivorship bias clearly, it helps to distinguish it from related concepts.
Survivorship bias is not the same as cherry-picking, though they're cousins. Cherry-picking is deliberately selecting favorable examples to support a predetermined conclusion. Survivorship bias can happen without any deliberate selection at all. It's a structural feature of how information reaches us—the successful, the surviving, the still-existing are simply more visible than their failed counterparts.
It's also not the same as confirmation bias, though they often work together. Confirmation bias is the tendency to favor information that supports what we already believe. Survivorship bias can create false beliefs in the first place, which confirmation bias then protects from scrutiny.
The opposite of survivorship bias might be called "failure bias" or "negativity bias"—the tendency to focus disproportionately on failures while ignoring successes. This is less common in contexts like business and achievement, where success is celebrated and failure is hidden, but it can dominate in areas like news coverage, where disasters and problems attract more attention than quiet successes.
Seeing Through the Bias
Once you recognize survivorship bias, certain questions become reflexive. When someone tells you about a successful strategy, you ask: how many people tried this strategy and failed? When a study reports on a sample, you ask: what determined who made it into the sample? When an industry touts its track record, you ask: what happened to the companies, funds, or products that aren't around anymore?
The bias is particularly dangerous when it comes to risk assessment. The ventures we see are the ones that didn't blow up. The strategies we hear about are the ones that worked. The gamblers who lost everything aren't writing books about their methods. This systematically makes risky approaches look safer than they are.
It's also dangerous when making inferences about causation. The successful college dropouts aren't successful because they dropped out—they're successful despite dropping out, and probably would have been successful anyway. The mutual funds with great track records might not have better managers—they might just have been lucky, and the unlucky funds with identical strategies have been quietly erased.
Abraham Wald's insight about the bombers wasn't just about armor plating. It was about a fundamental principle of reasoning: the sample you see may be systematically different from the population you care about, and the selection process itself can be the most important thing to understand.
The bullet holes tell you where a plane can take damage and survive. The missing bullet holes tell you where damage is fatal. In life, as in war, what you don't see may be exactly what kills you.