Optimism bias
Based on Wikipedia: Optimism bias
Here is something strange about the human mind: if you ask a room full of people whether they're more likely than average to get into a car accident, most will say no. Ask them if they're more likely than average to live past eighty, and most will say yes. The math, of course, doesn't work: roughly half of any group must fall at or below the median, so most of us cannot be safer, luckier, or longer-lived than the typical person. And yet we persist in believing we're the exception.
This is optimism bias, and it is one of the most pervasive and stubborn features of human cognition. It shows up everywhere—across cultures, genders, ages, and nationalities. It shapes how we plan projects, how we assess health risks, how we invest money, and how we navigate relationships. It may even be hardwired into our brains.
The bias works in two directions, though not equally. We tend to overestimate our chances of experiencing good things—getting promoted, finding love, winning the lottery. But the effect is actually stronger in the other direction: we systematically underestimate our chances of experiencing bad things. Cancer, divorce, car accidents, heart disease—we assume these happen to other people.
Measuring a Bias That Everyone Has
Studying optimism bias presents a peculiar challenge. How do you prove that someone is being unrealistically optimistic rather than simply making an accurate assessment of their own situation? After all, any individual person might genuinely have lower-than-average risk for a given event. Maybe they really do eat better, exercise more, and drive more carefully than most people.
Researchers have developed two main approaches to get around this problem. The first is absolute risk assessment: you ask people to estimate their likelihood of experiencing something negative, like developing heart disease, then compare their estimate against objective statistical data. The trouble here is that it's genuinely difficult to know what any particular person's actual risk is—population averages don't necessarily apply to individuals.
The second approach is comparative risk assessment. Instead of asking people to estimate their absolute risk, you ask them to compare themselves to others of the same age and sex. Are you more likely, less likely, or equally likely to experience this event compared to someone else like you? This sidesteps the problem of knowing actual risk statistics, because you're just looking for a systematic gap between how people view themselves and how they view others.
Both direct and indirect versions of this comparison exist. In a direct comparison, you simply ask: is your risk higher, lower, or the same as someone else's? In an indirect comparison, you ask people to separately estimate their own risk and the risk of others, then look for gaps between the two numbers.
When researchers aggregate these responses across groups, a consistent pattern emerges. For negative events, people rate their own risk as lower than the risk of their peers. The bias can only be defined at the group level—any individual's self-assessment might be accurate—but when you average across many people, the systematic tilt toward optimism becomes undeniable.
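To make the indirect method concrete, here is a minimal Python sketch of how such responses might be scored at the group level. Everything in it is illustrative: the ratings are invented, and the scoring is simply the mean of each person's self-estimate minus their estimate for an average peer.

```python
"""Minimal sketch of indirect comparative risk scoring.

The ratings below are hypothetical illustrations, not real data:
each participant estimates their own risk of a negative event and
the risk of an average peer, both on a 0-100 scale.
"""

# (self_risk, peer_risk) pairs for a hypothetical sample
ratings = [
    (20, 35), (10, 30), (25, 25), (40, 30), (15, 40),
    (30, 45), (12, 28), (22, 33), (18, 31), (27, 35),
]

# Indirect comparison: the gap between self- and peer-risk estimates.
# For a negative event, a negative mean gap across the group signals
# optimism bias (people see themselves as safer than their peers).
gaps = [self_r - peer_r for self_r, peer_r in ratings]
mean_gap = sum(gaps) / len(gaps)

print(f"mean self-minus-peer gap: {mean_gap:+.1f}")
if mean_gap < 0:
    print("group-level pattern: optimism bias (self rated safer than peers)")
elif mean_gap > 0:
    print("group-level pattern: pessimism bias")
else:
    print("no group-level bias detected")
```

Note that no single pair in the list proves anything; it's only the average across the sample that carries the signal.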
Why We Think Bad Things Won't Happen to Us
What explains this persistent lean toward rosy self-assessment? Researchers have identified four main categories of contributing factors: our desired outcomes, our cognitive shortcuts, the information asymmetry between ourselves and others, and our underlying emotional states.
We Want Good Things to Be True
The most straightforward explanation is that optimistic beliefs feel good. Thinking that positive events will happen to you is inherently satisfying. Believing you're less likely than others to suffer misfortune helps manage anxiety and other negative emotions. In psychological terms, this is called self-enhancement—we construct beliefs that make us feel better about ourselves and our futures.
There's also a social dimension to this. People who present themselves pessimistically tend to be less accepted by others. Nobody wants to be around someone who constantly predicts their own doom. So we may unconsciously maintain optimistic self-presentations partly because pessimism carries social costs. This is self-presentation theory: we craft and maintain a desired personal image, and optimism is part of that image.
The Illusion of Control
One of the most powerful drivers of optimism bias is perceived control. The more control people believe they have over an outcome, the more optimistically biased they become about it.
Consider driving. People routinely underestimate their risk of being in a car accident when they're behind the wheel, but they're much more realistic about the risk when they're passengers. The difference? Control. As a driver, you feel like you can prevent bad things from happening through your own skill and attention. As a passenger, you're at someone else's mercy.
The same pattern shows up with health risks. If someone believes they have substantial control over whether they contract HIV—through their behavior choices, their selection of partners, their use of protection—they tend to rate their personal risk as lower than average. The underlying logic isn't crazy: control does matter for many outcomes. But we systematically overestimate how much control we have and underestimate how much randomness and external factors contribute to what happens to us.
Prior experience with negative events tends to reduce optimism bias, and researchers think this happens partly because experience diminishes the illusion of control. Once something bad has happened to you despite your efforts to prevent it, the comforting belief that you can control outcomes takes a hit.
You Know More About Yourself Than About Anyone Else
Here is a subtle but important factor: when we assess our own risk, we draw on rich, detailed information about our specific situation. We know our diet, our exercise habits, our family history, our stress levels, our careful driving record. When we assess the risk of "others," we're necessarily thinking about a vague, generalized group. We might picture "the average person" or "someone my age," but we can't know the specifics of their lives the way we know our own.
This information asymmetry creates a predictable pattern. We make nuanced, conditional judgments about ourselves—"well, given that I eat well and exercise and don't smoke, my heart disease risk is probably low"—while we can only make crude, stereotypical judgments about others. The comparison is inherently unfair because we're comparing a richly detailed picture of ourselves against a blurry outline of everyone else.
Interestingly, studies have shown that this effect can be partially reversed. In one experiment, researchers had some participants list all the factors that might influence their chances of experiencing various events. Then a second group read those lists before making their own risk assessments. The readers showed less optimism bias—presumably because the lists gave them more concrete information about what shapes other people's risks, making the comparison targets feel more real and detailed.
The Closer They Are, the More Similar They Seem
Related to the information problem is what researchers call interpersonal distance. When the comparison target is vague—"other people your age," "the average person"—optimism bias is strong. When the target is specific and familiar—your best friend, your sister—the bias weakens considerably.
This connects to something called person-positivity bias: we tend to evaluate things more favorably the more they resemble an individual human being. Abstract groups feel less real, less human, and therefore less comparable to our vivid sense of ourselves. A specific friend is a real person with a real life; "other people" is a statistical abstraction.
There's also an in-group versus out-group effect. We perceive our risks as more similar to people we consider part of our own group and more different from people we consider outsiders. The boundaries of "people like me" matter enormously for how we calibrate our risk assessments.
Mood Colors Everything
Finally, our underlying emotional state shapes how optimistically we assess the future. People in positive moods show more optimism bias; people in negative moods show less. This makes intuitive sense: when you're happy, it's easier to recall happy memories and imagine positive futures. When you're sad, negative possibilities feel more vivid and plausible.
This has an interesting implication for depression. People who are depressed tend to show reduced optimism bias—they're more "realistic" in some statistical sense about the likelihood of bad things happening. Some researchers have called this depressive realism, though the concept remains debated. Anxiety similarly reduces optimism bias, probably because anxious people are already primed to think about threats and dangers.
The Brain's Optimism Machine
The optimism bias isn't just a cultural phenomenon or a result of motivated reasoning—it appears to have a neurological basis. Functional brain imaging studies have identified a region called the rostral anterior cingulate cortex, or rostral ACC, that seems to play a key role in maintaining optimistic beliefs.
The rostral ACC is involved in emotional processing and in retrieving autobiographical memories. When people imagine positive future events, this region shows extensive communication with the amygdala, which processes emotions. When people imagine negative future events, that communication is much more restricted. The brain, it seems, is literally wired to engage more fully with positive possibilities than negative ones.
This suggests that optimism bias may be a fundamental feature of human cognition rather than a superficial error in reasoning. Evolution may have selected for brains that default to optimism because the benefits—motivation, resilience, willingness to take risks—outweighed the costs of occasionally being caught off guard by negative events.
Perhaps most surprisingly, optimism bias isn't unique to humans. Researchers have documented similar patterns in rats and birds. When given choices that involve uncertain outcomes, these animals also seem to overweight the possibility of positive results. Whatever function optimism bias serves, it's apparently old enough to predate the evolution of human language and culture.
The Framing Problem
How you ask the question turns out to matter enormously for how much optimism bias you observe. This is called the valence effect, and it's been studied extensively by the psychologist Ron Gold and his colleagues since the early 2000s.
Here's how it works. You can ask about the same health risk in two different ways. You could present information about factors that increase your risk of heart disease and ask people how likely they are to develop it. Or you could present information about factors that decrease your risk and ask how likely they are to avoid it. Logically, these are equivalent questions—your probability of getting heart disease plus your probability of avoiding it should equal one hundred percent.
But psychologically, the framing changes everything. When researchers present risks with negative framing—focusing on the bad outcome—people show stronger optimism bias. When they use positive framing—focusing on avoiding the bad outcome—the bias is weaker. We're more motivated to believe we'll escape something terrible than to believe we'll achieve something good.
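As a hedged sketch of how this logical equivalence can be used in analysis, the Python below converts hypothetical positive-framing answers into their complements so that both groups sit on one scale. The numbers are invented to show the predicted pattern, not real data.

```python
"""Sketch of how the valence effect can be checked in framed risk data.

All probabilities are hypothetical illustrations. Logically the two
framings are complements: P(develop) + P(avoid) = 1, so estimates
collected under the positive framing can be converted and compared
against the negative framing on a common scale.
"""

# (self, other) probability estimates under the NEGATIVE framing:
# "how likely are you / an average peer to develop heart disease?"
negative_framing = [(0.15, 0.35), (0.20, 0.40), (0.10, 0.30)]

# (self, other) estimates under the POSITIVE framing:
# "how likely are you / an average peer to AVOID heart disease?"
positive_framing = [(0.75, 0.70), (0.80, 0.72), (0.85, 0.78)]

def mean_bias(pairs):
    # Optimism bias for a negative event: self risk below other risk,
    # so more-negative values mean a stronger bias.
    return sum(s - o for s, o in pairs) / len(pairs)

# Convert positive-framing answers to develop-probabilities (1 - p)
# so both groups are scored on the same scale.
converted = [(1 - s, 1 - o) for s, o in positive_framing]

print(f"bias under negative framing: {mean_bias(negative_framing):+.2f}")
print(f"bias under positive framing: {mean_bias(converted):+.2f}")
# With these illustrative numbers the negatively framed group shows
# the larger gap, which is the pattern the valence effect predicts.
```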
This asymmetry has practical implications. In finance, valence effects can lead investors to overestimate a company's future earnings when those prospects are framed optimistically, potentially inflating stock prices. In organizational planning, optimistic framing of project outcomes contributes to unrealistic schedules and budgets.
The Planning Fallacy and Its Expensive Consequences
Speaking of unrealistic budgets: optimism bias is intimately connected to what the psychologists Daniel Kahneman and Amos Tversky called the planning fallacy. This is the tendency for planned projects to come in over budget, behind schedule, and with fewer benefits than expected.
Kahneman and Tversky first identified the planning fallacy in the 1970s, and subsequent research has confirmed it across virtually every domain where people plan complex undertakings. Construction projects, software development, military campaigns, government programs—the pattern is remarkably consistent. Initial estimates are systematically too optimistic about costs, timelines, and outcomes.
The connection to optimism bias is straightforward. Planners believe they're more competent than average, that their project will avoid the problems that plagued similar efforts, that things will go more smoothly for them than for others. They underweight the base rates of failure and delay in their category of project because they believe their specific situation is different.
This has enormous real-world consequences. Research on megaprojects—massive infrastructure undertakings like bridges, tunnels, Olympic venues, and urban transit systems—has identified optimism bias as one of the largest single causes of cost overruns. When a project originally budgeted at two billion dollars ends up costing five billion, optimism bias is usually a major culprit.
Public Health and the Limits of Risk Communication
Perhaps nowhere is optimism bias more consequential than in public health. The entire enterprise of preventive medicine depends on people accurately assessing their risks and taking appropriate action. Optimism bias systematically undermines both steps.
Consider heart disease, the leading cause of death in many countries. Studies have found that people who underestimate their comparative risk of heart disease also know less about the condition and, even after being given educational materials, remain less concerned about their risk. If you're convinced that heart disease is something that happens to other people, why bother learning about it or changing your behavior to prevent it?
The same pattern shows up across a wide range of health behaviors: exercise, diet, sunscreen use, smoking cessation, safe sex practices. Risk perception influences behavior, and optimism bias distorts risk perception. People know that smoking causes cancer—the information is ubiquitous—but many smokers believe they personally are less likely to get cancer than other smokers. The general knowledge coexists with the specific optimistic delusion.
Adolescents present a particular challenge. This age group engages in high rates of risky behavior: smoking, drug use, unprotected sex, reckless driving. They're generally aware of the risks in the abstract. But awareness rarely translates into behavior change, and teenagers with strong optimism bias about risky behaviors tend to become even more optimistically biased as they age. The bias reinforces itself over time.
Not Everyone Is Equally Optimistic
There's an interesting exception to the universality of optimism bias: autistic people appear to be less susceptible to it. Research has found that autistic individuals show reduced optimism bias compared to neurotypical people, though the reasons for this difference aren't fully understood. It may relate to differences in how autistic people process social comparisons or self-relevant information.
At the other end of the spectrum, some people show pessimism bias—they exaggerate the likelihood that bad things will happen to them. This is particularly common among people with depression, who tend to overestimate their risk of negative outcomes. Interestingly, studies of smokers have found a small but significant pessimism bias in their assessments of heart disease risk, though the literature on this specific population is mixed.
Can We Overcome It?
Given all the problems that optimism bias causes—failed projects, preventable diseases, poor decisions—you might expect that researchers have developed effective techniques for reducing it. Unfortunately, the evidence is discouraging.
Multiple studies have attempted various interventions to reduce optimism bias, with limited success. In one particularly disheartening experiment, researchers tested four different approaches to reducing the bias. All four actually increased it. Attempts to correct the bias sometimes backfire, perhaps because calling attention to optimistic beliefs makes people more invested in defending them.
Some conditions do seem to attenuate the bias, though not eliminate it entirely. Reducing social distance helps—when people compare themselves to close friends rather than vague "others," the gap in perceived risk narrows. Direct experience with negative events also reduces the bias, presumably because it's harder to believe you're immune to something that's already happened to you.
But there may be good reasons why the bias is so resistant to correction. If optimism bias is genuinely adaptive—if it helped our ancestors survive and reproduce by keeping them motivated and willing to take risks—then we might expect the brain to defend it against logical attack. The bias may be a feature, not a bug, even when it causes problems in specific modern contexts.
The Optimist's Paradox
This leaves us in a strange position. Optimism bias is demonstrably real, pervasive, and consequential. It leads to poor planning, inadequate prevention, and systematic misjudgment of risks. From a purely rational standpoint, we would all be better off with more accurate self-assessments.
And yet. The bias exists for reasons. Happy people are optimistically biased; depressed people are more "realistic." Optimism motivates action, sustains effort, and makes setbacks more bearable. A world of perfectly calibrated risk assessors might be a world of paralyzed pessimists, too aware of all the things that could go wrong to attempt anything difficult.
Perhaps the best we can do is cultivate awareness of the bias without entirely eliminating it. We can build systems and processes—like independent project audits, or reference class forecasting that looks at base rates rather than specific plans—that correct for optimism at the institutional level. We can remind ourselves, when making important decisions, that we're probably underestimating what could go wrong.
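As an illustration of that institutional correction, here is a minimal Python sketch of reference class forecasting under invented numbers. The overrun ratios, the 80th-percentile risk appetite, and the uplift_at helper are all hypothetical, but the mechanics follow the idea of budgeting from base rates rather than from the plan's own details.

```python
"""Minimal sketch of reference class forecasting for a project budget.

The overrun ratios below are hypothetical stand-ins for a real
reference class (actual cost / original budget for comparable past
projects). The idea: set the budget from the outcome distribution of
similar projects rather than from the specifics of your own plan.
"""

# Hypothetical reference class: actual/estimated cost ratios of
# comparable past projects.
overrun_ratios = [1.1, 1.3, 1.0, 1.6, 1.2, 2.1, 1.4, 1.8, 1.15, 1.5]

def uplift_at(ratios, percentile):
    """Overrun ratio not exceeded by `percentile` percent of past
    projects (nearest-rank method)."""
    ordered = sorted(ratios)
    rank = max(0, int(len(ordered) * percentile / 100) - 1)
    return ordered[rank]

base_estimate = 2.0  # inside-view budget, in billions

# Budgeting at the 80th percentile of past overruns means accepting
# roughly a 20% residual chance of still exceeding the budget.
uplift = uplift_at(overrun_ratios, 80)
print(f"uplift factor: {uplift:.2f}")
print(f"risk-adjusted budget: {base_estimate * uplift:.2f} billion")
```

The point of the design is that the adjustment comes from what actually happened to comparable projects, not from anyone's confidence in this one.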
But we probably can't—and maybe shouldn't—stop believing that our futures will be brighter than the statistical averages suggest. That belief might be wrong, but it's also what gets us out of bed in the morning. The optimism bias is a useful delusion, and like most useful delusions, it persists precisely because it works.