Wikipedia Deep Dive

Longtermism

Based on Wikipedia: Longtermism

The Ice Cream Problem

Here's a striking fact: the United States alone spends more money on ice cream than humanity spends on ensuring our species survives the next century.

That's not a metaphor. Philosopher Toby Ord calculated that less than five hundredths of one percent of global economic output goes toward what he calls "longtermist causes"—efforts explicitly designed to help future generations thrive. Meanwhile, Americans spend billions annually on frozen desserts. Ord's modest proposal? "Start by spending more on protecting our future than we do on ice cream, and decide where to go from there."

This comparison crystallizes something that feels both obvious and revolutionary: the people who will exist after we're gone matter. And there might be a lot of them—potentially trillions of humans living across millions of years, if we don't blow it. The philosophy that takes this observation seriously has a name: longtermism.

What Longtermism Actually Claims

At its core, longtermism rests on three premises that seem almost too simple to be controversial.

First, future people count morally. A child born in the year 2200 has just as much moral worth as a child born today. Their happiness matters. Their suffering matters. The accident of when they exist doesn't diminish their humanity.

Second, there might be vastly more future people than present or past people combined. If humanity survives for another million years (roughly the lifespan of a typical mammalian species), the number of future humans could dwarf everyone who has ever lived.

Third, we can actually influence whether those future lives happen and whether they're good ones. Our choices today ripple forward through time.

Put these together and you get longtermism: "the view that positively influencing the long-term future is a key moral priority of our time," in the words of philosopher William MacAskill, who coined the term around 2017 with his Oxford colleague Toby Ord. Note the careful phrasing: "a key priority," not necessarily "the key priority." MacAskill distinguishes between regular longtermism and what he calls "strong longtermism"—the more demanding claim that the far future should be our primary moral concern, overshadowing everything else.

Ancient Roots of a New Idea

Although the word is recent, the intuition behind longtermism is ancient.

The Iroquois Confederacy, a union of Indigenous nations in what is now northeastern North America, operated under an oral constitution called the Gayanashagowa—the Great Law of Peace. One of its principles urges decision-makers to always consider "not only the present but also the coming generations." This has been interpreted as the Seven Generation Principle: before taking any significant action, consider how it will affect your descendants seven generations into the future.

The philosophy resurfaced in contemporary Western thought through thinkers like Derek Parfit, whose dense 1984 masterwork "Reasons and Persons" explored the ethics of causing people to exist, and Jonathan Schell, whose 1982 book "The Fate of the Earth" confronted what nuclear weapons meant for humanity's future. Parfit wrote that we "live during the hinge of history"—a phrase that has become something of a mantra for longtermists. He meant that the decisions made by people alive today might determine whether there's a future at all, and if so, what kind.

Why Now Might Be Different

For most of human history, ordinary people couldn't do much about the very long-term future. You could plant trees your grandchildren would climb, build cathedrals your great-grandchildren would worship in, maybe write books that would outlast you. But shaping what happens in a thousand years? That seemed beyond anyone's reach.

Two developments changed this, according to researcher Fin Moorhouse.

The first is that we've invented technologies powerful enough to destroy ourselves. Nuclear weapons were the first. Engineered pandemics might be another. Artificial intelligence that escapes our control could be a third. For the first time in our species' history, we possess tools that could end the human story entirely. Suddenly, the long-term future became something that required protection.

The second development is that science has gotten better at prediction. Not perfect—nobody claims we can forecast the year 3000 with any precision—but substantially better than our ancestors could manage. We understand climate systems, pandemic dynamics, and technological trajectories well enough to make meaningful statements about what might happen decades or centuries from now. We can identify risks before they materialize.

MacAskill adds another observation: our era involves an extraordinary amount of change. Economic growth, technological progress, population expansion—all of these have accelerated dramatically compared to most of human history. A Roman farmer in 100 CE lived a life not terribly different from an Egyptian farmer in 2000 BCE. But the world of 1924 and the world of 2024 are almost unrecognizably different.

This rate of change cannot continue forever. Eventually, we'll hit physical limits. Population can't grow exponentially on a finite planet. Energy use can't keep growing beyond what the sun can supply. Humanity will, at some point, settle into a more stable state. The question is which stable state. And the decisions we make during this unusual, volatile period might lock in the answer for a very long time.
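To see why the current pace cannot last, a back-of-the-envelope calculation helps. The sketch below is illustrative only: the 2 percent growth rate, the 10,000-year horizon, and the comparison with the number of atoms in the observable universe are assumptions chosen to show the shape of the argument, not figures from this article.

```python
# Illustrative arithmetic: compound a modest growth rate over a long horizon
# and compare the result with a rough physical ceiling. All numbers here are
# assumptions for the sake of the example.
import math

growth_rate = 0.02   # 2% per year, modest by recent historical standards
years = 10_000       # a short span by the timescales discussed in this article

orders_of_magnitude = years * math.log10(1 + growth_rate)
print(f"Total growth factor: about 10^{orders_of_magnitude:.0f}")
# Prints roughly 10^86 -- more than the ~10^80 atoms in the observable
# universe, which is why growth at this pace must eventually level off.
```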

Two Ways to Help the Future

Researchers who study longtermism have identified two broad strategies for improving the long-term future.

The first is survival: preventing catastrophes so bad they would end human civilization permanently. These are called existential risks—threats to "humanity's long-term potential," as Ord defines them. The obvious examples include all-out nuclear war, pandemics engineered to be maximally lethal, asteroid impacts, and the more speculative but potentially more dangerous scenario of artificial intelligence that pursues goals misaligned with human flourishing. Climate change is a less clear-cut case; while it could cause immense suffering and societal stress, most researchers don't think it would literally end human civilization, though it might make other risks more likely.

Reducing existential risk increases the quantity of future life by ensuring there are future people at all.

The second strategy is what researchers call "trajectory change": not just ensuring humanity survives, but ensuring it survives well. Even if we dodge every existential bullet, we could still end up in a future that's stable but terrible—a global totalitarian state that persists for millennia, for instance, or a civilization that never expands its moral circle beyond a narrow definition of who counts.

Trajectory changes increase the quality of future life.

The Abolition of Slavery as a Case Study

To understand what a trajectory change looks like, consider the abolition of slavery in the nineteenth century.

Historian Christopher Leslie Brown has argued that abolition was not inevitable. Slavery was still enormously profitable when the movement to end it gained momentum. There was no economic necessity driving abolition; if anything, economics pointed the other way. What happened instead was a moral revolution—a fundamental shift in how societies understood the permissibility of owning other human beings.

MacAskill uses this example to make a subtle point: value changes don't happen automatically. Slavery existed for thousands of years, across virtually every human civilization. There was nothing predetermined about its abolition. It required specific people making specific choices at specific moments in history.

And here's the longtermist insight: once slavery became morally unacceptable in the dominant global culture, it probably stayed unacceptable. The trajectory changed. Whatever the far future holds, it almost certainly doesn't include a return to chattel slavery as a widespread practice. The moral revolution persisted.

This suggests that bringing about positive value changes today—expanding the moral circle, strengthening commitments to human rights, developing norms against weapons of mass destruction—might be one of the most durable ways to help future generations.

The Problem of Discounting

Economists have a concept called the social discount rate, which captures how much less we should value future benefits compared to present ones. If a dollar today is worth more than a dollar tomorrow, how much more? The standard framework says future value decreases exponentially the further out you go. Money a century from now is worth almost nothing in present terms.

There are legitimate reasons for some discounting. Future benefits are uncertain—you might not be around to receive them. If economic growth continues, future people will be richer, so an extra dollar means less to them than to someone today. These factors justify reducing how much weight we give to distant outcomes.

But many economists also bake in something called "pure time preference"—the idea that future benefits should count for less simply because they're in the future, quite apart from any uncertainty or wealth differences. This is the part longtermists object to.

Frank Ramsey, the economist who developed the standard discounting model, also found pure time preference philosophically indefensible. He acknowledged it might describe how people actually behave—we do seem to prefer immediate gratification—but he didn't think it offered any guidance about how we should behave. The fact that something is further away in time, by itself, doesn't make it less important.
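To make the disagreement concrete, here is a minimal sketch of the standard machinery, using the Ramsey rule (discount rate = pure time preference + elasticity of marginal utility × growth rate). The specific parameter values are illustrative assumptions, not figures from Ramsey or from the longtermist literature; the point is how much of the shrinkage over a century comes from the pure-time-preference term alone.

```python
# A minimal sketch of exponential discounting with a Ramsey-style rate.
# The parameter values are illustrative assumptions, not real estimates.

def ramsey_rate(pure_time_pref: float, elasticity: float, growth: float) -> float:
    """Social discount rate = pure time preference + elasticity * growth rate."""
    return pure_time_pref + elasticity * growth

def present_value(future_benefit: float, rate: float, years: int) -> float:
    """Value today of a benefit received `years` from now, discounted exponentially."""
    return future_benefit / (1 + rate) ** years

benefit, horizon = 1_000_000, 100  # a $1M benefit arriving 100 years from now

with_ptp = ramsey_rate(pure_time_pref=0.02, elasticity=1.5, growth=0.02)     # 5%
without_ptp = ramsey_rate(pure_time_pref=0.0, elasticity=1.5, growth=0.02)   # 3%

print(f"Discounted at {with_ptp:.0%}:  ${present_value(benefit, with_ptp, horizon):,.0f}")
print(f"Discounted at {without_ptp:.0%}: ${present_value(benefit, without_ptp, horizon):,.0f}")
```

In this toy example, dropping the pure-time-preference term raises the present value of the same century-out benefit roughly sevenfold; the uncertainty and wealth adjustments stay, but the "later counts less simply because it is later" component is gone.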

Longtermists often invoke an analogy with space. We generally accept that distance in space doesn't reduce moral worth. A child suffering on the other side of the planet matters just as much as a child suffering next door, even if we're more likely to help the nearby child for practical reasons. Longtermists like MacAskill suggest that "distance in time is like distance in space." A child who will suffer in 2125 matters just as much as one suffering in 2025.

Not everyone agrees. Philosopher Andreas Mogensen has defended what he calls "temporalism"—the view that temporal proximity does strengthen certain moral duties. His argument draws on kinship: common-sense morality allows us to prioritize those more closely related to us. Parents may favor their own children over strangers. Perhaps generations can similarly favor their closer temporal neighbors, weighing the welfare of their children more heavily than their great-great-great-grandchildren.

What About People Alive Today?

One of the most persistent criticisms of longtermism is that it might lead us to neglect present suffering in favor of speculative future benefits.

The worry has some force. If you're thinking in terms of trillions of potential future people, the interests of the eight billion people alive today can start to seem like a rounding error. Critics point to climate change as a test case: if you're focused on the next ten thousand years, the damage climate change will cause over the next fifty might seem less urgent. More troublingly, if you believe that certain present-day sacrifices could secure "astronomical" future value, couldn't that justify almost anything—including atrocities committed in service of the long-term good?

Anthropologist Vincent Ialenti has argued that avoiding this trap requires what he calls "a more textured, multifaceted, multidimensional longtermism"—one that resists the temptation to reduce all ethical considerations to a single calculation about expected future value.

Longtermists have several responses.

The most common is that actions good for the long-term future usually aren't in tension with helping people today. Consider pandemic preparedness. If we invest in better antivirals, faster vaccine development, improved personal protective equipment, and stronger public health infrastructure to guard against the worst-case scenarios—the engineered plagues that could threaten human extinction—we're also building capacity that helps with ordinary flu seasons and more conventional outbreaks. The same research that might save humanity from an existential threat could also reduce suffering right now.

Similarly, reducing the risk of nuclear war benefits everyone, including the people currently living in nuclear-armed states. Developing artificial intelligence safely serves present-day users as well as future generations. There's more overlap between near-term and long-term interests than the objection assumes.

The Prediction Problem

A different criticism targets longtermism's reliance on predicting the consequences of our actions over enormous timescales. Can we really know how choices made today will affect people living in the year 10,000? Even our best models can barely forecast next year's economy. The idea that we could meaningfully influence outcomes millennia away might be hubris.

Longtermist researchers acknowledge the difficulty but argue that some predictions are more tractable than others. We might not be able to predict the details of future civilization, but we can be fairly confident about certain things: if humanity goes extinct, there will be no future humans. That's predictable. If we develop a stable global totalitarian state, it might be very hard to escape. That's a reasonable conjecture about how certain political equilibria work.

The strategy these researchers adopt is to focus on what they call "value lock-in" events—moments where the choices we make become very difficult to reverse. Human extinction is the ultimate lock-in: there's no coming back from it. But other events might also have persistent effects. The development of certain technologies, the establishment of certain political institutions, the entrenchment of certain values—these could shape the trajectory of civilization for centuries.

By concentrating on these high-leverage moments rather than trying to optimize every detail of the distant future, longtermists hope to make their project more tractable. You don't need to predict what life will be like in a thousand years. You just need to ensure that a thousand years from now, people are still around to figure that out for themselves.

The Growing Longtermist Community

Longtermism isn't just an academic philosophy. It has spawned a constellation of organizations trying to put these ideas into practice.

Cambridge University hosts the Centre for the Study of Existential Risk, founded in 2012 to research threats to humanity's survival. The Future of Life Institute, based in the Boston area, has become particularly prominent for its work on artificial intelligence safety, including an open letter in 2023 calling for a pause on training the most powerful AI systems. The Global Priorities Institute at Oxford focuses on the more abstract philosophical questions underlying longtermism. Stanford has its own Existential Risks Initiative.

On the practical side, 80,000 Hours offers career advice to people who want to have the most positive impact with their working lives, often steering them toward longtermist causes. Open Philanthropy, funded largely by Facebook co-founder Dustin Moskovitz and his wife Cari Tuna, has distributed hundreds of millions of dollars to organizations working on existential risk reduction. Longview Philanthropy and the Forethought Foundation work on directing resources toward longtermist priorities.

These organizations are tightly connected to the broader effective altruism movement, which tries to apply evidence and reason to figure out how to do the most good. Longtermism has become one of effective altruism's dominant threads, particularly among people who've concluded that the sheer scale of potential future suffering (or flourishing) might dwarf anything we could accomplish for people alive today.

Beyond Humans

Some longtermists extend their moral concern beyond future humans.

If you believe that non-human animals can suffer and that their suffering matters morally, then the long-term future of animal welfare becomes a longtermist concern. Factory farming, with its billions of animals living in conditions that would horrify most people if they witnessed them, represents a potential trajectory that might persist for centuries if not challenged. Expanding humanity's moral circle to include other sentient beings could be one of the most significant and durable improvements we could make to the long-term future.

This connects to a broader longtermist theme: the importance of value changes. Just as the abolition of slavery represented a moral revolution that reshaped human civilization, a future shift toward taking animal suffering seriously—or toward recognizing the potential moral status of artificial minds, or toward any number of moral expansions we can't currently foresee—could have effects lasting far longer than any policy or technological intervention.

Living at the Hinge of History

Toby Ord's book "The Precipice," published in 2020, argues that we're living through the most dangerous period in human history. Not because life was better in the past—it wasn't, for most people—but because we've developed the power to end everything while not yet developing the wisdom to wield that power safely.

Ord offers probability estimates for various existential risks over the next century. He puts the overall chance of an existential catastrophe at roughly one in six—the same odds as Russian roulette. Unaligned artificial intelligence tops his list of concerns, followed by engineered pandemics. Nuclear war and climate change, while serious, he considers less likely to actually end human civilization.

The numbers are necessarily speculative. We can't run controlled experiments on existential risks. Researchers like Nick Bostrom have relied on expert opinion elicitation—asking people who study these threats what probabilities they'd assign—since traditional research methods don't apply. There's room for significant disagreement about whether Ord's estimates are too high, too low, or even meaningful.

But the underlying point remains: we have options. The future is not yet determined. The choices people make today—whether to invest in pandemic preparedness, how to develop artificial intelligence, which values to prioritize, how to structure global governance—will shape whether the human story continues and what kind of story it becomes.

That might sound grandiose. It's also, if longtermists are right, simply true.

A Framework for Thinking About the Future

In his 2022 book "What We Owe the Future," MacAskill offers a practical framework for evaluating which actions might have the biggest long-term impact. He suggests considering three factors: significance, persistence, and contingency.

Significance measures the average value of bringing about a particular state of affairs. Some changes matter more than others. Preventing human extinction is more significant than improving the efficiency of solar panels.

Persistence measures how long a change lasts. A temporary improvement that gets reversed within a generation has less long-term value than a permanent shift in civilization's trajectory. The abolition of slavery was highly persistent—that moral revolution stuck.

Contingency measures whether the change depends on specific actions. If something would happen anyway, you can't take credit for causing it. The most valuable interventions are those that genuinely pivot history onto a different track.
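As a rough illustration only, the three factors can be imagined as a crude scoring heuristic. That framing, the example interventions, and the numbers below are assumptions made for this sketch; MacAskill presents the framework qualitatively, not as a formula.

```python
# Toy sketch: score candidate interventions by multiplying rough ratings for
# significance, persistence, and contingency. This scoring rule and the
# example numbers are invented for illustration, not MacAskill's own method.
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    significance: float  # rough average value of the change (arbitrary 0-10 scale)
    persistence: float   # how long the change is expected to last, in years
    contingency: float   # chance it would NOT have happened without us (0-1)

    def score(self) -> float:
        return self.significance * self.persistence * self.contingency

candidates = [
    Intervention("Durable norm against bioweapons", significance=8, persistence=500, contingency=0.3),
    Intervention("Flashier reform likely to be reversed", significance=9, persistence=20, contingency=0.9),
]

for c in sorted(candidates, key=Intervention.score, reverse=True):
    print(f"{c.name}: {c.score():,.0f}")
```

Even this crude version shows the logic: a somewhat less dramatic change that persists and genuinely depends on our action outranks a flashier one that fades or would have happened anyway.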

MacAskill acknowledges that applying this framework involves pervasive uncertainty. We often can't know how significant an outcome will be, whether it will persist, or how contingent it is on our choices. He offers four principles for navigating this fog: take "robustly good" actions that seem positive across a range of scenarios, build up options for the future rather than locking in specific outcomes, invest in learning more before committing to irreversible decisions, and above all, avoid causing harm.

That last principle matters. The worry about longtermism justifying atrocities comes from the idea that you could multiply any present harm by potential trillions of future beneficiaries to get a positive expected value calculation. MacAskill's framework pushes back: causing harm is especially risky when you're uncertain, because you might be wrong about those future benefits while being very right about the present harm.

Paying It Forward

Ord offers a different angle on why we might owe something to future generations.

Consider what past generations did for us. People we'll never meet cleared forests, built cities, discovered antibiotics, established democracies, fought for rights they wouldn't live to enjoy. They made sacrifices whose benefits flowed forward in time to people who had no way to repay them.

We are those people. We're the beneficiaries of countless gifts from the past. The arrow of time means we can't pay our ancestors back directly. But we can pay it forward. We can do for future generations what past generations did for us.

On this view, our duties to future generations aren't just about cold calculations of expected value. They're grounded in a kind of reciprocity across time—a partnership of the generations, where each generation receives from those before it and gives to those who will come after.

Whether you find this framing more compelling than the utilitarian arithmetic is perhaps a matter of moral temperament. But it offers a different entry point into longtermist thinking, one less vulnerable to the objection that calculating expected value over trillions of people leads to absurd conclusions.

The Bet We're Making

Critics sometimes object that longtermism amounts to taking low-probability bets on extremely large payoffs—a kind of moral Pascal's Wager. We're asked to devote resources to preventing speculative catastrophes when we could be addressing certain, immediate suffering. Isn't a bird in the hand worth two in the bush?

This objection deserves serious consideration. Expected value reasoning can lead to counterintuitive places. If you multiply a tiny probability by a big enough number, you can justify almost anything.

But longtermists might respond that the probabilities aren't actually that tiny. Ord's one-in-six estimate for existential catastrophe this century is not a remote possibility—it's the odds of rolling a particular number on a die. Would you board a plane if there were a one-in-six chance it would crash? We routinely make decisions to avoid much smaller risks.
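A few lines of arithmetic make both halves of this exchange concrete. Apart from Ord's one-in-six figure, every number below is an assumption invented for the illustration.

```python
# Illustrative expected-value arithmetic. Apart from the one-in-six risk
# figure quoted from Ord, all numbers are invented for this example.

def expected_value(probability: float, payoff: float) -> float:
    return probability * payoff

# The Pascal's-wager worry: a minuscule probability times an astronomical
# payoff can still swamp a sure thing.
long_shot = expected_value(1e-12, 1e18)   # one-in-a-trillion shot at 10^18 "units"
sure_thing = expected_value(1.0, 1e5)     # a certain gain of 10^5 "units"
print(long_shot > sure_thing)             # True: the worry in a single comparison

# The longtermist reply: the headline probability is not tiny. Starting from a
# one-in-six risk this century, even a 1% relative reduction is an absolute
# change of about 1 in 600 -- nothing like a Pascal-style long shot.
risk_this_century = 1 / 6
absolute_reduction = risk_this_century * 0.01
print(f"Absolute risk reduction: about 1 in {1 / absolute_reduction:,.0f}")
```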

And the immediate versus speculative framing may be misleading. As mentioned earlier, many longtermist interventions help people today as well as future generations. The choice isn't between helping the present and helping the future. Often, it's between helping both and helping neither.

Perhaps the deepest issue is what kind of bet we're implicitly making by not taking longtermism seriously. If there's even a reasonable chance that the longtermist worldview is correct—that there could be vast numbers of future people whose lives depend partly on our choices—then ignoring that possibility is itself a bet. It's a bet that the future doesn't much matter, or that we can't affect it, or that the probabilities are so low they can be rounded to zero.

Maybe that bet is justified. But it should be made consciously, with full awareness of the stakes.

In the meantime, we might at least start by spending more on our future than we do on ice cream.

This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.