Effective altruism
Based on Wikipedia: Effective altruism
The Trolley Problem With Real Money
Here's a question that sounds absurd until you think about it: If you saw a child drowning in a pond, you'd jump in to save them, ruining your expensive suit in the process. Nobody would hesitate. But what if, instead of jumping in, you could sell that suit for a thousand dollars and donate the money to a charity that would save not one child but five children from dying of malaria?
The philosopher William MacAskill says you should sell the suit.
This uncomfortable answer sits at the heart of effective altruism, a movement that emerged in the early 2010s and has since channeled hundreds of millions of dollars toward causes its adherents believe do the most good per dollar spent. It's a community that includes Facebook co-founder Dustin Moskovitz, Skype co-founder Jaan Tallinn, and, until his spectacular downfall, the cryptocurrency exchange founder Sam Bankman-Fried.
The movement asks a deceptively simple question: If you want to help others, shouldn't you try to help them as much as possible?
The Birth of Calculated Compassion
Effective altruism didn't spring from nowhere. Its intellectual roots trace back to 1972, when the Australian philosopher Peter Singer published an essay called "Famine, Affluence, and Morality." Singer made an argument that still unsettles people half a century later: geographic distance, he insisted, has no moral significance. A starving child in Bangladesh deserves exactly as much of your concern as a starving child next door.
For decades, this remained an interesting philosophical position—the kind of thing students debated in ethics seminars before going on to live perfectly normal lives. But in the late 2000s, several separate communities began coalescing around the idea of actually doing something about it.
One group formed around GiveWell, an organization that emerged from the hedge fund world in 2007. Its founders wanted to apply the same rigorous analysis they used for investments to charitable giving. Which charities actually worked? Which ones wasted money? Could you measure the impact of a donated dollar the way you measured the return on a stock?
Another thread came from Oxford University, where a young philosopher named Toby Ord founded Giving What We Can in 2009. Members pledged to donate at least ten percent of their income to highly effective charities—a secular tithe, but one directed by evidence rather than tradition.
A third strand emerged from the rationalist community clustered around LessWrong, an online forum where participants tried to apply clear thinking to everything from artificial intelligence to personal decision-making. Many of these people were already worried about existential risks—threats that could end or permanently cripple human civilization—and they saw effective altruism as a natural extension of their concerns.
In 2011, these groups merged under a new umbrella organization and held a vote for its name. The winner: the Centre for Effective Altruism.
How to Choose What to Care About
Most charitable giving is driven by emotion. Someone sees a heartbreaking photograph, reads a compelling story, or encounters a charismatic fundraiser. Money flows toward whatever captures attention in the moment.
Effective altruists find this deeply irrational.
Instead, they advocate for what they call "cause prioritization"—the systematic comparison of different ways you might spend your charitable dollars. The framework they developed evaluates causes along three dimensions.
First, importance: How much would the world improve if this problem were solved? Curing aging would be more important than curing a rare disease that affects a handful of people, simply by the numbers.
Second, tractability: How much of the problem can actually be solved with additional resources? Some issues, however important, resist intervention. Others yield dramatically to relatively modest investments.
Third, neglectedness: How much attention is the cause already receiving? If a problem is both important and tractable but already attracts billions in funding, your marginal contribution matters less than it would for an equally worthy cause that few others support.
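To make the comparison concrete, here is a minimal sketch of how the three factors might be combined into a single score. The causes, the 0-to-10 ratings, and the multiplicative scoring rule are all illustrative assumptions rather than anything the movement formally prescribes.

```python
# Illustrative sketch of the importance / tractability / neglectedness framework.
# Every number below is invented for demonstration; real cause-prioritization
# research uses far more careful estimates and units.

causes = {
    # cause: (importance, tractability, neglectedness), each on a rough 0-10 scale
    "malaria prevention":   (8, 9, 5),
    "factory farming":      (7, 6, 8),
    "AI safety research":   (9, 3, 9),
    "art museum endowment": (2, 8, 1),
}

def priority_score(importance, tractability, neglectedness):
    """Combine the three factors multiplicatively."""
    return importance * tractability * neglectedness

# Rank causes from highest to lowest score.
ranked = sorted(causes.items(), key=lambda item: priority_score(*item[1]), reverse=True)

for name, factors in ranked:
    print(f"{name:22s} score = {priority_score(*factors)}")
```

Multiplying the factors rather than adding them captures the intuition that a cause scoring near zero on any one dimension is a poor bet no matter how well it does on the others.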
This framework led effective altruists to some surprising conclusions. Global health and poverty alleviation emerged as obvious priorities—organizations like the Against Malaria Foundation could demonstrate that distributing insecticide-treated bed nets saved lives at a cost of roughly four thousand dollars per death averted. That's an extraordinary return on charitable investment compared to, say, funding an art museum in a wealthy city.
But the framework also pointed toward causes that most people had never considered. Animal welfare, for instance, when measured by the number of sentient beings affected. Factory farms confine and kill billions of chickens every year under conditions that would horrify most consumers if they saw them directly. If animal suffering counts morally—and many effective altruists believe it does—then improving conditions on factory farms might be one of the most impactful things a person could do.
And then there's the long-term future.
The Billion-Year View
Some effective altruists take their reasoning to what critics consider an absurd extreme. If all lives count equally, they argue, then future lives count too. And there could be vastly more future people than there are present people.
This perspective, called longtermism, suggests that reducing existential risks—threats that could either destroy humanity entirely or permanently prevent us from reaching our potential—might be the most important thing we could possibly do. A catastrophe that ended civilization would eliminate not just the eight billion people alive today but the trillions upon trillions of people who might otherwise exist over millions of future years.
From this vantage point, even a small reduction in the probability of human extinction could outweigh almost any present-day intervention. Preventing a pandemic that might kill millions still leaves humanity intact. Preventing an artificial intelligence catastrophe that might end human civilization forever saves effectively everyone who would ever exist.
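The claim rests on nothing more exotic than expected-value arithmetic. Below is a minimal sketch; the population figure and the size of the risk reduction are illustrative assumptions, and longtermist estimates of both vary by many orders of magnitude.

```python
# Toy expected-value comparison; every number here is invented for illustration.
present_lives_saved = 1_000_000        # an unusually large present-day health intervention

potential_future_people = 1e16         # one highly uncertain estimate of future lives
risk_reduction = 1e-6                  # shaving one-in-a-million off the chance of extinction

expected_future_lives = potential_future_people * risk_reduction   # 1e10

print(f"present-day intervention:       {present_lives_saved:,.0f} lives")
print(f"tiny extinction-risk reduction: {expected_future_lives:,.0f} expected lives")
```

On this accounting the tiny risk reduction wins by four orders of magnitude, which is the whole force of the longtermist argument.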
This is why effective altruist funding has poured into artificial intelligence safety research. The concern isn't science fiction—it's the observation that superintelligent systems, if they ever emerge, might not share human values, and that the transition to a world with such systems could be the most dangerous moment in our species' history.
In 2020, Toby Ord published "The Precipice," a book arguing that humanity faces perhaps a one-in-six chance of existential catastrophe over the next century. He compared our situation to that of a teenager who has acquired tremendous power—nuclear weapons, engineered pathogens, increasingly capable AI—before developing the wisdom to wield it safely.
The Quality-Adjusted Life Year
To compare interventions across wildly different domains, effective altruists needed a common currency. In health economics, that currency is the QALY—the quality-adjusted life year.
The concept works like this: One year of perfect health equals one QALY. A year of life with significant disability might count as 0.7 QALYs, or 0.5, depending on the severity. Death contributes zero. If a treatment extends someone's life by ten years but at reduced quality, you multiply the years by the quality weight to get the total QALYs gained.
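A minimal worked example of that multiplication, with numbers invented purely for illustration:

```python
# QALY arithmetic with invented numbers.
years_gained = 10        # a treatment extends life by ten years
quality_weight = 0.7     # those years are lived at 70 percent of full health

qalys_gained = years_gained * quality_weight     # 7.0 QALYs

treatment_cost = 140_000                         # hypothetical cost in pounds
cost_per_qaly = treatment_cost / qalys_gained    # 20,000 pounds per QALY

print(f"{qalys_gained} QALYs gained at {cost_per_qaly:,.0f} pounds per QALY")
```

At 20,000 pounds per QALY, this hypothetical treatment would sit at the lower edge of the funding threshold discussed next.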
This sounds coldly mathematical, and it is. It's also the kind of calculation that health systems around the world already use to decide which treatments to fund. The British National Health Service, for instance, generally won't pay more than twenty to thirty thousand pounds per QALY gained—a threshold that has sparked fierce debate but also forces honest conversations about scarcity and trade-offs.
Effective altruists took this framework and ran with it. GiveWell's recommended charities are selected partly on the basis of cost per life saved or cost per QALY, adjusted for uncertainty. The Against Malaria Foundation, a perennial favorite, delivers its health gains at a fraction of the cost per QALY that most developed-world health interventions can manage.
But the QALY framework also reveals uncomfortable truths. Many interventions that feel obviously good—building schools, funding scholarships, supporting local hospitals—turn out to be far less cost-effective than distributing antimalarial bed nets or providing vitamin A supplementation to children in developing countries. The intuition that charity should start at home runs directly into the arithmetic of global inequality.
The Drowning Child Problem, Revisited
Return for a moment to that drowning child in the pond. The philosopher Kwame Anthony Appiah posed a clever challenge to Singer's original analogy in 2006. What if the most effective action isn't to save the child yourself—ruining your expensive suit in the process—but to sell the suit and donate the proceeds to charity? You could save multiple children instead of just one.
Singer would respond that this misses the point. The drowning child is right in front of you. The duty to rescue is immediate and compelling in a way that the duty to send money is not.
But MacAskill went further. Presented with a scenario in which he could either save a child from a burning building or save a Picasso painting to sell for charity, he said the effective altruist should save the Picasso.
This answer struck many observers as monstrous. The psychologist Alan Jern called it "unnatural, even distasteful, to many people." It violates deep moral intuitions about the importance of immediate rescue, about the special obligations we have to those we can directly help.
MacAskill later softened his position, endorsing a "qualified definition of effective altruism" that acknowledges constraints—special obligations to those nearby, moral rules against letting preventable harm occur right in front of you. But the tension remains. Effective altruism, taken to its logical conclusion, seems to demand that we become calculating machines, optimizing every decision for maximum impact regardless of emotional pull or social convention.
The Problem With Measurement
Not everything that matters can be counted, and not everything that can be counted matters. This aphorism, often attributed to Einstein, captures a fundamental challenge for effective altruism.
The movement's emphasis on evidence and quantification naturally directs resources toward interventions with clear, measurable outcomes. Distributing bed nets is easy to count. So is administering vaccinations or providing deworming medication to schoolchildren. These interventions produce data points: number of nets distributed, number of vaccines administered, reduction in disease prevalence.
But what about political reform? What about strengthening democratic institutions, or fighting corruption, or shifting cultural attitudes toward human rights? These interventions are "worked on one grinding step at a time," as the writer Pascal-Emmanuel Gobry observed, and their results resist controlled experiments.
Critics worry that effective altruism systematically undervalues interventions that address root causes—structural inequality, political oppression, cultural discrimination—in favor of interventions that treat symptoms. Distributing bed nets is wonderful, but it doesn't address the reasons why some countries remain poor while others become rich. It's a band-aid on a wound that requires surgery.
The movement has responded to these criticisms in part by expanding its scope. Open Philanthropy, the grantmaking organization funded primarily by Dustin Moskovitz and Cari Tuna, has made significant investments in criminal justice reform, immigration policy research, and other systemic issues. But the tension between measurable and important remains unresolved.
Effective Altruism and Its Discontents
The New York Times columnist Ross Douthat once imagined "effective altruists sitting around in a San Francisco skyscraper calculating how to relieve suffering halfway around the world while the city decays beneath them." It's a vivid image, and it captures something real about the movement's demographics and blind spots.
Effective altruism emerged from elite universities—Oxford, Cambridge, Harvard, Stanford—and has remained concentrated there. Its adherents are disproportionately male, disproportionately white, and disproportionately employed in technology and finance. The movement's emphasis on impartial, detached reasoning appeals to people who are comfortable with abstraction, which correlates strongly with certain educational and socioeconomic backgrounds.
The philanthropy scholar William Schambra has argued that effective altruism undermines the kind of face-to-face, community-based charitable giving that builds social trust and sustains democracy. When neighbors help neighbors, they strengthen bonds of reciprocity that make civil society possible. When donors write checks to distant charities selected by algorithms, they participate in a fundamentally different—and perhaps less valuable—form of altruism.
There's also the awkward matter of how some effective altruists acquired the wealth they're now giving away. The technology industry that produced so many of the movement's major donors has its own moral complexities: addictive social media platforms, algorithmic systems that amplify misinformation, working conditions that have drawn regulatory scrutiny.
And then there's Sam Bankman-Fried.
The FTX Catastrophe
For several years, Bankman-Fried was effective altruism's most prominent public face. The founder of the cryptocurrency exchange FTX, he had made his wealth explicitly to give it away—a strategy the movement called "earning to give." He lived frugally, at least by billionaire standards. He spoke at effective altruism conferences. He funded effective altruism organizations to the tune of hundreds of millions of dollars.
Then, in November 2022, FTX collapsed. Bankman-Fried was eventually convicted of fraud. It emerged that much of his charitable giving had been funded by customer deposits—other people's money, which he had no right to give away.
The scandal forced the movement into painful self-examination. Had effective altruism's emphasis on impact encouraged reckless risk-taking? Had the community been too eager to celebrate a wealthy donor without scrutinizing how he acquired his wealth? Was "earning to give" an invitation to ethical shortcuts—a framework that judged the ends while ignoring the means?
Defenders of effective altruism pointed out that fraud is fraud regardless of the fraudster's stated philosophy. Bankman-Fried's crimes were crimes; his effective altruism was incidental. But critics noted that the movement had provided him with social capital and moral cover, amplifying his influence in ways that may have delayed accountability.
The FTX collapse also revealed the degree to which effective altruism had become dependent on a small number of extremely wealthy donors. When those donors stumbled—whether through fraud, market downturns, or simply changing priorities—the organizations they funded became vulnerable.
What Effective Altruism Gets Right
Despite its controversies, effective altruism has introduced ideas that deserve to outlast any particular scandal or criticism.
The movement has normalized talking about charitable effectiveness. Before GiveWell, most donors had no way to compare charities rigorously. Now there are multiple organizations devoted to researching which interventions work and which don't. This is straightforwardly valuable.
Effective altruism has also surfaced neglected causes. Factory farming affects billions of animals under conditions that are genuinely horrifying, but until recently it received almost no philanthropic attention. Artificial intelligence safety research seemed like science fiction until effective altruists began funding it seriously; now it's a mainstream policy concern.
Perhaps most importantly, the movement has made explicit what was always implicit: that how we allocate our charitable resources involves trade-offs. Giving to one cause means not giving to another. Pretending otherwise—treating all charitable impulses as equally worthy—may be emotionally comfortable, but it comes at the cost of impact.
The philosopher Richard Pettigrew has argued that effective altruists often feel more profound dismay at distant suffering than most people feel for suffering nearby. They're not unemotional calculating machines; they're people whose empathy extends further than convention suggests it should. Larissa MacFarquhar, who profiled several effective altruists in her book "Strangers Drowning," found the same thing: these were not coldly rational optimizers but people possessed by an unusual intensity of moral concern.
The Question That Won't Go Away
In 2015, MacAskill published a book called "Doing Good Better." The title captured both the ambition and the humility of effective altruism. Not doing good perfectly. Not doing the most good possible. Just doing good better than we otherwise would.
This modest framing appeals to people who find the movement's more extreme implications troubling. You don't have to sell the Picasso. You don't have to donate everything above subsistence income to GiveWell's top charities. You don't have to work on AI safety instead of becoming a doctor.
But you could ask whether your giving is actually accomplishing anything. You could compare charities. You could consider causes you'd never thought about. You could do good better.
The drowning child is still in the pond. The question is what you do about it.