Trolley problem
Based on Wikipedia: Trolley problem
Would you kill one person to save five?
It sounds like a question from a philosophy exam, and it is. But it's also a question that autonomous vehicle programmers are trying to answer right now, that military commanders have faced in actual combat, and that you yourself might wrestle with if you spend any time thinking seriously about ethics. The trolley problem, as this thought experiment is called, has escaped the ivory tower and crashed into everything from internet memes to courtroom precedents.
The Setup
Here's the classic scenario. A runaway trolley is barreling down the tracks toward five people who cannot escape. You're standing next to a lever that can divert the trolley onto a side track. The problem is that there's one person on that side track. Pull the lever, and you save five lives but end one. Do nothing, and five die while one survives.
Most people, when surveyed, say they'd pull the lever. About ninety percent, in fact. The math seems straightforward: five is greater than one. Save the many by sacrificing the few.
But watch what happens when we change the scenario slightly.
The Fat Man on the Bridge
Same runaway trolley. Same five people in danger. But now you're standing on a bridge above the tracks, and next to you is a large man. The only way to stop the trolley is to push him off the bridge and onto the tracks below. His body will halt the trolley, saving the five, but he will die.
The arithmetic hasn't changed. It's still one life traded for five. Yet most people recoil from this version. They'll flip a switch to redirect death, but they won't use their hands to push a person into it.
Why? What's the difference?
This is the trolley problem. Not the dilemma itself, but the puzzle of why our moral intuitions shift so dramatically based on seemingly trivial details. The English philosopher Philippa Foot first posed this puzzle in 1967, and the American philosopher Judith Jarvis Thomson gave it its memorable name in 1976. Since then, it has spawned an entire subfield of moral philosophy, generated countless variations, and revealed something genuinely strange about how humans think about right and wrong.
The Original Context Was Abortion
Foot wasn't actually interested in trolleys. She was analyzing debates about abortion and something called the doctrine of double effect, a principle from Catholic moral theology that distinguishes between intended consequences and foreseen but unintended side effects.
The doctrine works like this: it might be wrong to kill a fetus directly to save a mother's life, but permissible to perform a surgery that saves the mother while foreseeing that the fetus will die as a side effect. The death isn't intended, merely foreseen. According to the doctrine, this moral distinction matters enormously.
Foot wanted to test whether this distinction holds up under pressure. She constructed a series of scenarios to probe our intuitions about when it's acceptable to cause harm and when it isn't. The trolley was just a philosophical tool, a way to strip away the emotional and political baggage of the abortion debate and examine the underlying logical structure.
But the trolley took on a life of its own.
Why the Fat Man Feels Different
In the lever scenario, you're redirecting an existing threat. The trolley was already going to kill someone; you're just changing who. In the bridge scenario, you're introducing a new cause of death. The fat man wasn't in any danger until you pushed him.
Another way to look at it: flipping the switch uses the trolley to kill one person. Pushing the fat man uses the fat man himself as a tool, treating a human being as a mere means to an end. The philosopher Immanuel Kant famously argued that this is always wrong, that the dignity of persons requires us to treat them as ends in themselves, never merely as instruments.
The doctrine of double effect offers a related distinction. When you flip the switch, you don't intend to kill the one person on the side track. You intend to save the five, and the death is an unfortunate but unintended side effect. When you push the fat man, his death isn't a side effect at all. It's the mechanism by which you save the others. You're using his death instrumentally.
Critics find these distinctions slippery. Does it really matter whether death is intended or merely foreseen, if you know for certain it will happen? Can any abstract principle capture why hands-on killing feels worse than switch-flipping?
Your Brain on Trolleys
In 2001, the psychologist Joshua Greene and his colleagues put people into brain scanners while they contemplated trolley problems. What they found was striking.
When subjects considered the impersonal scenario, flipping a switch, their brains showed activity primarily in regions associated with deliberate reasoning and calculation. When they considered the personal scenario, pushing someone to their death, emotional centers lit up instead. The anterior cingulate cortex, which processes conflict between emotion and cognition, became especially active.
Greene argued that we have two moral systems: a fast, emotional one that generates gut reactions, and a slower, more deliberate one that calculates consequences. The trolley problem creates conflict between them. Our utilitarian calculator says to maximize lives saved. Our emotional intuition screams that pushing someone to their death is murder.
This dual-process theory has been enormously influential. It suggests that our moral disagreements aren't just about different values or different reasoning, but about which cognitive system happens to dominate in a given moment. People who set their emotional reactions aside and reason through the math are more likely to push the fat man. People who trust their gut are more likely to refuse.
Some philosophers have used this finding to argue that we should discount our emotional reactions in ethics, that they're evolutionary vestiges unsuited to modern moral reasoning. Others argue the opposite: that our emotional responses track something morally important that pure calculation misses, something about the significance of personal violence and bodily integrity.
Before the Trolley
Foot gets credit for launching the philosophical industry, but she wasn't the first to pose this type of dilemma.
In 1905, the psychologist Frank Chapman Sharp gave University of Wisconsin undergraduates a questionnaire that included a similar dilemma. In his version, a railway switchman had to choose between diverting a train to kill his own child or letting it continue toward several other people. Sharp was interested in how ordinary people reasoned about moral dilemmas, and he found considerable disagreement even then.
German legal scholars discussed similar scenarios in the 1930s and 1950s, examining how the law should handle cases where someone causes harm to prevent greater harm. The Jewish scholar Avrohom Yeshaya Karelitz, in a 1953 commentary on the Talmud, asked whether it's ethical to deflect a projectile from a larger crowd toward a smaller one.
Perhaps most strikingly, a 1954 American television play called "The Strike" depicted a Korean War commander facing a trolley-like choice. He could order an air strike on an approaching enemy force, which would kill his own twenty-man patrol unit, or call off the strike and risk the lives of five hundred soldiers in the main army. Twenty versus five hundred. The mathematics of war.
The Philosophers Proliferate Scenarios
Once Thomson named the problem and gave it momentum, moral philosophers went wild with variations. The scenarios became increasingly baroque, as if philosophers were competing to design the most absurd ethical thought experiment.
What if instead of pushing a fat man, you could drop him through a trap door? What if you could throw a switch that would cause a boulder to fall on one person, stopping the trolley before it hits five? What if the one person on the side track is the villain who tied the five to the main track in the first place? What if you're a surgeon who could kill one healthy patient to harvest organs for five dying patients?
Each variation probes a different moral intuition. The organ harvesting case triggers strong resistance in almost everyone, even though the mathematics is identical to the original trolley. Something about the medical context, the violation of trust, the deliberate killing of someone who came to you for help, makes it feel completely different from flipping a switch.
The villain variation reveals something interesting about retributive justice. When people learn that the fat man is responsible for the whole situation, their willingness to push him increases dramatically. It's no longer purely a question of arithmetic; considerations of desert and punishment enter the picture.
The Case Against Trolleyology
Not everyone thinks the trolley problem is useful. Critics argue that it's too artificial, too removed from real moral situations, to tell us anything important about ethics.
In a 2014 paper, researchers complained that the scenario is so extreme and unlikely that whatever intuitions it triggers may not generalize to actual moral life. We don't usually face situations where we're certain that exactly five people will die unless we kill exactly one person. Real ethical dilemmas involve uncertainty, partial information, complex relationships, and consequences that unfold over time.
The philosopher Nassim Jafari Naimi argued in 2017 that the trolley problem actually embodies a "reductive" and "impoverished" version of ethics. By stripping away all context, all relationships, all ambiguity, it privileges a cold utilitarian calculation that may serve powerful interests while ignoring the concerns of ordinary people. She was particularly critical of the idea that trolley problems could serve as templates for programming autonomous vehicles.
The British philosopher Roger Scruton made a related complaint. Trolley problems, he wrote, have "the useful character of eliminating from the situation just about every morally relevant relationship and reducing the problem to one of arithmetic alone." Real moral decisions, he argued, happen within webs of relationship, obligation, and meaning that the trolley problem deliberately erases.
Nevertheless, People Love Them
Despite these criticisms, trolley problems have escaped philosophy departments entirely. They've become a cultural phenomenon.
In 2017, the YouTuber Michael Stevens, better known as Vsauce, staged the first realistic trolley problem experiment. Participants were placed in what they believed was a real train-switching station and shown what they thought was live footage of a train approaching five workers on one track and one worker on another. They had access to a lever.
Of seven participants, only two pulled the lever.
This result is dramatically different from the ninety percent who say they'd pull the lever in surveys. When the situation felt real, with consequences that appeared genuine, people froze. They chose inaction. Perhaps our survey responses reflect what we think we should do, while our actual behavior reveals a deep reluctance to cause death directly, even to save lives.
The trolley problem has also become an internet meme, typically satirizing the dilemma by introducing absurd alternatives. One popular version shows the trolley heading toward five people, with the side track curving back around to hit them anyway. Another shows a person frantically pulling the lever back and forth, trying to hit everyone on both tracks. The jokes work because the original dilemma has become so culturally familiar that everyone gets the reference.
When Trolleys Meet the Courtroom
Legal scholars have found the trolley problem useful for examining how law handles the tension between consequences and prohibitions.
Consider the distinction between action and omission. In most legal systems, failing to prevent harm is treated differently from actively causing it. If you walk past a drowning child and do nothing, you may face moral condemnation but rarely criminal prosecution. If you hold the child underwater, that's murder. This act-omission doctrine is built into law, and the trolley problem helps explain why.
The most famous legal case that resembles a trolley problem is R v Dudley and Stephens, decided in 1884. Four shipwrecked sailors were adrift without food or water. After nearly three weeks, two of them killed the cabin boy, the weakest among them, and ate his flesh to survive. They were eventually rescued and put on trial.
The defendants argued necessity. If they hadn't killed the cabin boy, all four would have died. By sacrificing one, they saved three. The trolley arithmetic seemed to favor them.
The court rejected this defense entirely. English common law, it ruled, does not permit the deliberate taking of human life, even to save other lives. The sailors were convicted of murder. The case established a precedent that still stands: in English law, you cannot kill an innocent person to save others, no matter the mathematics.
Interestingly, the sentences were commuted almost immediately, and the sailors served only six months. The law condemned their action while recognizing the impossible situation they faced. This compromise, conviction without severe punishment, suggests the law itself is ambivalent about trolley-like dilemmas.
The Autonomous Vehicle Problem
Self-driving cars have brought the trolley problem out of philosophy seminars and into engineering meetings.
Imagine a car whose brakes have failed. It's heading toward a crowd of pedestrians. The car's computer can swerve to avoid the crowd, but this will send the vehicle into a concrete barrier, certainly killing the passenger inside. What should the software do? Should it prioritize the safety of its passenger or the safety of the pedestrians? Should it minimize total deaths, or does it have a special obligation to the person who bought and trusted the car?
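To see how reductive the "minimize total deaths" rule is when written down, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration: the Maneuver class, the choose_maneuver function, and the outcome numbers. No real vehicle software works this way; the point is how little the bare calculation captures.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical candidate action with an estimated outcome."""
    name: str
    expected_fatalities: float  # treated as known, which real crashes never allow

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option with the fewest expected fatalities.

    This is the bare utilitarian rule. It has no notion of passengers
    versus pedestrians, of responsibility, of consent, or of uncertainty
    beyond a single number per option.
    """
    return min(options, key=lambda m: m.expected_fatalities)

# The trolley-style framing: a binary choice with outcomes assumed certain.
options = [
    Maneuver("continue straight", expected_fatalities=5.0),
    Maneuver("swerve into barrier", expected_fatalities=1.0),
]
print(choose_maneuver(options).name)  # -> "swerve into barrier"
```

Notice that every contested question hides inside the expected_fatalities estimate and the decision to compare nothing else. That is exactly the objection critics raise against trolley-style framings of vehicle ethics.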
Researchers at the Massachusetts Institute of Technology created a platform called Moral Machine to gather public opinion on these questions. Participants from around the world were presented with various crash scenarios and asked to choose outcomes. The results revealed striking cultural differences. People in different countries had different preferences about whether to spare the young or the old, the many or the few, pedestrians or passengers.
Some researchers argue that trolley-style scenarios are the wrong frame for autonomous vehicle ethics. Real crashes don't present binary choices with known outcomes. A car doesn't actually know that it will definitely hit five people unless it swerves to hit one. The situations are filled with uncertainty, split-second timing, and imperfect information. Programming a car to make trolley-style calculations might not be feasible or desirable.
Others worry about the incentive structures. If people know that autonomous vehicles are programmed to sacrifice passengers in certain situations, will anyone buy them? Would you purchase a car that might deliberately kill you to save strangers?
What Professional Philosophers Think
In 2009, the philosophers David Bourget and David Chalmers surveyed their colleagues about the trolley problem. Among professional philosophers, sixty-eight percent said they would flip the switch to save five lives at the cost of one. Eight percent said they would not. The remaining twenty-four percent either had a different view, couldn't answer, or found the question ill-formed.
Philosophers, it turns out, are more willing to pull the lever than ordinary people in realistic experiments. Perhaps training in ethics makes you more comfortable with utilitarian calculations. Perhaps it just makes you more willing to commit to hypothetical positions you'll never actually have to act on.
Five Problems with the Problem
The Japanese philosopher Masahiro Morioka identified five "problems with the trolley problem" in a 2017 article that connected the thought experiment to the atomic bombings of Hiroshima and Nagasaki.
First, rarity. Situations that neatly match the trolley scenario are extraordinarily uncommon. Most moral decisions don't involve clear-cut tradeoffs between known numbers of lives.
Second, inevitability. The trolley problem assumes that the five deaths are certain unless you intervene. Real situations rarely offer such certainty.
Third, the safety zone. The person on the side track is assumed to be completely safe unless you redirect the trolley. This clean separation between the endangered and the safe is unusual in real life.
Fourth, the possibility of becoming a victim. In the thought experiment, you're a detached observer. In real decisions, you might be one of the people at risk.
Fifth, and most haunting, the trolley problem ignores the perspective of the dead. The person you sacrifice had no say in the matter. Their freedom of choice was violated absolutely. The trolley problem treats them as a mathematical unit, not as a person with their own life, projects, and relationships.
Morioka argued that the atomic bombings exemplify this structure. The decision to bomb Hiroshima and Nagasaki was made according to trolley-style reasoning: sacrifice a smaller number of Japanese civilians to save a larger number of American soldiers (and, some argued, Japanese civilians who would die in a prolonged invasion). The people who died in the bombings had no voice in this calculation. They were simply assigned to the wrong track.
The Limits of Arithmetic
After decades of proliferating variations and experimental studies, what has the trolley problem taught us?
Perhaps the central lesson is that our moral intuitions don't reduce to simple arithmetic. Most people will flip a switch to save five at the cost of one, but most people won't push a fat man off a bridge, harvest organs from a healthy patient, or frame an innocent person to prevent a riot. The numbers are the same; the moral judgments differ wildly.
This suggests that we don't actually believe in simple consequentialism, the view that the right action is always the one that produces the best overall outcomes. Something else is going on. We care about how outcomes are brought about, not just what outcomes occur. We distinguish between killing and letting die, between intended and foreseen consequences, between personal violence and impersonal mechanisms.
Whether these distinctions are rationally defensible is another question. Critics argue that they're mere evolutionary baggage, holdovers from a time when personal violence was common and switch-flipping was inconceivable. Defenders argue that they track something morally important, something about human dignity and the meaning of our actions that pure calculation misses.
The trolley problem doesn't settle these debates. But it does expose them with unusual clarity. By stripping away context and forcing binary choices, it reveals the fault lines in our moral thinking, the places where different principles collide and our intuitions pull in opposite directions.
Beyond the Tracks
The trolley problem began as a tool for analyzing the doctrine of double effect in debates about abortion. It became a cottage industry in moral philosophy, spawning endless variations and counterexamples. It entered psychology through brain scanning studies that revealed the dual-process structure of moral judgment. It invaded law through discussions of necessity and the act-omission distinction. It reached engineering through the ethics of autonomous vehicles. And finally, it became a meme, a cultural touchstone that everyone recognizes even if they can't articulate its philosophical significance.
This trajectory says something about the problem's power. It captures, in a vivid and memorable way, a deep tension in moral thought. We believe that outcomes matter. We also believe that some actions are wrong regardless of their outcomes. When these commitments collide, we get trolley problems, both the thought experiment and the real dilemmas that it was designed to illuminate.
The next time you see a trolley meme, remember that you're looking at a hundred years of moral philosophy, compressed into a joke. The question of whether to pull the lever is easy. The question of why your answer changes when the lever becomes a pair of hands, that's the real puzzle. And we're still working on it.