Rationalist community
Based on Wikipedia: Rationalist community
The Movement That Thinks It Might Save the World
Imagine a group of people who genuinely believe they might be humanity's last line of defense against extinction. Not in a dramatic, Hollywood way—no bunkers or weapons stockpiles—but through careful thinking, probability calculations, and blog posts. This is the rationalist community, and whether you find them inspiring or insufferable likely depends on how you feel about people who've decided that being right about everything is both possible and morally necessary.
The rationalists emerged from the internet in the 2000s, coalescing around blogs with names that sound like they were chosen by very earnest graduate students: LessWrong, Overcoming Bias, Slate Star Codex. What started as a collection of people interested in cognitive science and decision theory has grown into something stranger and more influential—a subculture that has captured the attention of Silicon Valley billionaires, shaped the artificial intelligence safety debate, and occasionally spawned what critics call cults.
What Rationalists Actually Believe
The word "rationalist" might conjure images of Enlightenment philosophers in powdered wigs, but this is something different. The modern rationalist movement draws a careful distinction between two types of rationality.
The first is epistemic rationality—forming accurate beliefs about the world. This sounds obvious, but rationalists argue that human brains are riddled with systematic errors. We're overconfident. We seek out information that confirms what we already believe. We're terrible at understanding probabilities. Epistemic rationality means developing techniques to catch yourself making these mistakes.
The second is instrumental rationality—taking actions that actually achieve your goals. Again, this seems straightforward until you consider how often people sabotage their own objectives through poor planning, akrasia (the philosophical term for weakness of will), or simply not thinking through the consequences of their choices.
Bayesian inference sits at the heart of rationalist thinking. Named after Thomas Bayes, an 18th-century Presbyterian minister who worked out a mathematical formula for updating beliefs based on new evidence, Bayesian reasoning asks: given what I already know, how much should this new information change my mind? Rationalists try to apply this framework to everything from career decisions to the probability of existential catastrophe.
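To make that concrete, here is a minimal sketch of a single Bayesian update in Python. The rule is Bayes' theorem: the posterior P(H|E) equals P(E|H) times P(H), divided by P(E). The test scenario, the function name, and the numbers below are illustrative assumptions rather than anything drawn from rationalist writing.

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# A toy update: how much should one positive test result shift belief
# in a hypothesis? All numbers below are illustrative assumptions.

def bayes_update(prior: float, likelihood: float, false_positive_rate: float) -> float:
    """Return the posterior P(H | E) given:
    prior               -- P(H), belief in the hypothesis before the evidence
    likelihood          -- P(E | H), chance of seeing the evidence if H is true
    false_positive_rate -- P(E | not H), chance of seeing it anyway if H is false
    """
    # Total probability of observing the evidence under both hypotheses.
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return (likelihood * prior) / p_evidence

# Example: a 1% prior, a test that detects the condition 90% of the time
# but also fires falsely 9% of the time.
posterior = bayes_update(prior=0.01, likelihood=0.90, false_positive_rate=0.09)
print(f"After one positive result: {posterior:.1%}")   # ~9.2%

# Updates compound: feed the posterior back in as the new prior.
posterior2 = bayes_update(prior=posterior, likelihood=0.90, false_positive_rate=0.09)
print(f"After a second independent positive: {posterior2:.1%}")  # ~50%
```

The second print statement shows why rationalists talk about "updating" as a repeated habit rather than a one-off calculation: a single piece of weak evidence moves a 1% prior only to about 9%, but evidence compounds, and a second independent positive result pushes the same belief to roughly 50%.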
The Anti-Spock Philosophy
Here's where things get counterintuitive. Despite the emphasis on logic and probability, rationalists explicitly reject the Star Trek vision of rationality embodied by Mr. Spock—the emotionless logic machine who views feelings as a weakness.
Eliezer Yudkowsky, the closest thing the movement has to a founding figure, argues that emotions often constitute rational responses to situations. Fear of heights keeps you alive. Love motivates you to care for others. The goal isn't to eliminate emotions but to notice when they're leading you astray and recalibrate accordingly.
This creates a distinctive rationalist aesthetic. Members tend to be intensely analytical about their own feelings, discussing jealousy and attachment and anxiety with the same careful precision they'd apply to a probability problem. Whether this produces healthier relationships or just more neurotic ones is a matter of ongoing debate, both inside and outside the community.
The San Francisco Connection
While the rationalist movement began online, it developed a strong physical presence in the San Francisco Bay Area. This isn't coincidental. Silicon Valley was already home to people who believed technology could solve humanity's problems, and the rationalist emphasis on clear thinking and world optimization resonated deeply with the region's founder culture.
Bay Area rationalists often live in intentional communities—group houses where like-minded people share space and, frequently, unconventional relationship structures. Polyamory is common enough in rationalist circles that it's become a running joke in adjacent communities. The rationalist response is typically that if your goal is human flourishing, and multiple people can make each other happy, why should arbitrary social conventions prohibit the arrangement?
This willingness to question received wisdom extends throughout rationalist culture. Nothing is sacred—not traditional family structures, not established career paths, not polite dinner conversation topics. As journalist Tara Isabella Burton observed, rationalists believe that nothing "could, or should, get between human beings and their ability to apprehend the world as it really is." Social niceties, fear of political incorrectness, emotional discomfort—all of these are obstacles to clear thinking and must therefore be overcome.
The AI Apocalypse Scenario
If there's one issue that defines the rationalist movement, it's artificial intelligence safety. Many rationalists believe that the development of artificial general intelligence—a system that can match or exceed human cognitive abilities across all domains—represents an existential threat to humanity.
The argument goes roughly like this: an AI system designed to optimize for some goal will pursue that goal with superhuman intelligence and efficiency. If the goal isn't perfectly aligned with human values—and getting that alignment right is extraordinarily difficult—the AI might take actions that are catastrophic for humanity while technically achieving its programmed objective. The classic thought experiment involves an AI tasked with manufacturing paperclips, which converts all available matter on Earth (including humans) into paperclips because nobody thought to specify that it shouldn't.
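The thought experiment can be compressed into a deliberately crude toy. The sketch below, with made-up resource names and numbers, shows a greedy planner told only to maximize raw material for paperclips; because nothing in its objective mentions what humans value, farmland and cities score just as well as scrap metal. None of this models real AI systems; it only illustrates the shape of the worry.

```python
# A deliberately crude toy of objective misspecification; not a model of real AI.
# All resource names and numbers are made up for illustration.

RESOURCES = {          # tons of raw material recoverable from each source
    "scrap metal": 50,
    "iron mines": 500,
    "farmland": 800,
    "cities": 2000,
}

def plan(score):
    """Greedy planner: consume every source the objective scores above zero,
    richest-looking sources first."""
    ranked = sorted(RESOURCES, key=lambda s: -score(s, RESOURCES[s]))
    return [s for s in ranked if score(s, RESOURCES[s]) > 0]

def naive_objective(source, tons):
    # Misspecified goal: more paperclip feedstock is always better, full stop.
    return tons

PROTECTED = {"farmland", "cities"}

def patched_objective(source, tons):
    # Still crude: sources humans depend on are vetoed by hand.
    return -1 if source in PROTECTED else tons

print(plan(naive_objective))    # ['cities', 'farmland', 'iron mines', 'scrap metal']
print(plan(patched_objective))  # ['iron mines', 'scrap metal']
```

The patched objective avoids the problem only because someone anticipated it in advance, which is precisely the rationalist worry: for a sufficiently capable optimizer, the failure modes nobody thought to rule out are the ones that matter.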
This concern has propelled rationalist-adjacent institutions into surprising influence. The Machine Intelligence Research Institute, founded by Yudkowsky in 2000, conducts technical research on AI alignment. The Center for Applied Rationality teaches the thinking techniques rationalists believe might help humanity navigate this challenge. Silicon Valley luminaries including Elon Musk, Peter Thiel, and Facebook co-founder Dustin Moskovitz have donated hundreds of millions of dollars to organizations associated with the movement.
In 2023, the rationalist community's concerns entered the mainstream when Sam Altman was briefly removed as CEO of OpenAI. While the full story remains murky, the incident demonstrated how seriously some of the most powerful people in technology take the AI safety arguments that rationalists have been making for decades.
The Psychological Cost of Saving the World
Believing you might be among the only people who can prevent human extinction is, as it turns out, extremely stressful.
Mental health crises have become a recognized issue within the rationalist community. When every action might have cosmically significant consequences, ordinary decisions become paralyzing. When you've convinced yourself that most people don't understand the danger, social isolation follows naturally. When your entire worldview centers on a threat that might or might not materialize, anxiety becomes a constant companion.
Bloomberg Businessweek journalist Ellen Huet noted that the rationalist movement "valorizes extremes: seeking rational truth above all else, donating the most money and doing the utmost good for the most important reason." This drive toward optimization, she observed, "can lend an attractive clarity, but it can also provide cover for destructive or despicable behavior."
The effective altruism movement, which overlaps heavily with rationalism, encourages members to donate large portions of their income to causes selected for maximum impact. Some adherents work high-paying jobs they find unfulfilling specifically to earn money to donate. Others have taken on crushing responsibility for the welfare of distant strangers while struggling to maintain their own wellbeing. The logic is impeccable; the psychological sustainability is questionable.
The Egregious Ideas Problem
The rationalist commitment to following arguments wherever they lead has created uncomfortable situations.
Gideon Lewis-Kraus, writing in The New Yorker, argued that rationalists "have given safe harbor to some genuinely egregious ideas," including scientific racism and neoreactionary political philosophy. The rationalist defense is that vile ideas should be engaged with and refuted rather than left to fester as forbidden knowledge. If an argument is wrong, surely rational analysis can demonstrate why. Refusing to engage only gives bad ideas mystique.
But this creates a peculiar dynamic. Rationalist forums have become one of the few places where people with socially unacceptable views can find interlocutors willing to engage seriously with their arguments. Whether this helps debunk those views or simply provides them an intellectual veneer is genuinely unclear.
The situation becomes more paradoxical when you consider that rationalists also believe some ideas are dangerous enough to suppress. These are called information hazards—knowledge that could cause harm if widely disseminated. The most famous example within the community is something called Roko's Basilisk, a thought experiment so potentially damaging to mental health that discussing it was banned on LessWrong for years. The specifics are bizarre enough that explaining them would require another essay, but the existence of the concept reveals an interesting tension: who decides which ideas are too dangerous to discuss?
Harry Potter and the Introduction to Rationality
In 2010, Eliezer Yudkowsky began publishing a Harry Potter fanfiction called "Harry Potter and the Methods of Rationality." In this version of the story, Harry is raised by a scientist and approaches the magical world with a researcher's mindset. Why does magic work? What are its limits? Can you get rich through arbitrage between the wizarding and muggle economies?
The fanfiction became enormously popular, eventually running to over 660,000 words across 122 chapters. More importantly for the movement, it served as an effective recruitment tool. A 2013 survey of LessWrong users found that a quarter had discovered the site through the fanfiction. Yudkowsky used the work to solicit donations for the Center for Applied Rationality, which teaches courses based on rationalist techniques.
This approach—using narrative to smuggle in philosophical concepts—represents something distinctive about rationalist culture. The community produces an enormous amount of content: blog posts, podcasts, fiction, and what they call "sequences"—extended series of interconnected essays building up a worldview piece by piece. If you want to understand rationalism, you can spend months reading the source material. Many people have.
The Post-Rationalists and Other Splinters
Not everyone who enters the rationalist community stays on the orthodox path.
The post-rationalists represent one significant departure. These former rationalists became disillusioned with what they perceived as the community's increasingly dogmatic and cult-like tendencies. They argue that the original movement lost focus on the less quantifiable elements of human flourishing—art, beauty, meaning, spirituality—in its pursuit of optimization and probability calculations.
Post-rationalists congregate on Twitter under the acronym TPOT (This Part of Twitter), maintaining some of the rationalist interest in careful thinking while embracing a broader range of concerns. They're more likely to discuss embodied cognition, phenomenology, and the limitations of purely analytical approaches to life.
Then there are darker offshoots. The Zizians—a splinter group that combined rationalist-style thinking with veganism and anarchism—became notorious in 2025 when members were suspected of involvement in four murders. The group had originated within the Bay Area rationalist community but turned against mainstream rationalist organizations, accusing them of anti-transgender discrimination and misusing donor funds.
Anna Salamon, director of the Center for Applied Rationality, compared the Zizian belief system to that of a doomsday cult. Publications including the Boston Globe and New York Times drew comparisons to the Manson Family. Whether this represents a unique pathology or reveals something troubling about the broader rationalist project remains a subject of heated debate.
The Cult Question
Is the rationalist community a cult? The question comes up frequently, and the answer depends largely on definitions.
Humanist chaplain Greg Epstein, quoted in the New York Times, offered this observation: "When you think about the billions at stake and the radical transformation of lives across the world because of the eccentric vision of this group, how much more cult-y does it have to be for this to be a cult? Not much."
The rationalist community exhibits several cult-like features. There's a charismatic founder figure in Yudkowsky. There's a specialized vocabulary that insiders use and outsiders struggle to parse. There are intentional communities where members live together and form relationships primarily with other members. There's an apocalyptic belief system (the AI risk narrative) that creates urgency and justifies extreme measures.
But there are also notable differences. Rationalists emphasize independent thinking and explicitly encourage questioning authority—including Yudkowsky himself. There's no central organization controlling membership or finances. Former members freely criticize the community without apparent retaliation. The movement has produced genuine intellectual contributions that have influenced mainstream thinking about AI, decision-making, and cognitive bias.
Perhaps the most accurate description is that rationalism creates an environment where cult-like dynamics can emerge without the community as a whole being a cult. Some subgroups develop unhealthy characteristics. Some individuals become consumed by the ideology. But many people engage with rationalist ideas casually, take what's useful, and go about their lives.
The Critics and Their Acronym
Computer scientist Timnit Gebru and philosopher Émile Torres have proposed a framework for understanding the rationalist movement and its allies. They use the acronym TESCREAL to link together what they see as a cluster of related ideologies: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism.
The argument is that these movements share underlying assumptions—particularly the belief that advanced technology will radically transform human existence and that we should be planning for that transformation now. Critics in the TESCREAL framework argue that these ideologies divert resources and attention from immediate problems affecting real people today toward speculative future scenarios that may never occur.
The transhumanism connection is particularly strong. Many rationalists believe that human biology can and should be enhanced through technology—that aging might be cured, intelligence augmented, and consciousness eventually uploaded to computer substrates. Whether this represents exciting possibility or techno-utopian delusion depends on your baseline skepticism of grand predictions.
The Question of Influence
Whatever you think of rationalist ideas, the movement has achieved remarkable influence given its origins as a collection of blogs.
AI safety research, once a fringe concern, is now taken seriously by major technology companies and governments worldwide. Effective altruism, which emerged from rationalist circles, has redirected billions of dollars toward carefully selected causes. Rationalist concepts—Bayesian reasoning, cognitive bias mitigation, the importance of forecasting—have spread into business, policy, and popular culture.
At the same time, the rationalist track record on predictions is mixed. The movement has been warning about transformative AI for over two decades, and while recent advances in large language models have vindicated some concerns, humanity remains stubbornly un-extinct. Critics argue that rationalists have repeatedly moved goalposts and redefined terms to avoid acknowledging failed predictions.
The community's influence in Silicon Valley has produced its own complications. When billionaires fund organizations aligned with a particular worldview, that worldview gains amplification it might not otherwise deserve. When AI companies employ researchers trained in rationalist thinking, the movement's concerns become embedded in the technology shaping everyone's future. Whether this represents appropriate weight being given to important ideas or undue influence by a self-selected group is not a question with an easy answer.
The Uncomfortable Truth
The rationalist community sits at an uncomfortable intersection of genuine insight and potential delusion. They've developed useful techniques for thinking more clearly about complex problems. They've raised important questions about technological risk that the broader world was ignoring. They've built communities where intellectually curious people can engage with difficult ideas.
But they've also demonstrated the dangers of believing you're smarter than everyone else, the psychological costs of world-saving ambition, and the capacity of any ideology—even one explicitly devoted to avoiding bias—to become tribal and dogmatic.
Perhaps the most rationalist thing you can do is hold both possibilities in mind simultaneously: that these people might be onto something important, and that their certainty about their own correctness might be their most dangerous cognitive bias of all.