Slate Star Codex
Based on Wikipedia: Slate Star Codex
Imagine writing a blog so influential that when a newspaper threatened to reveal your real name, six thousand people signed a petition to stop them—including a Harvard psychologist, a Princeton philosopher, and one of the world's most famous economists. That's what happened to Scott Alexander, the psychiatrist behind Slate Star Codex, a blog that somehow managed to become required reading for Silicon Valley executives, academic researchers, and anyone trying to make sense of the strange new world we live in.
The Blog That Spawned a Movement
Slate Star Codex launched in 2013 as a personal blog by a Bay Area psychiatrist writing under a pen name. By 2020, it had become something far stranger and more significant: a gathering place for what's called the "rationalist community"—people who believe that careful reasoning, probabilistic thinking, and intellectual honesty can help us navigate an increasingly complex world.
The numbers alone are staggering. If you compiled every post Alexander wrote into a single document, you'd end up with a PDF over nine thousand pages long. That's roughly fifteen novels' worth of material, covering everything from psychiatric medications to medieval history to the ethics of eating animals.
But it wasn't just quantity. The New Yorker called Alexander's arguments "often counterintuitive and brilliant." Economist Tyler Cowen, himself one of the most influential bloggers in the world, described him as "a thinker who is influential among other writers"—a second-order kind of influence where your ideas shape the people who shape public opinion.
How to Think About Thinking
What made Slate Star Codex distinctive wasn't just what Alexander wrote about, but how he wrote about it. Many posts began with something called an "epistemic status"—a frank assessment of how confident Alexander was in what he was about to say.
This might seem like a small thing, but it's actually revolutionary.
Most writers present their arguments with uniform confidence, as if everything they say is equally certain. Alexander instead might label a post "epistemic status: highly speculative, based on limited evidence" or "epistemic status: pretty confident, though I could be missing something." This simple practice encouraged readers to calibrate their own beliefs accordingly, rather than either accepting everything uncritically or dismissing it all as mere opinion.
The approach reflected a broader philosophy. The rationalist community, which Alexander helped define, emerged from an earlier community called LessWrong—a website dedicated to improving human reasoning. The name comes from the goal of being "less wrong" over time, a humble acknowledgment that perfect knowledge is impossible but systematic improvement isn't.
Meditations on Why Everything Is Terrible
One of Alexander's most influential essays bears the evocative title "Meditations on Moloch." In ancient Canaanite religion, Moloch was a god associated with child sacrifice—parents would burn their children alive to earn divine favor. Alexander uses this horrifying image as a metaphor for something equally troubling: situations where everyone loses because no one can coordinate.
Consider the tragedy of the commons. A shared pasture can support a certain number of cattle. Each farmer benefits by adding one more cow. But if every farmer adds cows, the pasture is destroyed and everyone's cattle starve. Each individual decision is rational; the collective outcome is catastrophic.
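A toy model makes the commons arithmetic concrete. Everything below is an illustrative assumption (the linear yield function, the capacity of 100 cows), not a claim from the essay:

```python
# Toy commons: pasture yield per cow declines as cows are added.
# Each farmer keeps the full value of their own cows but shares the damage.
CAPACITY = 100  # assumed: yield collapses to zero at this herd size

def value_per_cow(total_cows):
    """Pasture yield per cow under an assumed linear-decline model."""
    return max(0.0, 1.0 - total_cows / CAPACITY)

def marginal_gain(my_cows, total_cows):
    """One farmer's private gain from adding one more cow."""
    before = my_cows * value_per_cow(total_cows)
    after = (my_cows + 1) * value_per_cow(total_cows + 1)
    return after - before

# With 10 farmers at 5 cows each (50 total), adding a cow still pays privately...
print(marginal_gain(5, 50))  # positive: individually rational
# ...even though total pasture output falls, so the other farmers' losses
# outweigh the adder's gain.
print(51 * value_per_cow(51) - 50 * value_per_cow(50))  # negative
```

Each farmer faces the same positive private incentive, so herds grow until the shared pasture is worthless.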
Or think about the prisoner's dilemma, a famous puzzle from game theory. Two criminals are arrested. If both stay silent, they each get one year in prison. If one betrays the other, the betrayer goes free while the other gets ten years. If both betray, both get five years. Rationally, each prisoner should betray—but if both follow this logic, both end up worse off than if they'd cooperated.
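The payoff reasoning above can be checked mechanically. A minimal sketch using the sentences from the text (measured in years in prison, so lower is better for each prisoner):

```python
# Payoffs as (my_years, other_years) for each pair of choices.
YEARS = {
    ("silent", "silent"): (1, 1),
    ("silent", "betray"): (10, 0),
    ("betray", "silent"): (0, 10),
    ("betray", "betray"): (5, 5),
}

def best_response(other_choice):
    """The choice that minimizes my sentence, holding the other prisoner fixed."""
    return min(("silent", "betray"), key=lambda mine: YEARS[(mine, other_choice)][0])

# Betraying is a dominant strategy: it is the best response either way...
for other in ("silent", "betray"):
    assert best_response(other) == "betray"

# ...yet mutual betrayal leaves both worse off than mutual silence.
print(YEARS[("betray", "betray")], YEARS[("silent", "silent")])  # (5, 5) vs (1, 1)
```

The assertions pass precisely because individual rationality and collective welfare come apart, which is the pattern Alexander calls Moloch.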
Alexander argues these patterns—coordination failures, race-to-the-bottom dynamics, situations where individual rationality produces collective disaster—explain many of humanity's worst problems. Climate change, arms races, the hollowing out of journalism, the addictiveness of social media: Moloch lurks behind them all.
The essay connects these game-theoretic patterns to artificial intelligence research. If we build superintelligent AI systems that optimize for the wrong goals, we might face the ultimate coordination failure—one with no second chances.
The Toxoplasma of Your Newsfeed
Another influential essay, "The Toxoplasma of Rage," explains why the most divisive content spreads fastest on social media.
The title references Toxoplasma gondii, a parasite that infects mice and rewires their brains to be attracted to cat urine—which gets them eaten by cats, where the parasite reproduces. Alexander argues that controversial ideas work similarly: they hijack our tribal instincts to spread themselves, regardless of whether spreading them serves anyone's interests.
Here's the mechanism. Suppose you're a dedicated animal rights activist. You want to signal your commitment to the cause. You could share content from Vegan Outreach, an organization that focuses on practical dietary changes. But that's boring—everyone agrees factory farming is unpleasant.
Instead, you share something outrageous from PETA, an organization famous for inflammatory campaigns comparing meat-eating to the Holocaust. Now you're taking a real stand. Your fellow activists know you're serious because you're willing to endorse something controversial.
The result? PETA becomes far more famous than Vegan Outreach, even though the latter is probably more effective at actually reducing animal suffering. The controversy itself becomes the product, and the most divisive content wins.
Alexander applied this framework to a specific case: the Rolling Stone article "A Rape on Campus," which described a brutal gang rape at a University of Virginia fraternity. The story turned out to be fabricated, but before that revelation, it generated enormous attention and passionate debate.
Why did this particular story spread so far? Alexander argues it was precisely because it was so extreme that it provoked strong reactions on both sides. Supporters could signal their commitment to believing survivors; skeptics could signal their commitment to due process. Everyone was talking, everyone was outraged, and everyone was performing for their respective audiences.
Shiri's Scissor and the Engineering of Discord
In a short story called "Sort By Controversial," Alexander introduced a concept that has since escaped into the broader internet: the "scissor statement."
A scissor statement is an opinion that splits people cleanly in two, with each side utterly unable to comprehend how anyone could hold the opposite view. It's not just a matter of disagreement—it's a matter of fundamental incomprehension, where each side sees the other as not just wrong but morally defective or perhaps insane.
The story imagines a machine learning system trained to generate maximally divisive statements. Feed it demographic information, and it produces sentences precisely calibrated to tear apart families, friendships, and nations. The fictional generator, dubbed "Shiri's Scissor," produced statements so perfectly divisive that they could trigger immediate violence whenever two people who disagreed encountered each other.
This is science fiction, but barely. Social media algorithms already optimize for engagement, and engagement often means conflict. The scissor statement concept gives us language to discuss what happens when this optimization goes wrong—or perhaps, when it goes exactly right from the algorithm's perspective.
Lizardmen and the Limits of Polling
Not all of Alexander's contributions are weighty philosophical essays. Some are simple observations that crystallize something we all vaguely knew but couldn't articulate.
In 2013, Public Policy Polling reported that four percent of Americans believe lizard people run the world. Headlines duly appeared: "Four Percent of Americans Are Insane!" or words to that effect.
Alexander had a simpler explanation.
When a pollster calls you during dinner and asks if you believe reptilian humanoids secretly control global politics, about four percent of people will say yes. Not because they actually believe it, but because they're bored, or annoyed, or think it's funny to mess with surveys, or weren't really paying attention, or wanted to see what would happen if they said yes.
Alexander called this "Lizardman's Constant"—the baseline level of noise in any poll, the percentage of responses that don't reflect sincere beliefs. It's probably around four percent for most questions, though it might be higher for boring questions and lower for questions with real consequences.
The concept is genuinely useful. When you see a poll saying that five percent of the population believes something absurd, the right reaction might not be alarm about that five percent. It might be recognition that you've just discovered the floor—the minimum number of people who will endorse any statement under any circumstances.
Alexander even proposed a practical solution: include a "trap question" with an obviously absurd answer, and discard respondents who choose it. This would clean up a lot of polling data, though it would also make polls more expensive and complicated.
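The trap-question idea can be sketched as a toy simulation. The 4% noise rate and 1% sincere-belief rate below are assumptions chosen for illustration, not data from the post, and the model treats noise respondents as saying yes to everything:

```python
import random

random.seed(0)

LIZARDMAN = 0.04    # assumed noise floor: fraction answering at random/for fun
TRUE_BELIEF = 0.01  # assumed sincere belief rate in some absurd claim

def respondent():
    """One simulated respondent: (answer to real question, answer to trap question)."""
    if random.random() < LIZARDMAN:
        return (True, True)  # noise respondents say yes to anything, trap included
    sincere = random.random() < TRUE_BELIEF
    return (sincere, False)  # sincere respondents reject the obvious trap

responses = [respondent() for _ in range(100_000)]

raw = sum(ans for ans, _ in responses) / len(responses)
kept = [ans for ans, trap in responses if not trap]  # discard trap-takers
cleaned = sum(kept) / len(kept)

print(f"raw yes-rate:     {raw:.3f}")      # ~0.05: sincere belief plus noise floor
print(f"cleaned yes-rate: {cleaned:.3f}")  # ~0.01: closer to sincere belief
```

The raw poll overstates belief by roughly the Lizardman's Constant; discarding anyone who falls for the trap question recovers something near the sincere rate.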
Dancing with Neoreactionaries
One of the more controversial aspects of Slate Star Codex was Alexander's willingness to engage with neoreactionaries—a loose movement that, at its extreme, advocates for abolishing democracy and returning to something like feudalism or monarchy.
The most prominent neoreactionary thinker is Curtis Yarvin, who wrote under the pen name Mencius Moldbug. Yarvin's views include belief in natural racial hierarchies and explicit rejection of Enlightenment values like equality and individual rights. These are not mainstream positions, to put it mildly.
Alexander did not endorse these views. In fact, he wrote a thirty-thousand-word critique titled "The Anti-Reactionary FAQ," systematically dismantling neoreactionary arguments about the benefits of monarchy, the decline of Western civilization, and related claims.
But he also allowed neoreactionaries to comment on his blog and participate in discussion threads, on the theory that the best way to defeat bad ideas is to expose them to scrutiny rather than to suppress them. This "open marketplace of ideas" approach made some readers uncomfortable, though others appreciated the rare opportunity to see these ideologies seriously engaged rather than merely denounced.
The result was a strange kind of influence. Several journalists cited Alexander's FAQ as the best available explanation of what neoreactionaries actually believe—meaning his critique became a primary source for understanding the very movement he opposed.
The Day the Blog Went Dark
In June 2020, Alexander did something drastic. He deleted the entire blog.
The reason was a New York Times technology reporter working on a story about Slate Star Codex. According to Alexander, the reporter told him that the Times intended to publish his full legal name, which he had kept private for years. The newspaper's policy, the reporter explained, was to identify people by their real names.
Alexander called this doxing—the practice of publicly revealing someone's private information against their wishes. He said he feared harassment, damage to his psychiatric practice, and potential danger to his patients, who might be unsettled to learn their psychiatrist ran a popular and controversial blog.
The Times responded with a carefully worded non-denial: "We do not comment on what we may or may not publish in the future. But when we report on newsworthy or influential figures, our goal is always to give readers all the accurate and relevant information we can."
What happened next was remarkable.
Within days, a petition defending Alexander's anonymity gathered over six thousand signatures. The signatories weren't just loyal readers—they included Steven Pinker, the Harvard cognitive psychologist; Jonathan Haidt, the social psychologist known for his work on moral reasoning; Scott Sumner, an influential monetary economist; Scott Aaronson, a quantum computing researcher at MIT; and Peter Singer, the Princeton philosopher sometimes called the world's most influential living ethicist.
Critics debated whether the Times was applying its anonymity policy consistently. They noted that the paper routinely granted anonymity to sources who requested it, and questioned why a pseudonymous blogger deserved different treatment than, say, a whistleblower or a crime victim. The New Statesman, while sympathetic to Alexander, cautioned that the public only had his account of what the Times intended, and suggested waiting for more information.
Inside the Times, according to The Daily Beast, the controversy sparked internal debate about whether identifying Alexander served any journalistic purpose.
Phoenix Rising
The standoff lasted seven months.
In January 2021, Alexander launched a new blog called Astral Codex Ten—an anagram of "Slate Star Codex"—on the newsletter platform Substack. This time, he revealed his full name himself: Scott Alexander Siskind. He had quit his job at the psychiatric clinic and taken other measures that made him comfortable with being publicly identified.
Three weeks later, the New York Times finally published its article about the blog. By then, the name question was moot—Alexander had beaten them to it.
The episode became a minor flashpoint in what the Poynter Institute's David Cohn called an ongoing clash between the technology and media industries. Once, the conflict was primarily economic: newspapers versus Google and Facebook for advertising dollars. Now it seemed to involve deeper disagreements about values—who has the right to remain anonymous, when public interest justifies violating privacy, and who gets to decide.
The Effective Altruism Connection
Slate Star Codex played an important role in spreading a philosophical movement called effective altruism, which applies rigorous analysis to charitable giving. The core idea is simple: if you want to do good in the world, you should figure out which interventions actually work and fund those, rather than giving based on emotional appeal or social pressure.
In a 2017 survey of how people first learned about effective altruism, Slate Star Codex ranked fourth—behind only personal contacts, the LessWrong website, and a catch-all category of "other books, articles, and blog posts." It ranked above 80,000 Hours, an organization specifically dedicated to promoting effective altruism.
Alexander regularly explored moral questions relevant to this community. Can good acts cancel out bad acts, like carbon offsets for your conscience? How should we weigh animal suffering against human convenience? When should charities focus on systemic change versus direct aid?
These questions don't have easy answers, but Slate Star Codex was one of the few places where they were seriously examined—where someone might spend five thousand words analyzing whether it's ethical to eat meat, then follow up with a correction when new evidence emerged.
The Legacy of Careful Thinking
What made Slate Star Codex matter wasn't any single idea or essay. It was a style of thinking: careful, curious, willing to engage with uncomfortable questions, honest about uncertainty, and relentlessly focused on figuring out what's actually true rather than what's socially acceptable to believe.
In an era of hot takes and tribal signaling, that kind of thinking is rare. Most public intellectuals pick a team and stick with it. Alexander would regularly publish pieces that irritated everyone—conservatives and liberals, rationalists and their critics, his own readers and their enemies.
That's probably why the blog attracted such a strange and devoted following. Where else could you find a psychiatrist writing about the game theory of coordination failures, coining terms that spread across the internet, engaging seriously with fringe political philosophies, and still making time to review obscure books about the history of mental illness?
Astral Codex Ten continues where Slate Star Codex left off. The name is different, the platform is different, the author's identity is now public. But the project remains the same: one person, thinking carefully about difficult questions, sharing what he finds with anyone willing to read nine thousand pages of someone else's thoughts.
Which, apparently, a lot of people are.