TESCREAL
Based on Wikipedia: TESCREAL
Imagine a secret handshake, except instead of interlocking fingers, it's interlocking ideologies—and instead of fraternity brothers, it's some of the wealthiest and most influential people in Silicon Valley. That's the accusation behind TESCREAL, a newly coined acronym that its creators argue reveals something troubling about the intellectual currents shaping artificial intelligence, space exploration, and the future of humanity itself.
The Alphabet Soup of Tomorrow
TESCREAL stands for seven distinct but allegedly interconnected movements: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. Each word represents a philosophy about humanity's future. Together, critics argue, they form something more concerning than any single ideology could be on its own.
The term was proposed in 2023 by computer scientist Timnit Gebru and philosopher Émile Torres. Gebru is a prominent artificial intelligence researcher who made headlines when she departed Google amid controversy over her research into the biases embedded in large language models. Torres is a philosopher who has written extensively about existential risk and the ethics of emerging technologies.
Their argument is deceptively simple: these seven ideologies aren't separate intellectual traditions that happen to attract similar followers. They're a "bundle," interconnected and overlapping, with shared origins and—most controversially—shared roots in the eugenics movements of the twentieth century.
Unpacking Each Letter
Let's take these one at a time, because each represents a distinct vision of what humanity could become.
Transhumanism is perhaps the most familiar. It's the idea that we should use technology to transcend the biological limitations of our bodies and minds. Advocates envision a future where humans merge with machines, extend their lifespans indefinitely, and enhance their cognitive abilities beyond anything evolution produced. Think brain-computer interfaces, genetic engineering, or uploading consciousness to digital substrates.
Extropianism is transhumanism's more optimistic cousin, emphasizing the idea that technology will lead to ever-increasing order, intelligence, and capability. The term comes from "extropy," coined as the opposite of entropy—that inexorable tendency toward disorder that physics tells us governs the universe. Extropians believe we can fight back against cosmic decay.
Singularitarianism centers on the concept of the technological singularity, a hypothetical future point when artificial intelligence becomes capable of recursive self-improvement. The idea is that once we create an AI smart enough to make itself smarter, and that smarter AI makes itself smarter still, intelligence will explode exponentially. What happens after that is, by definition, impossible for our pre-singularity minds to predict—hence "singularity," borrowed from physics, where a singularity is a point beyond which normal rules break down.
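The shape of that argument is easy to see with a toy calculation. Below is a minimal sketch in Python, purely illustrative and not drawn from any singularitarian's actual model: a system whose improvement scales with its current capability compounds exponentially, while one improved by a fixed increment grows only linearly.

```python
# Toy model of an "intelligence explosion" (illustrative only).
# Assumption: each generation's improvement scales with current capability.

def recursive_self_improvement(capability=1.0, gain=0.5, generations=10):
    """Each generation, the system improves itself in proportion to
    how capable it already is, so growth compounds."""
    history = [capability]
    for _ in range(generations):
        capability += gain * capability  # smarter systems improve faster
        history.append(capability)
    return history

def fixed_improvement(capability=1.0, gain=0.5, generations=10):
    """Engineers add a constant increment each generation, so growth
    is merely linear."""
    history = [capability]
    for _ in range(generations):
        capability += gain
        history.append(capability)
    return history

print(recursive_self_improvement()[-1])  # ~57.7 after 10 generations
print(fixed_improvement()[-1])           # 6.0 after 10 generations
```

Whatever one makes of the premise, the gap between those two curves is the entire intuition behind the "explosion" metaphor.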
Cosmism is perhaps the most obscure term in the bundle. It refers to a philosophical tradition with roots in late nineteenth- and early twentieth-century Russia, particularly the work of Nikolai Fedorov, who believed humanity's cosmic duty was to resurrect the dead and colonize space. Modern cosmism embraces similar grand ambitions about humanity's destiny among the stars.
Rationalism, in this context, doesn't mean the broad philosophical tradition dating back to Descartes. Instead, it refers to a specific internet community centered around websites like LessWrong, where participants apply Bayesian reasoning and decision theory to questions about artificial intelligence, human cognition, and existential risk. This community has been enormously influential in shaping how Silicon Valley thinks about AI safety.
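For a taste of the house style, consider the community's favorite tool, Bayes' theorem, which prescribes how to revise a belief when new evidence arrives. A minimal sketch, with invented numbers:

```python
# Minimal Bayesian update, the style of reasoning prized on LessWrong.
# All numbers here are invented for illustration.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Returns P(H | E) via Bayes' theorem:
    P(H | E) = P(E | H) * P(H) / P(E), where
    P(E) = P(E | H) * P(H) + P(E | ~H) * (1 - P(H))."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

# Prior belief that a claim is true: 10%. The claim predicts some
# observation 80% of the time; if the claim is false, the observation
# still occurs 20% of the time.
posterior = bayes_update(prior=0.10, p_e_given_h=0.80, p_e_given_not_h=0.20)
print(f"{posterior:.2f}")  # 0.31 -- a likelihood ratio of 4 lifts 10% to ~31%
```

The habit of reducing disagreements to priors and likelihood ratios is, depending on whom you ask, either intellectual hygiene or a rhetorical style that lends speculative claims an air of rigor.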
Effective Altruism is a social movement that tries to use evidence and reasoning to determine the most effective ways to benefit others. At its most basic, it asks: if you want to do good in the world, how can you do the most good? This has led some effective altruists to focus on neglected tropical diseases, where charitable dollars stretch furthest, while others have concluded that preventing future catastrophes—including those posed by artificial intelligence—should be the priority.
Longtermism is the philosophical view that positively influencing the long-term future is a key moral priority of our time. If humanity could exist for millions or billions of years, longtermists argue, then the potential number of future people vastly exceeds the current population. Actions that affect whether those future people exist, or what kind of lives they lead, become overwhelmingly important—even if they seem abstract or distant today.
The Bundle Theory
Here's where Gebru and Torres's critique becomes pointed. They argue these seven philosophies aren't merely overlapping circles in a Venn diagram of interested nerds. They're a coherent worldview with a troubling genealogy.
The connecting thread, they claim, is a particular way of thinking about human value. Who matters? Who should be prioritized? Who gets to define what a good future looks like?
Consider longtermism's emphasis on future people. If we take seriously the claim that there could be trillions of future humans living among the stars, then present-day concerns—poverty, inequality, algorithmic bias—might seem comparatively trivial. After all, what's a few million people suffering today compared to trillions who might never exist if we fail to develop artificial general intelligence or colonize space?
This is where critics see danger. Media scholar Ethan Zuckerman argues that by only considering goals valuable to the TESCREAL worldview, proponents can justify projects with immediate drawbacks—racial inequity, environmental degradation, algorithmic bias—as acceptable costs for far-future benefits.
Science fiction author Charles Stross puts it more bluntly. He argues these ideologies allow billionaires to pursue massive personal projects—like space colonization—by framing them as existential necessities. If not pursuing your pet project poses an existential risk to humanity, who could object? The stakes, conveniently, always justify the investment.
The Eugenics Question
The most explosive element of Gebru and Torres's argument is their claim that TESCREAL ideologies "directly originate from twentieth-century eugenics."
This requires some historical context. Eugenics—the idea that humanity could be improved through selective breeding—was once mainstream science. It enjoyed support across the political spectrum and was implemented through forced sterilization programs in many countries, including the United States. The Nazi regime took eugenics to its horrific logical conclusion, and after World War II, the term became toxic.
But the underlying impulse—the desire to improve humanity, to guide evolution, to create better humans—didn't disappear. It changed vocabulary. Gebru and Torres argue that transhumanism and its related movements represent eugenics in new clothes, now focused on technological rather than biological enhancement, but retaining the same fundamental assumptions about human hierarchy and improvement.
Not everyone buys this genealogy. Critics of the TESCREAL concept argue that grouping these movements together ignores their genuine differences and conflates people with wildly different motivations. Writing in Asterisk, a magazine associated with effective altruism, Ozy Brennan criticized the framework as treating different philosophies as a "monolithic" movement when they're actually distinct.
Oliver Habryka, who runs the rationalist website LessWrong, expressed bemusement at being accused of participating in a movement he'd never heard of: "I've never in my life met a cosmist; apparently I'm great friends with them."
Politics writer Danyl McLauchlan at Radio New Zealand noted the awkwardness of lumping effective altruists—many of whom focus on helping the global poor through proven interventions like malaria prevention—into a conspiracy with would-be creators of superhuman AI.
The Secular Religion
One of the more interesting accusations leveled at TESCREAL movements is that they function as secular religions.
Consider the parallels. There's an eschatology—a narrative about the end of the world—whether that's the singularity, the existential catastrophe we must prevent, or the cosmic destiny we must fulfill. There's a chosen people: those enlightened enough to understand the stakes. There are prophets: figures like Ray Kurzweil preaching the coming singularity, or philosopher Nick Bostrom warning about superintelligent AI. There are sacred texts: Bostrom's Superintelligence, Eliezer Yudkowsky's writings on AI alignment, William MacAskill's What We Owe the Future.
There are even schisms. The TESCREAL world divides roughly into "accelerationists," who believe we must race toward superintelligent AI to achieve utopia, and "doomers," who believe that same AI is likely to destroy humanity unless we proceed with extreme caution. These camps fight bitterly, yet both agree that artificial general intelligence is the central question of our time.
Gebru has described this conflict as "a secular religion selling AGI-enabled utopia and apocalypse." The AI is either our salvation or our extinction, but either way, it's the only thing that really matters.
Writers in Current Affairs compared this to "any other monomaniacal faith... in which doubters are seen as enemies and beliefs are accepted without evidence."
The Manifesto Moment
In late 2023, venture capitalist Marc Andreessen published what he called the "Techno-Optimist Manifesto." Andreessen co-founded the legendary web browser company Netscape and now runs Andreessen Horowitz, one of Silicon Valley's most influential investment firms. His manifesto was a 5,000-word defense of technological progress against its critics.
The document reads like a religious text. It lists "Patron Saints of Techno-Optimism" including Nietzsche, Ayn Rand, and various economists and futurists. It condemns "enemies" of progress: sustainability advocates, social responsibility initiatives, the precautionary principle. Most notably, Andreessen argued that artificial intelligence could save countless future potential lives—and that those working to slow its development should be condemned as murderers.
Critics Jag Bhalla and Nathan Robinson called the manifesto a "perfect example" of TESCREAL ideologies in action. Here was a billionaire investor declaring that opposition to his portfolio companies' work was not merely mistaken but morally equivalent to mass murder.
The Billionaire Gallery
Part of what makes TESCREAL a compelling concept, even to skeptics, is how neatly it maps onto the actual stated beliefs of extremely powerful people.
Elon Musk tweeted in 2022 that William MacAskill's longtermist book What We Owe the Future was "a close match for my philosophy." Musk has founded Neuralink, a company developing brain-computer interfaces—a quintessentially transhumanist project. He founded SpaceX with the explicit goal of making humanity a multi-planetary species—a cosmist dream. His AI venture, xAI, focuses on artificial general intelligence and existential risk—singularitarian and rationalist territory.
Peter Thiel, the PayPal co-founder and venture capitalist, has invested in life extension research and written about the importance of technological progress escaping the constraints of conventional governance. Benjamin Svetkey wrote in The Hollywood Reporter that Thiel and other Silicon Valley executives who supported Donald Trump's 2024 presidential campaign were pushing policies that would eliminate "regulators whose outdated restrictions on things like human experimentation are slowing down progress toward a technotopian paradise."
Sam Altman, the CEO of OpenAI, has been described as deeply influenced by TESCREAL movements. His company's explicit mission is to develop artificial general intelligence that benefits humanity—but critics argue the company's approach exemplifies precisely the dangers Gebru and Torres warn about: building enormously powerful systems first, and hoping to figure out the safety implications later.
The FTX Connection
No discussion of TESCREAL's influence would be complete without mentioning Sam Bankman-Fried, the cryptocurrency exchange founder who became the most prominent face of effective altruism before his empire collapsed in fraud.
Bankman-Fried was explicit about his motivations. He wanted to make as much money as possible so he could give it away as effectively as possible. He funded AI safety research, pandemic preparedness, and other causes aligned with longtermist priorities. His spectacular fall—he was convicted of fraud and sentenced to prison—raised uncomfortable questions about whether TESCREAL ideologies might rationalize bad behavior in pursuit of ostensibly noble ends.
According to The Guardian, bankruptcy administrators have been trying to recover approximately five million dollars allegedly transferred to help purchase a historic hotel used for conferences associated with longtermism, rationalism, and effective altruism. At one such conference, attendees reportedly included a self-described "liberal eugenicist."
The Critique of the Critique
Not everyone finds the TESCREAL framework convincing.
James Pethokoukis of the American Enterprise Institute, a conservative think tank, argues that the tech billionaires criticized for allegedly espousing TESCREAL have significantly advanced society. Whatever their philosophical motivations, they've built products that billions of people use daily.
Eli Sennesh and James Hughes, writing for the technoprogressive Institute for Ethics and Emerging Technologies, argue that TESCREAL is a left-wing conspiracy theory that groups together philosophies with mutually exclusive tenets. You cannot simultaneously believe that AI will inevitably destroy humanity and that AI will inevitably save humanity—yet the TESCREAL framework treats both views as part of the same movement.
There's also a simpler objection: maybe rich tech people just tend to be interested in science fiction, futurism, and grand visions of humanity's potential. That doesn't make them participants in a coordinated ideology with eugenic roots. It might just make them nerds with money.
Why This Matters for AI
At the heart of the TESCREAL debate is a question about who gets to shape the future of artificial intelligence.
Much of the discourse about existential risk from AI occurs among people Gebru and Torres would identify as TESCREALists. They frame the choice as binary: either we develop superintelligent AI and achieve utopia, or we fail and humanity goes extinct. Either we're accelerationists racing toward the singularity, or we're doomers desperately trying to avert it.
But what if this framing itself is the problem?
Gebru and Torres argue that both accelerationists and doomers use hypothetical AI-driven apocalypses to justify unlimited research, development, and deregulation. By focusing on speculative far-future scenarios, they distract from present-day harms: the workers displaced by automation, the communities surveilled by facial recognition, the people denied jobs or loans by opaque algorithmic systems, the carbon emissions of training ever-larger models.
Philosopher Yogi Hale Hendlin argues that TESCREALists simultaneously ignore the human causes of societal problems and over-engineer solutions, missing the context in which problems actually arise. You don't need to colonize Mars to address climate change. You don't need superintelligent AI to reduce global poverty. But those interventions are less exciting, require less venture capital, and don't come with the frisson of saving humanity itself.
The Language of Philosophy
George Orwell warned that when certain topics are raised, "the concrete melts into the abstract and no one seems able to think of turns of speech that are not hackneyed." He was writing about political language, but the observation applies equally to discussions of humanity's far future.
TESCREAL discourse is saturated with abstractions: existential risk, expected value, astronomical stakes, utility maximization, x-risk, s-risk, p(doom). These terms have precise technical meanings within their communities of use. They also have the effect of making speculative scenarios feel concrete while making present suffering feel abstract.
When you calculate that preventing human extinction could save trillions of potential future lives, the math seems to justify almost anything. When you speak of "astronomical waste"—the lost value of all those futures that won't exist if we don't colonize the galaxy—present human concerns can seem parochial, even selfish.
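That arithmetic is easy to reproduce. Here is a sketch of the expected-value logic, with every figure a deliberately invented placeholder rather than a real estimate:

```python
# The expected-value arithmetic behind "astronomical stakes".
# Every figure below is an invented placeholder, not a real estimate.

future_people = 10**15      # hypothesized future lives among the stars
risk_reduction = 1e-9       # a one-in-a-billion cut to extinction probability

expected_future_lives = future_people * risk_reduction
lives_saved_today = 100_000  # a proven present-day intervention, e.g. malaria nets

print(f"{expected_future_lives:,.0f}")            # 1,000,000
print(expected_future_lives > lives_saved_today)  # True

# On this math, an unmeasurably small nudge to a speculative probability
# "outweighs" saving a hundred thousand living people -- which is precisely
# the critics' point about how the framing devalues the present.
```

The multiplication is trivial; the controversy lies entirely in the inputs, which no one can verify and which can be tuned to make almost any project look like the most important thing in the world.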
This is perhaps the deepest critique of the TESCREAL worldview. It's not just that these ideologies might share uncomfortable historical roots. It's that they might make it harder to think clearly about the world as it actually is, the people who actually exist, and the problems that actually need solving.
The Future of the Argument
Whether TESCREAL proves to be a useful analytical category or an overly broad brush remains to be seen. The term has clearly struck a nerve, entering discussions of technology ethics, AI governance, and Silicon Valley culture with remarkable speed for an academic coinage.
What seems certain is that the underlying tensions won't disappear. As artificial intelligence becomes more powerful, as wealth concentrates further among those building and deploying these systems, as the gap between their visions of the future and everyone else's experience of the present widens—these questions will only become more urgent.
Who decides what humanity's future should look like? Whose concerns count? And when someone tells you they're working to save the world, it might be worth asking: whose world, exactly, and saved for whom?