Dead Internet theory
Based on Wikipedia: Dead Internet theory
In 2024, Facebook users began sharing and commenting on images of Jesus Christ merged with shrimp. Thousands of accounts wrote "Amen" beneath these bizarre creations. The images weren't made by humans—they were generated by artificial intelligence, spread by automated accounts, and engaged with by bots programmed to simulate religious devotion. To many observers, "Shrimp Jesus" became an unsettling symbol of something that conspiracy theorists had been warning about for years: the internet might already be dead.
Not dead in the sense that your browser won't load a page. Dead in a more philosophical sense—that the vibrant, chaotic, deeply human thing the internet once was has been hollowed out and replaced by something else entirely. Something artificial. Something that merely performs humanity without being human at all.
The Theory Takes Shape
The dead internet theory emerged from the shadowy corners of online message boards, the kind of places where conspiracy theories germinate alongside genuine technological anxieties. In 2021, a user calling themselves "IlluminatiPirate" posted a manifesto on an obscure forum called Agora Road's Macintosh Cafe. The post was titled "Dead Internet Theory: Most Of The Internet Is Fake," and it synthesized ideas that had been circulating on imageboards like Wizardchan for years.
The theory has two distinct parts, and understanding this distinction matters.
The first part is observational: that bots, algorithms, and automated systems have displaced genuine human activity online. This claim is actually supported by data. In 2016, the security firm Imperva examined over 16.7 billion visits across 100,000 websites and found that automated programs were responsible for 52 percent of all web traffic. After dipping below half in the intervening years, bot traffic climbed back to nearly 50 percent by 2023, with artificial intelligence programs scraping websites to train the next generation of language models contributing to the rebound.
The second part is conspiratorial: that this displacement is intentional, coordinated by governments and corporations to manipulate the human population. In IlluminatiPirate's original post, they put it bluntly: "The U.S. government is engaging in an artificial intelligence-powered gaslighting of the entire world population."
This is where the theory crosses from uncomfortable observation into unfounded speculation.
The Kernel of Truth
Like many conspiracy theories, the dead internet theory wraps legitimate concerns in paranoid packaging. The legitimate concerns are worth taking seriously.
Consider search engines. When you type a query into Google, the search engine might claim millions of results exist. But try scrolling past the first few pages. Eventually, you'll hit a wall. The results stop. Google has indexed far more of the web than it will ever show you, and its algorithms decide what you're allowed to see. This isn't necessarily malicious—Google argues it's filtering out spam and low-quality content—but it does mean that the searchable web is a curated subset of what actually exists.
This curation compounds with another phenomenon called link rot. When websites go offline, all the links pointing to them break. Studies suggest that a significant percentage of links on the web lead nowhere. The internet you can actually reach keeps shrinking, even as the total amount of content theoretically available keeps growing.
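Link rot is easy to measure in miniature. The sketch below, a minimal Python probe using the widely available requests library, checks a list of URLs and counts hard failures. The URLs are placeholders, and real link-rot studies also track "content drift," where a page still loads but no longer contains what it once did.

```python
# A minimal link-rot probe: given a list of URLs (placeholders here),
# count how many fail to resolve. Real studies use far larger corpora
# and also detect pages that load but have changed beyond recognition.
import requests

urls = [
    "https://example.com/",           # stable test domain
    "https://example.com/gone-page",  # hypothetical dead path
]

dead = 0
for url in urls:
    try:
        # HEAD is cheaper than GET; fall back to GET if the server rejects it.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code >= 400:
            resp = requests.get(url, allow_redirects=True, timeout=10)
        if resp.status_code >= 400:
            dead += 1
    except requests.RequestException:
        # DNS failure, timeout, or TLS error: the link is effectively rotten.
        dead += 1

print(f"{dead}/{len(urls)} links failed to resolve")
```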
Some have described Google as a Potemkin village—named after the Russian minister who allegedly built fake villages to impress Catherine the Great. The facades looked real from a distance, but there was nothing behind them. Is the searchable web similarly illusory? Are we looking at the facade of an infinite library when the shelves behind the front row are actually empty?
When Machines Started Talking
The dead internet theory existed for years as a fringe concern, discussed mostly by people who spent too much time on imageboards and had perhaps developed an overly paranoid relationship with technology. Then, in November 2022, a company called OpenAI released ChatGPT to the public.
Everything changed.
Before ChatGPT, creating convincing fake content required either technical sophistication or manual effort. Governments and well-funded corporations could deploy bot armies, but the average person couldn't flood the internet with realistic-sounding text. Large language models—the technology behind ChatGPT—democratized content generation. Suddenly, anyone could produce thousands of words of coherent prose in seconds.
Timothy Shoup of the Copenhagen Institute for Futures Studies predicted in 2022 that if these models "got loose," 99 to 99.9 percent of content online might be AI-generated by 2025 to 2030. In 2024, Google acknowledged that its search results were being overwhelmed by websites that "feel like they were created for search engines instead of people." The company admitted that generative AI was contributing to the rapid proliferation of such content.
The dead internet theorists started to look less paranoid and more prescient.
The Weak and Strong Versions
A 2025 academic book on disinformation drew a useful distinction between what it called the "weak" and "strong" versions of the dead internet theory.
The weak version suggests that powerful elites—corporations, governments, wealthy individuals—are using automated systems to shape public discourse. This version doesn't require believing in any grand conspiracy. It simply observes that those with resources have always sought to influence public opinion, and that modern technology makes this easier than ever. Social media platforms themselves are designed to maximize engagement, which often means promoting content that provokes emotional reactions rather than content that informs or enriches.
The strong version goes much further. It suggests that some catastrophic event has already occurred—that society has somehow collapsed without anyone noticing—and that the internet is being used to maintain an illusion of normalcy. This version edges into science fiction territory, reminiscent of simulation theories or the premise of "The Matrix." It lacks any supporting evidence and serves more as a thought experiment or creepypasta—internet horror fiction—than a serious claim about reality.
The AI Slop Problem
Even setting aside conspiracy theories, the practical effects of AI-generated content are becoming impossible to ignore.
On Facebook, AI-generated images have gone viral with alarming frequency. Besides Shrimp Jesus, users have encountered fake images of flight attendants, fabricated pictures of Black children standing next to artwork they supposedly created, and countless other synthetic creations designed to provoke engagement. These posts accumulate hundreds or thousands of comments, many from accounts that may themselves be automated.
Critics have started calling this phenomenon "AI slop"—a term that captures both the low-quality nature of the content and the way it floods feeds like industrial runoff contaminating a river.
The implications extend beyond social media. Researchers studying online cancer support forums have raised concerns about patients seeking emotional support and unknowingly receiving responses from language models instead of other humans. The psychological impact of discovering that what felt like genuine human connection was actually machine-generated could be devastating for people already in vulnerable situations.
There's also a technical problem looming. AI models are trained on data scraped from the internet. As AI-generated content becomes more prevalent online, future AI models will increasingly be trained on content produced by previous AI models. Professor Toby Walsh of the University of New South Wales warns that this could cause the quality of AI output to degrade over time—a kind of digital inbreeding where each generation of models becomes slightly less connected to authentic human expression.
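Walsh's worry can be caricatured in a few lines of code. The toy below models each "generation" as fitting a Gaussian to data produced by the previous generation while slightly favoring typical outputs, the way language models decode at a temperature below 1. The 0.9 temperature and every other number are invented for illustration; the point is only that diversity decays when models train on their predecessors' output.

```python
# A toy caricature of recursive-training degradation ("model collapse"):
# each generation fits a Gaussian to data sampled from the previous one,
# with a slight bias toward high-probability outputs. Diversity (the
# standard deviation) shrinks generation over generation.
import random
import statistics

random.seed(42)

mu, sigma = 0.0, 1.0      # generation 0: "human" data
temperature = 0.9         # assumed bias toward typical outputs
samples_per_gen = 500

for gen in range(1, 11):
    # The current model generates the training data for the next one.
    data = [random.gauss(mu, temperature * sigma) for _ in range(samples_per_gen)]
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {gen:2d}: diversity (stdev) = {sigma:.3f}")
```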
The Platforms Respond
In January 2025, Meta—the company that owns Facebook, Instagram, and WhatsApp—announced plans to introduce AI-powered autonomous accounts. These wouldn't be chatbots responding to queries. They would be artificial personas with bios, profile pictures, and the ability to generate and share content independently. Connor Hayes, Meta's vice president of product for generative AI, said the company expected "these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do."
The backlash was immediate. Meta quickly removed the accounts.
But the announcement revealed something important about how major technology companies view the future. They're not trying to minimize AI presence on their platforms. They're trying to normalize it. Facebook already includes an option to generate AI responses to group posts. If nobody responds to your question within an hour, the platform may automatically generate an AI response for you.
This raises a question that the dead internet theorists have been asking all along: at what point does a platform cease to be a place for human connection and become something else entirely?
The Inversion
YouTube engineers coined a term for one of their deepest fears: "the Inversion."
The platform has long struggled with fake views. There's a thriving market for artificially inflating view counts to boost a video's credibility and trick the algorithm into recommending it more widely. YouTube developed sophisticated systems to detect and filter out these fake views.
But the engineers worried about a scenario where fake views became so prevalent that the detection algorithm would start to treat them as the baseline. Real human viewing patterns would begin to look like anomalies. The system would start filtering out genuine engagement while letting the fake engagement through.
The Inversion represents a tipping point where the artificial becomes the default and the authentic becomes the aberration. It's a localized version of what the dead internet theory describes happening to the web as a whole.
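A toy detector makes the tipping point concrete. In the sketch below, with all numbers invented, a filter learns its baseline from the median of whatever traffic it observes. While humans dominate, the near-identical bot sessions get flagged; once bots dominate, the identical rule flags the humans instead.

```python
# A toy version of "the Inversion": a filter that learns "normal" from
# the majority of the traffic it sees. When bots become the majority,
# the same rule starts flagging genuine human behavior as fake.
import random
import statistics

random.seed(7)

def make_traffic(n_human, n_bot):
    # Session watch time in seconds: humans vary widely, bots are near-identical.
    humans = [("human", random.gauss(300, 60)) for _ in range(n_human)]
    bots = [("bot", random.gauss(30, 5)) for _ in range(n_bot)]
    return humans + bots

def flag_fakes(traffic, cutoff=150):
    # Baseline = median of whatever traffic the detector observes.
    baseline = statistics.median(v for _, v in traffic)
    return [label for label, v in traffic if abs(v - baseline) > cutoff]

for n_human, n_bot in [(900, 100), (100, 900)]:
    hits = flag_fakes(make_traffic(n_human, n_bot))
    print(f"humans={n_human:3d}, bots={n_bot:3d} -> flagged "
          f"{hits.count('bot')} bots and {hits.count('human')} humans")
```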
The "I Hate Texting" Phenomenon
Sometimes the evidence for increasing bot activity comes from unexpected places.
Since 2020, numerous Twitter accounts have posted variations of tweets that begin with "I hate texting" and pivot to some romantic alternative. "I hate texting I just want to hold ur hand." "I hate texting just come live with me." These posts receive tens of thousands of likes, and analysts suspect many of those likes come from automated accounts.
The tweets themselves might be genuine expressions of millennial romantic longing. Or they might be generated by bots designed to build follower counts through relatable content before pivoting to spam or propaganda. The fact that it's become nearly impossible to tell the difference is, in some ways, the point.
What Does It Mean for the Internet to Be Alive?
The dead internet theory forces us to grapple with a fundamental question: what made the early internet feel alive in the first place?
Part of the answer involves unpredictability. When you visited a forum or a chat room in the 1990s or early 2000s, you genuinely didn't know what you'd find. People were weird and earnest and unpolished. They posted amateur poetry and shared obscure music and got into lengthy arguments about whether a hot dog is a sandwich. The content was human in all its messy, inconsistent, surprising glory.
Modern platforms have smoothed out much of that unpredictability, even before AI entered the picture. Algorithmic feeds show you what's likely to keep you engaged, which often means content similar to what you've engaged with before. Filter bubbles and echo chambers emerged as predictable consequences. The internet became less like exploring an unfamiliar city and more like walking through a shopping mall designed to guide you toward specific stores.
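The feedback loop doesn't need bots to narrow a feed. The hypothetical ranker below scores five topics, shows the highest-scoring one, and lets engagement with what was shown raise that topic's score; within a few rounds the feed locks onto a single topic. Every topic and number here is made up.

```python
# A toy engagement-driven feed: rank by similarity to past engagement.
# Even with no bots involved, a rich-get-richer loop homogenizes the feed.
import random

random.seed(1)

topics = ["poetry", "music", "sports", "cooking", "politics"]
interest = {t: 1.0 for t in topics}  # the user starts with mixed interests

for step in range(1, 6):
    # The feed shows the topic the model scores highest (with slight noise) ...
    shown = max(topics, key=lambda t: interest[t] * random.uniform(0.9, 1.1))
    # ... and engagement with what was shown feeds back into its score.
    interest[shown] += 0.5
    scores = {t: round(s, 2) for t, s in interest.items()}
    print(f"round {step}: feed shows '{shown}'; scores={scores}")
```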
AI-generated content threatens to accelerate this homogenization. Language models are trained to produce plausible, coherent, average output. They're explicitly designed not to be weird. They don't have the irrational obsessions or inexplicable passions that make individual humans distinctive. A thousand AI-generated articles about a topic will sound remarkably similar, while a thousand articles written by different humans would reflect a thousand different perspectives, knowledge bases, and writing styles.
The Conspiracy That Isn't
Here's what the dead internet theory gets wrong: the transformation doesn't require conspiracy.
Governments didn't need to secretly coordinate the flooding of the internet with artificial content. They didn't need to manipulate search engines or orchestrate bot armies. Market forces and technological progress accomplished it without any central direction.
Social media companies want engagement. AI-generated content is cheap to produce and can be optimized for engagement. Therefore, more AI-generated content appears. Search engines want to show relevant results. Spam farms want to capture search traffic. Spam farms use AI to generate content that games search algorithms. The web fills with AI slop. Users want help writing emails, posts, and comments. AI assistants provide that help. The percentage of AI-assisted or AI-generated text increases.
No conspiracy required. Just incentives playing out to their logical conclusions.
In some ways, this is more unsettling than a conspiracy. A conspiracy could theoretically be exposed and stopped. A system of misaligned incentives that nobody controls is much harder to fix.
Are We Still Here?
SocialAI, a strange app launched in September 2024, might represent the logical endpoint of these trends. Its creator, Michael Sayman, designed it specifically for users to interact exclusively with AI bots. No humans allowed. It's social media without the social—or at least without the human social.
Why would anyone use such a thing? Perhaps because AI interlocutors don't judge, don't ghost you, and don't have bad days that make them snappy. Perhaps because the exhausting work of maintaining human relationships—with all their demands for reciprocity and emotional labor—can be avoided entirely. Perhaps because, after years of interacting with platforms where you could never be quite sure whether responses came from humans or bots anyway, the pretense simply becomes unnecessary.
Or perhaps there's something deeply sad about it—a symptom of social atomization where the simulation of connection becomes preferable to the vulnerability of real connection.
The Essay's End, Not the Story's
One academic observer, Thomas Sommerer, looked at Shrimp Jesus and saw a messenger. Not a messenger from God, but a messenger from whatever system we've collectively maneuvered ourselves into. "Decoupled, proliferated, and in a state of exponential metastasis," he wrote.
The dead internet theory, stripped of its conspiratorial elements, might be less a theory and more a forecast. The trends it identifies are real and accelerating. Bot traffic continues to increase. AI-generated content continues to proliferate. The platforms that mediate our online lives continue to optimize for engagement over authenticity.
But here's the thing: you're reading this. A human wrote it, based on information compiled by humans, about a phenomenon that humans noticed and named and worried about. The act of concern itself—the very existence of the dead internet theory as a topic of discussion—suggests that the internet isn't quite dead yet.
The harder question isn't whether we're still here. It's whether we'll remain recognizable amid the noise.