Section 230
Based on Wikipedia: Section 230
Twenty-Six Words That Built the Internet
In 1996, two congressmen wrote a single sentence that would shape the future of human communication. It reads: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
That's it. Twenty-six words.
These words became Section 230 of the Communications Act, and they are why YouTube doesn't get sued every time someone uploads a defamatory video, why Twitter isn't held liable for every false tweet, and why Facebook can host billions of posts without employing billions of lawyers. Without this single sentence, the internet as we know it—the chaotic, creative, occasionally toxic, endlessly generative internet—might never have emerged.
The Lawsuit That Started Everything
To understand why Section 230 exists, you need to travel back to the early 1990s, when the internet was a strange new frontier and nobody quite knew what the rules should be.
Two companies—CompuServe and Prodigy—were among the first to offer Americans access to online discussion forums. They made different choices about how to run their platforms. CompuServe took a hands-off approach, essentially saying: post whatever you want, we won't police it. Prodigy hired moderators to review content and remove anything inappropriate.
Then both companies got sued.
CompuServe won its case. The court ruled that because the company didn't review or edit content, it was merely a distributor—like a bookstore or a newsstand. Bookstores don't read every book on their shelves, so they're not responsible if one of them contains libel. CompuServe was the same, the court reasoned.
Prodigy lost. Because the company had chosen to moderate content, the court treated it as a publisher—like a newspaper. And publishers are responsible for everything they print. By trying to keep its platform civil, Prodigy had inadvertently made itself liable for every single thing its users posted.
The message was perverse: if you want to avoid lawsuits, don't try to clean up your platform. The more responsibility you take, the more legal exposure you create.
A Congressman Reads the Newspaper
Representative Christopher Cox, a Republican from California, read about these two cases and thought the courts had gotten everything backwards. "It struck me that if that rule was going to take hold then the internet would become the Wild West and nobody would have any incentive to keep the internet civil," he later said.
Cox teamed up with Democratic Representative Ron Wyden from Oregon to draft a solution. Their idea was elegant: treat online platforms as neither publishers nor mere distributors, but as something new entirely. Platforms would be immune from liability for content posted by their users, and—crucially—they wouldn't lose that immunity by trying to moderate content.
This was the key innovation. Under the Cox-Wyden approach, a platform could choose to remove hate speech, pornography, or harassment without suddenly becoming responsible for everything else on the site. The law explicitly protected "good faith" efforts to restrict material that platforms considered "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable."
That last phrase—"otherwise objectionable"—would prove extraordinarily broad. It essentially gave platforms permission to set their own standards.
A Strange Birth
The story of how Section 230 became law is genuinely odd. It was tucked inside the Communications Decency Act of 1996, which was primarily designed to criminalize sending "indecent" material to minors online. The larger bill was championed by Senator James Exon, a Democrat from Nebraska who was worried about children stumbling onto pornography.
So here was this strange legislative package: one part trying to restrict speech online, another part trying to protect platforms from liability for that speech. The whole thing passed with near-unanimous support and was signed by President Bill Clinton in February 1996.
The following year, in Reno v. ACLU, the Supreme Court struck down the anti-indecency provisions as unconstitutional violations of the First Amendment. But Section 230 survived. The court determined it was "severable"—legal jargon meaning it could stand on its own even though the surrounding legislation had been invalidated.
And so the twenty-six words lived on, orphaned from their original context, ready to shape a digital revolution their authors could barely have imagined.
How the Law Actually Works
Section 230 has two main components, both hiding under the somewhat misleading heading "Good Samaritan."
The first part says that providers of "interactive computer services" won't be treated as publishers of content created by others. If a user posts something defamatory on your platform, you can't be sued for defamation. The user can be sued, but you're protected.
The second part says that platforms can remove content they find objectionable without losing their immunity. This is the "good faith" moderation protection that was designed to fix the Prodigy problem.
Courts have developed a three-part test to determine when Section 230 applies. First, the defendant must be a provider or user of an interactive computer service. Second, the lawsuit must be trying to treat the defendant as a publisher of the content in question. Third, the content must have been provided by someone else—the platform can't claim immunity for content it created itself.
The immunity isn't absolute. Section 230 doesn't protect platforms from federal criminal prosecution. It doesn't shield them from intellectual property claims. And since 2018, it doesn't protect platforms that facilitate sex trafficking.
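To make that structure concrete, here is a minimal sketch of the three-part test and its carve-outs, written as a checklist. It is a toy model, not legal analysis; every field and function name below is hypothetical, chosen only to mirror the prongs and exceptions described above.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """Hypothetical model of a lawsuit brought against an online platform."""
    defendant_is_interactive_computer_service: bool  # prong 1
    claim_treats_defendant_as_publisher: bool        # prong 2
    content_provided_by_someone_else: bool           # prong 3
    # Carve-outs where the immunity does not apply
    federal_criminal_prosecution: bool = False
    intellectual_property_claim: bool = False
    sex_trafficking_claim: bool = False              # FOSTA-SESTA, 2018

def immunity_applies(c: Claim) -> bool:
    """Return True if, in this toy model, Section 230 would bar the claim."""
    if (c.federal_criminal_prosecution
            or c.intellectual_property_claim
            or c.sex_trafficking_claim):
        return False  # the statute's exceptions come first
    return (c.defendant_is_interactive_computer_service
            and c.claim_treats_defendant_as_publisher
            and c.content_provided_by_someone_else)

# A defamation suit over a user's post, aimed at the hosting platform: barred.
print(immunity_applies(Claim(True, True, True)))    # True
# The same suit where the platform wrote the content itself: not barred.
print(immunity_applies(Claim(True, True, False)))   # False
```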
The Foundation of Modern Tech
It's difficult to overstate how much Section 230 enabled.
Consider the scale of the internet. Every minute, users upload roughly 500 hours of video to YouTube. Every day, they send some 500 million tweets and post around 95 million photos and videos to Instagram. They write reviews, forum posts, comments, and messages in quantities that no human team could possibly review in real time.
Without Section 230, every one of these platforms would face a choice: either review every piece of content before it goes live—an impossible task at scale—or accept crushing legal liability for the inevitable defamatory, libelous, or otherwise illegal content that slips through.
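To see why pre-screening is implausible at scale, a rough back-of-the-envelope calculation using only the YouTube figure above makes the point; the eight-hour reviewing shift is an assumption chosen for illustration.

```python
# Back-of-the-envelope math using the YouTube figure above (illustrative only).
hours_uploaded_per_minute = 500
minutes_per_day = 24 * 60
video_hours_per_day = hours_uploaded_per_minute * minutes_per_day   # 720,000

reviewer_hours_per_day = 8   # assume one moderator can watch 8 hours of video a day
reviewers_needed = video_hours_per_day / reviewer_hours_per_day

print(f"{video_hours_per_day:,} hours of new video per day")
print(f"~{reviewers_needed:,.0f} full-time reviewers just to watch it once")
# 720,000 hours per day; roughly 90,000 reviewers, for one platform's video alone
```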
Search engines exist because of Section 230. If Google could be sued for every problematic webpage it links to, it would either have to manually verify every site on the internet or shut down. Review sites like Yelp and TripAdvisor depend on the law. So do online marketplaces, social networks, comment sections, and dating apps.
A 2017 study by the economic consulting firm NERA estimated that Section 230 and its companion law, the Digital Millennium Copyright Act, together supported about 425,000 American jobs and generated $44 billion in annual revenue.
The Good Samaritan Paradox
Here's the deep irony at the heart of Section 230: it was designed to encourage platforms to clean up their act, but critics now argue it allows them to avoid accountability for leaving content up.
The law's authors believed that if platforms knew they could moderate without incurring liability, they would moderate more aggressively. They imagined a future where online spaces would become more civil because platforms would have every incentive to remove the worst content.
What actually happened was more complicated. Platforms did use their moderation powers, but inconsistently and often opaquely. They developed elaborate content policies but enforced them unevenly. They automated much of the process with algorithms that made mistakes. And they discovered that engagement—the metric that drove advertising revenue—was often highest for content that provoked outrage.
The law gave platforms both a shield and a choice. The shield protected them from liability. The choice was what to do with that protection.
From Left and Right, Complaints
By the 2020s, Section 230 had become one of the few issues that united American politicians across party lines—not in support, but in criticism. Both Democrats and Republicans wanted to change it, though for opposite reasons.
Conservatives argued that platforms were using their moderation powers to silence right-wing voices. When Twitter and Facebook banned Donald Trump after the January 6th Capitol riot, it confirmed for many Republicans that Big Tech was engaged in political censorship. They wanted Section 230 reformed to prevent platforms from removing legal speech based on viewpoint.
Liberals had the opposite complaint. They argued that platforms weren't moderating enough—that Section 230 gave tech companies immunity to host hate speech, disinformation, and harassment with impunity. Legal scholar Mary Anne Franks pointed out that the law's protections effectively subsidized the spread of bigotry, with marginalized groups bearing the heaviest costs.
Both sides were, in a sense, correct. Section 230 did give platforms enormous discretion. They could remove almost anything they wanted, or remove almost nothing. The law's only requirement was that whatever moderation they did perform be in "good faith."
The Seigenthaler Incident
One of the more famous demonstrations of Section 230's reach involved an unlikely target: Wikipedia.
John Seigenthaler was a distinguished American journalist who had worked as an aide to Robert Kennedy in the 1960s. In 2005, he discovered that his Wikipedia biography contained a bizarre and defamatory claim—that he had been "thought to have been directly involved in the Kennedy assassinations."
The false information had sat on Wikipedia for 132 days before anyone noticed. When Seigenthaler investigated, he found he had no legal recourse against Wikipedia itself. Section 230 protected the platform from liability for content posted by its users.
Seigenthaler wrote a scathing op-ed in USA Today: "We live in a universe of new media with phenomenal opportunities for worldwide communications and research—but populated by volunteer vandals with poison-pen intellects. Congress has enabled them and protects them."
The incident became a landmark example of both Wikipedia's vulnerability to vandalism and the scope of Section 230's protections.
The Zeran Case Sets the Template
The legal interpretation of Section 230 was largely set by a 1997 case called Zeran v. America Online.
The facts were disturbing. Shortly after the 1995 Oklahoma City bombing, someone posted advertisements on AOL listing the phone number of Kenneth Zeran, a Seattle man. The ads offered t-shirts celebrating the bombing with slogans like "Visit Oklahoma... It's a BLAST!!!" Zeran was flooded with threatening phone calls. He contacted AOL repeatedly, begging them to remove the posts. They eventually did, but slowly, and new posts kept appearing.
Zeran sued AOL for negligence in failing to remove the defamatory content quickly. The Fourth Circuit Court of Appeals ruled against him, holding that Section 230 provided complete immunity. The court's reasoning would shape internet law for decades: Congress had made a deliberate choice to protect platforms from liability because "the specter of tort liability in an area of such prolific speech would have an obvious chilling effect."
The court noted that it would be "impossible for service providers to screen each of their millions of postings for possible problems." Rather than create incentives for platforms to remain ignorant of bad content (as the Prodigy case had done), Section 230 allowed them to address problems without fear that doing so would make things worse.
The FOSTA-SESTA Exception
For more than two decades, Section 230 remained largely unchanged. Then came the first major carve-out.
In 2018, Congress passed FOSTA-SESTA, a legislative package combining the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) with the Stop Enabling Sex Traffickers Act (SESTA). The law removed Section 230 protection for platforms that facilitated sex trafficking: websites could now be held liable if they knowingly assisted, supported, or facilitated it.
The law was a response to Backpage.com, a classified advertising site that prosecutors alleged had become a hub for sex trafficking. Under Section 230, victims had struggled to hold the site accountable.
FOSTA-SESTA was controversial even among those who wanted to fight trafficking. Critics argued that by making platforms liable for user content related to sex work, the law would push the sex trade underground where it would be harder to police. Sex worker advocacy groups warned that the law would endanger their members by eliminating online spaces where they could screen clients and share safety information.
Whatever its merits, FOSTA-SESTA established an important precedent: Section 230's protections could be narrowed. The wall wasn't impenetrable.
What If Section 230 Disappeared?
Legal scholars and tech policy experts have gamed out what might happen if Section 230 were repealed or significantly weakened.
One possibility is that platforms would become much more restrictive. Facing potential liability for any user content, they might preemptively remove anything even slightly risky. Political speech, controversial opinions, criticism of powerful people—all might be suppressed simply because platforms couldn't afford the legal exposure of hosting them.
Another possibility is that platforms would moderate less, not more. If any moderation decision could be second-guessed in court, platforms might decide it's safer to do nothing at all. They could argue they're mere conduits—the way CompuServe did in 1991—and hope the courts buy it.
A third possibility is that the business models of modern tech companies would become unworkable. User-generated content—the foundation of social media, review sites, and collaborative platforms—might simply be too risky to host. The internet could revert to something more like broadcast television, with professional content gatekeepers deciding what gets published.
Some argue this wouldn't be entirely bad. If platforms had to think harder about what they hosted, they might make better choices. The incentives that currently reward engagement over accuracy might shift toward rewarding responsibility.
Global Implications
Section 230 is American law, but its effects ripple worldwide.
Most major internet platforms are headquartered in the United States. The legal environment that shaped Facebook, Google, Twitter, and YouTube was an American legal environment, and Section 230 was at its center. When these platforms expanded globally, they brought their American DNA with them.
Other countries have made different choices. The European Union has pursued a more regulatory approach, requiring platforms to remove certain types of content and holding them responsible when they fail. Germany's NetzDG law imposes fines on platforms that don't quickly remove hate speech. The United Kingdom's Online Safety Act creates new obligations for platform accountability.
These international developments are putting pressure on the American model. If platforms have to follow stricter rules elsewhere, they may adopt those rules globally—or create a fragmented internet where different users see different things depending on where they live.
The Enduring Question
At its core, Section 230 represents a bet that Congress made in 1996: that the benefits of an open, user-generated internet would outweigh the harms, and that platforms should have the freedom to develop their own approaches to content moderation.
For nearly three decades, that bet has held. The internet became the most powerful tool for communication and commerce in human history. It democratized publishing, enabled new forms of community, and created entirely new industries.
It also became a vector for harassment, disinformation, hate speech, and radicalization on a scale that the law's authors never anticipated.
Whether Section 230 should change—and how—remains one of the most contested questions in technology policy. The platforms say they need immunity to function at scale. Critics say that immunity has enabled irresponsibility. Conservatives say platforms censor too much. Liberals say they don't censor enough.
The twenty-six words that built the internet may need to be rewritten for the world that internet helped create. But there's little agreement on what the new words should say.