
The World Needs Europe to Rein In Social Media Before It Breaks Us All

Deep Dives

Explore related topics with these Wikipedia articles, rewritten for enjoyable reading:

  • Digital Services Act 13 min read

    The article heavily references the EU's Digital Services Act as the key regulatory framework for reining in social media platforms. Understanding its specific provisions, enforcement mechanisms, and scope would give readers essential context for evaluating the article's arguments about European regulatory leadership.

  • 2024 Romanian presidential election 15 min read

    The article cites Romania's 2024 election as a concrete example of social media manipulation leading to the annulment of election results. This unprecedented event demonstrates the real-world stakes of platform regulation and foreign interference that the article argues Europe must address.

  • Section 230 13 min read

    Understanding Section 230 of the Communications Decency Act explains why US lawmakers struggle to regulate Big Tech despite recognizing harms. This legal framework immunizing platforms from liability for user content is the implicit backdrop to why the article argues Europe must lead where America cannot.

The newly released US National Security Strategy criticizes Europe for what it calls the “censorship of free speech and suppression of political opposition” through digital regulation. The document reveals an administration so thoroughly entangled with Big Tech that it cannot acknowledge the damage platforms cause, let alone confront it. And it gets the main thing backward: Europe would be doing the world a massive favor by taking the lead on desperately needed action.


A decade ago few would have predicted that social‑media platforms would evolve from places of connection into algorithmic engines of addiction, polarization and foreign interference. But that is what happened. Facebook and Instagram (Meta), TikTok (ByteDance), X (formerly Twitter), YouTube (Google) and others do not merely host speech; they direct what billions of people see. Their recommendation systems — tuned relentlessly for engagement — elevate whatever keeps users hooked. And if that means conspiracy, harassment, hate, fraud, or state‑backed disinformation campaigns, then that’s the way it goes.

If a hot‑dog vendor discovered that adding poison increased sales, we would not celebrate the innovation. We would shut him down. Profit is not a defense for practices that harm the public. The same moral logic must apply when the product is the information environment itself.

The harms are not speculative. The US Surgeon General’s 2023 advisory laid out extensive evidence linking heavy social‑media use to anxiety, depression and self‑harm risks among adolescents and called for urgent action. Meta’s own secret internal research showed that Instagram makes body‑image issues worse for one in three teen girls, and that teens themselves blamed the app for worsening anxiety and depression. And internal documents revealed just last month show that Meta expected to earn a large share of revenue from ads tied to scams, fraudulent schemes and banned goods, and repeatedly failed to stop them. In the UK this skullduggery seems to have been especially egregious.

These companies know the damage; they simply find it spectacularly profitable.

Civil society watchdogs and academic studies continuously document instances where coordinated disinformation campaigns, botnets, and misleading content flooded social media during election cycles — undermining trust in institutions and weakening democratic norms.

Romania learned this the hard way. In late 2024, coordinated networks of Russian‑linked bot accounts pushed divisive narratives

...
Read full article on Dan Perry →