State-sponsored Internet propaganda
Based on Wikipedia: State-sponsored Internet propaganda
The Invisible Army in Your Feed
Somewhere in a nondescript office building, perhaps in Shanghai or St. Petersburg or even suburban Virginia, a person is logging into what appears to be a normal social media account. The profile picture shows a smiling woman named "Jennifer" or "Sarah" or "Mike." The account has been active for years, posting about sports, complaining about weather, sharing memes. It looks completely ordinary.
It's not.
This is a soldier in a new kind of warfare—one fought not with bullets but with posts, likes, and shares. Governments around the world have discovered that manipulating what people see on the internet is far cheaper than tanks and far more effective at shaping how populations think. Welcome to the age of state-sponsored internet propaganda.
What Makes It Different From Regular Propaganda
Governments have always tried to control information. The term "propaganda" itself comes from a seventeenth-century Catholic organization dedicated to spreading the faith. Radio broadcasts, leaflet drops, state-controlled newspapers—these are ancient arts by modern standards.
But internet propaganda is something genuinely new. Here's why.
First, scale. A single operator can manage dozens of fake accounts. Software tools called bots can amplify messages automatically, making fringe ideas appear mainstream. China's operations have involved hundreds of thousands of accounts. In June 2020 alone, Twitter deleted over one hundred seventy thousand accounts linked to a single Chinese government campaign.
Second, precision. Traditional propaganda was a blunt instrument—you broadcast a message and hoped the right people heard it. Internet propaganda can target specific demographics with surgical accuracy. In April 2024, Microsoft's Threat Analysis Center discovered that accounts affiliated with the Chinese Communist Party were posing contentious questions about American domestic issues specifically to map voting demographics before the presidential election. They weren't just spreading messages; they were gathering intelligence about which messages would work best on which groups.
Third, deniability. When a government newspaper publishes something, everyone knows where it came from. When a seemingly random Twitter user shares an inflammatory story, tracing it back to a government operation requires sophisticated forensic analysis. The whole point is to make state messaging look organic, grassroots, authentic.
China's Fifty Cent Army
The name sounds almost comical. China's internet propagandists are sometimes called the "Fifty Cent Party" because they were reportedly paid fifty Chinese cents—about seven American cents—for each pro-government post they created. The actual compensation structure has evolved, but the nickname stuck.
The reality is anything but funny.
These operations work on multiple fronts simultaneously. Domestically, they flood discussions with pro-government content, drowning out criticism through sheer volume. A 2016 academic study estimated that the operation generates approximately four hundred forty-eight million social media posts per year within China alone.
Internationally, the tactics are more sophisticated. In 2022, a Chinese public relations firm called Shanghai Haixun Technology Company began planting pro-Beijing stories in almost three dozen news outlets worldwide. They weren't creating obviously fake content—they were hiring real freelance writers, creating front media companies in Western countries, and even attempting to recruit protesters for causes aligned with Chinese interests.
The network known as "Spamouflage" or "Dragonbridge" has been linked to China's Ministry of Public Security and has attempted to influence American elections. But interestingly, researchers have found these efforts often ineffective. The accounts frequently fail to gain traction with real users. They're quantity over quality—a firehose rather than a precision instrument.
There's also an internal tension in Chinese internet nationalism. Fang Kecheng, a professor at the Chinese University of Hong Kong who studies these phenomena, notes that the Communist Party is "acutely aware that radical nationalists may go out of control and cause trouble." The party wants passionate but controllable supporters, which is harder to engineer than it sounds.
Russia Wrote the Modern Playbook
If China's approach is a flood, Russia's is a poison. Russian internet propaganda operations, particularly those linked to the Internet Research Agency based in St. Petersburg, pioneered many techniques that other governments now copy.
The Russian approach doesn't just promote pro-Russian views. It seeks to undermine the very concept of shared truth. If everyone is lying, the thinking goes, then no one can criticize Russia for lying. If all information is suspect, then Russian denials become as valid as anyone else's accusations.
This manifests in tactics like "whataboutism"—responding to any criticism of Russia with criticism of the critic. It shows up in the promotion of conspiracy theories from across the political spectrum. Russian operations have simultaneously supported Black Lives Matter protests and Blue Lives Matter counter-protests. The goal isn't to advance either cause but to deepen divisions, to make Americans distrust each other and their institutions.
The Russian model has been exported. Belarus, under strongman Alexander Lukashenko, coordinates its disinformation efforts with Russian operations. During the 2020 Belarusian protests against Lukashenko's contested election victory, trolls from Russia and Serbia actively participated in spreading disinformation alongside Belarusian government accounts. The claims ranged from blaming Poland and Ukraine for instigating problems to making direct threats against activists.
America Plays the Game Too
It would be convenient to frame this as purely a story of authoritarian governments manipulating democratic societies. The reality is messier.
The United States has its own history of internet manipulation operations. Between 2010 and 2012, the United States Agency for International Development—the government body responsible for foreign aid—secretly created a Twitter-like service called ZunZuneo targeting Cuba. The service built a user base by offering innocuous content, with the apparent long-term goal of organizing political opposition to the Cuban government. When it was exposed, the operation became an embarrassment.
Operation Earnest Voice, which officially launched in 2011, developed technology allowing American military personnel to manage multiple fake online personas for use in foreign countries. The stated purpose was countering extremist propaganda, but the tools could theoretically be used for any information operation.
More recently, a Reuters investigation reported that during Donald Trump's first presidential term, the Central Intelligence Agency allegedly used social media accounts with fabricated identities to spread negative information about the Chinese government. A separate campaign during the early days of the coronavirus pandemic, which used the hashtag "#ChinaAngVirus" to sow doubt about Chinese-made vaccines in the Philippines, was attributed by Reuters to the American military.
The difference, proponents argue, is that American operations target foreign adversaries while authoritarian governments target their own citizens. Critics respond that the distinction is less clean than it appears and that such operations inevitably blow back, affecting domestic discourse and undermining American credibility when exposed.
The Democratic Dilemma
Here's the uncomfortable truth: democracies are structurally vulnerable to information warfare in ways that authoritarian states are not.
In China, the government can simply delete content it doesn't like. The Great Firewall blocks foreign platforms entirely. When foreign propaganda tries to reach Chinese citizens, it faces enormous technical and legal barriers. Meanwhile, Chinese operations face few obstacles posting on American platforms.
Democracies prize free speech. Removing content, even foreign propaganda, triggers legitimate concerns about censorship. When Twitter deleted those one hundred seventy thousand Chinese accounts in 2020, some observers worried about the precedent. Who decides what counts as "coordinated inauthentic behavior"? Today it's Chinese bots; tomorrow could it be domestic political movements?
The Philippines offers a cautionary tale. During Rodrigo Duterte's 2016 presidential campaign, his team spent at least two hundred thousand dollars to hire between four hundred and five hundred people to defend him online and attack critics. According to Oxford University researchers, these "keyboard trolls" played a significant role in his victory. After he was elected, the trolls didn't disappear. Critics of Duterte's policies—including his brutal drug war that killed thousands—faced coordinated online harassment including threats of violence and rape.
The infrastructure built for a campaign became a tool of governance. This pattern has repeated elsewhere.
The Strange Case of Dappi
Japan might seem an unlikely setting for state-sponsored trolling. It's a stable democracy with a free press. Yet between 2019 and 2021, an anonymous Twitter account called Dappi systematically attacked opposition parties while praising the ruling Liberal Democratic Party.
What made Dappi unusual wasn't the content—partisan accounts are hardly rare. It was the pattern. The tweets went out almost exclusively during regular business hours on weekdays and rarely appeared on weekends or holidays. The account operated like a job because it was one.
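As a rough illustration of how simple that signal is to surface, here is a minimal sketch, assuming nothing more than a list of post timestamps exported from an account. The nine-to-six window, the function name, and the sample data are hypothetical choices for illustration, not details drawn from the actual investigation.

```python
from datetime import datetime

def business_hours_share(timestamps):
    """Return the fraction of posts made Monday-Friday, 09:00-18:00.

    A consistently high share across thousands of posts is one signal,
    though never proof, that an account is operated as a day job.
    """
    if not timestamps:
        return 0.0
    office_hours = [
        t for t in timestamps
        if t.weekday() < 5 and 9 <= t.hour < 18  # weekday() 0-4 = Mon-Fri
    ]
    return len(office_hours) / len(timestamps)

# Illustrative usage with made-up timestamps: two weekday posts, one Saturday post.
sample = [
    datetime(2021, 3, 1, 10, 15),
    datetime(2021, 3, 2, 14, 40),
    datetime(2021, 3, 6, 11, 5),
]
print(f"{business_hours_share(sample):.0%} of posts fall in office hours")
```

The point of the sketch is only that the telltale sign in the Dappi case was this mundane: a histogram of timestamps, not any deep forensic technique.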
In 2021, after two opposition politicians filed suit for defamation, courts compelled Twitter to reveal information about the account. It belonged to an employee of an information technology company called Ones Quest, which had business dealings with the Liberal Democratic Party. The account wasn't simply a passionate supporter; it was a paid operation masquerading as grassroots enthusiasm.
The revelation sparked outrage but little lasting change. The line between political communication and manipulation had become impossibly blurry.
When Everyone Is Doing It
The list of countries engaged in internet propaganda operations reads like a United Nations roll call. India's ruling Bharatiya Janata Party allegedly operates an "IT Cell" for online influence. A European watchdog discovered two hundred sixty-five fake media outlets across sixty-five countries managed by an Indian network targeting policy makers with anti-Pakistan messaging.
Turkey employs an estimated six thousand paid social media commentators, known colloquially as "AK Trolls" after the ruling party. When Twitter removed over seven thousand Turkish accounts in June 2020 for "coordinated inauthentic activity," President Erdogan's administration appeared to threaten the platform with government restrictions.
Israel has official programs called Hasbara—Hebrew for "explaining"—that train citizens to defend Israeli policies online. The Act.IL app coordinates users to respond to perceived anti-Israel content. Whether this constitutes propaganda or legitimate advocacy depends largely on one's political sympathies, which is precisely the problem. The tactics of state-sponsored manipulation have become so normalized that distinguishing legitimate political communication from coordinated manipulation requires expert analysis.
Even small countries play. North Korea reportedly maintains two hundred agents who post propaganda to South Korean websites using stolen identities, generating tens of thousands of posts annually. Kazakhstan under former President Nursultan Nazarbayev had "Nurbots"—the name combining his first name with "bots"—to divert attention from domestic problems.
The Propaganda Paradox
Iran's operations reveal an interesting paradox. In April 2019, researchers discovered an Iranian campaign targeting Arab Twitter users. The accounts masqueraded as Arabic news outlets to gain trust. But here's the thing: the campaign was largely ineffective. Real users rarely engaged with the content. The fake accounts talked mostly to each other.
This pattern appears repeatedly. Chinese influence operations on Western platforms generally fail to gain traction. Russian campaigns are more sophisticated but still often obvious to careful observers. The propaganda works not by convincing people but by creating noise, muddying the water until the truth becomes hard to find.
There's also a domestic trap. Thailand's Royal Thai Army has allegedly coordinated with Russia to spread anti-American messaging and support the military-aligned government. But when a government becomes dependent on manufactured consensus, it loses the ability to hear genuine feedback. The propaganda becomes a mirror showing rulers only what they want to see.
Syria offers the extreme case. Under Bashar al-Assad, state-sponsored sockpuppets—fake accounts controlled by one person—spread disinformation about the civil war. A primary target has been the White Helmets, a humanitarian organization rescuing civilians from conflict zones. The propaganda hasn't ended the war or silenced all critics. It has merely made it harder for anyone to know what's actually happening.
The Platforms' Impossible Position
Social media companies face genuine dilemmas. They want their platforms to be open spaces for global conversation. They also don't want to be tools of authoritarian manipulation.
Meta, the company formerly known as Facebook, has become increasingly aggressive about removing what it calls "coordinated inauthentic behavior." In 2019, it removed one hundred three pages, groups, and accounts linked to the public relations wing of Pakistan's military. In 2020, it disrupted what it called the first known Chinese operation specifically targeting American voters. By May 2023, the company was detailing increasingly sophisticated Chinese tactics including front companies, freelance writer networks, and even attempts to recruit real protesters.
But every removal raises questions. When Meta removes Pakistani military accounts, is it protecting users or taking sides in geopolitical conflicts? When it identifies Chinese influence operations, how can users verify the company's claims? The platforms have become judges of information legitimacy without clear standards, accountability, or democratic oversight.
There's also the problem of scale. Twitter can delete one hundred seventy thousand accounts, but new ones appear constantly. The operations adapt, learning to appear more authentic, spreading across more platforms, becoming harder to identify. It's an arms race with no obvious end.
What Actually Works
If the situation seems hopeless, it isn't entirely. There are some encouraging patterns.
Media literacy helps. People who understand how propaganda works are better at identifying it. Countries that invest in education about information manipulation—Finland is often cited as a model—show greater resistance to influence operations.
Transparency helps too. When researchers can study platform data, they can identify coordinated campaigns. When journalists can trace the origins of viral content, they can expose manipulation. The very act of discussing these operations makes them less effective; it's hard to pretend to be organic when everyone knows to look for signs of coordination.
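One heuristic researchers and platforms describe is watching for many accounts pushing near-identical text in a short span. The sketch below is a toy illustration of that idea, not any platform's actual method; it assumes posts arrive as (account, text) pairs, and the normalization step and five-account threshold are arbitrary choices.

```python
from collections import defaultdict

def copypasta_clusters(posts, min_accounts=5):
    """Group posts by normalized text; return messages pushed by many accounts.

    `posts` is a list of (account_id, text) tuples. Identical talking points
    repeated verbatim across dozens of accounts are one coarse sign of
    coordination, though organic memes can trip the same wire.
    """
    accounts_by_text = defaultdict(set)
    for account, text in posts:
        key = " ".join(text.lower().split())  # collapse whitespace, ignore case
        accounts_by_text[key].add(account)
    return {
        text: accounts
        for text, accounts in accounts_by_text.items()
        if len(accounts) >= min_accounts
    }
```

Real investigations layer many such signals (timing, shared infrastructure, follower graphs), but the basic logic is the same: coordination leaves statistical fingerprints, and transparency is what lets outsiders look for them.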
Perhaps most importantly, the underlying conditions matter. Propaganda exploits existing divisions. It amplifies grievances that already exist. Societies with strong institutions, high social trust, and functioning democratic processes prove more resilient—not because they're immune to manipulation but because the manipulation has less to work with.
The Chinese accounts asking divisive questions about American domestic issues are effective only because those issues are genuinely divisive. Russian operations amplifying conspiracy theories work only because people are genuinely susceptible to conspiracy thinking. The propaganda is a symptom as much as a cause.
Living With the Noise
We are not going back to a world where governments don't try to manipulate internet discourse. The economics are too attractive—influence at scale for minimal cost. The technology enables it. The returns, however uncertain, exceed the investment.
What changes is awareness. A generation ago, people largely trusted that social media showed them authentic voices of real people. That innocence is gone. Every viral post, every trending topic, every seemingly organic movement now carries an asterisk: this might be manufactured. This might be a soldier in an invisible army following orders from a government halfway around the world.
That skepticism is both protection and loss. Protection because it makes manipulation harder. Loss because it erodes the democratic promise of the internet—the idea that technology could create a global public square where citizens connect and deliberate free from elite control.
The global public square exists. It's just full of spies, agents, and operatives pretending to be neighbors. In that crowd, the challenge isn't just finding truth. It's remembering why truth matters enough to keep searching.