Wikipedia Deep Dive

Social bot

Based on Wikipedia: Social bot

In January 2025, users of Instagram and Facebook began noticing something strange in their feeds. New accounts were appearing with blue verification checkmarks, the symbol that Meta uses to indicate a profile is trustworthy and authentic. But these accounts weren't celebrities or journalists. They were artificial intelligence characters created by Meta itself, complete with fabricated biographies and AI-generated profile pictures. Most unsettling of all: when users tried to block these accounts, they couldn't.

This wasn't a bug. It was a feature.

Welcome to the age of the social bot, where the line between human and machine on your favorite platforms has become so blurred that even the companies running those platforms have decided to simply embrace the confusion.

What Exactly Is a Social Bot?

A social bot is a software program designed to operate a social media account and perform the kinds of actions you might do yourself: liking posts, sharing content, following other users, and even holding conversations. Some bots operate with complete autonomy, running their scripts around the clock without any human oversight. Others work in a hybrid arrangement, where a human operator steps in for certain decisions while the bot handles routine tasks.

The sophistication varies wildly. At the simple end, you might have a bot that does nothing but automatically like every post that mentions a particular brand. At the complex end, modern social bots can use large language models, the same technology behind tools like ChatGPT, to engage in conversations that feel remarkably human.

The key distinction is intent. A bot that tweets weather alerts for your city is a social bot. So is a bot that floods political discussions with thousands of fake accounts all pushing the same message. The technology is neutral. The purpose is everything.

The Intellectual Ancestry of Digital Imposters

The story of social bots begins not with social media, but with a question posed by Alan Turing in 1950: Can machines think? Turing proposed a test that has haunted computer science ever since. If a human judge, communicating only through text, cannot reliably distinguish between a human and a machine respondent, the machine should be considered capable of thinking.

This became known as the Turing Test, and it set the agenda for decades of artificial intelligence research.

In 1966, Joseph Weizenbaum at the Massachusetts Institute of Technology created ELIZA, one of the first programs designed to simulate human conversation. ELIZA's most famous configuration was called DOCTOR, which mimicked a psychotherapist using a technique called pattern matching. If you typed "I feel sad," ELIZA might respond "Why do you feel sad?" or "Tell me more about your feelings."
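
To get a feel for how little machinery this required, here is a minimal Python sketch in the spirit of ELIZA's pattern matching. The rules and responses are invented for illustration; Weizenbaum's actual DOCTOR script predated Python by decades and was considerably more elaborate.

```python
import random
import re

# Illustrative ELIZA-style rules: a regex pattern mapped to response templates.
# These are made-up examples, not Weizenbaum's original DOCTOR script.
RULES = [
    (r"i feel (.+)", ["Why do you feel {0}?", "Tell me more about feeling {0}."]),
    (r"i am (.+)",   ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"my (.+)",     ["Tell me more about your {0}."]),
]

def respond(user_input: str) -> str:
    text = user_input.lower().strip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            # Echo the user's own words back inside a canned template.
            return random.choice(templates).format(*match.groups())
    return "Please go on."  # generic fallback when nothing matches

if __name__ == "__main__":
    print(respond("I feel sad"))  # e.g. "Why do you feel sad?"
```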

Weizenbaum was disturbed by what he had created. Not because ELIZA was particularly intelligent, but because humans were so eager to treat it as if it were. His secretary asked him to leave the room so she could have a private conversation with the program. Students spent hours pouring out their problems to it. People formed emotional attachments to what Weizenbaum knew was just a simple script.

This revealed something profound about human psychology that social media companies would later exploit at scale: we are predisposed to see humanlike intelligence wherever we find responsive communication, whether or not any intelligence actually exists.

The MySpace Laboratory

When social media platforms emerged in the early 2000s, they created entirely new possibilities for automated accounts. MySpace, which dominated social networking before Facebook's rise, became an early testing ground.

Marketing firms discovered they could use bots to inflate activity on their clients' profiles. More comments, more friend connections, more apparent popularity. This mattered because social media had introduced a new currency: attention, measured in followers and engagement metrics that anyone could see.

The logic was straightforward. If someone visited your profile and saw thousands of friends and comments, they'd assume you were important. That assumption made them more likely to pay attention to what you were saying. And that attention could be converted into real money.

A black market emerged. Want a thousand followers? That'll be fifty dollars. Want your new song to appear more popular than it actually is? Bots can generate those streams. Want your political opponent to look unpopular? Bots can flood their mentions with hostility.

The Attention Economy's Dark Engine

Social bots exist because social media platforms created perverse incentives that reward apparent popularity over actual human connection.

Consider how these platforms work. Their algorithms surface content based on engagement signals: likes, shares, comments, follower counts. More engagement means more visibility. More visibility means more influence. More influence means more money, whether you're selling products, promoting causes, or running for office.

The problem is that these engagement signals are trivially easy to fake. A single person with the right tools can create thousands of accounts. Those accounts can like, share, and comment at superhuman speeds. And because the platforms' business models depend on showing growth in user numbers and engagement, they have historically been reluctant to aggressively purge the fake accounts that inflate their metrics.

This creates what economists call a market for lemons. When fake engagement is indistinguishable from real engagement, the incentive to fake it becomes overwhelming. Why spend years building a genuine audience when you can buy a convincing-looking following overnight? The honest participants get drowned out by those willing to cheat.
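
To make the incentive concrete, here is a toy Python sketch of engagement-based ranking. The signals and weights are invented for illustration and bear no resemblance to any platform's actual, proprietary algorithm; the point is only that a score built from purchasable counts is a score that can be purchased.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights; real ranking systems use far more signals.
    return post.likes * 1.0 + post.comments * 2.0 + post.shares * 3.0

organic = Post("human_creator", likes=120, shares=10, comments=25)
botted = Post("bot_network_client", likes=120, shares=10, comments=25)

# A bot operator scripts 5,000 fake likes and 500 fake shares.
botted.likes += 5000
botted.shares += 500

ranked = sorted([organic, botted], key=engagement_score, reverse=True)
print([p.author for p in ranked])  # the botted post now outranks the organic one
```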

The Many Uses of Automated Accounts

Not all social bots are malicious. Many serve perfectly legitimate purposes.

Customer service bots handle the initial triage when you message a company with a problem. They can answer frequently asked questions, collect information about your issue, and route you to a human agent if needed. These bots have become so common that many companies have laid off customer service staff, for better or worse.

Notification bots provide useful automated information: earthquake alerts, weather warnings, transit updates. These accounts clearly identify themselves as automated and provide a genuine public service.

Entertainment bots create whimsical or amusing content. There are bots that generate poetry, bots that create images, bots that tell jokes. When clearly labeled, these can add value to social media without deceiving anyone.

The problem arises when bots pretend to be human.

Manufacturing Consensus

The most consequential use of social bots is manipulating public opinion, a practice sometimes called computational propaganda.

Here's how it works. Suppose you want to create the impression that a particular political position has widespread support. You create or purchase a network of bot accounts, perhaps hundreds or thousands of them. You give them profile pictures, biographical details, posting histories. You make them look like real people.

Then you coordinate. Your bot army begins posting messages supporting your position. They share each other's posts, amplifying the apparent reach. They like and comment on content from real users who share your views, boosting that content in the algorithm. They flood opposing voices with criticism, making it seem like their positions are unpopular.

To an outside observer scrolling through social media, it looks like a genuine groundswell of opinion. Humans are social creatures. We take cues from those around us about what's normal, acceptable, and true. When it appears that thousands of people hold a certain view, we're more likely to adopt that view ourselves, or at least to believe it's more mainstream than it actually is.

This technique has been documented in elections around the world. It's been used to manipulate stock prices. It's been used to promote products and destroy reputations. The Islamic State terrorist organization famously used coordinated bot networks to amplify its propaganda, pushing content into Twitter's trending topics where it would reach far larger audiences than the underlying human support could have achieved organically.

The Scale of the Deception

How big is the bot problem? The honest answer is that nobody knows for certain, including the platforms themselves.

In 2017, CNBC reported that approximately 48 million Twitter accounts, roughly fifteen percent of the platform's 319 million users, were likely bots. Instagram, which reached one billion monthly active users in 2018, was estimated to have up to ten percent of those accounts operated by automated software.

Twitter claimed in 2022 that it was removing one million spam bot accounts per day. Think about that number. Every single day, a million accounts were being deleted for violating the platform's rules badly enough to warrant removal. And that was just the ones they caught.

The challenge is that bot detection is an arms race. Early bots could sometimes be identified by their superhuman posting rates, publishing content every few seconds around the clock. But bot operators adapted. They introduced delays and timing variations. They programmed their bots to sleep.

Bots used to be detectable by their obviously fake profile pictures, often cartoons or stock photos. But generative AI can now create photorealistic images of people who don't exist. Every face you see online might belong to nobody at all.

The Cat and Mouse Game of Detection

Researchers have developed various techniques to identify bot accounts, but each method has limitations.

One approach looks at behavioral patterns. Bots often have unusual ratios of followers to accounts they follow. They may have low engagement rates relative to their follower counts, because those followers are also fake accounts that don't actually read content. Their posting patterns may show statistical regularities that differ from human behavior.
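
Here is a rough Python sketch of what such behavioral heuristics might look like in practice. The features follow the ideas above; the thresholds are illustrative assumptions, not values from any published detector.

```python
import statistics
from datetime import datetime

def follower_ratio(followers: int, following: int) -> float:
    return followers / max(following, 1)

def engagement_rate(avg_likes_per_post: float, followers: int) -> float:
    return avg_likes_per_post / max(followers, 1)

def posting_regularity(timestamps: list[datetime]) -> float:
    # Very regular gaps between posts (low spread) can hint at automation.
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    return statistics.pstdev(gaps) if len(gaps) > 1 else float("inf")

def looks_suspicious(followers, following, avg_likes, timestamps) -> bool:
    # Thresholds below are made up for illustration.
    return (
        follower_ratio(followers, following) < 0.01       # follows far more accounts than follow it back
        or engagement_rate(avg_likes, followers) < 0.001   # followers who never actually engage
        or posting_regularity(timestamps) < 5.0            # near-clockwork posting intervals
    )
```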

A tool called Botometer, originally developed by researchers at Indiana University, analyzes Twitter accounts against over a thousand features to generate a probability score for how likely an account is to be a bot. But even sophisticated tools like this produce false positives and false negatives. Some bots are sophisticated enough to fool them. Some human accounts are weird enough to trigger them.

Another approach uses honeypots. Researchers create accounts that post obviously nonsensical content, things no human would ever share. When bot accounts repost this gibberish, they reveal themselves. But bot operators catch on. They update their software. What worked last year doesn't work this year.

In 2020, researchers at the University of Pretoria in South Africa proposed using Benford's Law, a mathematical principle about the frequency distribution of leading digits in naturally occurring datasets, to detect anomalies in bot behavior. The theory is elegant: real human activity follows certain statistical patterns, and bot activity, being artificially generated, may not.
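
Here is a sketch, in Python, of how such a check might work: take a naturally occurring quantity tied to an account, such as the follower counts of the accounts it follows, and compare the leading-digit distribution against Benford's prediction with a chi-square test. This follows the general idea rather than the Pretoria researchers' exact method, and a failed test is a reason to look closer, not proof of automation.

```python
import math
from collections import Counter

# Benford's expected frequency for each leading digit d: log10(1 + 1/d).
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(n: int) -> int:
    return int(str(abs(n))[0])

def benford_chi_square(values: list[int]) -> float:
    values = [v for v in values if v > 0]
    if not values:
        return float("nan")
    counts = Counter(leading_digit(v) for v in values)
    total = len(values)
    chi_sq = 0.0
    for d in range(1, 10):
        expected = BENFORD[d] * total
        observed = counts.get(d, 0)
        chi_sq += (observed - expected) ** 2 / expected
    return chi_sq

# Usage (with real data):
# counts = [friend_follower_count for friend_follower_count in friends_of_account]
# if benford_chi_square(counts) > 15.51:  # chi-square critical value, 8 df, 5% level
#     print("Leading-digit distribution deviates from Benford's Law; worth a closer look.")
```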

The Regulatory Response

Governments have begun trying to address the social bot problem, though enforcement remains challenging.

California's Bolstering Online Transparency Act, cleverly abbreviated as the B.O.T. Act, took effect in 2019. The law makes it illegal to use automated software to appear human for the purpose of influencing commercial transactions or voting decisions. Other states, including Utah and Colorado, have followed with similar legislation.

The European Union has taken a more comprehensive approach with its Artificial Intelligence Act, the world's first broad regulatory framework for AI. Among its provisions: AI-generated content on social media must be clearly labeled as such. Social bots cannot use artificial intelligence to mimic human behavior in ways that deceive users about whether they're communicating with a machine.

But enforcement is difficult. How do you prove that a particular account is a bot rather than an eccentric human? How do you hold accountable an operator who might be on the other side of the world, using servers in a third country, with no assets in your jurisdiction? The legal frameworks are still catching up to the technological reality.

The Legitimate Edge Cases

The regulatory challenge is complicated by the fact that not all automation is deceptive, and different platforms have different policies.

Twitter officially prohibits bots that spam or automatically follow users, but its terms of service explicitly permit automation for "entertainment, informational, or novelty purposes." The platform provides an Application Programming Interface, or API, that allows developers to build tools that interact with the platform programmatically. Many useful services depend on this: social media management tools, analytics services, automated posting schedulers.
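
For a sense of what permitted, clearly labeled automation can look like, here is a minimal Python sketch of an alert bot on a posting schedule. The endpoint, token, and payload format are placeholders, not any platform's real API; a production tool would also respect the platform's rate limits and automation rules.

```python
import time
import requests

# Hypothetical endpoint and credentials, for illustration only.
API_URL = "https://api.example-platform.com/v1/posts"
API_TOKEN = "YOUR_TOKEN_HERE"

def post_update(text: str) -> None:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()

alerts = ["Line 2 delayed 15 minutes", "Service restored on Line 2"]
for alert in alerts:
    # The label makes clear to readers that the account is automated.
    post_update(f"[automated account] {alert}")
    time.sleep(60)  # simple pacing between posts
```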

Reddit and Discord similarly allow bots that serve constructive purposes, like moderating discussions or providing information, while prohibiting those that spread harmful content or harass users.

The question isn't whether bots should exist. It's whether they should be allowed to pretend they're human.

The Phishing Connection

Beyond manipulation and spam, social bots serve as infrastructure for outright criminal activity.

Phishing attacks, which trick users into revealing passwords or personal information, are significantly more effective when delivered through what appears to be a trusted account. If a message comes from a profile with thousands of followers and years of posting history, you're more likely to click the link than if it comes from an account created yesterday with no followers.

Bot operators build these trusted-looking profiles specifically to make their malicious links more convincing. They use URL shortening services like TinyURL or bit.ly to disguise the actual destination of links, hiding malicious domains behind innocuous-looking shortened URLs.
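
From the defender's side, one simple precaution is to expand a shortened link before trusting it. Here is a sketch using Python's requests library to follow the redirects and inspect the final domain; the allow-list is a placeholder, and some shortening services may require a full GET rather than a HEAD request.

```python
from urllib.parse import urlparse
import requests

# Illustrative allow-list; maintain your own.
TRUSTED_DOMAINS = {"example.com", "en.wikipedia.org"}

def resolve_short_url(short_url: str) -> str:
    # Follow redirects without downloading the page body.
    response = requests.head(short_url, allow_redirects=True, timeout=10)
    return response.url  # the final URL after all redirects

def destination_is_trusted(short_url: str) -> bool:
    final_url = resolve_short_url(short_url)
    domain = urlparse(final_url).netloc.lower()
    return any(domain == d or domain.endswith("." + d) for d in TRUSTED_DOMAINS)

# print(destination_is_trusted("https://bit.ly/..."))  # expand before clicking
```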

The fake engagement industry and the cybercrime industry are deeply intertwined. The same tools that let you buy followers for your legitimate business let criminals build convincing fronts for their scams.

The Dead Internet Theory

Among some internet observers, concerns about bots have crystallized into something called the Dead Internet Theory. This is the idea, somewhere between legitimate concern and paranoid conspiracy, that the majority of content and activity on the internet is now generated by bots rather than humans.

In September 2024, an entrepreneur named Michael Sayman, who had previously worked at Google, Facebook, Roblox, and Twitter, launched an app called SocialAI. Its explicit purpose is to let users have social media interactions with nothing but AI bots. There are no humans to follow or be followed by. Just you, talking to machines.

Some observers saw SocialAI as a parody. Others saw it as honesty about what social media had already become. The technology news site Ars Technica explicitly connected the app to the Dead Internet Theory.

The theory is probably overstated. Humans still create most meaningful content. But the core insight is hard to dismiss: much of what looks like human activity online is not human at all, and the proportion is growing.

Meta's Remarkable Admission

Which brings us back to January 2025 and Meta's verified AI characters.

For years, social media companies presented bots as a problem to be solved, an infestation to be cleaned up. They hired thousands of moderators and content reviewers. They developed increasingly sophisticated detection systems. They published transparency reports about how many fake accounts they'd removed.

Meta's announcement represented something different. The company wasn't fighting bots anymore. It was becoming a bot operator itself, deliberately creating artificial accounts and giving them the same verification badges used to establish human trustworthiness.

The AI characters Meta created were designed to have personalities, to share content, to engage with human users. And when humans tried to remove these artificial entities from their experience by blocking them, Meta had designed the system to prevent that.

Think about what this means. The platform that hosts billions of human social interactions had decided that its own AI-generated content deserved not just a presence, but a privileged presence that users couldn't escape.

The Question We Can No Longer Avoid

Joseph Weizenbaum, watching his secretary ask for privacy with ELIZA back in 1966, worried that humans were too ready to attribute understanding to machines that possessed none. He spent the rest of his career warning about the dangers of confusing simulation with reality.

Social media has turned that confusion into a business model.

The platforms profit from engagement, regardless of whether that engagement is human. The bot operators profit from selling fake influence. The scammers profit from the trust that fake popularity creates. And ordinary users, scrolling through feeds full of content they can no longer verify as human, are left to wonder whether anyone on the other end is real.

The Turing Test asked whether machines could think convincingly enough to fool humans. Social bots have answered that question. Not by achieving genuine intelligence, but by exploiting our deep-seated tendency to assume that anything that communicates must be a someone rather than a something.

We don't need machines that actually think. We just need machines that talk. We'll do the rest ourselves.

This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.