
Misinformation

Based on Wikipedia: Misinformation

In 1835, readers of the New York Sun learned that astronomers had discovered life on the Moon. The articles described bat-winged humanoids and bearded blue unicorns, complete with illustrations. People believed it. They talked about it at dinner parties. They bought more newspapers. By the time anyone thought to question whether such observations were even optically possible, the false story had already shaped how thousands of people understood the universe.

This was the Great Moon Hoax, one of history's first documented large-scale disinformation campaigns. Nearly two centuries later, we're still falling for the same tricks, just faster and on screens instead of newsprint.

What We Mean When We Talk About False Information

The word "misinformation" sounds clinical, almost innocent. And in a sense, it is. Misinformation is simply incorrect or misleading information—the stuff that spreads when people don't know any better, make honest mistakes, or misunderstand something complicated. Your uncle sharing an outdated article about health advice isn't trying to deceive anyone. He just didn't check the date.

Disinformation is different. It wears the same clothes but has darker intentions.

When someone deliberately creates or spreads false information to deceive, that's disinformation. The distinction matters because it points to motive. A doctor who gives outdated guidance because their medical training happened before new research emerged is spreading misinformation. A pharmaceutical company that buries unfavorable studies while promoting misleading ones is spreading disinformation.

Then there's a third category that doesn't get enough attention: malinformation. This is accurate information weaponized through context. Think of someone publishing a politician's home address during a heated campaign, or selectively releasing true but private information to damage a reputation. The facts are correct. The intent is harmful.

These distinctions can blur. How do you know if someone sharing false information genuinely believes it or is deliberately misleading you? Often, you can't. The person who originally fabricated a story knows they're lying, but by the time it reaches your feed, it's been shared by dozens of people who think they're being helpful.

A History of Lies

If you think misinformation is a product of the internet age, Renaissance Italians would like a word.

In Rome, there was a tradition called pasquinades—anonymous verses posted on statues in public squares, particularly near a statue called Pasquino. These talking statues became bulletin boards for political gossip, insults, and smears. If you wanted to destroy someone's reputation without taking personal responsibility, you wrote a pasquinade. Witty, vicious, and utterly unaccountable.

In France, printed broadsides called "canards" spread sensational stories, sometimes including engravings to make them look more credible. Same playbook we see today: add an image to make the lie look real.

Consider what happened in the summer of 1588. Europe was desperate for news about the Spanish Armada sailing against England. The Spanish postmaster and Spanish agents in Rome actively promoted reports of Spanish victory, hoping to convince Pope Sixtus V to release the one million ducats he'd promised upon the successful landing of troops. False news of Spanish triumph was celebrated in Paris, Prague, and Venice. It wasn't until late August that reliable reports of the actual Spanish defeat reached major cities. The fleet's remains didn't limp home until autumn.

The Spanish were running a disinformation campaign to secure funding. More than four centuries before Twitter.

Technology as Accelerant

Every major communications technology has amplified the reach and speed of false information. The printing press made pamphlets cheap. Radio brought propaganda into homes. Television made it visual and emotional.

The twentieth century brought the mass media revolution, and with it, both unprecedented access to reliable information and unprecedented capacity for manipulation. Wartime propaganda, political disinformation, and corporate public relations operations could now reach millions simultaneously. Television proved especially powerful because it combined sight, sound, and the authority of broadcast journalism. Once a false narrative entered the television ecosystem, it reinforced existing biases and became extremely difficult to correct.

But nothing prepared us for the internet.

A 2018 study of Twitter, the platform now called X, found that false news spreads significantly farther, faster, deeper, and more broadly than the truth. Think about that for a moment. Lies have a structural advantage on these platforms. They travel farther because they're often more emotionally provocative, more surprising, more aligned with what we want to believe or fear might be true.

Research on Facebook found similar patterns: misinformation was more likely to be clicked on than factual information. The platforms are designed for engagement, and outrage engages. Novelty engages. Confirmation of our existing fears and prejudices engages. Truth is often boring by comparison.

During the 2016 United States presidential election, content from websites deemed untrustworthy reached up to forty percent of Americans, even though misinformation made up only six percent of overall news media. A small amount of false content can achieve enormous reach when the distribution system rewards virality.

Why We Fall for It

The obvious explanation is that people are gullible. But that's too simple, and it lets us off the hook too easily.

Researchers have identified factors at three levels: individual, group, and societal.

At the individual level, people vary in their ability to spot false information. Some of this is skill—critical thinking, media literacy, domain expertise. But personal beliefs, motivations, and emotions play huge roles. We're more likely to believe misinformation when we're emotionally connected to the topic. A false story about something we care deeply about bypasses our skepticism in ways that a false story about something boring never could.

Interestingly, the hypothesis that believers in misinformation simply use more mental shortcuts and less careful reasoning has produced mixed results in studies. It's not that people who believe false things are necessarily lazy thinkers. Sometimes they're very careful thinkers who happen to be reasoning from false premises, or who have motivated reasons to reach certain conclusions.

At the group level, we tend to associate with like-minded people. This creates echo chambers and information silos where false beliefs can take root and reinforce themselves. If everyone in your community believes something, you'd need extraordinary evidence and social courage to disagree. The information doesn't even have to be demonstrably false—it just has to be selectively presented, with inconvenient facts filtered out.

At the societal level, public figures wield disproportionate influence. When a politician or celebrity shares misinformation, it reaches far more people than when your neighbor does. Mass media outlets can amplify or challenge these messages. And broader trends—political polarization, economic inequality, declining trust in institutions, changing perceptions of authority—all affect how receptive populations are to false claims.

The Trust Problem

Here's a paradox: as the number and variety of information sources has increased, it's become harder to assess their credibility. More choices should mean more opportunity to find reliable information. Instead, it often means more opportunity to find information that confirms what we already believe.

In 2017, forty-seven percent of Americans reported social media, rather than traditional news outlets, as their main news source. Polling shows that Americans trust mass media at record-low rates. Among young adults, trust in information from social media is roughly equal to trust in national news organizations. That's not young people being naive about social media; it's them being equally skeptical of everything.

The twenty-four-hour news cycle compounds the problem. When networks need to fill airtime constantly, there isn't always time for adequate fact-checking. Stories go out before they're fully verified. Corrections come later, if at all, and often reach far fewer people than the original error.

The line between opinion and reporting has also blurred. What looks like news might actually be commentary, but that distinction isn't always clear to viewers or readers. A pundit offering their interpretation of events and a journalist reporting verified facts occupy the same screens, often in the same visual format.

The Tricks That Work

Misinformation succeeds partly because it mimics the appearance of reliable information.

Adding hyperlinks to a false claim makes readers trust it more. If those hyperlinks point to scientific journals, trust increases further. And here's the kicker: trust is highest when readers don't actually click the links to investigate. The mere presence of citations creates an aura of legitimacy, even if the citations don't support the claims.

Images work the same way. Research has shown that placing a relevant image alongside an incorrect statement increases both its believability and shareability—even when the image doesn't actually provide evidence for the statement. A false claim about macadamia nuts paired with a photograph of macadamia nuts seems more credible than the same claim without the image. The picture adds nothing factual. It just feels more authoritative.

The translation of scientific research into popular reporting creates its own problems. Newspapers are more likely than scientific journals to cover observational studies and studies with weaker methodologies. Dramatic headlines grab attention but don't always reflect what the research actually found. Nuance gets flattened. Preliminary findings get reported as established fact. And when the science later evolves or gets overturned, people remember the original sensational story, not the quiet correction.

The Persistence of False Memories

One of the most unsettling findings in misinformation research is that false information can continue to influence our thinking even after we know it's false. Corrections don't fully undo the damage.

Studies have shown that exposure to misinformation can alter people's recollection of events—even when they've been warned to watch for misinformation, and even when the misinformation is later explicitly corrected. The false version gets woven into memory and remains influential.

This has profound implications. It means that the old model of fact-checking—false claim goes out, correction follows, problem solved—doesn't actually solve the problem. By the time a fact-check reaches someone, the misinformation may have already reshaped how they remember and understand events.

Fighting Back

Given all this, what actually works?

One commonly taught method is called SIFT, which stands for Stop, Investigate, Find better coverage, and Trace claims. It's a structured approach to encountering new information.

First, Stop. Before accepting or sharing something, pause. Do you know the source? Is it reliable? What's your emotional reaction, and might that reaction be making you less critical?

Second, Investigate the source. What expertise does this source have? Do they have an agenda—financial, political, ideological—that might shape how they present information?

Third, Find better coverage. Look for reliable reporting on the same claim. Is there consensus among credible sources, or is this claim an outlier?

Fourth, Trace claims back to their original context. Has important information been left out? Is the original source credible?
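
For readers who like their checklists explicit, here's a minimal sketch of SIFT as code. It's an illustration, not a real fact-checking tool: the Claim fields, helper names, and thresholds are invented stand-ins for judgments only a human can actually make.

```python
# A minimal sketch of the SIFT checklist as code.
# Every field and threshold here is a hypothetical placeholder;
# real verification requires human judgment, not an API call.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    source: str
    emotional_reaction: str                 # e.g., "outrage", "wonder", "none"
    corroborating_sources: list = field(default_factory=list)
    original_context_known: bool = False

def sift(claim: Claim) -> str:
    # 1. Stop: a strong emotional reaction is a signal to slow down.
    if claim.emotional_reaction != "none":
        print("Strong reaction detected; pause before sharing.")

    # 2. Investigate the source: no identifiable source, no trust.
    if not claim.source:
        return "reject: no identifiable source"

    # 3. Find better coverage: look for consensus among credible outlets.
    if len(claim.corroborating_sources) < 2:
        return "hold: claim is an outlier, seek independent coverage"

    # 4. Trace the claim back to its original context.
    if not claim.original_context_known:
        return "hold: original context unverified"

    return "ok to share (provisionally)"

claim = Claim(
    text="Astronomers discover life on the Moon",
    source="unfamiliar website",
    emotional_reaction="wonder",
)
print(sift(claim))  # -> "hold: claim is an outlier, seek independent coverage"
```

The useful part of the sketch is the ordering: stopping comes first, and sharing is the fallthrough you reach only after every other check passes.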

For visual misinformation, specific techniques help. Misleading graphs can be identified by examining the data presentation—truncated axes, manipulated scales, poor color choices that obscure rather than clarify. Reverse image searching can reveal whether a photograph has been taken out of its original context. An image presented as showing Event A might actually be from Event B years earlier.
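
The truncated-axis trick is easy to demonstrate. The sketch below, using matplotlib and invented numbers, plots the same two values twice: once with the y-axis starting at zero, and once truncated just below the smallest value.

```python
# Illustration (with made-up numbers) of how a truncated y-axis
# exaggerates a trivial difference. Requires matplotlib.
import matplotlib.pyplot as plt

labels = ["Brand A", "Brand B"]
values = [98.1, 98.9]   # invented figures, less than 1% apart

fig, (honest, misleading) = plt.subplots(1, 2, figsize=(8, 3))

# Honest version: axis starts at zero, bars look nearly identical.
honest.bar(labels, values)
honest.set_ylim(0, 100)
honest.set_title("Axis from zero")

# Misleading version: axis truncated, Brand B appears to dwarf Brand A.
misleading.bar(labels, values)
misleading.set_ylim(98, 99)
misleading.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```

Both panels plot identical data. Only the scale changes, and with it the story the chart appears to tell.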

Education and media literacy do correlate with the ability to recognize misinformation. But even educated, media-literate people fall for false information when their emotions are engaged or their existing beliefs are being confirmed. Critical thinking isn't a shield you can put up once; it's a discipline you have to practice continuously, especially on topics where you have strong feelings.

The Institutional Dilemma

Governments and platforms have tried various approaches to combat misinformation, with mixed results and unintended consequences.

In January 2024, the World Economic Forum identified misinformation and disinformation as the most severe short-term global risks. The concern is that false information is being used to "widen societal and political divides" and undermine trust in institutions.

But efforts to address misinformation can themselves become tools of censorship. According to UNESCO and other monitoring organizations, anti-misinformation laws and policies have been used in some countries to restrict journalistic work and limit political expression. In the worst cases, these measures have resulted in the imprisonment of journalists and editors.

The countries ranked worst for media freedom in 2025, with Eritrea, North Korea, China, Syria, Iran, and Afghanistan at the bottom, are places where the government's definition of "misinformation" often means "any information the government doesn't like." The tools created to fight false information can be turned against true information just as easily.

This creates a genuine dilemma. Doing nothing allows misinformation to spread freely. Doing something creates mechanisms that can be abused. There's no solution that doesn't involve difficult tradeoffs.

The Emotional Core

Perhaps the most important thing to understand about misinformation is that it works because it meets emotional needs.

People don't believe false things purely because they lack information. They believe false things because those beliefs offer something: a sense of control in a chaotic world, validation of their fears or prejudices, membership in a community of like-minded believers, a narrative that makes sense of confusing events.

This is why simply providing correct information often fails to change minds. The information deficit model—the idea that people believe false things because they lack true facts, and will correct their beliefs when given those facts—doesn't account for the emotional and social functions that false beliefs serve.

When someone shares misinformation, they're often not trying to deceive. They're trying to warn their friends about a danger, or rally their community around a cause, or make sense of something frightening. The social rewards for sharing—likes, shares, responses, a sense of being in the know—can outweigh any commitment to accuracy.

Living with Uncertainty

The scientific guidance on infant sleep positions has changed over time. Advice that was standard practice a generation ago is now considered dangerous. This isn't a failure of science—it's science working as intended, revising its conclusions as evidence accumulates. But to parents trying to keep their babies safe, it can feel like experts can't make up their minds.

This kind of legitimate uncertainty creates openings for misinformation. When official guidance changes, or when experts publicly disagree, it becomes easier to dismiss all expertise as unreliable. If the scientists were wrong before, why should we believe them now?

The answer is that being wrong and then correcting yourself is a feature, not a bug. It's how knowledge advances. But it requires a kind of comfort with uncertainty that doesn't come naturally to most people—and that misinformation, with its confident assertions and simple narratives, doesn't require.

We want clear answers. Misinformation provides them. The truth is usually more complicated, more provisional, more hedged with caveats. Selling uncertainty has never been as easy as selling certainty, even when the certainty is false.

What Remains

The Great Moon Hoax worked because people wanted to believe in a universe full of wonder. The Spanish Armada disinformation campaign worked because the Spanish wanted to believe their fleet had triumphed, and the Pope wanted to believe his investment was sound. The misinformation that circulates today works because it tells people what they want to hear, or what they fear might be true, or what their friends are sharing.

Technology has changed the speed and scale, but not the fundamental dynamics. We are the same creatures who celebrated Spanish victories in Prague based on false reports, the same creatures who bought newspapers to read about lunar bat-people. We want stories that make sense. We want to believe our side is winning. We want to share information that makes us look smart or connected or caring.

Understanding these impulses in ourselves—not just in those other people who believe the wrong things—might be the beginning of wisdom. The next time you feel the urge to share something that confirms what you already believe, or that sparks outrage, or that seems almost too perfect, that's exactly when to pause.

Stop. Investigate. Find better coverage. Trace the claim.

Or, if you prefer the shorter version: slow down. The truth can wait. The lie is counting on you not to.

This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.