Fear, uncertainty, and doubt
Based on Wikipedia: Fear, uncertainty, and doubt
In 1991, Microsoft shipped a beta version of Windows 3.1 containing hidden code designed to sabotage a competitor. If you ran the software on DR DOS instead of Microsoft's own MS-DOS, you'd see an ominous error message: "Non-Fatal error detected: error #2726. Please contact Windows 3.1 beta support." The message offered two choices: press Enter to exit, or press C to continue.
Here's the thing: if you pressed C, Windows ran perfectly fine on DR DOS. There was nothing wrong at all.
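It's worth pausing on how little machinery a trick like this requires. Below is a minimal sketch in Python of the general pattern, not the actual code, which was deliberately obfuscated x86 hidden in the beta binaries; the detect_host_os helper here is purely hypothetical. The program invents an error, lets the user dismiss it, and then carries on as if nothing happened, because nothing did.

```python
# Conceptual sketch only. The real check lived in obfuscated x86 code inside the
# Windows 3.1 beta and probed undocumented DOS internals; detect_host_os() is a
# hypothetical stand-in used purely to illustrate the pattern.

def detect_host_os() -> str:
    """Pretend to identify the underlying DOS vendor."""
    return "DR DOS"  # hypothetical result, for illustration

def maybe_show_spurious_warning() -> None:
    if detect_host_os() != "MS-DOS":
        print("Non-Fatal error detected: error #2726. "
              "Please contact Windows 3.1 beta support.")
        choice = input("Press ENTER to exit or C to continue: ")
        if choice.strip().upper() != "C":
            raise SystemExit  # bail out, even though nothing is actually wrong
    # If the user presses C, execution continues normally. The "error" was the
    # product, not a symptom of any real incompatibility.

if __name__ == "__main__":
    maybe_show_spurious_warning()
```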
Years later, internal Microsoft memos surfaced during antitrust litigation. One memo from Senior Vice President Brad Silverberg laid out the strategy with remarkable candor: "What the user is supposed to do is feel uncomfortable, and when he has bugs, suspect that the problem is DR-DOS and then go out to buy MS-DOS."
This is Fear, Uncertainty, and Doubt in its purest form. And it's been shaping decisions in technology, politics, and public policy for decades.
The Anatomy of FUD
Fear, Uncertainty, and Doubt—commonly abbreviated as FUD—is a propaganda technique that works by spreading negative, dubious, or outright false information to influence perception. It's a cousin of the appeal to fear, one of the oldest persuasion tactics in the rhetorical playbook, but FUD has a particular flavor: it doesn't need to convince you that something bad will definitely happen. It just needs to make you nervous enough to stick with what you know.
The phrase itself has been around for a surprisingly long time. A similar formulation, "doubts, fears, and uncertainties," appeared in print as early as 1693. The modern ordering—fear, uncertainty, and doubt—showed up in the 1920s. But the acronym FUD didn't gain its current meaning until 1975, when a computer industry legend gave it a very specific definition.
Gene Amdahl had spent years at IBM, working his way up to become one of the chief architects of the System/360 mainframe, a machine that would define corporate computing for a generation. Then he left to start his own company, Amdahl Corporation, to build competing mainframes. He quickly discovered that his former employer had a particular way of dealing with competition.
"FUD is the fear, uncertainty and doubt that IBM sales people instill in the minds of potential customers who might be considering Amdahl products," he explained.
The beauty of FUD, from the perspective of whoever's deploying it, is that it doesn't require proving anything. IBM salespeople didn't need to demonstrate that Amdahl's machines were inferior. They just needed to raise enough questions, cast enough shadows, create enough doubt that the safe choice seemed obvious. As the industry axiom went: "Nobody ever got fired for buying IBM equipment."
How the Game Is Played
The FUD playbook is remarkably consistent across industries and eras. Here's how it typically works.
First, you emphasize risks and unknowns about your competitor's product, even if those risks are theoretical or exaggerated. You don't need to lie outright—you just need to ask unsettling questions. "What happens if their company goes out of business? Who will support this product? Have they really worked out all the bugs?"
Second, you position yourself as the safe, established choice. Your company has been around for decades. You have a track record. Sure, maybe your product isn't as innovative or as cheap, but it's a known quantity. In a world full of uncertainty, that's worth something.
Third, you create social pressure. If a decision-maker chooses your competitor and something goes wrong, their career is on the line. If they choose you and something goes wrong, well, who could blame them? They made the sensible choice. This is the dark genius of FUD: it exploits the asymmetry of blame. Being wrong in a conventional way is forgivable. Being wrong in an unconventional way is career-ending.
The result is that technically superior products often lose to inferior ones, and genuine innovation gets stifled. IT departments end up buying software they know isn't the best option because their upper management is more likely to recognize the brand name. The decision-makers aren't stupid; they're just operating in an environment where FUD has shifted the incentives.
Microsoft's Masterclass
If IBM invented modern FUD, Microsoft perfected it.
By the 1980s, Microsoft had learned from watching IBM and began deploying FUD as a primary marketing weapon. The company became so adept at the technique that one industry observer noted they "ended up out FUD-ing IBM themselves during the OS/2 versus Windows 3.1 years." The student had surpassed the master.
The DR DOS incident from 1991 was just one example. The leaked internal memos revealed a coordinated strategy. Bill Gates himself had written to employees asking for help: "You never sent me a response on the question of what things an app would do that would make it run with MS-DOS and not run with DR-DOS. Is there a feature they have that might get in our way?"
In 1996, a company called Caldera sued Microsoft, accusing them of multiple anti-competitive practices including "issuing vaporware announcements"—promising products that didn't exist yet to freeze the market—and "creating FUD." Microsoft settled in 2000 for $280 million, though the amount wasn't publicly disclosed until 2009.
Perhaps the most revealing document was the "Halloween Documents," a set of internal Microsoft memos that leaked in the late 1990s. In these memos, Microsoft strategists assessed the threat from open source software—programs whose source code is freely available for anyone to inspect, modify, and redistribute. Their conclusion was sobering: "OSS is long-term credible... therefore FUD tactics cannot be used to combat it."
That admission is remarkable. It suggests that Microsoft's strategists understood FUD as a deliberate tactic, knew exactly when it would and wouldn't work, and were genuinely concerned about facing competition they couldn't FUD into submission.
The Open Source Wars
Despite that internal acknowledgment, Microsoft spent the next decade deploying FUD against open source software anyway, particularly against Linux, a free operating system that had become the darling of the tech underground.
The attacks took several forms. Microsoft made ominous statements about the "viral nature" of the GNU General Public License, the legal framework that governs much open source software. The GPL requires that if you incorporate GPL-licensed code into your product, you must also release your product under the GPL. Microsoft characterized this as legally dangerous, a trap that could force companies to give away their proprietary code. Legal scholars largely disagreed with this interpretation, but the seeds of doubt had been planted.
Then came the patent claims. In 2007, Microsoft announced that open source software infringed on "no fewer than 235 Microsoft patents." This was an extraordinary claim—and notably, Microsoft never specified which patents were allegedly being violated, making it impossible to evaluate or defend against. The threat hung in the air, undefined but menacing, and it was never put to the test in court. Microsoft was essentially saying: we might sue, we might not, but you should be scared.
Microsoft also launched a campaign called "Get the Facts," which published studies showing that Windows Server 2003 had a lower total cost of ownership than Linux. When researchers examined the methodology, they discovered Microsoft had been comparing Linux running on an extremely expensive IBM mainframe against Windows running on a commodity Intel server. The comparison was technically accurate but practically meaningless—like proving that cars are cheaper than helicopters.
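To see why, it helps to run the arithmetic. The figures below are entirely invented, and the Python is only a back-of-the-envelope sketch; the point is that when one side of a total-cost-of-ownership comparison is anchored to mainframe-class hardware, the hardware cost swamps anything the operating system contributes.

```python
# Toy cost-of-ownership arithmetic with made-up numbers. Nothing here reflects any
# real study; it only shows how the choice of baseline hardware dominates the result.

def cost_per_unit_of_work(hardware, annual_license, annual_support, throughput, years=3):
    """Total spend over the period divided by total work delivered."""
    total_cost = hardware + years * (annual_license + annual_support)
    return total_cost / (years * throughput)

# Hypothetical scenarios (all figures invented):
linux_on_mainframe   = cost_per_unit_of_work(hardware=750_000, annual_license=0,
                                              annual_support=60_000, throughput=10_000)
windows_on_commodity = cost_per_unit_of_work(hardware=8_000, annual_license=3_000,
                                              annual_support=2_000, throughput=5_000)
linux_on_commodity   = cost_per_unit_of_work(hardware=8_000, annual_license=0,
                                              annual_support=2_000, throughput=5_000)

print(linux_on_mainframe, windows_on_commodity, linux_on_commodity)
# With these invented numbers, Linux looks wildly expensive on the mainframe and
# cheapest on the same commodity box as Windows. The operating system barely matters;
# the hardware baseline does, which is the cars-versus-helicopters problem in miniature.
```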
In 2010, Microsoft released a video attacking OpenOffice.org, a free alternative to Microsoft's Office suite. The video claimed that the free software had higher long-term costs and poor interoperability. It included the question: "If an open source freeware solution breaks, who's gonna fix it?" The answer, which the video didn't mention, is that open source projects often have vibrant communities of developers who fix bugs, sometimes faster than commercial software vendors.
The SCO Saga: FUD in the Courtroom
Perhaps the most dramatic example of technology FUD played out in the courts. In 2003, a company called the SCO Group filed a lawsuit against IBM, claiming $5 billion in damages. SCO alleged that IBM had taken SCO's proprietary Unix code and contributed it to Linux, effectively giving away SCO's intellectual property to the open source community.
The lawsuit sent shockwaves through the industry. If SCO was right, then every company running Linux might be liable for copyright infringement. The uncertainty was paralyzing. Should companies abandon their Linux deployments? Should they license SCO's claimed intellectual property just to be safe?
SCO's CEO, Darl McBride, made a series of increasingly bold public statements. "IBM has taken our valuable trade secrets and given them away to Linux," he claimed. He threatened to sue Linux creator Linus Torvalds personally. He announced that SCO would begin auditing IBM's customers directly. He warned that "users are running systems that have basically pirated software inside... they have liability."
The stock market responded. SCO's shares rocketed from under $3 to over $20 in the months that followed. Someone was making money on this uncertainty.
It later emerged that Microsoft had been secretly funding SCO's legal campaign. This wasn't unusual—litigation funding is a legitimate practice—but it raised questions about whether the lawsuit was primarily about protecting intellectual property or about weaponizing uncertainty against a competitor.
As the case proceeded, IBM accused SCO of spreading "fear, uncertainty, and doubt" as a litigation strategy. The courts agreed. Magistrate Judge Brooke Wells issued a scathing order criticizing SCO's refusal to specify exactly which code had been stolen. Her analogy was memorable:
"Certainly if an individual were stopped and accused of shoplifting after walking out of Neiman Marcus they would expect to be eventually told what they allegedly stole. It would be absurd for an officer to tell the accused that 'you know what you stole, I'm not telling.' Or, to simply hand the accused individual a catalog of Neiman Marcus' entire inventory and say 'it's in there somewhere, you figure it out.'"
In 2007, a court ruled that Novell, not SCO, owned the Unix copyrights that were supposedly at the heart of the case. SCO's stock crashed to under 50 cents. The company eventually filed for bankruptcy. The FUD campaign had failed—but not before years of uncertainty had disrupted the industry and made countless organizations hesitant to adopt open source solutions.
Manufacturing Doubt on a Grander Scale
Technology companies didn't invent the strategy of manufacturing uncertainty. They borrowed it from an older and more consequential arena: public policy.
The term "manufactured uncertainty" describes a specific tactic used to undermine scientific consensus. The playbook is disturbingly effective: cast doubt on academic findings, exaggerate their limitations, cherry-pick favorable data, amplify fringe voices, and create the appearance of a controversy where none genuinely exists among experts.
This approach has been deployed against an extraordinary range of scientific findings:
- The depletion of the ozone layer
- The reality of human-caused climate change
- The link between ultraviolet radiation and skin cancer
- The safety and efficacy of vaccines
- The scientific consensus on evolution
- The health effects of various industrial chemicals
The strategy works because science, by its nature, deals in probabilities and acknowledges uncertainty. No honest scientist claims absolute certainty. Manufactured doubt exploits this epistemic humility, taking the normal caveats of scientific discourse and amplifying them until they seem like fatal flaws.
Researcher Alan Attie described the process precisely: "to amplify uncertainties, cherry-pick experts, attack individual scientists, marginalize the traditional role of distinguished scientific bodies and get the media to report 'both sides' of a manufactured controversy."
The phrase "both sides" is key. Journalists are trained to present balanced coverage, which makes sense when covering genuine debates. But when one "side" represents the overwhelming consensus of thousands of researchers and the other represents a handful of industry-funded contrarians, false balance itself becomes a form of misinformation. It creates the impression that experts are evenly divided when they're not.
The Mechanics of Delay
Why manufacture uncertainty? Often, the goal isn't to win the argument permanently. It's to buy time.
Consider the pharmaceutical and chemical industries. When research suggests that a product might cause health problems, the manufacturer faces an immediate threat: regulation, warnings, liability. Every year of delay in implementing those regulations is another year of unimpeded sales.
The tactics for achieving delay are well-documented. Attack the methodology of concerning studies. Fund alternative research designed to reach different conclusions. Demand impossible levels of certainty before any action is taken. Label inconvenient science as "junk science." File legal challenges. Create industry-funded think tanks that pump out skeptical-sounding reports.
One chilling example: research established that children who took aspirin during viral illnesses faced an elevated risk of Reye's syndrome, a serious and sometimes fatal condition. The link was clear enough that public health officials wanted to issue warnings. But delay tactics, including demands for more research and challenges to the evidence, postponed those warnings for years. During that time, children continued to develop Reye's syndrome. Some died.
The Data Quality Act, passed in the United States in 2001, has been described as a particularly effective tool for manufacturing delay. It allows interested parties to challenge the quality of government-used scientific data, creating additional procedural hurdles for regulators trying to act on health and safety research.
Conflicts Built Into the System
Part of what makes manufactured uncertainty possible is the structure of how research gets funded and how regulations get made.
Consider pharmaceutical regulation. The Food and Drug Administration requires extensive testing before approving new drugs. But that testing is largely funded by the companies seeking approval. The same companies whose profits depend on approval are the primary source of the safety data used to grant that approval.
This isn't necessarily corrupt—companies have strong incentives to produce accurate data, since failures after approval can be catastrophic for their business. But it does create a structural tension. Regulators at agencies like the FDA and the Environmental Protection Agency often rely heavily on unpublished studies from industry sources that haven't been peer-reviewed by independent scientists. This gives industries significant control over the pace and scope of available research.
When a company's product is threatened by emerging scientific evidence, it can fund counter-research, slow the peer review process for threatening findings, or simply flood the zone with enough contradictory data that regulators feel unable to act decisively.
Beyond Technology and Science
FUD isn't limited to software sales and scientific controversies. It appears wherever decisions are made under uncertainty—which is to say, everywhere.
In 2003, Caltex Australia was caught deploying FUD against its own business partners. An internal memo leaked, revealing that the company wanted to "use FUD to destabilize franchisee confidence" and thereby negotiate better terms for Caltex. The memo became evidence in a Senate inquiry examining unconscionable business conduct. Company executives claimed the memo didn't reflect company principles, but the strategy it described was clear enough.
The Clorox company faced criticism in 2008 for an advertising campaign built on FUD. Their Green Works line of cleaning products used the slogan "Finally, Green Works." The implication was twofold: first, that all previous "green" cleaning products from other companies had been ineffective; second, that Green Works was as effective as Clorox's traditional products. Environmental groups, consumer protection organizations, and even the Better Business Bureau challenged these claims. Critics also noted that Green Works products contained ingredients that advocates of natural products had long campaigned against.
Even Apple has deployed the tactic. When the Electronic Frontier Foundation (EFF) sought to legalize iPhone "jailbreaking"—the practice of removing software restrictions that prevent users from installing unauthorized applications—Apple claimed that jailbreaking could potentially allow hackers to crash cell phone towers. The EFF's representative called this "more FUD than truth."
The Security Industry's Dirty Secret
Perhaps nowhere is FUD more endemic than in the cybersecurity industry. Security vendors have an obvious incentive to make threats seem as frightening as possible—after all, they're selling protection against those threats.
This has given rise to what critics call "security theater": products and practices that create the appearance of security without meaningfully reducing risk. FUD is the marketing engine that drives security theater. Vendors publish alarming reports about emerging threats. They describe nightmare scenarios in vivid detail. They imply that their product is the only thing standing between the customer and catastrophe.
Some security marketing has become so divorced from reality that researchers have documented "pages describing purely artificial problems"—threats that don't actually exist but are frightening enough to sell solutions against. These pages sometimes include links that supposedly demonstrate the threat but actually lead nowhere, or ironically enough, to content that claims it will "execute malicious code on your machine regardless of current security software" but contains no actual executable code.
The security industry's overreliance on FUD has a significant downside: when the predicted catastrophes don't materialize, customers and decision-makers lose faith. Organizations that feel they've been scared into unnecessary purchases become skeptical of all security recommendations, even legitimate ones. The wolf gets cried too many times, and eventually the village stops listening.
Technical Support Scams: FUD's Criminal Cousin
If security marketing represents FUD gone commercially overboard, technical support scams represent FUD weaponized for outright theft.
The scheme is depressingly common. A popup appears on your screen, warning that your computer is infected with viruses. Sometimes it's accompanied by blaring audio or a fake "blue screen of death." The message provides a phone number to call for support. When you call, a scammer posing as a technician offers to fix your "infected" computer—for a fee.
There's nothing actually wrong with your computer. The popup is itself the only threat. But the FUD works, especially on elderly or less computer-savvy users who can't easily distinguish a legitimate warning from a fake one.
In extreme cases, these scams escalate beyond simple fraud. Scammers have threatened victims with criminal prosecution for unpaid taxes, or even accused them of possessing illegal content. The fear is real, even when the threat is fabricated.
Recognizing and Resisting FUD
Understanding FUD is the first step toward resisting it. Once you know the playbook, you start recognizing the moves.
When someone emphasizes risks and unknowns about an option without providing concrete evidence, that's a flag. When they position themselves as the safe choice without explaining why their alternative is actually better, that's a flag. When they create urgency without justification, that's a flag. When they appeal to what "everyone" does or what would happen if something went wrong, rather than analyzing what's likely to go right, that's a flag.
The antidote to FUD is information. Seek out specific, verifiable claims rather than vague warnings. Look for evidence of actual problems, not hypothetical ones. Ask what incentives the person spreading doubt might have. Consider whether the uncertainty being emphasized is genuine or manufactured.
In scientific contexts, look for peer-reviewed research and expert consensus rather than press releases and contrarian voices. A handful of dissenting experts doesn't constitute a genuine controversy; it's the normal variation you'd expect in any field. The question isn't whether anyone disagrees—someone always does—but whether the weight of evidence supports a conclusion.
Perhaps most importantly: recognize that uncertainty is not the same as ignorance. We make decisions under uncertainty all the time. The goal isn't to eliminate uncertainty—that's impossible—but to make the best decision given what we actually know. FUD tries to paralyze decision-making by exaggerating uncertainty. The response is to acknowledge uncertainty while still moving forward based on evidence.
The Persistence of Fear
FUD endures because it exploits something fundamental about human psychology. We're loss-averse: the pain of losing something is typically more intense than the pleasure of gaining something of equal value. We're risk-averse under certain conditions: given a choice between a sure thing and a gamble with the same expected value, we usually take the sure thing. And we're deeply sensitive to social judgment: the fear of being blamed for a bad decision can outweigh our desire to make the objectively best choice.
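A toy calculation makes the risk-aversion point concrete. The square-root utility below is a standard textbook stand-in for diminishing marginal value, not a claim about how any particular person weighs money.

```python
import math

# A sure $50 versus a coin flip for $100: identical expected value, different appeal.
sure_thing = 50.0
gamble = [(0.5, 100.0), (0.5, 0.0)]   # (probability, payoff)

expected_value = sum(p * x for p, x in gamble)   # 50.0, same as the sure thing

def utility(x: float) -> float:
    return math.sqrt(x)   # concave: each extra dollar is worth a little less

expected_utility_of_gamble = sum(p * utility(x) for p, x in gamble)   # 0.5 * 10 = 5.0
utility_of_sure_thing = utility(sure_thing)                           # about 7.07

print(expected_value, expected_utility_of_gamble, utility_of_sure_thing)
# Both options are "worth" $50 on average, yet the sure thing scores higher under a
# concave utility. That gap is the risk aversion FUD leans on: make the familiar
# option feel like the sure thing and the competitor feel like the gamble.
```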
These tendencies aren't bugs in human cognition. They're features that evolved because they were adaptive in our ancestral environment. But they can be exploited, and FUD is one of the most effective exploitation techniques ever devised.
The phrase may have entered the technology lexicon in 1975, when Gene Amdahl needed a name for what IBM's salespeople were doing to his company. But the underlying strategy is as old as persuasion itself. As long as decisions are made under uncertainty—which is to say, as long as decisions are made—there will be someone trying to tilt them by amplifying doubt.
The best defense is awareness. FUD works in darkness. Shine a light on it, name it when you see it, and it loses much of its power. Nobody ever got fired for buying IBM equipment—but plenty of people made worse decisions because they believed that.