
Prediction market

Based on Wikipedia: Prediction market

Betting on the Future: How Prediction Markets Turn Crowds into Oracles

In 1503, gamblers in Rome placed bets on who would become the next pope. The practice was already old then.

Five centuries later, we're still at it—only now we call it "prediction markets" and dress it up in the language of economics. But strip away the jargon, and the core idea remains beautifully simple: people who put money on their beliefs tend to be more honest than people who merely voice opinions. When your wallet is on the line, virtue-signaling becomes expensive.

The Basic Mechanics

A prediction market works like a stock exchange, except instead of buying shares in companies, you buy shares in outcomes. Will it rain tomorrow? Will a particular movie win an Oscar? Will a certain candidate win an election? Each question gets turned into a contract whose price moves between zero dollars and one dollar, and that price can be read directly as a probability between zero and one hundred percent.

If you buy a contract at sixty cents, you're essentially saying you think the probability of that outcome is higher than sixty percent. If the event happens, you get a dollar. If it doesn't, you lose your sixty cents. The market price, set by the collective buying and selling of thousands of participants, becomes a real-time estimate of probability.
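The arithmetic behind that decision can be sketched in a few lines. This is an illustrative model, not any particular platform's payout rules; the function name and numbers are invented for the example:

```python
def expected_profit(price: float, belief: float, payout: float = 1.0) -> float:
    """Expected profit per contract for a buyer who assigns
    probability `belief` to the event, at market price `price`."""
    return belief * (payout - price) - (1 - belief) * price

# A contract trading at 60 cents implies the market puts the
# probability of the event near 60 percent.
price = 0.60

# A trader who believes the true probability is 75 percent
# expects a positive profit, so buying makes sense...
print(expected_profit(price, belief=0.75))  # positive: buy

# ...while one who believes it's only 50 percent expects a loss.
print(expected_profit(price, belief=0.50))  # negative: stay out
```

Whenever a trader's belief differs from the price, one side of the trade looks profitable to them; that is the mechanism that pulls the price toward the crowd's aggregate estimate.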

This is different from a poll, where people can say whatever sounds good without consequences. It's different from expert forecasting, where a single person's biases and blind spots can dominate. The market aggregates information from everyone willing to put skin in the game.

Why Crowds Beat Experts

In 1906, the British scientist Francis Galton attended a livestock fair where visitors could pay to guess the weight of an ox. About 800 people entered the competition. Galton, who was deeply interested in human intelligence (and, unfortunately, eugenics), collected all the guesses to analyze them.

What he found surprised him. The median guess was 1,207 pounds. The actual weight was 1,198 pounds. The crowd, including butchers, farmers, and random fairgoers, had collectively estimated within one percent of the true value.

No individual came that close.
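Galton's result is easy to reproduce in miniature. The simulation below uses invented noisy guesses, not Galton's original data, but it shows the same effect: the median of many individually unreliable guesses lands far closer to the truth than a typical individual does:

```python
import random
import statistics

random.seed(0)
true_weight = 1198  # pounds, the ox's actual weight

# Simulated crowd of 800: each guess is noisy and individually
# unreliable (illustrative numbers, not Galton's data).
guesses = [true_weight + random.gauss(0, 80) for _ in range(800)]

crowd_estimate = statistics.median(guesses)
crowd_error = abs(crowd_estimate - true_weight)

# The typical individual is off by dozens of pounds;
# the crowd's median is off by only a few.
individual_error = statistics.median(abs(g - true_weight) for g in guesses)

print(f"crowd off by {crowd_error:.1f} lb, "
      f"typical individual off by {individual_error:.1f} lb")
```

The intuition: individual errors point in different directions, so they largely cancel when aggregated, provided the errors are independent rather than shared.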

This phenomenon—now called the wisdom of crowds—has a theoretical foundation in economics. Friedrich Hayek, the Austrian economist, argued in 1945 that no single person or committee could ever possess all the knowledge scattered across society. Markets, he suggested, serve as information-processing systems, aggregating dispersed knowledge through prices.

His colleague Ludwig von Mises made a complementary argument: without market prices, there's no way to rationally allocate resources. This was his famous critique of socialist central planning. Modern economists broadly agree these insights are correct.

Prediction markets extend this logic from goods and services to beliefs about the future. Instead of asking "what is this thing worth?" they ask "what will happen?"

Three Conditions for Collective Intelligence

Not every crowd is wise. Mobs can be foolish. Groupthink can lead organizations off cliffs. So what separates a smart crowd from a dumb one?

James Surowiecki, in his 2004 book The Wisdom of Crowds, identified three essential conditions.

First, diversity of information. Each person should bring something different to the table—different experiences, different data sources, different ways of thinking about the problem. When everyone has the same information, combining their views adds nothing.

Second, independence of decision. People need to form their own opinions before hearing what others think. Once you know what the crowd believes, you're tempted to just go along. This is why secret ballots matter in elections and why prediction market trades are anonymous.

Third, decentralization of organization. There shouldn't be a single authority deciding the answer. The whole point is to harness distributed knowledge that no central planner could access.

Prediction markets, when designed well, satisfy all three conditions. Traders come from diverse backgrounds with different information. They make decisions independently, often without knowing who else is trading. And no one controls the final price—it emerges from the interplay of thousands of individual bets.

A Brief History of Betting on Elections

Prediction markets aren't new. Researchers Paul Rhode and Koleman Strumpf have documented election betting on Wall Street going back to 1884. In those days, newspapers would report betting odds alongside polling data. The trading volume was substantial—Rhode and Strumpf estimate that average turnover per presidential election equaled more than half the total campaign spending.

This tradition largely disappeared in the mid-twentieth century as scientific polling became dominant. But the internet brought it back.

The University of Iowa launched its Iowa Electronic Markets in 1988, one of the first modern online prediction markets. Despite involving real money, it operates as a research project and has been granted special permission by regulators to continue. Over decades of elections, it has generally outperformed polls in predicting final outcomes.

Other ventures followed. Intrade, an Irish platform, let traders bet on everything from elections to Federal Reserve decisions to celebrity scandals. At its peak, Intrade was widely cited in financial news. It shut down in 2013 amid regulatory troubles.

The Hollywood Stock Exchange, launched in 1996, lets people trade virtual money on movie box office performance and awards. It correctly predicted 32 of 39 major Oscar nominees in 2006 and seven of the eight top category winners.

More recently, blockchain technology has enabled decentralized prediction markets. Augur launched on the Ethereum network in 2018, allowing anyone anywhere to create and trade prediction contracts without relying on a central company.

And in October 2024, Kalshi won a landmark lawsuit against the Commodity Futures Trading Commission, allowing it to offer fully regulated election markets in the United States for the first time. The decision may reshape how Americans engage with political forecasting.

Inside the Corporate Crystal Ball

Some of the most interesting uses of prediction markets happen behind closed doors.

Around 1990, employees at Project Xanadu—an early hypertext venture—used an internal prediction market to bet on various outcomes, including the cold fusion controversy that was making headlines at the time. It may have been the first corporate prediction market.

Pharmaceutical giant Eli Lilly uses prediction markets to forecast which experimental drugs will succeed in clinical trials. Drug development is notoriously unpredictable—most candidates fail. But the collective judgment of researchers, marketers, and executives, expressed through betting, has proven more accurate than traditional forecasting methods.

Google has run internal prediction markets since 2005, forecasting product launch dates, office openings, and strategic developments. Hewlett-Packard and Microsoft have conducted similar experiments. The markets let employees signal what they actually believe, rather than what they think management wants to hear.

Best Buy once used an internal market to predict whether a new store in Shanghai would open on time. When the virtual market price dropped, signaling growing pessimism among employees, management investigated and discovered problems they hadn't known about. The store opened late, as the market had predicted—but the advance warning helped the company minimize losses.

Predicting Pandemics

In one remarkable experiment, researchers created a prediction market to forecast influenza outbreaks in Iowa. Healthcare workers volunteered clinical data and traded on contracts related to flu activity.

The market predicted statewide outbreaks two to four weeks in advance.

Think about what that means. Traditional disease surveillance relies on collecting reports from doctors, laboratories, and hospitals—a process that takes time. By the time public health officials confirm an outbreak, it's already spreading. But prediction markets can incorporate soft information: a nurse noticing more patients with flu-like symptoms, a pharmacist seeing a run on cold medicine, a teacher observing unusual absences.

This kind of distributed sensing could transform public health, if anyone could figure out how to scale it.

The Limits of Collective Wisdom

Prediction markets aren't magic. They fail, sometimes spectacularly.

On June 23, 2016, the United Kingdom voted to leave the European Union. Until the votes were actually counted, prediction markets heavily favored "Remain." The prices implied Brexit had perhaps a twenty to twenty-five percent chance of happening. Then it happened.

A few months later, prediction markets showed Hillary Clinton with roughly a seventy percent chance of winning the American presidency. Donald Trump won.

What went wrong?

One problem is that prediction markets are only as good as the information their participants have. If everyone is reading the same polls and listening to the same pundits, the market can't correct for systematic errors in those sources. The diversity of information breaks down.

Another problem is what researchers call the favorite-longshot bias. Markets tend to overestimate the chances of long shots and underestimate the chances of favorites. When an event is far in the future, prices drift toward fifty percent—perhaps because traders don't want to lock up money for long periods on contracts that might not pay off.

There's also the question of who trades. Prediction markets attract people who are interested in the topics being traded. In political markets, that often means political junkies who follow news obsessively. These traders may share similar blind spots. They may live in the same media bubbles. The independence of decision erodes when everyone consumes the same information ecosystem.

Can You Manipulate a Prediction Market?

People have tried.

During the 2004 presidential election, an anonymous trader on Tradesports (an Irish prediction market) sold so many contracts on George W. Bush winning that the price briefly crashed to zero—implying Bush had no chance of victory. This was obviously wrong. Bush was the incumbent, running even with his challenger in most polls.

The manipulation attempt was a "bear raid"—trying to drive down prices through aggressive selling, perhaps hoping others would panic and sell too. It didn't work. Other traders recognized the opportunity and bought the underpriced contracts. The price snapped back within minutes.

Research by Robin Hanson and colleagues at George Mason University suggests that manipulation attempts often backfire. When someone tries to push prices away from their true level, they're essentially handing money to anyone who bets against them. This creates incentives for other traders to correct the manipulation, potentially making the market more accurate than it would otherwise be.

The more liquid a market—meaning the more money changing hands—the harder it becomes to manipulate for any sustained period. Small markets are more vulnerable.

The Terrorism Futures Controversy

In July 2003, the United States Department of Defense unveiled plans for something called the Policy Analysis Market. The idea was to let traders bet on geopolitical events in the Middle East, including the possibility of terrorist attacks.

The reaction was immediate and furious. Senators called it a "terrorism futures market" and expressed horror at the idea of profiting from tragedy. The Pentagon canceled the program within days.

But the critics may have missed the point. The market wasn't designed to reward terrorism—it was designed to predict it. If intelligence agencies wanted to know whether an attack was likely, they could look at what traders who had access to relevant information believed.

Of course, there were legitimate concerns. Could terrorists profit by attacking after betting on their own attacks? Would the market create perverse incentives? These questions deserved serious analysis. Instead, the proposal died in a firestorm of political outrage before anyone could study whether the benefits might outweigh the risks.

The Efficient Market Hypothesis and Its Discontents

Prediction markets rest on the same theoretical foundation as stock markets: the efficient market hypothesis. This idea, associated with economist Eugene Fama, holds that market prices incorporate all available information. If something is known, it's already in the price.

The hypothesis comes in different strengths. The weak form says prices reflect all past trading data. The semi-strong form says they reflect all public information. The strong form says they reflect all information, including insider knowledge.

In reality, markets are not perfectly efficient. There are anomalies, bubbles, and crashes. People are not purely rational calculators of probability. But markets are usually more efficient than any individual, and prediction markets appear to be no exception.

Eric Zitzewitz, an economist at Dartmouth who has studied prediction markets extensively, puts it this way: "Financial markets are generally pretty efficient, and the evidence suggests that the same is true of prediction markets. There's no virtue-signaling in an anonymous market when you're betting."

He adds an important point: "What you're seeing with the market is some average of all of those different opinions, weighted by their willingness to put their money where their mouth is."

That weighting matters. In a poll, everyone's opinion counts equally. In a prediction market, people who are more confident—and more willing to risk money—have more influence. This tends to give informed traders more weight than uninformed ones.
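The difference between the two aggregation rules is simple to see with made-up numbers. Here the forecasts and stakes are invented for illustration; the point is only how stake-weighting shifts the aggregate toward the confident, well-capitalized trader:

```python
# Hypothetical forecasters: (probability estimate, dollars staked).
# A poll weights everyone equally; a market weights by money at risk.
forecasts = [
    (0.50, 10),   # casual participant, small stake
    (0.55, 10),
    (0.80, 200),  # well-informed trader, large stake
    (0.45, 10),
]

# Equal-weighted average, as in a poll.
poll_average = sum(p for p, _ in forecasts) / len(forecasts)

# Stake-weighted average, closer to how a market aggregates.
market_average = (sum(p * stake for p, stake in forecasts)
                  / sum(stake for _, stake in forecasts))

print(f"equal-weighted: {poll_average:.2f}")    # pulled toward the middle
print(f"stake-weighted: {market_average:.2f}")  # pulled toward the big bet
```

Of course, this cuts both ways: if the big-money trader is wrong, the market inherits that error with full weight, which is one reason manipulation and bubbles remain possible.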

Prediction Markets Versus Polls

Which is better for forecasting elections—a prediction market or a poll?

The research suggests prediction markets generally have a slight edge. They update continuously in response to new information, while polls are snapshots that become stale. They force participants to commit resources, filtering out people who don't care enough to engage seriously. And they aggregate information through prices, which may be a more sophisticated method than simply averaging responses.

But there's a twist. Some researchers have found that you can beat prediction markets by asking poll respondents not just what they think will happen, but how confident they are in their answer. When you weight responses by confidence, the resulting forecasts can outperform market prices.

In 2017, researchers at the Massachusetts Institute of Technology developed what they called the "surprisingly popular" algorithm. Instead of just asking people what they think the answer is, they also ask what they think most people will say. When there's a gap—when one answer is more popular than expected—that suggests it's based on special knowledge that most people don't have.

For example, if you ask people whether Philadelphia is the capital of Pennsylvania, most will say yes. (It's not; Harrisburg is.) But if you also ask what they think others will say, knowledgeable people will predict that most people will incorrectly say Philadelphia. The "surprisingly popular" answer is Harrisburg—it gets fewer votes than expected, which suggests the people voting for it know something others don't.
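The decision rule itself is compact: pick the answer whose actual vote share most exceeds the share respondents predicted it would get. A minimal sketch, with illustrative vote shares for the Pennsylvania question (the percentages are invented, not from the original study):

```python
def surprisingly_popular(actual: dict, predicted: dict) -> str:
    """Return the answer whose actual vote share most exceeds
    the share respondents predicted it would receive."""
    return max(actual, key=lambda a: actual[a] - predicted[a])

# "Is Philadelphia the capital of Pennsylvania?" (it isn't; Harrisburg is)
actual_votes = {"yes": 0.65, "no": 0.35}     # how people actually answered
predicted_votes = {"yes": 0.80, "no": 0.20}  # what they expected others to say

# "no" got more votes than respondents predicted it would,
# which suggests the people choosing it know something.
print(surprisingly_popular(actual_votes, predicted_votes))  # prints "no"
```

Note that a simple majority vote would get this question wrong; the gap between actual and predicted shares is what surfaces the minority's special knowledge.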

This kind of meta-question could potentially improve prediction markets, though it's not clear how to implement it in a trading context.

The Psychology of Betting

Why do prediction markets work when they work, and fail when they fail? The answer probably lies in psychology.

Humans are subject to all kinds of cognitive biases. We anchor on initial estimates and adjust insufficiently. We overweight recent information. We see patterns in random noise. We're overconfident in our own judgment and susceptible to peer pressure.

Markets can correct some of these biases by aggregating many different perspectives. Your overconfidence is balanced by someone else's caution. Your anchoring on one number is offset by someone else's different anchor.

But markets can also amplify biases when participants share them. If everyone in the market is reading the same news sources and talking to the same people, their errors will be correlated rather than independent. The wisdom of crowds becomes the folly of the herd.

Speculative bubbles are the most dramatic example. When prices rise, people become optimistic and buy more, driving prices higher still. The feedback loop continues until reality intrudes. Prediction markets are not immune.

Hedging and Insurance

There's another complication that can throw off market prices: hedging.

Suppose you're a financial trader whose bonus depends on economic conditions. If a certain politician wins an election and implements policies that hurt your income, you might buy shares in that politician winning—not because you think it's likely, but as insurance. If your worst-case scenario happens, at least you'll collect on your prediction market bet.

This is entirely rational behavior, but it distorts the market's information-aggregating function. The price no longer purely reflects beliefs about probability; it also reflects people's desire to hedge their other risks.

In traditional financial markets, this kind of hedging is common and well-understood. The price of oil futures, for example, is influenced by airlines and trucking companies hedging against fuel price increases, not just by people betting on what oil will cost. Prediction markets face the same issue whenever the events being predicted are correlated with participants' other economic interests.

The Future of Prediction Markets

We may be entering a golden age of prediction markets.

Kalshi's court victory in 2024 opens the door for regulated election markets in the United States. Blockchain technology enables decentralized markets that no government can easily shut down. Researchers continue refining our understanding of when and why these markets work.

There are still obstacles. Gambling laws restrict prediction markets in many jurisdictions. Regulators worry about manipulation and fraud. And there's the fundamental challenge that some questions may not attract enough traders to generate meaningful prices.

But the potential applications are vast. Climate forecasting. Pandemic preparedness. Geopolitical risk assessment. Scientific replication. Anywhere that good predictions have value—which is almost everywhere—prediction markets offer a tool for aggregating dispersed knowledge.

The Roman gamblers of 1503, betting on papal succession, understood something important: people reveal their true beliefs when money is on the line. Five centuries later, we're still learning how to harness that insight.

This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.