Stochastic terrorism
Based on Wikipedia: Stochastic terrorism
The King's Wish
In December 1170, King Henry II of England was furious. Thomas Becket, the Archbishop of Canterbury and once his closest friend, had become an infuriating obstacle to royal power. In a moment of exasperation, the king uttered something like: "Will no one rid me of this turbulent priest?"
Four knights heard him. They rode to Canterbury Cathedral. They murdered Becket at the altar.
Henry II never ordered the killing. He never named the knights. He never specified a time or place. Yet his words—broadcast to an audience primed to act on his desires—produced a dead archbishop. The king could claim innocence. After all, it was just a rhetorical question.
This medieval incident captures something that modern scholars have given a technical name: stochastic terrorism. It's the idea that certain kinds of public speech, repeated loudly enough and to enough people, can make violence statistically likely without anyone ever directly commanding it.
What "Stochastic" Actually Means
The word "stochastic" comes from the Greek stochastikós, meaning "aiming" or "guessing." In mathematics and science, a stochastic process is one that involves randomness—you can predict the overall pattern but not the specific outcome.
Think of it like this: if you roll a fair die once, you cannot predict whether you'll get a three or a five. But if you roll it a thousand times, you can predict with great accuracy that each number will appear roughly one-sixth of the time. The individual result is unpredictable. The aggregate pattern is not.
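That intuition is easy to make concrete. Here is a minimal sketch in Python (the numbers are illustrative, not drawn from any study): a single roll is unpredictable, but across ten thousand rolls each face settles near one-sixth.

    # Minimal sketch of the die-roll intuition: a single roll is unpredictable,
    # but aggregate frequencies settle near 1/6 each.
    import random
    from collections import Counter

    random.seed(42)  # fixed seed so the sketch is reproducible

    one_roll = random.randint(1, 6)  # no way to predict this particular value
    many_rolls = [random.randint(1, 6) for _ in range(10_000)]
    counts = Counter(many_rolls)

    print("Single roll:", one_roll)
    for face in range(1, 7):
        print(f"Face {face}: {counts[face] / len(many_rolls):.3f}")  # each lands near 0.167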
Stochastic terrorism applies this logic to violence. The argument goes: if you broadcast hostile, dehumanizing rhetoric to millions of people, you cannot predict who will become violent, or when, or how. But you can predict that someone probably will.
The speaker never needs to say "go kill that person." They just need to say, over and over, that certain people are dangerous enemies, existential threats, traitors, vermin. Eventually, someone in the audience decides to take action.
From Risk Models to Political Discourse
The term "stochastic terrorism" has two distinct origin stories, which helps explain why people sometimes talk past each other when using it.
The first origin is technical. In 2002, a risk analyst named Gordon Woo published an article in The Journal of Risk Finance introducing "a stochastic terrorism model." Woo was trying to do for terrorism what actuaries had done for hurricanes and earthquakes: build mathematical models that could predict the probability of attacks, even if the specific attacks remained unknowable in advance. He looked at publicity cycles, copycat effects, and how the overall "state of the system" influenced the likelihood of violence. This was quantitative, academic work aimed at helping insurers and security agencies assess risk.
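To give a flavor of what frequency modeling looks like in this domain, here is a toy sketch, not Woo's actual model: assume the number of attacks in a given period follows a Poisson distribution whose expected count rises with a made-up "publicity" factor. Every number and functional form below is an assumption chosen purely for illustration.

    # Toy illustration only, NOT Woo's model: treat the number of attacks in a
    # period as Poisson-distributed, with a rate that rises with publicity.
    import math

    def p_at_least_one_attack(base_rate: float, publicity: float) -> float:
        """P(one or more attacks in the period) under these invented assumptions."""
        lam = base_rate * (1.0 + publicity)  # invented link between publicity and rate
        return 1.0 - math.exp(-lam)          # Poisson: P(N >= 1) = 1 - e^(-lambda)

    print(p_at_least_one_attack(base_rate=0.05, publicity=0.0))  # ~0.05
    print(p_at_least_one_attack(base_rate=0.05, publicity=3.0))  # ~0.18

The point of such a model is not the particular numbers but the structure: the analyst estimates how conditions shift the rate, then reads off probabilities, much as actuaries price hurricane risk.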
The second origin is political. In 2011, a blogger writing under the pseudonym "G2geek" on the progressive website Daily Kos reframed the concept entirely. Instead of focusing on the terrorists themselves, this new usage centered on the speakers—public figures who use mass media to generate violence they can later disavow. In this framing, the "stochastic terrorist" isn't the person who pulls the trigger. It's the person who pulls the rhetorical strings.
This second usage took off. By the early 2020s, the term had migrated from obscure blog posts into mainstream journalism, academic papers, and political debate. A spot-check of Google Scholar found only 22 uses of the term from 1900 through 2019. From 2020 through 2022, there were 108—nearly five times as many in three years as in the previous century.
How the Circuit Works
Retired FBI profiler Molly Amman and forensic psychologist J. Reid Meloy have developed a detailed model of how stochastic terrorism operates. They describe it as a circuit with three components: originators, amplifiers, and receivers.
Originators are public figures or organizations who deploy hostile rhetoric against identified out-groups. Crucially, they avoid explicit calls for violence. They don't say "go attack these people." Instead, they describe the targets as existential threats, as enemies of the nation, as subhuman, as dangers that must be stopped. Sometimes they use jokes or coded language that provides plausible deniability. "I'm just asking questions." "It was obviously sarcasm." "I never told anyone to do anything."
Amplifiers are the media platforms and networks that repeat and spread these messages. In the age of social media, amplification can be explosive. Research has shown that false information spreads faster and farther than true information online—falsehoods are about 70 percent more likely to be reshared, and accurate stories take roughly six times longer to reach comparable audiences. Echo chambers form where the same messages bounce around, intensifying with each repetition.
Receivers are the audience members who consume this content. Most will not act on it. But some subset—impossible to identify in advance—will internalize the messages until they reach a personal threshold for action. They may have pre-existing grievances or mental health challenges, or they may simply be more susceptible to the rhetoric. When they act, they act alone, without coordination from the originator.
This is the key insight: no conspiracy is required. No secret orders need to be passed. The originator can genuinely not know who will act or when. But if they keep broadcasting inflammatory rhetoric to large audiences, the model predicts that violence becomes statistically more likely.
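The arithmetic behind "statistically more likely" is simple, even if the real-world parameters are unknowable. If each receiver independently acted with some tiny probability p, the chance that at least one of N receivers acts would be 1 - (1 - p)^N, which climbs rapidly with audience size. The numbers below are invented purely to show the shape of the curve.

    # Back-of-the-envelope sketch with invented numbers: if each receiver
    # independently acts with tiny probability p, the chance that at least one
    # of N receivers acts is 1 - (1 - p)^N.
    def p_at_least_one(p: float, n: int) -> float:
        return 1.0 - (1.0 - p) ** n

    for audience in (1_000, 100_000, 10_000_000):
        print(f"{audience:>10,}  {p_at_least_one(1e-7, audience):.4f}")
    # ~0.0001 at a thousand, ~0.0100 at a hundred thousand, ~0.6321 at ten million

The independence assumption is of course a simplification; real audiences interact and reinforce one another, which is part of why the aggregate risk resists precise estimation.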
The Psychology of the Crowd
Amman and Meloy draw on the work of Cornell psychiatry professor Otto F. Kernberg to explain a phenomenon they call "poliregression"—a term combining "political" and "regression."
In psychology, regression refers to reverting to more primitive mental states under stress. Poliregression describes how crowds can shift from relatively benign gatherings into something more dangerous. The crowd begins in a state of "narcissistic dependency"—they admire a leader and look to that leader for guidance. Under the right conditions, they can shift into a "paranoid posture" that divides the world sharply into us and them.
In this paranoid state, the crowd identifies an external enemy. They come to see that enemy as an imminent, existential threat. Violence becomes rationalized as "defensive"—we must strike first, or they will destroy us. Democratic norms get overridden. Binary thinking takes hold. Time seems to compress: we must act now.
Amman and Meloy argue this helps explain paradoxes like police officers attacking other police officers during the January 6, 2021, attack on the United States Capitol. Some participants had planned in advance and came prepared for violence. But others appear to have been swept up in a crowd dynamic that shifted from a rally into something else—a "regressed" state in which normal inhibitions dissolved.
The Rhetorical Toolkit
Scholars have identified specific persuasive techniques commonly associated with stochastic outcomes.
Dehumanization is perhaps the most powerful. When targets are described as vermin, as disease, as animals, it becomes psychologically easier for some listeners to contemplate violence against them. Dr. James Angove has cited the labeling of COVID-19 as "the China virus" as an example of dehumanizing rhetoric—it essentialized an entire ethnic group as a disease vector, contributing to subsequent xenophobic violence.
Scapegoating focuses diffuse grievances onto specific targets. Economic anxiety, cultural change, personal disappointment—all of it gets attributed to the designated enemy.
The threat-fear-solution script presents a danger as urgent and existential, then offers action as the only response. This is an ancient persuasive technique, but modern media amplifies its reach.
Coercive legitimization elevates the speaker's authority while simultaneously delegitimizing opponents. The speaker becomes the only trustworthy source; critics are dismissed as corrupt, crazy, or treasonous.
Implicature and "joking" allow speakers to communicate violent ideas while maintaining deniability. The wink, the nudge, the "I'm just saying"—these let audiences understand the real message while providing legal and rhetorical cover.
The Legal Problem
Here's the fundamental tension: stochastic terrorism, as a concept, describes something that may not be illegal.
In the United States, the governing precedent is Brandenburg v. Ohio, a 1969 Supreme Court decision that set an extremely high bar for prosecuting speech. Under Brandenburg, the government can only punish advocacy that is "directed to inciting or producing imminent lawless action and is likely to incite or produce such action."
Note those requirements: the speech must be directed at producing action, the action must be imminent, and it must be likely. Stochastic terrorism, by its nature, typically fails all three tests. The speaker isn't directing anyone to do anything specific. The action may not occur for weeks or months. And while violence may be statistically likely at a population level, it's not likely that any particular listener will act.
The "true threats" doctrine offers another legal avenue. Speech counts as a true threat when the speaker purposefully communicates a serious intent to commit unlawful violence. But this doctrine also has limitations. Many statements that might contribute to stochastic terrorism aren't threats in any direct sense—they're characterizations, warnings about enemies, expressions of grievance.
This creates a gap. Behavior that scholars describe as stochastic terrorism may be protected speech under the First Amendment. The analytic label identifies a risk mechanism, not a crime.
The Evidentiary Challenge
Even if someone wanted to prosecute stochastic terrorism, proving it would be extraordinarily difficult.
Bart Kemper, a forensic engineering researcher at the University of Louisiana at Lafayette, has written extensively about this problem. In a 2025 feature for IEEE Reliability Magazine, he argued that anyone trying to prove stochastic terrorism in court would need to use machine learning and artificial intelligence to analyze the violent actor's entire social media history—every post they engaged with, every account they followed, every rabbit hole they went down.
Even then, you'd face the problem of distinguishing decisive influence from background noise. If someone who committed violence had watched a hundred hours of inflammatory content from a dozen different sources, how do you prove which source—if any—was the proximate cause? Under American rules of evidence, you'd need validated models and quantified uncertainty. You'd need to meet the Daubert standard, which requires that expert testimony be based on sufficient facts, reliable methods, and proper application of those methods.
Kemper warns that many popular uses of the term "stochastic terrorism" amount to what lawyers call ipse dixit—"he himself said it"—assertions that something is true simply because an authority claims it's true, without rigorous proof.
This doesn't mean the concept is useless. It means the concept's value lies in describing a pattern, not in providing a legal cause of action.
What's Different from Regular Incitement
Traditional incitement is direct: someone tells a crowd to attack, and the crowd attacks. There's a clear line from speech to action. The speaker intended violence; the listeners delivered it.
Stochastic terrorism describes something more diffuse. The speaker may genuinely not intend any specific act of violence. They may even condemn violence when it occurs. But their rhetoric, repeated at scale, creates conditions where violence becomes more probable.
Terrorist organizations have understood this for years. Al-Qaeda and the Islamic State have long used propaganda to inspire "lone wolf" attackers—people who have no direct contact with the organization but who absorb its messaging and decide to act. The organizations then claim credit for attacks they never planned. The internet simply made this strategy available to anyone with a platform.
Prevention Over Prosecution
Given the legal and evidentiary challenges, most scholarly work on stochastic terrorism focuses on prevention rather than punishment.
Some researchers advocate "prebunking" or "inoculation"—exposing people to weak versions of manipulative arguments before they encounter the real thing, much like a vaccine exposes the immune system to weakened pathogens. The idea is to build psychological resistance to propaganda.
Others focus on "counterspeech" that explains manipulative tactics, helping audiences recognize when they're being played.
Platform governance is another lever. Social media companies can adjust algorithms to reduce amplification of inflammatory content, label or remove false information, and disrupt echo chambers. This is controversial—it raises questions about who decides what counts as harmful speech—but it's one of the few interventions that operates at the scale of the problem.
A 2021 report from Arizona State University's Threatcasting Lab argued that addressing stochastic terrorism resembles public health more than traditional counterterrorism. The authors recommended containment of harmful narratives, improved attribution of online amplification, and resilience programs to help communities resist radicalization. Think less "hunting terrorists" and more "preventing epidemics."
Watching for Warning Signs
On the receiver end—the people who might actually commit violence—threat assessment experts have developed tools to identify warning signs.
The Terrorist Radicalization Assessment Protocol-18, or TRAP-18, identifies late-stage "pathway" behaviors that predict action. These include: research and planning, acquiring weapons, adopting a "warrior" or pseudo-commando self-image, expressing affinity with previous attackers, and exhibiting a "last-resort" time imperative—a sense that action must happen now or never.
Interestingly, mere ideological fixation is not a strong predictor. Lots of people become fixated on extreme ideas without ever becoming violent. The discriminating indicators are behavioral, not ideological: when someone shifts from consuming content to preparing for action.
The Limits of the Concept
Critics raise important concerns about how "stochastic terrorism" gets deployed in public discourse.
The term can become a way to blame speakers for actions they didn't commit and may not have wanted. Kemper points out that terrorist organizations like Al-Qaeda use media to encourage attacks and then claim credit—that's a clear case of the phenomenon Woo originally described. But the popular usage often assigns culpability to people who deny any connection to violence and may even condemn it. That's a significant leap.
There's also a risk of concept creep. If any heated political rhetoric can be labeled "stochastic terrorism," the term loses its analytical value. It becomes just another way of calling your opponents dangerous, rather than a precise description of a specific mechanism.
The scholarly community continues to debate where to draw boundaries. What distinguishes stochastic terrorism from ordinary political conflict? How much amplification is required? What kinds of rhetoric count? These questions don't have settled answers.
Different Countries, Different Rules
Legal frameworks vary dramatically across jurisdictions. The United States, with its robust First Amendment protections, sets an extremely high bar for restricting speech. Most rhetoric that might contribute to stochastic terrorism is constitutionally protected.
European countries often take different approaches. Germany's Volksverhetzung law criminalizes incitement to hatred against segments of the population, with penalties including imprisonment. France has laws against provocation to discrimination, hatred, or violence. These frameworks make it easier to prosecute speech that Americans would consider protected.
This creates interesting dilemmas in a globalized media environment. Content that's legal to produce in the United States may be illegal to distribute in Germany. Platforms that operate worldwide must navigate conflicting legal requirements.
What We're Left With
Stochastic terrorism is best understood as a lens, not a verdict.
It describes a real phenomenon: the way mass-mediated hostile rhetoric can elevate the statistical risk of violence, even without direct coordination between speakers and attackers. It provides a framework for thinking about the relationship between inflammatory speech and lone-actor violence. It highlights how modern communication technology—with its capacity for viral amplification and echo-chamber formation—may accelerate dynamics that have existed throughout human history.
But it's not a crime in most legal systems. It's difficult to prove in specific cases. And it can be misused as a rhetorical weapon rather than an analytical tool.
Perhaps the most honest conclusion is that some things can be true without being prosecutable, and dangerous without being illegal. King Henry II's rhetorical question produced a dead archbishop. He wasn't convicted of murder. That doesn't mean his words were innocent.
Understanding stochastic terrorism means grappling with this uncomfortable space—where speech has consequences, but those consequences are probabilistic rather than deterministic, diffuse rather than direct. It means accepting that we may not be able to draw clean lines of legal responsibility while still recognizing that some kinds of rhetoric make violence more likely.
In an age of mass media and algorithmic amplification, that recognition may be the most important thing.