Gain-of-function research
Based on Wikipedia: Gain-of-function research
In 2011, two teams of scientists did something that terrified the world: they took a bird flu virus that had killed hundreds of people but couldn't readily spread between humans, and they made it airborne.
The experiments worked. The modified virus could now float through the air from one infected ferret to another, carried on microscopic droplets of coughs and sneezes. Some called it the most dangerous research ever conducted. Others called it essential to preventing the next pandemic.
This is the strange and contentious world of gain-of-function research—experiments that deliberately make pathogens more dangerous in order to understand them better. It's a field where the line between breakthrough and catastrophe can feel impossibly thin.
What Gain-of-Function Actually Means
The term sounds technical, but the concept is straightforward. Every organism—from bacteria to humans—has genes that produce proteins, and those proteins do specific jobs. "Gain-of-function" means giving an organism a new ability it didn't have before, or enhancing an ability it already possesses.
Consider influenza B, a common flu virus. It can only infect two species on Earth: humans and harbor seals. If scientists introduced a genetic mutation that allowed influenza B to also infect rabbits, that would be a gain-of-function experiment. The virus gained a function—the ability to infect a new species—that it previously lacked.
Why would anyone want to do this? Because understanding which genetic changes enable a virus to jump between species helps scientists develop medicines that block exactly those changes. It's like studying how a burglar picks locks so you can build unpickable locks.
The opposite of gain-of-function is, predictably, loss-of-function research. These experiments delete or disable genes to see what happens when something stops working. Both approaches are fundamental tools in biology. But only one keeps biosecurity experts awake at night.
The Nightmare Scenario
Here's the fear: What if a virus engineered to be more dangerous escaped from a laboratory?
This isn't paranoia. Accidents happen in even the most secure facilities. In 2014, the United States Centers for Disease Control and Prevention—the CDC, the nation's premier public health agency—experienced what one might charitably call a bad year. Workers at its Atlanta headquarters were accidentally exposed to live anthrax. Six vials of smallpox, a disease officially eradicated in 1980, turned up in a storage room at another government facility, apparently forgotten since the 1950s, and some of them still held viable virus. And a CDC lab accidentally cross-contaminated a relatively benign flu sample with H5N1 avian influenza, then shipped it to another laboratory.
Three incidents. Three different deadly pathogens. One summer.
These mishaps didn't cause outbreaks, but they crystallized fears that had been building for years. If routine accidents could happen with existing pathogens, what might happen with viruses deliberately engineered to be more transmissible?
The Experiments That Started the Debate
The controversy exploded in 2011 with those two ferret studies. Yoshihiro Kawaoka at the University of Wisconsin-Madison and Ron Fouchier at Erasmus University Medical Center in the Netherlands were both investigating the same question: Could the H5N1 bird flu, which had killed roughly 60 percent of its confirmed human cases but spread poorly from person to person, ever evolve to become easily transmissible?
Their method was elegant and unsettling. They infected ferrets with the virus, then manually transferred it from sick ferrets to healthy ones, again and again. Each time the virus replicated, it accumulated small genetic changes. Eventually, after enough passages through ferret lungs, the virus developed the ability to spread through the air.
Why ferrets? Their respiratory systems are remarkably similar to ours, making them the standard model for studying how flu viruses might behave in humans. If a flu can go airborne in ferrets, it could potentially do the same in people.
The experiments revealed something important: the genetic changes needed for airborne transmission were surprisingly few. Just a handful of mutations separated a virus that could only spread through direct contact from one that could waft through a room. This was scientifically valuable—it showed exactly which parts of the virus to monitor for dangerous evolution. But it also felt like publishing a recipe for a biological weapon.
There was a silver lining in the data, though one that received less attention in the ensuing panic. As the virus became more transmissible, it became significantly less deadly. This trade-off between transmissibility and lethality appears to be a general pattern in viral evolution. A virus that kills its host too quickly doesn't get many chances to spread.
The Scientific Community Fractures
The response to these publications was immediate and divided.
An editorial in The New York Times called the experiments an "engineered doomsday." Critics argued that the risks far outweighed any scientific benefits. Marc Lipsitch, an epidemiologist at Harvard's T.H. Chan School of Public Health, became one of the most prominent voices urging extreme caution. He would later help form the Cambridge Working Group, a coalition of scientists who called for halting all research on potential pandemic pathogens until the risks could be properly assessed.
On the other side, proponents argued that understanding how viruses become dangerous was essential to preparing for natural pandemics. Nature, after all, runs its own gain-of-function experiments constantly—every time a virus replicates and mutates in a new host. Better to study these possibilities in controlled laboratory conditions than to be blindsided when they emerge in the wild.
The World Health Organization convened an international consultation. Their conclusion was nuanced: the research contributed meaningfully to public health surveillance but required broader global discussion about oversight. The European Academies Science Advisory Council examined the work and concluded that existing regulations in several European Union countries were adequate for responsible continuation of such research.
In the United States, where regulations had been less stringent than in Europe, the government eventually implemented a new framework called Potential Pandemic Pathogen Care and Oversight, or P3CO. The acronym is appropriately bureaucratic for a system designed to add layers of review to the most dangerous experiments.
China Enters the Picture
The debate intensified in 2013 when a Chinese research group published experiments that critics found even more alarming.
Hualan Chen, director of China's National Avian Influenza Reference Laboratory, led a team that investigated what might happen if a human flu virus and a bird flu virus infected the same cell simultaneously. In nature, this kind of co-infection can allow viruses to swap genetic material—a process called reassortment that has sparked pandemics before.
Chen's team created hybrid viruses combining genes from the 2009 H1N1 pandemic strain with genes from H5N1 bird flu. Some of these chimeras could spread between guinea pigs, demonstrating that certain gene combinations could allow H5N1 to transmit more easily in mammals.
The timing was awkward. These experiments had been conducted before the scientific community agreed to pause H5N1 research following the Kawaoka and Fouchier controversy. Chen's team had essentially continued work that many thought should have stopped.
Simon Wain-Hobson of the Pasteur Institute in Paris called the experiments "appallingly irresponsible." Robert May, a former president of Britain's Royal Society, questioned whether the conclusions justified the risks. Others raised concerns about the biosafety standards at the Harbin Veterinary Research Institute where the work was conducted.
But the research also had defenders. Masato Tashiro, director of the World Health Organization Collaborating Centre on Influenza in Tokyo, described Chen's laboratory as "state of the art." Jeremy Farrar, then directing the Oxford University Clinical Research Unit in Vietnam, called the work "remarkable" and said it demonstrated the "very real threat" posed by H5N1 strains still circulating in Asia and Egypt.
Once again, the modified viruses were less lethal than their parents. The pattern held.
Two Camps, One Goal
By 2014, the scientific community had organized into two distinct groups with contrasting approaches to the same problem.
The Cambridge Working Group, led by Lipsitch, published a consensus statement signed by eighteen founding members that eventually gathered over three hundred signatures from scientists, academics, and physicians. Their position was clear: all work on potential pandemic pathogens should stop until someone could provide an objective, quantitative assessment of the risks. They wanted numbers, not assurances. And they wanted alternative research approaches that didn't involve creating dangerous new viruses.
Shortly after, a countergroup called Scientists for Science emerged with thirty-seven initial signatories and eventually over two hundred supporters. Their position was equally clear: this research was essential, it could be done safely, and existing regulations provided adequate oversight. W. Paul Duprex, a virologist at the University of Pittsburgh, argued that the recent safety incidents were exceptions to an overall excellent record. Better to improve lab safety and oversight, he contended, than to shut down research that could save lives during the next pandemic.
Ian Lipkin, a Columbia University virologist with impeccable credentials in emerging disease research, took the unusual step of signing both statements. "There has to be a coming together of what should be done," he said.
The founders of both groups eventually published a series of letters detailing their discussions. Despite the heated public debate, they found more common ground than the headlines suggested. Everyone agreed that more public education was needed. Everyone agreed that open discussion of risks and benefits was essential. And everyone agreed that sensationalized media coverage—framing the discussion as a "debate" with "opposing sides"—had made things worse. The reality, they wrote, was much more collegial than it appeared.
The Oversight Question
How do you regulate research that could either prevent or cause a pandemic?
Different countries have developed different answers. The United States now requires that certain high-risk experiments undergo review by institutional committees and federal agencies. The National Institutes of Health's Recombinant DNA Advisory Committee examines proposed studies. The European Union has its own Dual Use Coordination Group.
A crucial detail: both American and European regulations mandate that at least one member of the public—someone unaffiliated with the research institution—participate in the oversight process. This reflects a recognition that decisions about potentially civilization-ending research shouldn't be made entirely by the scientists conducting it.
The World Health Organization has developed guidance documents, though they're non-binding. International agreements like the Biological and Toxin Weapons Convention provide some framework, but enforcement remains challenging. Each nation ultimately decides its own policies, which creates obvious gaps. A researcher blocked from conducting an experiment in one country can potentially find a laboratory elsewhere with looser rules.
In October 2014, the U.S. government suspended funding for certain gain-of-function research, though it granted exceptions to seven of the eighteen affected projects. The National Academies of Sciences, Engineering, and Medicine held symposiums in 2014 and 2016 to discuss optimal oversight approaches, bringing together scientists, ethicists, and policymakers from around the world.
Germany's National Ethics Council presented a report to the Bundestag in 2014 calling for national legislation on dual-use research. The European Academies Science Advisory Council formed a working group to examine the issues and develop recommendations. Tentative discussions began about harmonizing American and European approaches.
Progress was being made. Slowly.
The Boston University Controversy
In October 2022, a preprint—a scientific paper not yet peer-reviewed—landed like a bomb in the ongoing debate.
Researchers at Boston University had created a chimeric coronavirus by splicing the spike protein from the Omicron variant onto an ancestral strain of SARS-CoV-2, the virus that causes COVID-19. They then infected mice with various versions of the virus. The original ancestral strain killed all six mice. The Omicron variant killed none of ten mice. The chimera? It killed eight of ten.
The scientific point was legitimate: the researchers wanted to understand why Omicron, despite being far more transmissible than earlier variants, caused less severe disease. The spike protein alone couldn't explain it. Something else in the viral genome—the "mutations outside of spike," as the researchers put it—must account for Omicron's reduced deadliness.
But the headlines were inflammatory. The Daily Mail ran with "Boston University CREATES a new COVID strain that has an 80% kill rate—echoing dangerous experiments feared to have started the pandemic." Facebook eventually flagged the headline as part of its efforts to combat misinformation.
The numbers were accurate but misleading. Yes, the chimera killed 80 percent of mice. But the ancestral strain killed 100 percent. The chimera was actually less lethal than the original virus it was based on. And these were mice genetically engineered to carry the human ACE2 receptor, not ordinary mice, making the results difficult to translate to real-world human risk.
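To make the arithmetic behind those dueling figures explicit, here is a minimal sketch in Python. It is illustrative only: the group labels are shorthand, and the counts are simply the ones reported above, not re-derived from the preprint.

```python
# Mouse mortality counts as summarized above: (deaths, group size).
groups = {
    "ancestral SARS-CoV-2":  (6, 6),   # all six mice died
    "Omicron variant":       (0, 10),  # no deaths
    "Omicron-spike chimera": (8, 10),  # eight of ten died
}

for name, (deaths, total) in groups.items():
    print(f"{name}: {deaths}/{total} dead = {100 * deaths / total:.0f}% mortality")

# Output: 100%, 0%, and 80%. The headline-grabbing 80% belongs to a virus
# that was *less* lethal in these mice than the unmodified ancestral strain.
```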
More substantive criticism focused on oversight. The National Institutes of Health requires extra review for any funded research that could make COVID more virulent or transmissible. Critics charged that this experiment—which combined Omicron's high transmissibility with the ancestral strain's genetic backbone—should have undergone that additional scrutiny. The researchers countered that their experiment didn't qualify as gain-of-function at all, since they hadn't made the virus more dangerous than what already existed in nature.
The NIH initially stated that grants had supported the work, then clarified that the specific experiments described weren't directly funded by NIH money. The semantic distinctions satisfied no one.
The Broader Stakes
What makes gain-of-function research so contentious isn't just the immediate risks. It's the asymmetry of potential outcomes.
If the research succeeds and nothing goes wrong, we gain valuable knowledge about how viruses evolve and how to stop them. We might develop better vaccines, more effective antivirals, improved surveillance systems. These benefits accumulate gradually and are hard to quantify. We'll never know how many outbreaks were prevented by insights from controversial experiments.
If something goes wrong—a lab accident, a theft, a researcher who becomes infected and doesn't realize it—the consequences could be catastrophic and immediate. A pandemic that kills millions. A permanent erosion of public trust in science. International recriminations that make future cooperation impossible.
The problem is that we don't know how to compare these outcomes. How many prevented deaths from future pandemics justify what level of risk? The calculation requires knowing probabilities that nobody can reliably estimate. How likely is a lab accident? How likely is a natural pandemic that this research might help prevent? How deadly would either scenario be? The numbers are guesses wrapped in assumptions surrounded by uncertainty.
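To see why the comparison resists honest quantification, consider a bare-bones expected-value sketch. Every number below is a hypothetical placeholder invented for illustration; none comes from the article or any published risk estimate. That is precisely the point: the conclusion moves wherever the guesses move.

```python
# Purely illustrative expected-value comparison. Every number is a
# hypothetical placeholder, NOT an estimate from the article or any study.

def expected_deaths(probability: float, deaths_if_it_happens: float) -> float:
    """Expected deaths = probability of the event x deaths if it occurs."""
    return probability * deaths_if_it_happens

# Hypothetical downside: a lab accident that seeds a pandemic.
p_lab_accident = 1e-4        # guess: chance per research program
deaths_from_accident = 1e7   # guess: toll if that accident sparks a pandemic

# Hypothetical upside: research insights blunt a natural pandemic.
p_natural_pandemic = 1e-2    # guess: chance the feared pandemic occurs
deaths_averted = 1e5         # guess: deaths the research helps prevent

risk = expected_deaths(p_lab_accident, deaths_from_accident)
benefit = expected_deaths(p_natural_pandemic, deaths_averted)

print(f"Expected deaths caused:  {risk:,.0f}")
print(f"Expected deaths averted: {benefit:,.0f}")
# With these particular guesses the two sides come out equal; nudge any
# input tenfold and the verdict flips, which is exactly the problem the
# text describes.
```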
Some argue this uncertainty itself counsels extreme caution. When you can't quantify the risk of destroying civilization, maybe you shouldn't take it. Others counter that nature is running gain-of-function experiments constantly, and the only question is whether we'll understand the results before or after they kill us.
Where Things Stand
The debate continues. No consensus has emerged on where to draw the line between acceptable and unacceptable research. The COVID-19 pandemic made everything more urgent and more polarized—particularly given the unresolved questions about whether the virus itself might have emerged from a laboratory accident.
What has emerged is a thicket of oversight mechanisms, review committees, and regulatory frameworks that vary by country and sometimes by institution. Whether these structures provide adequate protection is itself contested. Defenders argue they've prevented accidents and encouraged responsibility. Critics argue they're either too lax to prevent disaster or too strict to allow beneficial research.
The scientists themselves remain divided but collegial. The public remains mostly unaware of debates that could shape the future of human civilization. And somewhere in laboratories around the world, researchers continue to probe the boundaries of what viruses can do—sometimes enhancing their capabilities, always hoping they're learning faster than nature is evolving.
The ferrets in those 2011 experiments are long dead. The viruses they carried have been destroyed or frozen. But the questions those experiments raised—about risk, about benefit, about who gets to decide what knowledge is too dangerous to pursue—remain very much alive.