Inventing the Methods Section
By Andrew Hunt
When the researchers at Google DeepMind unveiled AlphaFold3 in Nature in May 2024, they did something controversial. Instead of releasing the code to enable other researchers to verify and build upon their protein structure prediction model, they restricted access via a web server.
The computational biology community erupted. More than 1,000 researchers signed an open letter condemning the decision as a failure to follow scientific norms. Roland Dunbrack, a computational structural biologist who reviewed the paper, called the decision “an incredible disservice to science.” The backlash worked, and DeepMind released the code in November 2024.
When it comes to wet lab research, however, we’ve been withholding our “code” for centuries with no outcry. Unlike computer code, which captures a model’s process in machine-readable format, biological protocols operate through layers of human interpretation and tacit knowledge. Seemingly trivial details, such as the brand of plastic tubes used, are often lost in translation from bench to page and can hinder attempts to reproduce the results.
In theory, we already have a solution to this problem. Critical experimental details should be provided in the Methods section of scientific papers (part of the standard IMRaD structure: Introduction, Methods, Results, and Discussion), where researchers outline procedures in sufficient detail for others to evaluate, replicate, and build upon their work. And yet, the canonical Methods section is flawed. In 2021, the Center for Open Science found that none of the 193 cancer experiments they examined were “described in sufficient detail to design a replication without seeking clarifications from the original authors.”
Such findings illustrate science’s systematic failure to value and communicate the “how” of research. The scientific community celebrates discoveries but often treats the experiments that generate them as mere housekeeping, relegating their methods to supplementary materials or omitting them entirely.
Insufficiently detailed methods limit science’s ability to build cumulatively. When researchers spend months trying to reproduce something that should take days, or when improved techniques circulate only among well-connected labs, time is wasted rebuilding foundations rather than advancing the scientific frontier.
This isn’t a modern problem, either. Scientists have struggled to transfer capability for centuries, as it is usually more difficult than transferring concepts: it requires capturing “tacit” knowledge and equipping new human hands to execute it. Furthermore, our primary scientific communication tool, the journal article, has increasingly been optimized to stake claims and present results, rather than to help researchers reproduce and build on the work.
