Impact factor
Based on Wikipedia: Impact factor
The Number That Rules Science
Here's a number that can make or break a scientist's career, determine who gets funding, influence who gets hired, and shape what research gets done in the first place: 41.577. That was the impact factor of the journal Nature in 2017, and if you're a researcher hoping to publish there, it represents something like the academic equivalent of getting into an Ivy League school, winning a prestigious award, and receiving a career golden ticket all rolled into one.
The strange thing is, this number was never meant to measure the quality of individual scientists or their work. It was designed as a tool for librarians.
A Librarian's Shortcut
In the mid-twentieth century, university librarians faced a practical problem: which journals should they subscribe to? Academic journals are expensive, shelf space is limited, and nobody could afford to buy everything. Eugene Garfield, founder of the Institute for Scientific Information (ISI) in Philadelphia, came up with an elegant solution in the 1970s. What if you could calculate how often the average article in a journal gets cited by other researchers?
The math is straightforward. Take all the citations a journal received in a given year for articles it published in the previous two years. Divide that by the number of articles it published in those two years. The result is the journal's impact factor.
Consider Nature again. In 2017, articles that Nature had published in 2015 and 2016 were cited a total of 74,090 times. The journal had published 1,782 articles during those two years. Divide 74,090 by 1,782 and you get 41.577. On average, each Nature article from that period received about 42 citations in 2017 alone.
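As a sanity check, the whole calculation fits in a few lines of code. Here is a minimal Python sketch using the Nature figures quoted above; the function name is just for illustration.

```python
def impact_factor(citations_received, citable_items):
    """Two-year impact factor: citations received in one year to items
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_received / citable_items

# Nature's 2017 figures as quoted above: 74,090 citations, 1,782 articles.
print(round(impact_factor(74_090, 1_782), 3))  # 41.577
```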
For a librarian deciding which journals to stock, this was incredibly useful. Journals with higher impact factors presumably contained research that other scientists found valuable enough to reference. The impact factor became a kind of proxy for importance.
From Shelf Selection to Career Evaluation
What happened next reveals something important about how metrics take on lives of their own.
Universities and funding agencies noticed that the impact factor provided a convenient shorthand for journal prestige. If a journal had a high impact factor, the thinking went, then publishing in that journal must indicate high-quality work. And if publishing in high-impact journals indicated quality, then surely you could evaluate researchers by looking at where they published.
The shift happened gradually, then seemed to become universal. By 2007, The Journal of Cell Biology observed that impact factor data "have a strong influence on the scientific community, affecting decisions on where to publish, whom to promote or hire, the success of grant applications, and even salary bonuses."
A 2019 review examined how American and Canadian universities evaluate professors for promotion and tenure. The study found that 40 percent of research-focused universities specifically mentioned the journal impact factor in their official evaluation criteria. The number had become embedded in the formal machinery of academic careers.
Why This Is Problematic
Here's the fundamental issue: the impact factor measures journals, not individual papers.
Within any journal, even one with a stratospheric impact factor, citation counts vary wildly from article to article. Some papers in Nature get cited thousands of times and reshape entire fields. Others receive almost no attention at all. Averaging these together produces a number that tells you nothing about any individual paper.
Eugene Garfield himself warned about this. The inventor of the impact factor explicitly cautioned against its "misuse in evaluating individuals" precisely because of "a wide variation [of citations] from article to article within a single journal." It's a bit like judging a book's quality by the average reviews of all books published by the same press. The information is tangentially related at best.
There's also a deeper statistical problem. The citations within a journal don't follow a normal bell-curve distribution. A few papers get enormous numbers of citations while most get very few. When data is distributed this way, the mean (average) is a poor measure of what's typical. The median would be more honest, but nobody uses median impact factors.
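To see why the mean misleads here, consider a toy example. The citation counts below are invented, but the shape is typical: a few heavily cited papers and a long tail of barely cited ones.

```python
import statistics

# Invented citation counts for one year of a journal's articles:
# a few blockbusters and a long tail of rarely cited papers.
citations = [950, 310, 120] + [8] * 20 + [2] * 50 + [0] * 30

mean = statistics.mean(citations)      # what the impact factor reflects
median = statistics.median(citations)  # what a "typical" paper looks like

print(f"mean = {mean:.1f}, median = {median}")
# The handful of blockbusters drags the mean far above what most
# articles in the journal actually achieve.
```

With these made-up numbers the mean lands around 16 while the median is 2; the "average article" that the impact factor describes barely exists.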
The Apples and Oranges Problem
Impact factors can't be compared across different fields, and this matters enormously when the same metric is used to evaluate researchers across a university.
The reason is simple: different disciplines have different citation practices. In the biological sciences, around 5 to 8 percent of all citations to a paper happen in the first two years after publication. In mathematics and physics, that figure is only 1 to 3 percent. Mathematical work often takes years or decades to be fully absorbed and cited by the field.
This means that a biology journal will naturally have a much higher impact factor than a mathematics journal of equivalent importance within its field. A mathematician publishing groundbreaking work might place it in a journal with an impact factor of 3, while a biologist doing routine work might publish in journals with impact factors of 10 or 15. Compare their journals side by side and the mathematician looks unproductive. The metric is simply measuring how fast different fields cite recent work.
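A back-of-the-envelope calculation makes the distortion concrete. The numbers below are invented, and the impact factor is treated loosely as the citations an average article collects inside the two-year window.

```python
# Invented numbers: two equally influential journals whose fields
# cite at different speeds. Only citations arriving within the
# two-year window count toward the impact factor.
lifetime_citations_per_article = 100   # same long-run influence

biology_share_in_window = 0.07   # ~5-8% of citations arrive within two years
math_share_in_window = 0.02      # ~1-3% for mathematics and physics

biology_if_proxy = lifetime_citations_per_article * biology_share_in_window
math_if_proxy = lifetime_citations_per_article * math_share_in_window

print(f"{biology_if_proxy:.1f} vs {math_if_proxy:.1f}")  # 7.0 vs 2.0
```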
Gaming the System
When careers depend on a number, people find ways to manipulate that number. Journal editors are no exception.
One strategy is publishing more review articles. Review articles summarize and synthesize existing research rather than presenting new findings, and they tend to get cited more heavily because they serve as convenient reference points. A journal can boost its impact factor by shifting toward reviews, even if that means publishing less original research.
Another approach targets the denominator of the impact factor equation. Remember, the calculation divides citations by the number of "citable items." If a journal can reclassify some of its content as non-citable—editorials, letters, corrections—those pieces won't count against the denominator. But if they do get cited, those citations still count in the numerator. Negotiations between journals and the company that calculates impact factors over what counts as "citable" have resulted in impact factor variations of more than 300 percent for some journals.
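The arithmetic of that move is easy to see with invented numbers:

```python
# Invented numbers: how reclassifying front matter shifts the ratio.
citations = 1_000      # counted in the numerator regardless of item type
articles = 200         # research articles and reviews ("citable items")
front_matter = 100     # editorials, letters, corrections

# Counting the front matter as citable inflates the denominator.
if_all_counted = citations / (articles + front_matter)

# Reclassifying it as non-citable shrinks the denominator, while any
# citations those pieces receive still count in the numerator.
if_reclassified = citations / articles

print(f"{if_all_counted:.2f} -> {if_reclassified:.2f}")  # 3.33 -> 5.00
```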
Timing matters too. Papers published earlier in the calendar year have more time to accumulate citations before the annual impact factor is calculated. Some journals have learned to frontload their most promising papers into January and February.
Then there's outright manipulation. In 2007, the journal Folia Phoniatrica et Logopaedica, a specialist publication with an impact factor of 0.66, published an editorial that cited every single article the journal had published in the previous two years. The editor was protesting what he called the "absurd scientific situation" created by impact factor obsession. His protest worked—the journal's impact factor jumped to 1.44—but the journal was subsequently excluded from the Journal Citation Reports for two years as punishment.
Coercive Citation
Perhaps the most troubling manipulation involves direct pressure on authors.
Coercive citation occurs when a journal editor tells an author, essentially: "We'll publish your paper, but first you need to add some citations to articles from our journal." The goal is to boost the journal's citation count and thus its impact factor. The added citations often have nothing to do with the paper's actual content.
A 2012 survey found that one in five researchers in economics, sociology, psychology, and business had experienced coercive citation demands. The practice was more common at lower-impact journals, perhaps because those journals were more desperate to improve their numbers. Major business journal editors eventually banded together to publicly denounce the practice, though it continues to occur.
The Privilege Paradox
There's a particularly cruel irony in how impact factors affect younger and less privileged researchers.
Senior scientists with established reputations can afford to publish in less prestigious venues when it suits them. They're known quantities; their work will be read and cited regardless of where it appears. Junior researchers, especially those from institutions or countries without strong academic networks, don't have this luxury. The impact factor of their publication venue may be the only signal of quality that hiring committees recognize.
This creates what might be called the privilege paradox. The researchers who are most harmed by the narrow focus on high-impact journals—those without existing reputation or connections—are also the ones who can least afford to ignore the metric. They must chase high-impact publications even when doing so means abandoning interesting research questions that might not result in flashy, publishable findings.
A 2017 study of life scientists found that "everyday decision-making practices" were "highly governed by pressures to publish in high-impact journals." This affects not just where research gets published but what research gets done in the first place. As one analysis put it, "risky, lengthy, and unorthodox projects rarely take center stage" when researchers are constantly optimizing for impact factor.
The Business of Prestige
It's worth noting who actually calculates and controls the impact factor.
Garfield's Institute for Scientific Information was acquired by Thomson Scientific & Healthcare in 1992, becoming Thomson ISI. In 2016, Thomson Reuters spun off and sold the operation to Onex Corporation and Baring Private Equity Asia, who created a new company called Clarivate. The Journal Citation Reports, which publishes impact factors annually, is a proprietary product. The underlying data used to calculate impact factors isn't fully accessible to independent researchers.
Users with access to the Web of Science database can count the "citable items" a journal published. But the citation counts used in the actual impact factor calculation come from a separate, restricted database. The commonly used Journal Impact Factor is, quite literally, a proprietary number that external users cannot independently verify.
This means that a metric with enormous influence over academic careers and research directions is controlled by a private corporation with strong incentives to maintain that influence. The more important the impact factor becomes, the more valuable Clarivate's products are.
Pushing Back
Criticism of the impact factor has grown louder over the past two decades, and some institutions are beginning to respond.
By 2010, major national and international research funding organizations were issuing statements that numerical indicators like the impact factor should not be considered measures of quality. Various declarations and manifestos—most notably the San Francisco Declaration on Research Assessment, or DORA—have called for eliminating impact factor use in hiring, funding, and promotion decisions.
In 2004, the British House of Commons Science and Technology Committee urged the Higher Education Funding Council for England to remind its research assessment panels that they must evaluate "the quality of the content of individual articles, not the reputation of the journal in which they are published." This seems like an obvious point, but it required parliamentary intervention to make.
Some researchers have called for more sophisticated metrics that might capture research quality more accurately. Others argue that the entire project of quantifying research quality is misguided, and that the obsession with metrics reflects problematic changes in how universities are managed and funded—what critics describe as the influence of neoliberal politics on academia.
The Persistence of a Flawed Measure
Despite years of criticism, the impact factor persists. Why?
Part of the answer is simple inertia. Evaluation systems already use the impact factor, and changing those systems requires coordinated effort across many institutions. A university that unilaterally stopped considering impact factors could find itself at a disadvantage in recruiting, since candidates may prefer institutions that still reward high-impact publications.
Part of the answer is convenience. Impact factors are easy to calculate and compare. Reading and evaluating the actual content of research takes expertise and time. In a world where committees must make quick decisions about hiring, tenure, and funding, a simple number is seductive even if everyone knows it's flawed.
And part of the answer is that the impact factor does correlate with something real, even if that something isn't individual paper quality. Journals with high impact factors do tend to be more selective, have more rigorous peer review, and publish more influential work on average. The problem isn't that the impact factor is meaningless—it's that it's being used for purposes far beyond what it can actually measure.
A Number's Long Shadow
What began as a practical tool for librarians in the 1970s has become a central organizing principle of academic life. The impact factor determines not just which journals libraries subscribe to but which research gets done, which scientists get jobs, and which ideas receive attention.
The founder of the metric warned against using it to evaluate individuals. Statisticians have explained why it's technically flawed. Researchers have documented how it distorts scientific practices. National funding bodies have issued statements against its misuse. And yet it endures, shaping the careers of scientists who may have never read its original definition.
This is what happens when a convenient number fills an inconvenient need. Evaluating research quality is genuinely difficult. It requires expertise, time, and judgment. The impact factor offers an escape from that difficulty—a single figure that promises to do the work of evaluation for you. That promise is false, but it's exactly what busy administrators and committee members want to hear.
The next time you read about a breakthrough published in a "prestigious" journal, it might be worth asking: prestigious according to whom? And measured how? The answer, it turns out, involves a surprisingly simple formula, a long history of mission creep, and a lot of unintended consequences.