Epistemic injustice
Based on Wikipedia: Epistemic injustice
In 1993, a young Black man named Duwayne Brooks watched his friend Stephen Lawrence get murdered on a London street. When police arrived, Brooks tried desperately to help. He told them what happened. He pointed out where the attackers had fled. He offered to help search the area.
The officers ignored him.
An official inquiry later found that the police "failed to concentrate upon Mr. Brooks and to follow up energetically the information which he gave them." Nobody asked him to accompany them on searches, even though he was the only person who knew where the assailants had last been seen. Nobody properly tried to calm him. Nobody accepted that what he said was true.
The police had a credible eyewitness standing right in front of them, and they chose not to believe him. Why? The inquiry concluded that racial bias played a significant role. Brooks wasn't viewed as someone whose testimony could be trusted—not because of anything he said or did, but because of who he was.
This case became a landmark example of something the British philosopher Miranda Fricker would later call epistemic injustice: being wronged specifically in your capacity as someone who knows things.
The Two Faces of Epistemic Injustice
Fricker coined the term in 1998, and her 2007 book on the subject laid out a framework that has since reshaped how philosophers, sociologists, and legal scholars think about knowledge, power, and fairness. Her core insight was deceptively simple: some people get believed, and some people don't, and this gap isn't random.
Socially privileged groups receive what Fricker calls an "excess of credibility." Not only are they treated as authorities on their own experiences, they're often treated as authorities on everyone's experiences—including those of people very different from themselves. Meanwhile, oppressed groups face a "credibility deficit." Their expertise about their own lives gets dismissed. Their testimony gets doubted. Their knowledge gets erased.
Whether credibility is granted or withheld flows from existing hierarchies and social norms, which are often so deeply embedded in society that even members of marginalized groups may internalize them. You might not trust your own experience because you've been taught, in countless subtle ways, that people like you don't really know what they're talking about.
Fricker identified two distinct forms this injustice takes.
Testimonial Injustice: When Your Word Isn't Good Enough
The first form is testimonial injustice—unfairness related to whether someone's word gets trusted. This is what happened to Duwayne Brooks. It happens when someone is ignored, doubted, or disbelieved not because their claims are implausible or their reasoning is faulty, but because of their identity: their race, gender, sexuality, disability, age, or any other characteristic that triggers prejudice in the listener.
Testimonial injustice occurs, in Fricker's words, when "prejudice causes a hearer to give a deflated level of credibility to a speaker's word."
Notice the mechanism here. The problem isn't that the listener consciously decides to be bigoted. Often it's far more subtle. The listener simply finds the speaker less convincing, less authoritative, less reliable—without ever examining why they feel that way. The prejudice operates below the surface, invisible even to the person harboring it.
This makes testimonial injustice particularly insidious. It's hard to fight something that the perpetrator genuinely doesn't believe they're doing.
Hermeneutical Injustice: When You Can't Even Name What's Happening
The second form is hermeneutical injustice, and it's stranger and in some ways more devastating. The word hermeneutical means "relating to interpretation," and this type of injustice makes someone less able to interpret their own life.
Hermeneutical injustice occurs when your experiences don't fit the concepts available in your language. You suffer something real, but there's no word for it. No framework to understand it. No way to communicate it to others—or even fully to yourself.
The classic example is sexual harassment before the term existed.
Before the 1970s, countless women experienced unwanted sexual attention, pressure, and coercion in their workplaces. They knew something was wrong. They felt violated, uncomfortable, afraid. But what could they call it? The concept of sexual harassment as a recognized phenomenon, with a name and a legal definition and a shared social understanding, simply didn't exist.
A woman in 1965 who faced persistent unwanted advances from her boss had no ready language to describe her situation. She might have struggled to explain it even to herself. Was she overreacting? Was this just how things were? Was there something wrong with her for being bothered by it?
And if she tried to explain it to someone else—a friend, a family member, a lawyer—she faced an even steeper challenge. Without a shared concept, the listener would have difficulty understanding what she was describing, let alone taking it seriously.
Fricker argues this gap in language was no accident. Women had been historically excluded from the activities that shape how we talk about things: scholarship, journalism, publishing, law, academia. The people who had the power to coin terms, define concepts, and set the vocabulary of public life were overwhelmingly men. And men, for the most part, had no reason to create a term for something they weren't experiencing.
Once the phrase "sexual harassment" was introduced, something remarkable happened. Suddenly women could name their experiences. They could recognize that what happened to them had also happened to others. They could organize around a shared concept. They could seek legal remedies for something that now had a definition.
The creation of the term didn't change what had happened to them. But it changed everything about their ability to understand it and communicate it.
The Willful Version
Sometimes marginalized groups fight back against hermeneutical injustice by creating their own interpretive resources—new words, new frameworks, new ways of understanding their experiences. They then try to share these concepts with the broader society.
And sometimes the broader society refuses to listen.
The philosopher Gaile Pohlhaus called this "willful hermeneutical ignorance." It describes situations where more privileged members of society actively resist adopting the interpretive tools that marginalized groups have developed. They don't just lack understanding—they choose not to understand.
Pohlhaus grounds this idea in a view of knowledge as fundamentally social. No one knows things in isolation. We depend on each other to contribute our perspectives, share our experiences, and build collective understanding. When privileged groups use their social and political power to block marginalized groups from contributing to this shared knowledge, they break the system that makes reliable understanding possible.
The philosophers Henry Lara-Steidel and Winston C. Thompson have argued that recent American laws banning the teaching of so-called "divisive concepts" in schools constitute willful hermeneutical injustice. In their analysis, concepts like those found in Critical Race Theory were developed specifically to expand the available interpretive resources—to give people new tools for understanding experiences of racism and marginalization. Banning these concepts from classrooms, they argue, is a deliberate choice to prevent students from acquiring the vocabulary they might need to understand their world.
Whether you agree with that specific application or not, the underlying dynamic is worth recognizing. Sometimes ignorance is not passive. Sometimes it's a choice.
When Others Speak for You
When people experience epistemic injustice—when they're not believed or can't articulate their experiences—someone else typically ends up speaking for them. The philosopher Linda Martín Alcoff has argued that this creates its own dangers.
Whenever anyone speaks about another person, they're creating a representation of that person's needs, desires, and identity. This representation depends on the speaker's interpretation, which is inevitably shaped by their own perspective and social position.
When privileged groups speak on behalf of marginalized groups, the risk of distortion is especially high. The speakers may view the people they're representing as lesser in some way, often without realizing it. Even well-meaning advocates may understand the marginalized group primarily through the lens of their difference from the norm—their "otherness."
This can lead to stereotyping. Consider the popular image of autistic people as possessing extraordinary "savant" abilities—brilliant at mathematics or music or memorization. This stereotype might seem flattering on the surface. But it creates pressure on autistic individuals to demonstrate exceptional abilities, and it constructs those who don't fit the expectation as somehow disappointing or deficient.
More fundamentally, the savant stereotype suggests that autistic people can only be considered equal to neurotypical people if they compensate for their autism with extraordinary gifts. It accepts the premise that autism is inherently a deficit that needs offsetting.
The stereotype emerged largely from representations created by people who aren't autistic, based on limited interactions and dramatic fictional portrayals. It's what happens when one group tells another group's story.
The Policy Problem
Epistemic injustice becomes especially consequential in lawmaking. Policies designed to address problems experienced by specific groups will be only as good as the understanding that informs them.
Consider anti-poverty legislation. Who gets to say what causes poverty? What solutions get proposed? Historically, these decisions have been made primarily by wealthy politicians whose understanding of poverty comes from statistics, briefings, and perhaps occasional staged visits to poor neighborhoods—not from lived experience.
A wealthy policymaker might genuinely want to help poor people. But their understanding of poverty will be filtered through their own experiences and assumptions. They might focus on job training programs while underestimating the role of transportation barriers. They might emphasize personal responsibility while missing the impact of predatory lending practices. They might design solutions that look good on paper but fail because they're built on incomplete understanding.
The people who actually live in poverty have detailed, nuanced knowledge about what their lives are like and what might actually help. But they're rarely asked. And when they do speak, they often face credibility deficits that diminish the weight of their testimony.
This is epistemic injustice translated into political power. The people most affected by a policy have the least influence over what it looks like.
A Distinct Cousin: Epistemological Violence
Related to epistemic injustice but somewhat different is what some scholars call epistemological violence. This typically occurs within academic research, particularly in how data gets interpreted.
Here's the distinction. Epistemic injustice is about being wronged in your capacity as a knower—not being believed, not having language for your experiences, not being consulted. Epistemological violence is about being wronged through theoretical interpretations that construct your group as inferior, even when the data could support other conclusions.
The psychologist Monique Botha has argued that much academic research on Theory of Mind in autistic children constitutes epistemological violence. Theory of Mind refers to the ability to understand that other people have mental states—beliefs, desires, intentions—that may differ from one's own. Some influential studies have concluded that autistic people lack or have deficient Theory of Mind.
But Botha and others point out that the data could be interpreted differently. Perhaps autistic people have a different style of understanding others' minds rather than a deficient one. Perhaps the experimental designs themselves are biased in ways that disadvantage autistic participants. Perhaps the whole framework of Theory of Mind is built on neurotypical assumptions.
When researchers choose the interpretation that frames autistic people as deficient—especially when drawing sweeping conclusions about the entire group from limited studies—they're committing epistemological violence. They're using the authority of academic research to construct a group as inferior.
Before Fricker
Although Miranda Fricker gave epistemic injustice its name in 1998, she wasn't the first to notice the phenomenon.
The scholar Vivian May has traced similar ideas to the civil rights activist Anna Julia Cooper in the 1890s. Cooper argued that Black women were denied full recognition as knowers—that their capacity to produce knowledge about their own lives and the world was systematically dismissed.
In 1988, a decade before Fricker's work, the theorist Gayatri Chakravorty Spivak published a famous essay called "Can the Subaltern Speak?" Spivak described what she called epistemic violence, using the term in a sense related to, but distinct from, how it is used today. Her concern was that subaltern people, those at the bottom of colonial hierarchies, were prevented from speaking for themselves because others claimed to already know what their interests were.
Spivak's essay raised a troubling paradox. When someone from a privileged position tries to "give voice" to the subaltern, they may inadvertently speak over them. But if the subaltern tries to speak directly, they may not be heard—because the very definition of being subaltern is lacking access to the channels of legitimate discourse.
After Fricker: An Expanding Field
Since Fricker's foundational work, scholars have expanded and refined the concept of epistemic injustice in numerous directions.
Kristie Dotson introduced the concepts of "testimonial quieting" (when an audience fails to recognize someone as a knower, so their testimony never registers) and "testimonial smothering" (when a speaker truncates or withholds their own testimony because they expect it won't be competently received). She warned that narrow definitions of epistemic injustice might inadvertently exclude important contributions to the discussion.
José Medina has advocated for accounts of epistemic injustice that incorporate more voices and pay closer attention to context and relationships. Elizabeth Anderson has emphasized the need to address structural causes and structural remedies—not just individual instances of bias but the systems that produce them.
Scholars have also identified related phenomena. "Contributory injustice" occurs when someone's contributions to collective knowledge are dismissed. "Distributive epistemic injustice" refers to unequal access to educational resources and epistemic tools. "Epistemic exploitation" describes situations where marginalized people are unfairly burdened with educating others about their oppression.
The Indian political theorist Rajeev Bhargava has applied the concept to colonialism, describing how colonized peoples were epistemically wronged when colonizers replaced or degraded the concepts and categories those peoples used to understand themselves. More recently, the scholar Sabelo Ndlovu-Gatsheni has used terms like "epistemicide" and "cognitive empire" to describe discrimination against scholars and intellectuals from the Global South.
There's been substantial work connecting epistemic injustice to health and medicine. When patients aren't believed about their symptoms, when certain conditions are dismissed as "not real," when medical research ignores certain populations, the harms are both epistemic and physical. Scholars like Himani Bhakuni, Seye Abimbola, and Soumyadeep Bhaumik have explored these connections, particularly in the context of neglected tropical diseases and calls for decolonizing global health.
Robert Chapman and others have examined the relationship between epistemic injustice and neurodiversity—how the experiences of autistic people and those with other neurological differences are systematically dismissed or distorted.
Genocide Denial as Epistemic Injustice
Some scholars have argued that genocide denial constitutes a form of epistemic injustice. When survivors and descendants of genocide have their history questioned, minimized, or outright denied, they're being wronged in their capacity as knowers. Their testimony about what happened to their communities is dismissed. Their interpretation of events is contested not on evidential grounds but on political or prejudicial ones.
This represents epistemic injustice operating at a civilizational scale—the attempt to erase not just people but the memory of their destruction.
An Open Concept
Gaile Pohlhaus has suggested that epistemic injustice should be treated as an "open concept"—one that continues to evolve as scholars and activists identify new forms and applications. This openness reflects something important about the phenomenon itself. Epistemic injustice is not a fixed, stable category but a lens for seeing certain kinds of wrongs that might otherwise go unnamed.
And that, in a sense, brings us back to where we started. Part of what makes epistemic injustice so powerful as a concept is that it gives us language for something many people have long experienced but struggled to articulate.
You knew something. You tried to tell people. They didn't believe you. Or you experienced something. You couldn't find the words. Now there are words.
The concept of epistemic injustice is itself a hermeneutical resource—a tool for understanding a pattern of wrongdoing that might otherwise remain invisible. Like "sexual harassment" before it, the term doesn't create the phenomenon. But it lets us see it, name it, and maybe do something about it.