Does the UK’s liver transplant matching algorithm systematically exclude younger patients?

By Arvind Narayanan, Angelina Wang, Sayash Kapoor, and Solon Barocas

Predictive algorithms are used in many life-or-death situations. In the paper Against Predictive Optimization, we argued that the use of predictive logic for making decisions about people has recurring, inherent flaws, and should be rejected in many cases.

A wrenching case study comes from the UK’s liver allocation algorithm, which appears to discriminate by age, with some younger patients seemingly unable to receive a transplant, no matter how ill. What went wrong here? Can it be fixed? Or should health systems avoid using algorithms for liver transplant matching?

How the liver allocation algorithm works

The UK nationalized its liver transplant system in 2018, replacing previous regional systems where livers were prioritized based on disease severity.1 When a liver becomes available, the new algorithm uses predictive logic to calculate how much each patient on the national waiting list would benefit from being given that liver. 

Specifically, the algorithm predicts how long each patient would live if they were given that liver, and how long they would live if they didn’t get a transplant. The difference between the two is the patient’s Transplant Benefit Score (TBS). Patients are sorted in decreasing order of the score, and the top patient is offered the liver (if they decline, the next patient is offered, and so on).
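The ranking logic described above can be sketched in a few lines of Python. This is purely illustrative: the patient fields, names, and prediction values below are made up, and the real algorithm derives its two survival predictions from regression models over many clinical variables rather than taking them as given.

```python
# Illustrative sketch of Transplant Benefit Score (TBS) matching.
# The survival predictions are placeholder numbers; in the real system
# they come from statistical models fitted to clinical registry data.

from dataclasses import dataclass


@dataclass
class Patient:
    name: str
    survival_with_transplant: float     # predicted years if given this liver
    survival_without_transplant: float  # predicted years with no transplant


def transplant_benefit_score(p: Patient) -> float:
    """TBS = predicted survival with the liver minus survival without it."""
    return p.survival_with_transplant - p.survival_without_transplant


def offer_order(waiting_list: list[Patient]) -> list[Patient]:
    """Sort patients by decreasing TBS; the liver is offered down this list
    until a patient accepts."""
    return sorted(waiting_list, key=transplant_benefit_score, reverse=True)


patients = [
    Patient("A", 12.0, 4.0),   # TBS = 8.0
    Patient("B", 9.0, 0.5),    # TBS = 8.5
    Patient("C", 15.0, 10.0),  # TBS = 5.0
]
ranking = offer_order(patients)
print([p.name for p in ranking])  # ['B', 'A', 'C']
```

Note that the score rewards the *difference* between the two predictions, not how ill a patient is in absolute terms: patient C above is predicted to live longest with a transplant, yet ranks last because they are also predicted to survive longest without one.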

Given this description, one would expect the algorithm to favor younger patients, since they stand to gain many more decades of life from a transplant than older patients. If the algorithm has the opposite effect, then either the score has been described inaccurately or it is being calculated in a way that departs from that description. We’ll see which one it is. But first, let’s discuss a more basic question.

Why is predictive AI even needed?

Discussions of the ethics of algorithmic decision making often narrowly focus on bias, ignoring the question of whether it is legitimate to use an algorithm in the first place. For example, consider pretrial risk prediction in the criminal justice system. While bias is a serious concern, a deeper question is whether it is morally justified to deny defendants their freedom based on a prediction of what they might do rather than a determination of guilt, especially when that prediction is barely more accurate than a coin flip. 

Organ transplantation is different in many ways. The health system needs to make efficient and ethical use of ...

Read full article on AI Snake Oil →