On the Advantages of Distinguishing Between Predictive and Allocative Fairness in Algorithmic Decision-Making

Minds and Machines 32 (4):655-682 (2022)

Abstract

The problem of algorithmic fairness is typically framed as the problem of finding a unique formal criterion that guarantees that a given algorithmic decision-making procedure is morally permissible. In this paper, I argue that this framing is conceptually misguided and that we should replace it with two sub-problems. If we examine how most state-of-the-art machine learning systems work, we notice that there are two distinct stages in the decision-making process. First, a prediction of a relevant property is made. Second, a decision is taken based (at least partly) on this prediction. These two stages have different aims: the prediction aims at accuracy, while the decision aims at allocating a given good in a way that maximizes some context-relative utility measure. Correspondingly, two different fairness issues can arise. First, predictions could be biased in discriminatory ways, meaning that they contain systematic errors for a specific group of individuals. Second, the system’s decisions could result in an allocation of goods that is in tension with the principles of distributive justice. These two fairness issues are distinct problems that require different types of solutions. I provide a formal framework to address both issues and argue that this way of conceptualizing them resolves some of the paradoxes present in the discussion of algorithmic fairness.
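
What follows is a minimal, hypothetical Python sketch (not the paper's formal framework) of the two-stage structure described above: stage one predicts a relevant property, stage two allocates a good on the basis of that prediction, and the two fairness checks are kept separate. The class and function names, the threshold decision rule, and the toy data are illustrative assumptions, not anything specified in the paper.

```python
# Minimal sketch of the two-stage pipeline discussed in the abstract.
# Stage 1 produces a prediction; stage 2 allocates a good based on it.
# Everything here (names, threshold rule, data) is illustrative only.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Individual:
    group: str         # protected-group membership, e.g. "A" or "B"
    true_score: float  # the relevant property (observed after the fact)
    predicted: float   # stage 1: the model's prediction of that property


def allocate(people: List[Individual], threshold: float) -> List[bool]:
    """Stage 2: grant the good when the prediction clears a threshold
    (one possible allocation policy among many)."""
    return [p.predicted >= threshold for p in people]


def prediction_bias_by_group(people: List[Individual]) -> Dict[str, float]:
    """Predictive-fairness check: mean signed prediction error per group.
    A non-zero value for one group indicates systematic over- or
    under-prediction for that group."""
    errors: Dict[str, List[float]] = {}
    for p in people:
        errors.setdefault(p.group, []).append(p.predicted - p.true_score)
    return {g: sum(e) / len(e) for g, e in errors.items()}


def allocation_share_by_group(
    people: List[Individual], decisions: List[bool]
) -> Dict[str, float]:
    """Allocative-fairness check: the fraction of each group that receives
    the good, to be assessed against a principle of distributive justice."""
    counts: Dict[str, Tuple[int, int]] = {}
    for p, granted in zip(people, decisions):
        got, total = counts.get(p.group, (0, 0))
        counts[p.group] = (got + int(granted), total + 1)
    return {g: got / total for g, (got, total) in counts.items()}


if __name__ == "__main__":
    people = [
        Individual("A", 0.70, 0.60),  # group A is under-predicted
        Individual("A", 0.50, 0.40),
        Individual("B", 0.55, 0.65),  # group B is over-predicted
        Individual("B", 0.45, 0.55),
    ]
    decisions = allocate(people, threshold=0.5)
    print(prediction_bias_by_group(people))               # approx {'A': -0.1, 'B': 0.1}
    print(allocation_share_by_group(people, decisions))   # {'A': 0.5, 'B': 1.0}
```

In this toy run, group A's scores are systematically under-predicted (mean error about -0.1) and group A receives a smaller share of the good (0.5 versus 1.0). Correcting the prediction model and revising the allocation rule are two different interventions, which is one way to see why the two fairness questions can be posed and addressed separately.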

Links

PhilArchive



Similar books and articles

Call for papers. [author unknown] - 1999 - Minds and Machines 9 (3):459-459.
Correction to: What Might Machines Mean? Mitchell Green & Jan G. Michel - 2022 - Minds and Machines 32 (2):339-339.
Rawls’s Original Position and Algorithmic Fairness. Ulrik Franke - 2021 - Philosophy and Technology 34 (4):1803-1817.
Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
A Moral Framework for Understanding of Fair ML through Economic Models of Equality of Opportunity. Hoda Heidari - 2019 - Proceedings of the Conference on Fairness, Accountability, and Transparency 1.

Analytics

Added to PP
2022-12-01

Downloads
52 (#312,443)

6 months
36 (#102,239)


Citations of this work

Reconciling Algorithmic Fairness Criteria.Fabian Beigang - 2023 - Philosophy and Public Affairs 51 (2):166-190.

References found in this work

On statistical criteria of algorithmic fairness. Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
Epistemic democracy: Generalizing the Condorcet jury theorem. Christian List & Robert E. Goodin - 2001 - Journal of Political Philosophy 9 (3):277-306.
What is “Race” in Algorithmic Discrimination on the Basis of Race? Lily Hu - 2023 - Journal of Moral Philosophy 21 (1-2):1-26.

View all 8 references