Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias

Communications in Computer and Information Science 1551:323-334 (2022)

Abstract

One of the predominant debates in AI ethics concerns the need to create fair, transparent, and accountable algorithms that do not perpetuate existing social inequities. I offer a critical analysis of Reuben Binns's argument that public reason should be used to address the potentially biased outcomes of machine learning algorithms. Against Binns, I argue that what is ultimately needed is not public reason per se, but an audit of the implicit moral assumptions of the societies within which algorithms are built and applied. Public justification is appealing because it promises to align an algorithm's decision-making outcomes with the core moral values of stakeholders in a constitutional democratic society. My concern is that the common moral principles that form the foundation of public reason are not necessarily neutral: they still express particular moral ideals and normative standards, even when there is moral agreement across society as a whole or among different stakeholders within it. Appealing to such normative standards may therefore still yield biased algorithmic outcomes, since common moral values can remain discriminatory even when they are formed by consensus and even when public reason is applied as a filter on potential algorithmic outcomes. Hence, I argue that the implicit moral norms we take as given in public reasoning must be audited from generation to generation if potential algorithmic bias is to be effectively mitigated.

Links

PhilArchive

Similar books and articles

(Some) algorithmic bias as institutional bias. Camila Hernandez Flowerman - 2023 - Ethics and Information Technology 25 (2):1-10.
Attributability, Accountability, and Implicit Bias. Robin Zheng - 2016 - In Michael Brownstein & Jennifer Saul (eds.), Implicit Bias and Philosophy, Volume 2: Moral Responsibility, Structural Injustice, and Ethics. Oxford, GB: Oxford University Press UK. pp. 62-89.
Disambiguating Algorithmic Bias: From Neutrality to Justice. Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
On algorithmic fairness in medical practice. Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
Algorithmic Accountability and Public Reason. Reuben Binns - 2018 - Philosophy and Technology 31 (4):543-556.
Moral Compromise. David Archard - 2012 - Philosophy 87 (3):403-420.
Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
Algorithmic Political Bias in Artificial Intelligence Systems. Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
Algorithmic Political Bias—an Entrenchment Concern. Ulrik Franke - 2022 - Philosophy and Technology 35 (3):1-6.

Author's Profile

Paige Benton
University of Johannesburg

Citations of this work

No citations found.
