Contents
Showing entries 1–50 of 151.
  1. From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap. Tianqi Kou - manuscript
    Two goals, improving the replicability of Machine Learning research and improving its accountability, have attracted much attention from the AI ethics and Machine Learning communities. Although both goals share transparency-improving measures, they are discussed in different registers: replicability registers with scientific reasoning, whereas accountability registers with ethical reasoning. Given the existing challenge of the Responsibility Gap, that is, the difficulty of holding Machine Learning scientists accountable for Machine Learning harms because they are far from the sites of application, this paper (...)
  2. Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems. Alex John London & Hoda Heidari - manuscript
    The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals. Drawing on Sen and Nussbaum's capability approach, we present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders. Such systems enhance stakeholders' ability to advance their life plans and well-being while upholding their fundamental rights. We characterize two necessary conditions for morally (...)
  3. Algorithmic neutrality. Milo Phillips-Brown - manuscript
    Algorithms wield increasing control over our lives—over which jobs we get, whether we're granted loans, what information we're exposed to online, and so on. Algorithms can, and often do, wield their power in a biased way, and much work has been devoted to algorithmic bias. In contrast, algorithmic neutrality has gone largely neglected. I investigate three questions about algorithmic neutrality: What is it? Is it possible? And when we have it in mind, what can we learn about algorithmic bias?
  4. Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives. Yves Saint James Aquino, Stacy M. Carter, Nehmat Houssami, Annette Braunack-Mayer, Khin Than Win, Chris Degeling, Lei Wang & Wendy A. Rogers - forthcoming - Journal of Medical Ethics.
    Background: There is a growing concern about artificial intelligence (AI) applications in healthcare that can disadvantage already under-represented and marginalised groups (eg, based on gender or race). Objectives: Our objectives are to canvas the range of strategies stakeholders endorse in attempting to mitigate algorithmic bias, and to consider the ethical question of responsibility for algorithmic bias. Methodology: The study involves in-depth, semistructured interviews with healthcare workers, screening programme managers, consumer health representatives, regulators, data scientists and developers. Results: Findings reveal considerable divergent views on three key issues. First, (...)
  5. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context. Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriateness (...)
  6. MinMax fairness: from Rawlsian Theory of Justice to solution for algorithmic bias. Flavia Barsotti & Rüya Gökhan Koçer - forthcoming - AI and Society:1-14.
    This paper presents an intuitive explanation of why and how Rawlsian Theory of Justice (Rawls in A theory of justice, Harvard University Press, Harvard, 1971) provides the foundations for a solution to algorithmic bias. The contribution of the paper is to discuss and show why Rawlsian ideas in their original form (e.g. the veil of ignorance, the original position, and allowing inequalities that serve the worst-off) are relevant for operationalizing fairness in algorithmic decision making. The paper also explains how this leads (...)
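    The minimax objective behind the paper's title can be stated compactly. As a hedged gloss, not necessarily the authors' own formalization: writing $\mathcal{H}$ for the feasible models, $\mathcal{G}$ for the socially salient groups, and $\ell$ for a loss function, MinMax fairness selects
    $$h^{*} = \arg\min_{h \in \mathcal{H}} \; \max_{g \in \mathcal{G}} \; \mathbb{E}\big[\ell(h(X), Y) \mid G = g\big],$$
    that is, among feasible models, the one whose worst-off group fares best. This mirrors the Rawlsian difference principle: inequalities between groups are tolerated only when no alternative improves the position of the worst-off group.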
  7. Ethical assurance: a practical approach to the responsible design, development, and deployment of data-driven technologies. Christopher Burr & David Leslie - forthcoming - AI and Ethics.
    This article offers several contributions to the interdisciplinary project of responsible research and innovation in data science and AI. First, it provides a critical analysis of current efforts to establish practical mechanisms for algorithmic auditing and assessment to identify limitations and gaps with these approaches. Second, it provides a brief introduction to the methodology of argument-based assurance and explores how it is currently being applied in the development of safety cases for autonomous and intelligent systems. Third, it generalises this method (...)
  8. Investigating gender and racial biases in DALL-E Mini Images. Marc Cheong, Ehsan Abedin, Marinus Ferreira, Ritsaart Willem Reimann, Shalom Chalson, Pamela Robinson, Joanne Byrne, Leah Ruppanner, Mark Alfano & Colin Klein - forthcoming - ACM Journal on Responsible Computing.
    Generative artificial intelligence systems based on transformers, including both text-generators like GPT-4 and image generators like DALL-E 3, have recently entered the popular consciousness. These tools, while impressive, are liable to reproduce, exacerbate, and reinforce extant human social biases, such as gender and racial biases. In this paper, we systematically review the extent to which DALL-E Mini suffers from this problem. In line with the Model Card published alongside DALL-E Mini by its creators, we find that the images it produces (...)
  9. Informational richness and its impact on algorithmic fairness. Marcello Di Bello & Ruobin Gong - forthcoming - Philosophical Studies:1-29.
    The literature on algorithmic fairness has examined exogenous sources of biases such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of a computer simulation, we (...)
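    The paper's simulation is not reproduced in this excerpt. The toy sketch below (my own construction, with arbitrary base rates, noise level, and threshold rule, not the authors' setup) merely illustrates the phenomenon under study: enriching the feature set available to a predictor can shift both its accuracy and a group disparity such as the false-positive-rate gap.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 20_000
      group = rng.integers(0, 2, n)          # two socially salient groups
      base = np.where(group == 1, 0.5, 0.3)  # assumed group-specific base rates
      y = rng.random(n) < base               # true binary outcome

      def predict(k):
          """Average k noisy signals of y, then flag the top half of scores."""
          signals = y[:, None] + rng.normal(0.0, 2.0, (n, k))
          score = signals.mean(axis=1)
          return score > np.median(score)

      for k in (1, 5, 25):  # informationally poor vs rich predictors
          y_hat = predict(k)
          accuracy = (y_hat == y).mean()
          fpr = [y_hat[(group == g) & ~y].mean() for g in (0, 1)]
          print(f"k={k:2d}  accuracy={accuracy:.3f}  FPR gap={abs(fpr[0] - fpr[1]):.3f}")

    Richer information (larger k) raises accuracy here; whether it narrows or widens the group gap depends on the data-generating assumptions, which is the kind of question the paper investigates.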
  10. Equal accuracy for Andrew and Abubakar—detecting and mitigating bias in name-ethnicity classification algorithms. Lena Hafner, Theodor Peter Peifer & Franziska Sofia Hafner - forthcoming - AI and Society:1-25.
    Uncovering the world’s ethnic inequalities is hampered by a lack of ethnicity-annotated datasets. Name-ethnicity classifiers (NECs) can help, as they are able to infer people’s ethnicities from their names. However, since the latest generation of NECs rely on machine learning and artificial intelligence (AI), they may suffer from the same racist and sexist biases found in many AIs. Therefore, this paper offers an algorithmic fairness audit of three NECs. It finds that the UK-Census-trained EthnicityEstimator displays large accuracy biases with regards (...)
  11. A Framework for Assurance Audits of Algorithmic Systems. Benjamin Lange, Khoa Lam, Borhane Blili-Hamelin, Jovana Davidovic, Shea Brown & Ali Hasan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    An increasing number of regulations propose the notion of ‘AI audits’ as an enforcement mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently has little to no agreed-upon practices, procedures, taxonomies, and standards. We propose the ‘criterion audit’ as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue (...)
  12. Fair equality of chances for prediction-based decisions. Michele Loi, Anders Herlitz & Hoda Heidari - forthcoming - Economics and Philosophy:1-24.
    This article presents a fairness principle for evaluating decision-making based on predictions: a decision rule is unfair when the individuals directly impacted by the decisions who are equal with respect to the features that justify inequalities in outcomes do not have the same statistical prospects of being benefited or harmed by them, irrespective of their socially salient morally arbitrary traits. The principle can be used to evaluate prediction-based decision-making from the point of view of a wide range of antecedently specified (...)
  13. Algorithmic Profiling as a Source of Hermeneutical Injustice. Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only demonstrate how (...)
  14. Conformism, Ignorance & Injustice: AI as a Tool of Epistemic Oppression. Martin Miragoli - forthcoming - Episteme: A Journal of Social Epistemology.
    From music recommendation to assessment of asylum applications, machine-learning algorithms play a fundamental role in our lives. Naturally, the rise of AI implementation strategies has brought to public attention the ethical risks involved. However, the dominant anti-discrimination discourse, too often preoccupied with identifying particular instances of harmful AIs, has yet to bring clearly into focus the more structural roots of AI-based injustice. This paper addresses the problem of AI-based injustice from a distinctively epistemic angle. More precisely, I argue that the (...)
  15. Understanding user sensemaking in fairness and transparency in algorithms: algorithmic sensemaking in over-the-top platform. Donghee Shin, Joon Soo Lim, Norita Ahmad & Mohammed Ibahrine - forthcoming - AI and Society:1-14.
    A number of artificial intelligence systems have been proposed to assist users in identifying the issues of algorithmic fairness and transparency. These AI systems use diverse bias detection methods from various perspectives, including exploratory cues, interpretable tools, and revealing algorithms. This study explains the design of AI systems by probing how users make sense of fairness and transparency, notions that are hypothetical in nature and lack specific means of evaluation. Focusing on individual perceptions of fairness and transparency, this study examines (...)
  16. An Impossibility Theorem for Base Rate Tracking and Equalised Odds. Rush T. Stewart, Benjamin Eva, Shanna Slank & Reuben Stern - forthcoming - Analysis.
    There is a theorem that shows that it is impossible for an algorithm to jointly satisfy the statistical fairness criteria of Calibration and Equalised Odds non-trivially. But what about the recently advocated alternative to Calibration, Base Rate Tracking? Here, we show that Base Rate Tracking is strictly weaker than Calibration, and then take up the question of whether it is possible to jointly satisfy Base Rate Tracking and Equalised Odds in non-trivial scenarios. We show that it is not, thereby establishing (...)
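    For orientation, the criteria at issue are standardly defined as follows (formulations vary, and the paper's may differ in detail). Let $S$ be the algorithm's score, $Y$ the binary outcome, and $A$ the group attribute:
    Calibration: $\Pr(Y = 1 \mid S = s, A = a) = s$ for all scores $s$ and groups $a$.
    Equalised Odds: $S \perp A \mid Y$, i.e. $\Pr(S = s \mid Y = y, A = a) = \Pr(S = s \mid Y = y, A = a')$ for all $s$, $y$, $a$, $a'$.
    Base Rate Tracking: differences in groups' mean scores track differences in their base rates, $\mathbb{E}[S \mid A = a] - \mathbb{E}[S \mid A = a'] = \Pr(Y = 1 \mid A = a) - \Pr(Y = 1 \mid A = a')$.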
  17. Legitimate Power, Illegitimate Automation: The problem of ignoring legitimacy in automated decision systems. Jake Iain Stone & Brent Mittelstadt - forthcoming - The Association for Computing Machinery Conference on Fairness, Accountability, and Transparency 2024.
    Progress in machine learning and artificial intelligence has spurred the widespread adoption of automated decision systems (ADS). An extensive literature explores what conditions must be met for these systems' decisions to be fair. However, questions of legitimacy -- why those in control of ADS are entitled to make such decisions -- have received comparatively little attention. This paper shows that when such questions are raised theorists often incorrectly conflate legitimacy with either public acceptance or other substantive values such as fairness, (...)
  18. From Explanation to Recommendation: Ethical Standards for Algorithmic Recourse. Emily Sullivan & Philippe Verreault-Julien - forthcoming - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES’22).
    People are increasingly subject to algorithmic decisions, and it is generally agreed that end-users should be provided an explanation or rationale for these decisions. There are different purposes that explanations can have, such as increasing user trust in the system or allowing users to contest the decision. One specific purpose that is gaining more traction is algorithmic recourse. We first propose that recourse should be viewed as a recommendation problem, not an explanation problem. Then, we argue that the capability (...)
  19. Criteria for Assessing AI-Based Sentencing Algorithms: A Reply to Ryberg. Thomas Douglas - 2024 - Philosophy and Technology 37 (1):1-4.
  20. ACROCPoLis: A Descriptive Framework for Making Sense of Fairness. Andrea Aler Tubella, Dimitri Coelho Mollo, Adam Dahlgren, Hannah Devinney, Virginia Dignum, Petter Ericson, Anna Jonsson, Tim Kampik, Tom Lenaerts, Julian Mendez & Juan Carlos Nieves Sanchez - 2023 - Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency:1014-1025.
    Fairness is central to the ethical and responsible development and use of AI systems, with a large number of frameworks and formal notions of algorithmic fairness being available. However, many of the fairness solutions proposed revolve around technical considerations and not the needs of and consequences for the most impacted communities. We therefore want to take the focus away from definitions and allow for the inclusion of societal and relational aspects to represent how the effects of AI systems impact and (...)
  21. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals". Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  22. Big Data as Tracking Technology and Problems of the Group and its Members. Haleh Asgarinia - 2023 - In Kevin Macnish & Adam Henschke (eds.), The Ethics of Surveillance in Times of Emergency. Oxford University Press. pp. 60-75.
    Digital data help data scientists and epidemiologists track and predict outbreaks of disease. Mobile phone GPS data, social media data, or other forms of information updates such as the progress of epidemics are used by epidemiologists to recognize disease spread among specific groups of people. Targeting groups as potential carriers of a disease, rather than addressing individuals as patients, risks causing harm to groups. While there are rules and obligations at the level of the individual, we have to reach a (...)
  23. From AI for people to AI for the world and the universe. Seth D. Baum & Andrea Owe - 2023 - AI and Society 38 (2):679-680.
    Recent work in AI ethics often calls for AI to advance human values and interests. The concept of “AI for people” is one notable example. Though commendable in some respects, this work falls short by excluding the moral significance of nonhumans. This paper calls for a shift in AI ethics to more inclusive paradigms such as “AI for the world” and “AI for the universe”. The paper outlines the case for more inclusive paradigms and presents implications for moral philosophy and (...)
  24. Fairness and Risk: An Ethical Argument for a Group Fairness Definition Insurers Can Use. Joachim Baumann & Michele Loi - 2023 - Philosophy and Technology 36 (3):1-31.
    Algorithmic predictions are promising for insurance companies to develop personalized risk models for determining premiums. In this context, issues of fairness, discrimination, and social injustice might arise: Algorithms for estimating the risk based on personal data may be biased towards specific social groups, leading to systematic disadvantages for those groups. Personalized premiums may thus lead to discrimination and social injustice. It is well known from many application fields that such biases occur frequently and naturally when prediction models are applied to (...)
  25. Reconciling Algorithmic Fairness Criteria. Fabian Beigang - 2023 - Philosophy and Public Affairs 51 (2):166-190.
  26. Yet Another Impossibility Theorem in Algorithmic Fairness. Fabian Beigang - 2023 - Minds and Machines 33 (4):715-735.
    In recent years, there has been a surge in research addressing the question of which properties predictive algorithms ought to satisfy in order to be considered fair. Three of the most widely discussed criteria of fairness are the criteria called equalized odds, predictive parity, and counterfactual fairness. In this paper, I will present a new impossibility result involving these three criteria of algorithmic fairness. In particular, I will argue that there are realistic circumstances under which any predictive algorithm that satisfies counterfactual (...)
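    In their usual formulations (a hedged gloss; equalised odds is stated under entry 16 above), the other two criteria are:
    Predictive parity: $\Pr(Y = 1 \mid \hat{Y} = 1, A = a) = \Pr(Y = 1 \mid \hat{Y} = 1, A = a')$ for all groups $a$, $a'$.
    Counterfactual fairness (Kusner et al. 2017): $\Pr(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a) = \Pr(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a)$, i.e. the prediction's distribution would be unchanged under a counterfactual intervention setting the protected attribute to a different value.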
  27. Knowledge representation and acquisition for ethical AI: challenges and opportunities. Vaishak Belle - 2023 - Ethics and Information Technology 25 (1):1-12.
    Machine learning (ML) techniques have become pervasive across a range of different applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing. Likewise, in the physical world, ML models are critical components in autonomous agents such as robotic surgeons and self-driving cars. Among the many ethical dimensions that arise in the use of ML technology in such applications, analyzing morally permissible actions is both immediate and profound. For example, there is the (...)
  28. The Fair Chances in Algorithmic Fairness: A Response to Holm. Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee any (...)
  29. Correction: The Fair Chances in Algorithmic Fairness: A Response to Holm. Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):339-340.
  30. Egalitarian Machine Learning. Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a variety of normative egalitarian considerations. We (...)
  31. Having Your Day in Robot Court. Benjamin Chen, Alexander Stremitzer & Kevin Tobia - 2023 - Harvard Journal of Law and Technology 36.
    Should machines be judges? Some say no, arguing that citizens would see robot-led legal proceedings as procedurally unfair because “having your day in court” is having another human adjudicate your claims. Prior research established that people obey the law in part because they see it as procedurally just. The introduction of artificially intelligent (AI) judges could therefore undermine sentiments of justice and legal compliance if citizens intuitively take machine-adjudicated proceedings to be less fair than the human-adjudicated status quo. Two original (...)
  32. Disambiguating Algorithmic Bias: From Neutrality to Justice. Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered by conflations (...)
  33. An Epistemic Lens on Algorithmic Fairness. Elizabeth Edenberg & Alexandra Wood - 2023 - EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization.
    In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens proposed herein makes two key contributions that help to reframe and address some of the assumptions underlying inquiries into algorithmic fairness. First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding approaches to algorithmic (...)
  34. (Some) algorithmic bias as institutional bias. Camila Hernandez Flowerman - 2023 - Ethics and Information Technology 25 (2):1-10.
    In this paper I argue that some examples of what we label ‘algorithmic bias’ would be better understood as cases of institutional bias. Even when individual algorithms appear unobjectionable, they may produce biased outcomes given the way that they are embedded in the background structure of our social world. Therefore, the problematic outcomes associated with the use of algorithmic systems cannot be understood or accounted for without a kind of structural account. Understanding algorithmic bias as institutional bias in particular (as (...)
  35. Algorithmic Fairness, Risk, and the Dominant Protective Agency. Ulrik Franke - 2023 - Philosophy and Technology 36 (4):1-7.
    With increasing use of automated algorithmic decision-making, issues of algorithmic fairness have attracted much attention lately. In this growing literature, existing concepts from ethics and political philosophy are often applied to new contexts. The reverse—that novel insights from the algorithmic fairness literature are fed back into ethics and political philosophy—is far less established. However, this short commentary on Baumann and Loi (Philosophy & Technology, 36(3), 45, 2023) aims to do precisely this. Baumann and Loi argue that among algorithmic group fairness (...)
  36. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), very important and urgent. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently (...)
  37. Equalized Odds is a Requirement of Algorithmic Fairness. David Gray Grant - 2023 - Synthese 201 (3).
    Statistical criteria of fairness are formal measures of how an algorithm performs that aim to help us determine whether an algorithm would be fair to use in decision-making. In this paper, I introduce a new version of the criterion known as “Equalized Odds,” argue that it is a requirement of procedural fairness, and show that it is immune to a number of objections to the standard version.
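    As a concrete anchor for the criterion being defended, here is a minimal sketch (my own construction; the paper's modified version of the criterion is not reproduced in the excerpt) that checks the standard Equalized Odds condition, equal true- and false-positive rates across two groups, for binary predictions:

      import numpy as np

      def equalized_odds_gaps(y_true, y_pred, group):
          """Absolute TPR and FPR gaps between groups 0 and 1 (hypothetical helper)."""
          y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
          tpr = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
          fpr = [y_pred[(group == g) & (y_true == 0)].mean() for g in (0, 1)]
          return abs(tpr[0] - tpr[1]), abs(fpr[0] - fpr[1])

      # Toy usage with synthetic labels: gaps near zero satisfy the standard criterion.
      rng = np.random.default_rng(1)
      y = rng.integers(0, 2, 1000)
      a = rng.integers(0, 2, 1000)
      y_hat = ((y + rng.random(1000)) > 0.8).astype(int)
      print(equalized_odds_gaps(y, y_hat, a))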
  38. What we owe to decision-subjects: beyond transparency and explanation in automated decision-making. David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies 2003:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  39. Correction to: Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness. Ben Green - 2023 - Philosophy and Technology 36 (1):1-1.
  40. Artificial intelligence ELSI score for science and technology: a comparison between Japan and the US. Tilman Hartwig, Yuko Ikkatai, Naohiro Takanashi & Hiromi M. Yokoyama - 2023 - AI and Society 38 (4):1609-1626.
    Artificial intelligence (AI) has become indispensable in our lives. The development of a quantitative scale for AI ethics is necessary for a better understanding of public attitudes toward AI research ethics and to advance the discussion on using AI within society. For this study, we developed an AI ethics scale based on AI-specific scenarios. We investigated public attitudes toward AI ethics in Japan and the US using online questionnaires. We designed a test set using four dilemma scenarios and questionnaire items (...)
  41. Measurement invariance, selection invariance, and fair selection revisited. Remco Heesen & Jan-Willem Romeijn - 2023 - Psychological Methods 28 (3):687-690.
    This note contains a corrective and a generalization of results by Borsboom et al. (2008), based on Heesen and Romeijn (2019). It highlights the relevance of insights from psychometrics beyond the context of psychological testing.
  42. The Fairness in Algorithmic Fairness. Sune Holm - 2023 - Res Publica 29 (2):265-281.
    With the increasing use of algorithms in high-stakes areas such as criminal justice and health has come a significant concern about the fairness of prediction-based decision procedures. In this article I argue that a prominent class of mathematically incompatible performance parity criteria can all be understood as applications of John Broome’s account of fairness as the proportional satisfaction of claims. On this interpretation these criteria do not disagree on what it means for an algorithm to be fair. Rather, they express (...)
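    Broome's account, as it is usually rendered (a gloss, not the paper's own statement): fairness requires satisfying claims in proportion to their strength, so for claimants $i$ and $j$ with claim strengths $c_i$ and $c_j$ and satisfaction levels $s_i$ and $s_j$, fairness requires $s_i / s_j = c_i / c_j$; for an indivisible good, proportionality is secured by a lottery that gives each claimant a winning chance proportional to $c_i$. On this reading, the mathematically incompatible performance parity criteria differ over what the relevant claims are, not over what fairness itself is.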
  43. Algorithmic legitimacy in clinical decision-making. Sune Holm - 2023 - Ethics and Information Technology 25 (3):1-10.
    Machine learning algorithms are expected to improve referral decisions. In this article I discuss the legitimacy of deferring referral decisions in primary care to recommendations from such algorithms. The standard justification for introducing algorithmic decision procedures to make referral decisions is that they are more accurate than the available practitioners. The improvement in accuracy will ensure more efficient use of scarce health resources and improve patient care. In this article I introduce a proceduralist framework for discussing the legitimacy of algorithmic (...)
  44. Egalitarianism and Algorithmic Fairness. Sune Holm - 2023 - Philosophy and Technology 36 (1):1-18.
    What does it mean for algorithmic classifications to be fair to different socially salient groups? According to classification parity criteria, what is required is equality across groups with respect to some performance measure such as error rates. Critics of classification parity object that classification parity entails that achieving fairness may require us to choose an algorithm that makes no group better off and some groups worse off than an alternative. In this article, I interpret the problem of algorithmic fairness as (...)
  45. Dirty data labeled dirt cheap: epistemic injustice in machine learning systems. Gordon Hull - 2023 - Ethics and Information Technology 25 (3):1-14.
    Artificial intelligence (AI) and machine learning (ML) systems increasingly purport to deliver knowledge about people and the world. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. To substantiate this claim, I argue that (1) pretrial detention and physiognomic AI systems commit testimonial injustice because their (...)
  46. Predictive policing and algorithmic fairness. Tzu-Wei Hung & Chun-Ping Yen - 2023 - Synthese 201 (6):1-29.
    This paper examines racial discrimination and algorithmic bias in predictive policing algorithms (PPAs), an emerging technology designed to predict threats and suggest solutions in law enforcement. We first describe what discrimination is in a case study of Chicago’s PPA. We then explain their causes with Broadbent’s contrastive model of causation and causal diagrams. Based on the cognitive science literature, we also explain why fairness is not an objective truth discoverable in laboratories but has context-sensitive social meanings that need to be (...)
  47. Are Algorithms Value-Free? Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of (...)
  48. Self-fulfilling Prophecy in Practical and Automated Prediction. Owen C. King & Mayli Mertens - 2023 - Ethical Theory and Moral Practice 26 (1):127-152.
    A self-fulfilling prophecy is, roughly, a prediction that brings about its own truth. Although true predictions are hard to fault, self-fulfilling prophecies are often regarded with suspicion. In this article, we vindicate this suspicion by explaining what self-fulfilling prophecies are and what is problematic about them, paying special attention to how their problems are exacerbated through automated prediction. Our descriptive account of self-fulfilling prophecies articulates the four elements that define them. Based on this account, we begin our critique by showing (...)
  49. Algorithmic Transparency and Manipulation. Michael Klenk - 2023 - Philosophy and Technology 36 (4):1-20.
    A series of recent papers raises worries about the manipulative potential of algorithmic transparency (to wit, making visible the factors that influence an algorithm’s output). But while the concern is apt and relevant, it is based on a fraught understanding of manipulation. Therefore, this paper draws attention to the ‘indifference view’ of manipulation, which explains better than the ‘vulnerability view’ why algorithmic transparency has manipulative potential. The paper also raises pertinent research questions for future studies of manipulation in the context (...)
  50. Algorithmic fairness through group parities? The case of COMPAS-SAPMOC. Francesca Lagioia, Riccardo Rovatti & Giovanni Sartor - 2023 - AI and Society 38 (2):459-478.
    Machine learning classifiers are increasingly used to inform, or even make, decisions significantly affecting human lives. Fairness concerns have spawned a number of contributions aimed at both identifying and addressing unfairness in algorithmic decision-making. This paper critically discusses the adoption of group-parity criteria (e.g., demographic parity, equality of opportunity, treatment equality) as fairness standards. To this end, we evaluate the use of machine learning methods relative to different steps of the decision-making process: assigning a predictive score, linking a classification to (...)
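    The group-parity criteria the paper evaluates are standardly defined as follows (a hedged gloss; the paper's formulations may differ) for a binary prediction $\hat{Y}$, outcome $Y$, and group attribute $A$:
    Demographic parity: $\Pr(\hat{Y} = 1 \mid A = a) = \Pr(\hat{Y} = 1 \mid A = a')$.
    Equality of opportunity: $\Pr(\hat{Y} = 1 \mid Y = 1, A = a) = \Pr(\hat{Y} = 1 \mid Y = 1, A = a')$.
    Treatment equality: equal ratios of false negatives to false positives across groups, $\mathrm{FN}_a / \mathrm{FP}_a = \mathrm{FN}_{a'} / \mathrm{FP}_{a'}$.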