Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making

Abstract

While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This shows in participants’ reliance on AI: AI recommendations and decisions are accepted more often than the human expert’s. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.



Author's Profile

Markus Kneer
University of Graz
