Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use

Science and Engineering Ethics 27 (1):1-15 (2021)

Abstract

In the present article, I will advocate caution against developing artificial moral agents (AMAs), based on the notion that the utilization of preliminary forms of AMAs will potentially feed back negatively on the human social system and on human moral thought itself and its value, e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments, and diminishing the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economical use. I will base my arguments on two thought experiments. The first deals with the potential to generate a replica of an individual's moral stances with the purpose of increasing what I term 'moral efficiency'. Hence, as a first risk, the unregulated utilization of premature AMAs in a neoliberal capitalist system is likely to disadvantage those who cannot afford 'moral replicas' and to further reinforce social inequalities. The second thought experiment deals with the idea of a 'moral calculator'. As a second risk, I will argue that, even as devices equally accessible to all and aimed at augmenting human moral deliberation, 'moral calculators' as preliminary forms of AMAs are likely to diminish the breadth and depth of the concepts employed in moral arguments. Again, I base this claim on the observation that the currently dominant economic system rewards increases in productivity; such gains in efficiency, however, will mostly stem from relying on the outputs of 'moral calculators' without further scrutiny. Premature AMAs will cover only a limited scope of moral argumentation, and over-reliance on them will therefore narrow human moral thought. As a third risk, I will argue that an increased disregard for the interior of the moral agent may ensue, a trend that can already be observed in the literature.

Links

PhilArchive



Similar books and articles

Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
Moral Machines? Michael S. Pritchard - 2012 - Science and Engineering Ethics 18 (2):411-417.
Artificial Moral Agents: An Intercultural Perspective. Michael Nagenborg - 2007 - International Review of Information Ethics 7 (9):129-133.
Robot Morals and Human Ethics. Wendell Wallach - 2010 - Teaching Ethics 11 (1):87-92.
Artificial Moral Agents Within an Ethos of AI4SG. Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
A Challenge for Machine Ethics. Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
The Human Side of Artificial Intelligence. Matthew A. Butkus - 2020 - Science and Engineering Ethics 26 (5):2427-2437.

Analytics

Added to PP
2021-01-26

Downloads
20 (#784,799)

6 months
6 (#572,748)
