A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents

Topics in Cognitive Science 2 (3):454-485 (2010)

Abstract

Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI).

Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making.

Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, providing mechanisms capable of explaining how an agent's selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation.
We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we will elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors.
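The decision cycle sketched in the abstract — bottom-up sensory salience competing with top-down situational relevance for access to a global workspace, whose winning content is broadcast and drives action selection — can be caricatured in a few lines. This is a toy illustration under our own assumptions (the `Percept` class, the `moral_weight` field, and the trigger-matching rule are invented here for exposition), not the paper's actual LIDA implementation:

```python
from dataclasses import dataclass

@dataclass
class Percept:
    content: str
    activation: float        # bottom-up sensory salience
    moral_weight: float = 0.0  # top-down ethical relevance (assumed addend)

def cognitive_cycle(percepts, behaviors):
    """One simplified global-workspace cycle: the most salient percept
    wins the workspace, is broadcast, and recruits a matching behavior."""
    # 1. Attention: the percept with the highest combined activation
    #    (sensory salience plus moral relevance) reaches the workspace.
    winner = max(percepts, key=lambda p: p.activation + p.moral_weight)
    # 2. Broadcast / 3. Action selection: the first behavior whose
    #    trigger appears in the broadcast content is selected.
    for trigger, action in behaviors:
        if trigger in winner.content:
            return action
    return "deliberate further"

percepts = [
    Percept("obstacle ahead", activation=0.6),
    Percept("person in danger", activation=0.5, moral_weight=0.4),
]
behaviors = [("obstacle", "swerve"), ("person", "stop and assist")]
print(cognitive_cycle(percepts, behaviors))  # -> stop and assist
```

The point of the caricature is the one the abstract makes: ethically relevant factors enter the same attention-and-broadcast competition as any other situational data, so moral choice falls out of the general action-selection mechanism rather than a separate module.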


Author Profiles

Colin Allen
University of Pittsburgh

Citations of this work

Moral Status and Intelligent Robots.John-Stewart Gordon & David J. Gunkel - 2021 - Southern Journal of Philosophy 60 (1):88-117.
Artificial Moral Agents Within an Ethos of AI4SG.Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
Building Moral Robots: Ethical Pitfalls and Challenges.John-Stewart Gordon - 2020 - Science and Engineering Ethics 26 (1):141-157.
Consciousness and ethics: Artificially conscious moral agents.Wendell Wallach, Colin Allen & Stan Franklin - 2011 - International Journal of Machine Consciousness 3 (01):177-192.

View all 23 citations

References found in this work

Principles of Biomedical Ethics.Tom L. Beauchamp - 1979 - New York: Oxford University Press. Edited by James F. Childress.
The Principles of Psychology.William James - 1890 - London, England: Dover Publications.
Elements of Episodic Memory.Endel Tulving - 1983 - Oxford University Press.

View all 54 references