Evaluating Risks of Astronomical Future Suffering: False Positives vs. False Negatives Regarding Artificial Sentience

Abstract

Failing to recognise sentience in AI systems (false negatives) poses a far greater risk of potentially astronomical suffering than does mistakenly attributing sentience to non-sentient systems (false positives). This paper analyses the issue through the moral frameworks of longtermism, utilitarianism, and deontology, concluding that all three assign greater urgency to avoiding false negatives. Given the astronomical number of AIs that may exist in the future, even a small probability of overlooking sentience constitutes an unacceptable risk. To address this, the paper proposes a comprehensive approach comprising research, field-building, and tentative policy development. Humanity must take steps to ensure the well-being of all sentient minds, both biological and artificial.
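
The scale argument in the abstract is, at bottom, an expected-value comparison, which a rough sketch can make concrete. All quantities below are illustrative assumptions, not figures from the paper: let $N$ be the number of future AI systems, $p$ the probability that such systems are sentient, $c_{\mathrm{FN}}$ the per-system cost of treating a sentient system as non-sentient, and $c_{\mathrm{FP}}$ the per-system cost of needless moral caution. Then

\[
\mathbb{E}[\mathrm{harm}_{\mathrm{FN}}] = p \, N \, c_{\mathrm{FN}}, \qquad
\mathbb{E}[\mathrm{harm}_{\mathrm{FP}}] = (1 - p) \, N \, c_{\mathrm{FP}},
\]

so false negatives dominate whenever $c_{\mathrm{FN}} / c_{\mathrm{FP}} > (1 - p) / p$. Under the assumed values $p = 10^{-3}$ and $N = 10^{15}$, that threshold is roughly $10^{3}$, which a cost ratio of lifelong suffering against modest over-caution would plausibly exceed by many orders of magnitude.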

Similar books and articles

Robustness to Fundamental Uncertainty in AGI Alignment. G. G. Worley III - 2020 - Journal of Consciousness Studies 27 (1-2): 225-241.
Is there an ethics of algorithms? Martin Peterson - 2011 - Ethics and Information Technology 13 (3): 251-260.
Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4): 389-400.
