Large Language Models: Assessment for Singularity

Abstract

The potential for Large Language Models (LLMs) to attain technological singularity—the point at which artificial intelligence (AI) surpasses human intellect and autonomously improves itself—is a critical concern in AI research. This paper explores the feasibility of current LLMs achieving singularity by examining the philosophical and practical requirements for such a development. We begin with a historical overview of AI and intelligence amplification, tracing the evolution of LLMs from their origins to state-of-the-art models. We then propose a theoretical framework to assess whether existing LLM technologies could satisfy the conditions for singularity, with a focus on Recursive Self-Improvement (RSI) and autonomous code generation. We integrate key component technologies, such as Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO), into our analysis, illustrating how these could enable LLMs to independently enhance their reasoning and problem-solving capabilities. By mapping out a potential singularity model lifecycle and examining the dynamics of exponential growth models, we elucidate the conditions under which LLMs might self-replicate and rapidly escalate their intelligence. We conclude with a discussion of the ethical and safety implications of such developments, underscoring the need for responsible and controlled advancement in AI research to mitigate existential risks. Our work aims to contribute to the ongoing dialogue on the future of AI and the critical importance of proactive measures to ensure its beneficial development.
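
As a minimal illustration of the exponential growth dynamics the abstract refers to (a toy sketch under our own assumptions, not the paper's formulation; the symbols I, k, and I_0 are illustrative), suppose a system's capability I(t) improves at a rate proportional to its current level:

    \frac{dI}{dt} = k\, I(t), \qquad I(t) = I_0\, e^{k t}, \quad k > 0.

Under this assumption, any sustained positive self-improvement rate k yields unbounded exponential escalation, with a capability doubling time of \ln 2 / k; conversely, if each improvement cycle delivers diminishing returns (effectively k \to 0), growth saturates rather than diverges.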

Similar books and articles

An Overview of Models of Technological Singularity. Anders Sandberg - 2013 - In Max More & Natasha Vita-More (eds.), The Transhumanist Reader. Oxford: Wiley. pp. 376–394.
The Epistemological Danger of Large Language Models. Elise Li Zheng & Sandra Soo-Jin Lee - 2023 - American Journal of Bioethics 23 (10): 102-104.
Large Language Models and Biorisk. William D'Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10): 115-118.
Holding Large Language Models to Account. Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
Large Language Models and Inclusivity in Bioethics Scholarship. Sumeeta Varma - 2023 - American Journal of Bioethics 23 (10): 105-107.

Author's Profile

Ryunosuke Ishizaki
National Institute of Informatics
