Can Deep CNNs Avoid Infinite Regress/Circularity in Content Constitution?

Minds and Machines 33 (3):507-524 (2023)

Abstract

The representations of deep convolutional neural networks (CNNs) are formed by generalizing over similarities and abstracting from differences, in the manner of the empiricist theory of abstraction (Buckner, Synthese 195:5339–5372, 2018). The empiricist theory of abstraction has long been argued to entail an infinite regress and a circularity in content constitution (Husserl, Logical Investigations, Routledge, 2001). This paper argues that these entailments hold a fortiori for deep CNNs. Two theses result: deep CNNs require supplementation by Quine’s “apparatus of identity and quantification” in order to (1) achieve concepts and (2) represent objects, as opposed to “half-entities” corresponding to similarity amalgams (Quine, Quintessence, Cambridge, 2004, p. 107). Similarity amalgams are also called “approximate meaning[s]” (Marcus & Davis, Rebooting AI, Pantheon, 2019, p. 132). Although Husserl inferred the “complete abandonment of the empiricist theory of abstraction” (and hence, a fortiori, of deep CNNs) from the infinite regress and circularity arguments examined in this paper, I argue that the statistical learning of deep CNNs may be incorporated into a Fodorian hybrid account that supports Quine’s “sortal predicates, negation, plurals, identity, pronouns, and quantifiers,” which are representationally necessary to overcome the regress/circularity in content constitution and to achieve objective (as opposed to similarity-subjective) representation (Burge, Origins of Objectivity, Oxford, 2010, p. 238). I take my point of departure from Yoshimi’s (Frontiers in Psychology, 2011) attempt to explain Husserlian phenomenology with neural networks, but diverge from it in light of these arguments and consequently propose a two-system view, which converges with Weiskopf’s proposal (“Observational Concepts,” The Conceptual Mind, MIT, 2015, pp. 223–248).
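To make the mechanism at issue concrete, the following is a minimal sketch, written in PyTorch (an assumption; the paper names no framework), of the two operations the abstract attributes to deep CNNs: convolution generalizes similarities by scoring each input patch against learned templates, and pooling abstracts from differences by discarding where a similarity occurred. All class names, layer sizes, and parameters are illustrative, not taken from the paper.

    import torch
    import torch.nn as nn

    class SimilarityAbstractor(nn.Module):
        """Toy CNN illustrating empiricist abstraction: each stage
        generalizes similarities (convolution) and abstracts from
        differences (pooling)."""

        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1),   # similarity of each patch to 8 learned templates
                nn.ReLU(),
                nn.MaxPool2d(2),                             # abstracts from small positional differences
                nn.Conv2d(8, 16, kernel_size=3, padding=1),  # similarities among lower-level similarities
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),                     # abstracts from all remaining spatial differences
            )

        def forward(self, x):
            # The output is a 16-dimensional "similarity amalgam": on the
            # paper's view, an "approximate meaning", not yet a concept or
            # an object representation.
            return self.features(x).flatten(1)

    img = torch.randn(1, 1, 28, 28)        # a stand-in 28x28 grayscale input
    amalgam = SimilarityAbstractor()(img)  # tensor of shape (1, 16)

Note that nothing in this pipeline supplies identity conditions or quantification: two inputs mapped to nearby amalgams are thereby represented as similar, not as the same object. That is the gap the paper’s Quinean “apparatus of identity and quantification” is meant to fill.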

Similar books and articles

Errata. [author unknown] - 1999 - Minds and Machines 9 (3):457-457.
Erratum. [author unknown] - 2004 - Minds and Machines 14 (2):279-279.
Book Reviews. [REVIEW] [author unknown] - 1997 - Minds and Machines 7 (1):115-155.
Call for Papers. [author unknown] - 1999 - Minds and Machines 9 (3):459-459.
Editor’s Note. [author unknown] - 2003 - Minds and Machines 13 (3):337-337.
Book Reviews. [REVIEW] [author unknown] - 2004 - Minds and Machines 14 (2):241-278.
Instructions for Authors. [author unknown] - 1998 - Minds and Machines 8 (4):587-590.
Volume Contents. [author unknown] - 1998 - Minds and Machines 8 (4):591-594.
Editor’s Note. [author unknown] - 2001 - Minds and Machines 11 (1):1-1.
Book Reviews. [REVIEW] [author unknown] - 1997 - Minds and Machines 7 (2):289-320.
Erratum. [author unknown] - 1997 - Journal of Applied Non-Classical Logics 7 (3):473-473.
Correction to: What Might Machines Mean? Mitchell Green & Jan G. Michel - 2022 - Minds and Machines 32 (2):339-339.

Analytics

Added to PP: 2023-07-19
Downloads: 167 (#116,264)
Last 6 months: 122 (#33,008)


Author's Profile

Jesse Lopes
Boston College

Citations of this work

No citations found.


References found in this work

Origins of Objectivity. Tyler Burge - 2010 - Oxford, GB: Oxford University Press.
The Origin of Concepts. Susan Carey - 2009 - New York: Oxford University Press.
Concepts: Where Cognitive Science Went Wrong. Jerry A. Fodor - 1998 - Oxford, GB: Oxford University Press.
Doing Without Concepts. Edouard Machery - 2009 - New York: Oxford University Press.
Logical Investigations. Edmund Husserl - 2000 - New York: Routledge. Edited by Dermot Moran.
