Deep convolutional neural networks are not mechanistic explanations of object recognition

Synthese 203 (1):1-28 (2024)

Abstract

Given how widely deep convolutional neural networks (DCNNs) are used to model the mechanism of object recognition, it is important to analyse the evidence for their similarity to the brain and their explanatory potential. I focus on one frequent method of comparison, representational similarity analysis, and argue, first, that it underdetermines these models as how-actually mechanistic explanations. Different similarity measures within this framework pick out different mechanisms across DCNNs and the brain to put into correspondence, and there is no way to arbitrate between them in terms of their relevance for object recognition. Second, the degree to which similarity measures underdetermine the models stems from the highly idealised nature of the models themselves, which undermines their status as how-possibly mechanistic explanations of object recognition as well. Thus, building models with greater theoretical consideration and choosing relevant similarity measures may bring us closer to the goal of mechanistic explanation.
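For readers unfamiliar with the method at issue, the sketch below illustrates representational similarity analysis in outline: pairwise dissimilarities among stimulus representations are computed separately for a model layer and a brain region, and the resulting dissimilarity matrices are compared. This is a minimal illustration, not the paper's own analysis; the activation matrices, layer sizes, and choice of Spearman correlation are assumptions made for the sketch. Showing two dissimilarity metrics makes the abstract's underdetermination worry concrete: different metrics can return different model-brain scores, with no principled way to choose between them.

    # Minimal sketch of representational similarity analysis (RSA).
    # All data here are random placeholders; rows = stimuli, columns = units/voxels.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_stimuli = 50
    model_acts = rng.standard_normal((n_stimuli, 512))  # hypothetical DCNN layer
    brain_acts = rng.standard_normal((n_stimuli, 200))  # hypothetical fMRI voxels

    def rdm(acts, metric="correlation"):
        """Representational dissimilarity matrix over stimuli."""
        if metric == "correlation":
            return 1.0 - np.corrcoef(acts)  # 1 - Pearson r between stimulus rows
        if metric == "euclidean":
            diffs = acts[:, None, :] - acts[None, :, :]
            return np.linalg.norm(diffs, axis=-1)  # pairwise Euclidean distances
        raise ValueError(metric)

    def rsa_score(rdm_a, rdm_b):
        """Spearman correlation of the RDMs' upper triangles."""
        iu = np.triu_indices_from(rdm_a, k=1)
        rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
        return rho

    # Different dissimilarity metrics can yield different model-brain scores,
    # which is the underdetermination point raised in the abstract.
    for metric in ("correlation", "euclidean"):
        score = rsa_score(rdm(model_acts, metric), rdm(brain_acts, metric))
        print(f"{metric:>11}: model-brain RSA score = {score:.3f}")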

Links

PhilArchive




Similar books and articles

Structural-parametric synthesis of deep learning neural networks. Sineglazov V. M. & Chumachenko O. I. - 2020 - Artificial Intelligence Scientific Journal 25 (4):42-51.
Some Neural Networks Compute, Others Don't. Gualtiero Piccinini - 2008 - Neural Networks 21 (2-3):311-321.

Analytics

Added to PP: 2024-01-13

Downloads: 8 (#1,323,248)
Last 6 months: 8 (#370,373)

