Functional Concept Proxies and the Actually Smart Hans Problem: What’s Special About Deep Neural Networks in Science

Synthese 203 (1):1-39 (2023)

Abstract

Deep Neural Networks (DNNs) are becoming increasingly important as scientific tools, as they excel in various scientific applications beyond what was considered possible. Yet from a certain vantage point, they are nothing but parametrized functions f_θ(x) of some data vector x, and their ‘learning’ is nothing but an iterative, algorithmic fitting of the parameters to data. Hence, what could be special about them as a scientific tool or model? I will here suggest an integrated perspective that mediates between extremes, by arguing that what makes DNNs in science special is their ability to develop functional concept proxies (FCPs): Substructures that occasionally provide them with abilities that correspond to those facilitated by concepts in human reasoning. Furthermore, I will argue that this introduces a problem that has so far barely been recognized by practitioners and philosophers alike: That DNNs may succeed on some vast and unwieldy data sets because they develop FCPs for features that are not transparent to human researchers. The resulting breach between scientific success and human understanding I call the ‘Actually Smart Hans Problem’.

Similar books and articles

Understanding Deep Learning with Statistical Relevance. Tim Räz - 2022 - Philosophy of Science 89 (1):20-41.
Some Neural Networks Compute, Others Don't. Gualtiero Piccinini - 2008 - Neural Networks 21 (2-3):311-321.
A Proxy Culture. Luciano Floridi - 2015 - Philosophy and Technology 28 (4):487-490.
Localization and Intrinsic Function. Charles A. Rathkopf - 2013 - Philosophy of Science 80 (1):1-21.
Structural-Parametric Synthesis of Deep Learning Neural Networks. V. M. Sineglazov & O. I. Chumachenko - 2020 - Artificial Intelligence Scientific Journal 25 (4):42-51.
Mechanisms and Functional Hypotheses in Social Science. Daniel Steel - 2005 - Philosophy of Science 72 (5):941-952.
Functional Relations and Causality in Fechner and Mach. Michael Heidelberger - 2010 - Philosophical Psychology 23 (2):163-172.

Analytics

Added to PP
2023-12-29


Author's Profile

Florian J. Boge
Bergische Universität Wuppertal
