Nonmonotonic Inferences and Neural Networks

Synthese 142 (2):143-174 (2004)

Abstract

There is a gap between two modes of computation: the symbolic mode and the subsymbolic (neuron-like) mode. The aim of this paper is to bridge that gap by viewing symbolism as a high-level description of the properties of (a class of) neural networks. Combining methods of algebraic semantics and nonmonotonic logic, the paper demonstrates that the two ways of viewing cognition can be integrated. The main results are (a) that certain activities of connectionist networks can be interpreted as nonmonotonic inferences, and (b) that there is a strict correspondence between the coding of knowledge in Hopfield networks and the knowledge representation in weight-annotated Poole systems. These results show the usefulness of nonmonotonic logic as a descriptive and analytical tool for studying the emergent properties of connectionist networks. Assuming an exponential development of the weight function, the present account relates to optimality theory, a general framework that aims to integrate insights from symbolism and connectionism. The paper concludes with some speculations about extending the present ideas.
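As a rough illustration only, and not the paper's formal construction, the following Python sketch shows the kind of network activity the abstract refers to: a small Hopfield network stores two patterns by a Hebbian rule, and asynchronous updates settle a partial cue into the nearest stored pattern without ever increasing the energy. That settling process is what can be read as a defeasible (nonmonotonic) inference, since extending the cue with conflicting information can drive the net into a different attractor and thereby retract the earlier "conclusion". The pattern contents, network size, and helper names here are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_hebbian(patterns):
        # Store bipolar (+1/-1) patterns with the standard Hebbian rule.
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0.0)  # no self-connections
        return W / patterns.shape[0]

    def energy(W, s):
        # Hopfield energy; each asynchronous update never increases it.
        return -0.5 * s @ W @ s

    def settle(W, s, max_sweeps=100):
        # Update units asynchronously until a fixed point (an attractor) is reached.
        s = s.copy()
        for _ in range(max_sweeps):
            changed = False
            for i in rng.permutation(len(s)):
                new = 1 if W[i] @ s >= 0 else -1
                if new != s[i]:
                    s[i], changed = new, True
            if not changed:
                break
        return s

    # Two stored patterns (hypothetical "knowledge").
    patterns = np.array([[ 1,  1,  1, -1, -1, -1],
                         [ 1, -1,  1, -1,  1, -1]])
    W = train_hebbian(patterns)

    # A partial cue is completed to the closest stored pattern:
    # the network's "default conclusion" given incomplete evidence.
    cue = np.array([1, 1, -1, -1, -1, -1])
    completed = settle(W, cue)
    print("cue:      ", cue)
    print("completed:", completed, " energy:", energy(W, completed))

With respect to result (b), the weights that encode these stored patterns play a role analogous to the weighted defaults of a weight-annotated Poole system; the sketch above only illustrates the network side of that correspondence.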

Links

PhilArchive

Similar books and articles

Some Neural Networks Compute, Others Don't. Gualtiero Piccinini - 2008 - Neural Networks 21 (2-3):311-321.
Out of their minds: Legal theory in neural networks. [REVIEW] Dan Hunter - 1999 - Artificial Intelligence and Law 7 (2-3):129-151.
Nonmonotonic theories and their axiomatic varieties. Zbigniew Stachniak - 1995 - Journal of Logic, Language and Information 4 (4):317-334.
Nonmonotonic Inconsistency. Charles B. Cross - 2003 - Artificial Intelligence 149 (2):161-178.

Citations of this work

A Computational Learning Semantics for Inductive Empirical Knowledge. Kevin T. Kelly - 2014 - In Alexandru Baltag & Sonja Smets (eds.), Johan van Benthem on Logic and Information Dynamics. Springer International Publishing. pp. 289-337.
Illustrating a neural model of logic computations: The case of Sherlock Holmes' old maxim. Eduardo Mizraji - 2016 - Theoria: Revista de Teoría, Historia y Fundamentos de la Ciencia 31 (1):7-25.
