Neural networks discover a near-identity relation to distinguish simple syntactic forms

Minds and Machines 16 (2):107-139 (2006)

Abstract

Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both inside and outside the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical and that they use this near-identity relation to distinguish sentences that are consistent or inconsistent with a familiar grammar. Recent simulations that were claimed to show that this model did not really learn these grammars [Vilcu, M., & Hadley, R. F. (2005). Minds and Machines, 15, 359–382] confounded syntactic types with speech sounds and did not perform standard statistical tests of the results.
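The core idea, that a network trained only on grammar-consistent sentences comes to treat duplicate words as nearly identical and can then reject novel inconsistent sentences, can be sketched in a few lines of code. The sketch below is illustrative only, not the cascade-correlation encoder networks used by Shultz and Bale (2001): it assumes random real-valued feature vectors standing in for words, and a simple one-hidden-layer autoencoder trained on ABA sentences. Because every training sentence has identical first and third words, the learned representation captures that near-identity relation, so novel ABA test sentences should reconstruct with visibly lower error than novel ABB ones. All dimensions, parameter values, and encodings are assumptions chosen for the demonstration.

# Illustrative sketch only: NOT the cascade-correlation model of Shultz and
# Bale (2001). An autoencoder trained solely on ABA sentences learns the
# subspace in which the first and third word vectors are identical, so novel
# ABA test sentences reconstruct better than novel ABB ones.
import numpy as np

rng = np.random.default_rng(0)
DIM = 5       # assumed length of each word's feature vector
HIDDEN = 10   # wide enough for the 2 * DIM-dimensional ABA subspace

def sentence(a, b, pattern="ABA"):
    # Concatenate three word vectors into one sentence vector.
    return np.concatenate([a, b, a if pattern == "ABA" else b])

# Hypothetical vocabulary: random vectors stand in for speech-sound features.
words = rng.uniform(-0.5, 0.5, size=(12, DIM))
train = np.array([sentence(a, b) for a in words[:6] for b in words[6:]])

# One-hidden-layer autoencoder, trained by batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.1, (3 * DIM, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, 3 * DIM)); b2 = np.zeros(3 * DIM)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

LR = 1.0
for _ in range(5000):
    h, out = forward(train)
    g_out = 2.0 * (out - train) / train.size   # gradient of MSE w.r.t. output
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)      # backprop through tanh
    W2 -= LR * h.T @ g_out;  b2 -= LR * g_out.sum(axis=0)
    W1 -= LR * train.T @ g_h; b1 -= LR * g_h.sum(axis=0)

def mean_error(X):
    return np.mean((forward(X)[1] - X) ** 2)

# Novel words the network has never seen: the consistent (ABA) sentences
# should show clearly lower reconstruction error than the inconsistent (ABB).
novel = rng.uniform(-0.5, 0.5, size=(4, DIM))
aba = np.array([sentence(a, b, "ABA") for a in novel[:2] for b in novel[2:]])
abb = np.array([sentence(a, b, "ABB") for a in novel[:2] for b in novel[2:]])
print(f"novel ABA (consistent) error:   {mean_error(aba):.4f}")
print(f"novel ABB (inconsistent) error: {mean_error(abb):.4f}")

Exact numbers vary with the random seed, but the gap between the two reconstruction errors is a rough analogue of the familiarity/novelty preference measured in the infant studies: consistent novel sentences lie in the subspace the network has learned, while inconsistent ones do not.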



Citations of this work

Connectionism. James Garson & Cameron Buckner - 2019 - Stanford Encyclopedia of Philosophy.
