Why Can Computers Understand Natural Language?

Philosophy and Technology 34 (1):149-214 (2020)

Abstract

The present paper aims to draw out the conception of language implied in the technique of word embeddings, which has supported the recent development of deep neural network models in computational linguistics. After a preliminary presentation of the basic functioning of elementary artificial neural networks, we introduce the motivations and capabilities of word embeddings through one of their pioneering models, word2vec. To assess the remarkable results of the latter, we inspect the nature of its underlying mechanisms, which have been characterized as the implicit factorization of a word-context matrix. We then discuss the customary association of the “distributional hypothesis” with a “use theory of meaning,” often invoked to justify the theoretical basis of word embeddings, and contrast it with the theory of meaning stemming from those mechanisms, viewed through the lens of matrix models (such as vector space models and distributional semantic models). Finally, we trace the principles of their possible consistency back through Harris’s original distributionalism to the structuralist conception of language of Saussure and Hjelmslev. Beyond giving non-specialist readers access to the technical literature and state of the art in the field of natural language processing, the paper seeks to reveal the conceptual and philosophical stakes involved in the recent application of new neural network techniques to the computational treatment of language.
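
To make the matrix-factorization view mentioned in the abstract concrete for non-specialist readers, the following sketch (not taken from the paper) builds an explicit word-context matrix from a toy corpus, weights it with positive pointwise mutual information (PPMI), and factorizes it with a truncated SVD to obtain dense word vectors. This is the count-based counterpart of what word2vec is said to do implicitly. The corpus, the window size, the embedding dimension, and the helper `similarity` are all illustrative choices, not anything prescribed by the article.

```python
# Illustrative sketch only: an explicit word-context matrix, PPMI weighting,
# and truncated SVD, i.e. the "matrix model" view of word embeddings that the
# abstract contrasts with word2vec's implicit factorization.
import numpy as np

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat chased a dog".split(),
]
window = 2  # symmetric context window (an arbitrary choice for the toy example)

# 1. Count word-context co-occurrences within the window.
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                counts[idx[w], idx[sent[j]]] += 1

# 2. Reweight raw counts as positive pointwise mutual information (PPMI).
total = counts.sum()
p_wc = counts / total
p_w = counts.sum(axis=1, keepdims=True) / total
p_c = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore"):
    pmi = np.log(p_wc / (p_w * p_c))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

# 3. Factorize the PPMI matrix with a truncated SVD; the rows of U * sqrt(S)
#    serve as dense word embeddings.
U, S, _ = np.linalg.svd(ppmi)
dim = 2  # embedding dimension, kept tiny for the toy corpus
embeddings = U[:, :dim] * np.sqrt(S[:dim])

def similarity(a, b):
    """Cosine similarity between the vectors of two words."""
    va, vb = embeddings[idx[a]], embeddings[idx[b]]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12)

print(similarity("cat", "dog"))  # words used in similar contexts end up close
```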

Links

PhilArchive





Similar books and articles

Chomskyan Theory of Language: A Phenomenological Re-evaluation. Shiva Rahman - 2019 - Journal of the Indian Council of Philosophical Research 36 (2):281-294.
Representation and inference for natural language: a first course in computational semantics. Patrick Blackburn - 2005 - Stanford, Calif.: Center for the Study of Language and Information. Edited by Johannes Bos.
Natural Language Semantics and Computability. Richard Moot & Christian Retoré - 2019 - Journal of Logic, Language and Information 28 (2):287-307.
Meaning and Linguistic Sound: Why Are Sounds Imposed on Our Minds? Abolfazl Sabramiz - 2013 - Dialogue: Journal of Phi Sigma Tau 56 (1):14-23.
The language of word meaning. Pierrette Bouillon & Federica Busa (eds.) - 2001 - New York: Cambridge University Press.

