Testing Ordinary Meaning

Harvard Law Review 134 (2020)

Abstract

Within legal scholarship and practice, among the most pervasive tasks is the interpretation of texts. And within legal interpretation, perhaps the most pervasive inquiry is the search for “ordinary meaning.” Jurists often treat ordinary meaning analysis as an empirical inquiry, aiming to discover a fact about how people understand language. When evaluating ordinary meaning, interpreters rely on dictionary definitions or patterns of common usage, increasingly via “legal corpus linguistics” approaches. However, the most central question about these popular methods remains open: Do they reliably reflect ordinary meaning? This Article presents a series of experiments that assess whether (a) dictionary definitions and (b) common usage data reflect (c) how people actually understand language today. The Article elaborates the implications of two main experimental results. First, neither dictionary definitions nor legal corpus linguistics methods reliably track ordinary people’s judgments about meaning. This shifts the argumentative burden to jurists who rely on these tools to identify “ordinary meaning” or “original public meaning”: proponents of these views must articulate and demonstrate a reliable method of analysis. Moreover, this divergence illuminates several interpretive fallacies. For example, advocates of legal corpus linguistics often contend that the nonappearance of a specific use in a corpus indicates that the use is not part of the relevant term’s ordinary meaning. The experiments reveal this claim to be a “Nonappearance Fallacy.” Ordinary meaning exceeds datasets of common usage, even very large ones. Second, dictionary and legal corpus linguistics verdicts diverge dramatically from each other. Part of that divergence is explained by the finding that broad dictionary definitions tend to direct interpreters to extensive interpretations, while common usage data tends to point interpreters toward more prototypical cases. This suggests two different criteria that are often relevant in interpretation: a more extensive criterion and a narrower, prototypical criterion. Although dictionaries and corpus linguistics might, in some cases, help us identify these criteria, a hard legal-philosophical question remains: Which of these two criteria should guide the interpretation of terms and phrases in legal texts? Insofar as there is no compelling case to prefer one, the results suggest that dictionary definitions, legal corpus linguistics, or even other, more scientific measures of meaning may not be equipped in principle to deliver simple and unequivocal answers to inquiries about the so-called “ordinary meaning” of legal texts.

Links

PhilArchive
Similar books and articles

Corpus Linguistics as a Method of Legal Interpretation: Some Progress, Some Questions. Lawrence M. Solan - 2020 - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique 33 (2):283-298.
An Analytical Foundation of Rule Scepticism. Riccardo Guastini - 2019 - In David Duarte, Pedro Moniz Lopes & Jorge Silva Sampaio (eds.), Legal Interpretation and Scientific Knowledge. Springer Verlag. pp. 13-27.
On the Incoherence of Legal Language to the General Public. Sol Azuelos-Atias - 2011 - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique 24 (1):41-59.
In Defense of Thomas Reid's Use of 'Suggestion'. Ronald E. Beanblossom - 1975 - Grazer Philosophische Studien 1 (1):19-24.
The Authenticity of the Ordinary. David Egan - 2013 - In David Egan, Stephen Reynolds & Aaron James Wendland (eds.), Wittgenstein and Heidegger. Routledge. pp. 66-81.
The Ordinary Hero. Yves Bertrand - 1998 - Madison, WI: Atwood Publishing.

Author's Profile

Kevin Tobia
Georgetown University
