The Lack of A Priori Distinctions Between Learning Algorithms

David H. Wolpert
Neural Computation 8 (7):1341–1390 (1996)

Abstract

This is the first of two papers that use off-training set (OTS) error to investigate the assumption-free relationship between learning algorithms. This first paper discusses the senses in which there are no a priori distinctions between learning algorithms. (The second paper discusses the senses in which there are such distinctions.) In this first paper it is shown, loosely speaking, that for any two algorithms A and B, there are “as many” targets (or priors over targets) for which A has lower expected OTS error than B as vice versa, for loss functions like zero-one loss. In particular, this is true if A is cross-validation and B is “anti-cross-validation” (choose the learning algorithm with largest cross-validation error). This paper ends with a discussion of the implications of these results for computational learning theory. It is shown that one cannot say: if empirical misclassification rate is low, the Vapnik-Chervonenkis dimension of your generalizer is small, and the training set is large, then with high probability your OTS error is small. Other implications for “membership queries” algorithms and “punting” algorithms are also discussed.
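
The "as many targets" claim can be checked directly by brute force on a toy domain. The Python sketch below is a hypothetical illustration, not code from the paper: the five-point input space, the fixed noise-free training set, the two base learners, and the leave-one-out estimator are all assumptions made for the example. Averaged uniformly over every Boolean target, cross-validation and anti-cross-validation attain the same expected OTS zero-one error of 0.5.

# Hypothetical brute-force check (illustrative only, not code from the paper):
# averaged uniformly over every Boolean target on a 5-point input space,
# cross-validation and "anti-cross-validation" model selection attain the same
# expected off-training-set (OTS) zero-one error.
from itertools import product
from statistics import mean

X = list(range(5))      # full input space
train_X = [0, 1, 2]     # fixed training inputs
ots_X = [3, 4]          # off-training-set inputs: never seen during training

def learner_const0(train):
    """Base learner 1: ignore the data and always predict 0."""
    return lambda x: 0

def learner_majority(train):
    """Base learner 2: predict the majority training label (ties go to 1)."""
    labels = [y for _, y in train]
    maj = 1 if 2 * sum(labels) >= len(labels) else 0
    return lambda x: maj

def loo_error(learner, train):
    """Leave-one-out zero-one error of a base learner on the training set."""
    errs = []
    for i, (x, y) in enumerate(train):
        h = learner(train[:i] + train[i + 1:])
        errs.append(int(h(x) != y))
    return mean(errs)

def cross_validation(train):
    """Pick the base learner with the LOWEST LOO error, retrain on all data."""
    best = min((learner_const0, learner_majority), key=lambda L: loo_error(L, train))
    return best(train)

def anti_cross_validation(train):
    """Pick the base learner with the HIGHEST LOO error, retrain on all data."""
    worst = max((learner_const0, learner_majority), key=lambda L: loo_error(L, train))
    return worst(train)

def ots_error(h, target):
    """Zero-one error of hypothesis h, restricted to the OTS inputs."""
    return mean(int(h(x) != target[x]) for x in ots_X)

cv_errs, anti_errs = [], []
for target in product([0, 1], repeat=len(X)):   # uniform prior: all 2^5 targets
    train = [(x, target[x]) for x in train_X]   # noise-free training set
    cv_errs.append(ots_error(cross_validation(train), target))
    anti_errs.append(ots_error(anti_cross_validation(train), target))

print(mean(cv_errs), mean(anti_errs))           # both are exactly 0.5

The equality is not an accident of the chosen learners: under a uniform prior over targets, the labels at off-training-set points are independent of the training data, so any deterministic selection rule averages to the same 0.5 here.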

Links

PhilArchive

Similar books and articles

No Free Lunch Theorems for Optimization. D. H. Wolpert & W. G. Macready - 1997 - IEEE Transactions on Evolutionary Computation 1 (1):67–82.
Imagination and the A Priori. Jared Warren - 2022 - Synthese 201 (1):1–16.
