The Central Classifier Bound -- A New Error Bound for the Classifier Chosen by Early Stopping

Summary: The Central Classifier Bound -- A New Error Bound for the Classifier Chosen by Early Stopping
Eric Bax*, Zehra Cataltepe, and Joe Sill
California Institute of Technology
September 15, 1997
Key words: machine learning, learning theory, validation, early stopping, Vapnik-Chervonenkis.
1 Introduction
Training with early stopping is the following process. Partition the in-sample
data into training and validation sets. Begin with a random classifier $g_1$. Use
an iterative method to decrease the error rate on the training data. Record the
classifier at each iteration, producing a series of snapshots $g_1, \ldots, g_M$. Evaluate
the error rate of each snapshot over the validation data. Deliver a minimum
validation error classifier, $g^*$, as the result of training.
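A minimal sketch of this procedure in Python, assuming hypothetical placeholders
train_step and error_rate (the paper does not fix a particular learning algorithm,
so any iterative trainer and 0-1 error estimator fit here):

def early_stopping(g1, train_step, error_rate, train_data, val_data, M):
    # Record the classifier at each iteration: snapshots g_1, ..., g_M.
    snapshots = [g1]
    g = g1
    for _ in range(M - 1):
        g = train_step(g, train_data)   # decrease error on the training set
        snapshots.append(g)
    # Evaluate each snapshot on the held-out validation set.
    val_errors = [error_rate(g, val_data) for g in snapshots]
    # Deliver a minimum validation error classifier, g^*.
    best = min(range(M), key=lambda m: val_errors[m])
    return snapshots[best]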
The purpose of this paper is to develop a good probabilistic upper bound on
the error rate of $g^*$ over out-of-sample (test) data. First, we use a validation-
oriented version of VC analysis [8, 9] to develop a bound. Because of the nature
of VC analysis, this initial bound is based on worst-case assumptions about the
rates of agreement among snapshots; a generic bound of this worst-case form is
sketched below. In practice, though, successive snapshots
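This excerpt does not include the bound itself. As a rough illustration of the
worst-case style of analysis (a standard Hoeffding-plus-union-bound sketch, not
the central classifier bound developed in the paper), with a validation set of
size $n$ independent of training, the following holds:
$$
\Pr\!\left[\, e(g^*) \;\le\; \hat{e}_{\mathrm{val}}(g^*) + \sqrt{\frac{\ln(M/\delta)}{2n}} \,\right] \;\ge\; 1 - \delta,
$$
where $\hat{e}_{\mathrm{val}}$ is validation error and the $\sqrt{\ln(M/\delta)/(2n)}$ term comes from
applying Hoeffding's inequality to each of the $M$ snapshots and taking a union
bound. A bound of this form treats the snapshots as if they could disagree
arbitrarily, which is the worst-case assumption referred to above.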
Source: Abu-Mostafa, Yaser S. - Department of Electrical Engineering & Computer Science, California Institute of Technology

Collections: Computer Technologies and Information Sciences