The Annals of Statistics
2009, Vol. 37, No. 4, 1591–1646
DOI: 10.1214/08-AOS623
© Institute of Mathematical Statistics, 2009
FAST LEARNING RATES IN STATISTICAL INFERENCE
THROUGH AGGREGATION
BY JEAN-YVES AUDIBERT
Université Paris Est
We develop minimax optimal risk bounds for the general learning task of
predicting as well as the best function in a reference set G, up to the
smallest possible additive term, called the convergence rate. When the
reference set is finite and n denotes the size of the training data, we
provide minimax convergence rates of the form C(log|G| / n)^v, with a tight
evaluation of the positive constant C and with an exact exponent 0 < v ≤ 1,
the latter value depending on the convexity of the loss function and on the
level of noise in the output distribution.
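As a numerical illustration of the rate formula above, the following minimal Python sketch evaluates the additive term C(log|G| / n)^v for a fast exponent (v = 1) and a slow exponent (v = 1/2); the constant C = 1 and the sample sizes are placeholder choices, not values from the paper.

```python
import math

def rate_bound(n, card_G, v, C=1.0):
    """Additive excess-risk term C * (log|G| / n)**v from the abstract.

    n      -- size of the training data
    card_G -- cardinality of the finite reference set G
    v      -- exponent in (0, 1]; the paper ties it to the convexity of
              the loss and the noise level of the output distribution
    C      -- positive constant (placeholder value here)
    """
    return C * (math.log(card_G) / n) ** v

# Fast rate (v = 1) vs. slow rate (v = 1/2) for |G| = 100 reference functions.
sample_sizes = (100, 1000, 10000)
fast = [rate_bound(n, 100, 1.0) for n in sample_sizes]
slow = [rate_bound(n, 100, 0.5) for n in sample_sizes]
```

For every sample size, the v = 1 bound is smaller than the v = 1/2 bound, which is why exponents close to 1 are called "fast" learning rates.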
The risk upper bounds are based on a sequential randomized algorithm,
which at each step concentrates on functions having both low risk and low
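The abstract's description of the algorithm is truncated here. As context, a standard sequential randomized aggregation scheme of the kind alluded to is the exponentially weighted average forecaster, sketched below; this is an illustrative example of the general technique, not necessarily the paper's exact algorithm, and the learning rate `eta`, the experts, and the data are all hypothetical.

```python
import math
import random

def ewa_predict(experts, x, weights):
    """Predict with the weighted average of the experts' predictions at x."""
    total = sum(weights)
    return sum(w * f(x) for w, f in zip(weights, experts)) / total

def ewa_update(experts, x, y, weights, eta=1.0):
    """Exponentially downweight each expert by its squared loss on (x, y),
    so the forecaster concentrates on low-risk experts over time."""
    return [w * math.exp(-eta * (f(x) - y) ** 2)
            for w, f in zip(weights, experts)]

# Two constant experts; the data concentrate near 0.7, so the second
# expert should accumulate almost all of the weight.
experts = [lambda x: 0.2, lambda x: 0.7]
weights = [1.0, 1.0]
rng = random.Random(0)
for _ in range(200):
    x = rng.random()
    y = 0.7 + 0.05 * (rng.random() - 0.5)
    weights = ewa_update(experts, x, y, weights, eta=2.0)
```

After the run, the aggregated prediction is driven almost entirely by the better expert, matching the intuition that the procedure concentrates on functions with low risk.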

  

Source: Audibert, Jean-Yves - Département d'Informatique, École Normale Supérieure

 

Collections: Computer Technologies and Information Sciences