 
The Annals of Statistics
2009, Vol. 37, No. 4, 1591–1646
DOI: 10.1214/08-AOS623
© Institute of Mathematical Statistics, 2009
FAST LEARNING RATES IN STATISTICAL INFERENCE
THROUGH AGGREGATION
BY JEAN-YVES AUDIBERT
Université Paris Est
We develop minimax optimal risk bounds for the general learning task
consisting in predicting as well as the best function in a reference set G up
to the smallest possible additive term, called the convergence rate. When the
reference set is finite and when n denotes the size of the training data, we
provide minimax convergence rates of the form C(log|G|/n)^v with tight
evaluation of the positive constant C and with exact 0 < v ≤ 1, the latter value
depending on the convexity of the loss function and on the level of noise in
the output distribution.
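As an illustration of the rate C(log|G|/n)^v, the following sketch evaluates it numerically; the specific values of C, |G|, n, and the exponents are hypothetical and not taken from the paper, which derives the exact v from the convexity of the loss and the noise level.

```python
import math

def convergence_rate(C, M, n, v):
    """Rate of the form C * (log(M)/n)**v, where M = |G| is the size
    of the finite reference set and n is the training sample size.
    C, M, n, v here are illustrative inputs, not values from the paper."""
    return C * (math.log(M) / n) ** v

# Illustrative comparison: a larger exponent v yields a faster-decaying
# additive term; the paper shows v depends on the loss and noise.
fast = convergence_rate(1.0, 100, 10_000, 1.0)   # v = 1
slow = convergence_rate(1.0, 100, 10_000, 0.5)   # v = 1/2
```

With these illustrative inputs, the v = 1 rate is orders of magnitude smaller than the v = 1/2 rate, which is why the exact value of v matters.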
The risk upper bounds are based on a sequential randomized algorithm,
which at each step concentrates on functions having both low risk and low
