 
NOTE
Communicated by Todd Leen
The Early Restart Algorithm
Malik Magdon-Ismail
Amir F. Atiya
Learning Systems Group, Electrical Engineering Department, California Institute of
Technology, Pasadena, CA 91125, U.S.A.
Consider an algorithm whose time to convergence is unknown (because
of some random element in the algorithm, such as a random initial weight
choice for neural network training). Consider the following strategy. Run
the algorithm for a specific time T. If it has not converged by time T, cut the
run short and rerun it from the start (repeat the same strategy for every
run). This so-called restart mechanism was proposed by Fahlman
(1988) in the context of backpropagation training. It is advantageous in
problems that are prone to local minima or when there is large variability
in convergence time from run to run, and may lead to a speedup in such
cases. In this article, we theoretically analyze the restart mechanism and
obtain conditions on the probability density of the convergence time under
which restart will improve the expected convergence time. We also derive
the optimal restart time. We apply the derived formulas to several cases,
including steepest-descent algorithms.
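The restart strategy described above is easy to simulate. The following is a minimal sketch, not the paper's analysis: the lognormal convergence-time distribution, the cutoff value T = 2, and all function names are hypothetical choices made purely for illustration of how a heavy-tailed run-time distribution can make restarting pay off.

```python
import random

def run_with_restarts(sample_time, T, rng):
    """Total time until some run converges within the cutoff T:
    each failed run costs the full cutoff T, and the final
    (successful) run costs its own convergence time."""
    total = 0.0
    while True:
        t = sample_time(rng)
        if t <= T:
            return total + t
        total += T

# Hypothetical heavy-tailed convergence-time density (lognormal);
# the paper treats a general density, this is only a demonstration.
def sample(rng):
    return rng.lognormvariate(0.0, 2.0)

n = 20000
rng = random.Random(0)
mean_plain = sum(sample(rng) for _ in range(n)) / n
rng = random.Random(0)
mean_restart = sum(run_with_restarts(sample, 2.0, rng) for _ in range(n)) / n
print(f"no restart: {mean_plain:.2f}  restart at T=2: {mean_restart:.2f}")
```

Because the lognormal tail is heavy, occasional very long runs dominate the plain expected time, while the restart version cuts them off, so the simulated restart mean comes out substantially smaller here.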
