 
Summary: SIOPT#060526, second revision, submitted on 22 Mar 2005
Convergence of the Iterates of Descent Methods for Analytic
Cost Functions
P.A. Absil
R. Mahony
B. Andrews
Abstract
In the early eighties Lojasiewicz [Loj84] proved that a bounded solution of a gradient
flow for an analytic cost function converges to a well-defined limit point. In this paper,
we show that the iterates of numerical descent algorithms, for an analytic cost function,
share this convergence property if they satisfy certain natural descent conditions. The results
obtained are applicable to a broad class of optimization schemes and strengthen classical
"weak convergence" results for descent methods to "strong limitpoint convergence" for a
large class of cost functions of practical interest. The result does not require that the cost
has isolated critical points, requires no assumptions on the convexity of the cost, nor any
nondegeneracy conditions on the Hessian of the cost at critical points.
Key words. gradient flows, descent methods, real analytic functions, Lojasiewicz gradient inequality,
single limit-point convergence, line search, trust region, Mexican Hat
1 Introduction
Unconstrained numerical optimization schemes can be classified into two principal categories: line
