SIOPT#060526, second revision, submitted on 22 Mar 2005

Convergence of the Iterates of Descent Methods for Analytic Cost Functions

P.-A. Absil
R. Mahony
B. Andrews
Abstract

In the early eighties, Łojasiewicz [Loj84] proved that a bounded solution of a gradient flow for an analytic cost function converges to a well-defined limit point. In this paper, we show that the iterates of numerical descent algorithms for an analytic cost function share this convergence property if they satisfy certain natural descent conditions. The results obtained are applicable to a broad class of optimization schemes, and they strengthen classical "weak convergence" results for descent methods to "strong limit-point convergence" for a large class of cost functions of practical interest. The result does not require the cost to have isolated critical points, makes no assumptions on the convexity of the cost, and imposes no non-degeneracy conditions on the Hessian of the cost at critical points.
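For context, the key tool named in the key words below is the Łojasiewicz gradient inequality, which bounds the gradient norm from below near a critical point of a real-analytic function. In the usual terminology, "weak convergence" of a descent method means that every accumulation point of the iterates is a critical point, whereas "strong limit-point convergence" means that the whole sequence of iterates converges to a single point. The standard form of the inequality, stated here for orientation rather than quoted from the paper, is:

% Łojasiewicz gradient inequality (standard form; for context only,
% not quoted from the paper). Let f be real analytic in a neighborhood
% of a critical point x^*. Then there exist c > 0, \theta \in (0, 1/2],
% and a neighborhood U of x^* such that
\[
  \lvert f(x) - f(x^\ast) \rvert^{1-\theta} \le c \, \lVert \nabla f(x) \rVert
  \qquad \text{for all } x \in U .
\]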
Key words. gradient flows, descent methods, real analytic functions, Łojasiewicz gradient inequality, single limit-point convergence, line-search, trust-region, Mexican Hat
1 Introduction

Unconstrained numerical optimization schemes can be classified into two principal categories: line-search methods and trust-region methods.
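To make the flavor of such schemes concrete, here is a minimal sketch of a line-search method: steepest descent with an Armijo sufficient-decrease backtracking rule, one example of the kind of natural descent condition the abstract refers to. This is an illustrative sketch, not the algorithm analyzed in the paper; the function names and all parameter values are hypothetical.

import numpy as np

def armijo_descent(f, grad_f, x0, alpha0=1.0, beta=0.5, sigma=1e-4,
                   tol=1e-8, max_iter=1000):
    """Steepest descent with Armijo backtracking (illustrative sketch).

    The sufficient-decrease test below is one example of a descent
    condition; parameter values here are arbitrary choices.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:          # approximate critical point
            break
        d = -g                               # steepest-descent direction
        alpha = alpha0
        # Armijo condition: f(x + alpha d) <= f(x) + sigma * alpha * <g, d>
        while f(x + alpha * d) > f(x) + sigma * alpha * g.dot(d):
            alpha *= beta                    # backtrack until decrease
        x = x + alpha * d
    return x

# Example on a smooth (real-analytic) cost: f(x, y) = x^2 + 2 y^2.
f = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2
grad_f = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])
print(armijo_descent(f, grad_f, [1.0, 1.0]))  # converges to (0, 0)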

  

Source: Absil, Pierre-Antoine - Département d'ingénierie Mathématique, Université Catholique de Louvain

 

Collections: Mathematics