Augmented penalty algorithms
We coin the term augmented penalty method to refer to an augmented-Lagrangian-like method in which the penalty parameter is driven to zero instead of being kept bounded away from zero. The spirit of the algorithms is thus that of a classical penalty method in which an estimate of the Lagrange multipliers is updated. For the classical augmented Lagrangian function (using the quadratic loss penalty term) applied to equality-constrained optimization, Gould has obtained a two-step superlinear local convergence result using a constant Lagrange multiplier estimate. Dussault et al. have generalized those results to inequality-constrained optimization using other penalty terms. On the other hand, convergence and rate-of-convergence results for augmented Lagrangian methods concern the convergence of the dual variables, usually without relation to the actual effort required to obtain the approximate solutions of the unconstrained primal minimizations. In this paper, we consider several updating rules for the Lagrange multiplier estimates and obtain rate-of-convergence results for both primal and dual variables: a two-step superlinear convergence of order α, with α < 2. In this result, each iteration uses the solution of a single primal-dual linear system. While this does not improve on the rate of convergence of simple penalty methods, it may alleviate some cancellation errors in internal computations. Finally, we discuss augmentability issues for other penalty functions.
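To make the idea concrete, the following Python fragment is a rough sketch only, under assumptions not in the record: a hypothetical toy problem, the classical quadratic-loss augmented Lagrangian, a first-order multiplier update λ ← λ − c(x)/μ, and a fully resolved primal subproblem at each outer iteration (whereas the paper's result uses a single primal-dual linear system per iteration). It is not the authors' algorithm; it only illustrates the "augmented penalty" spirit of updating multipliers while driving the penalty parameter μ to zero.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy problem (not from the paper):
# minimize f(x) = x1^2 + x2^2  subject to  c(x) = x1 + x2 - 1 = 0.
def f(x):
    return x[0] ** 2 + x[1] ** 2

def c(x):
    return np.array([x[0] + x[1] - 1.0])

def aug_lagrangian(x, lam, mu):
    # Classical quadratic-loss augmented Lagrangian; the 1/(2*mu) weight
    # grows without bound as the penalty parameter mu is driven to zero.
    cx = c(x)
    return f(x) - lam @ cx + (cx @ cx) / (2.0 * mu)

x = np.zeros(2)    # primal iterate
lam = np.zeros(1)  # Lagrange multiplier estimate
mu = 1.0           # penalty parameter, driven toward zero

for _ in range(10):
    # Approximate unconstrained primal minimization for fixed (lam, mu).
    x = minimize(aug_lagrangian, x, args=(lam, mu)).x
    # First-order multiplier update implied by stationarity of the
    # augmented Lagrangian, then shrink mu (the augmented penalty spirit:
    # mu -> 0 rather than staying bounded away from zero).
    lam = lam - c(x) / mu
    mu *= 0.5

print(x, lam)  # expect x ≈ (0.5, 0.5), lam ≈ 1 for this formulation
```

Halving μ at each outer iteration is an arbitrary schedule chosen for this sketch; driving μ to zero more aggressively trades conditioning of the subproblems for faster constraint satisfaction.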
- OSTI ID: 35971
- Report Number(s): CONF-9408161--
- Country of Publication: United States
- Language: English