Augmented Lagrangian methods for constrained optimization: the role of the penalty constant
In recent years there has been considerable research activity in the area of penalty function and augmented Lagrangian methods for constrained optimization. The role that the penalty constant plays with respect to local convergence and rate of convergence is reviewed here. As the emphasis has changed from the penalty function methods to the multiplier methods, and lately to the quasi-Newton methods, there has been a corresponding decrease in the importance of the penalty constant. Specifically, in the penalty function method one obtains local convergence if and only if the penalty constant becomes infinite. It is possible to obtain local convergence in the multiplier method for a fixed penalty constant, provided that this constant is sufficiently large. However, one obtains superlinear convergence if and only if the penalty constant becomes infinite. Finally, the quasi-Newton methods are locally superlinearly convergent for fixed values of the penalty constant, and actually the most natural formulation gives an algorithm that is independent of the penalty constant.
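The multiplier method the abstract refers to can be illustrated on a toy problem. The sketch below (an illustration, not taken from the paper; the problem, penalty value, and function names are invented here) applies the method of multipliers to minimizing x1² + x2² subject to x1 + x2 = 1, for which the inner minimization of the augmented Lagrangian has a closed form. The multiplier error contracts by the factor 1/(1 + c) per outer iteration, matching the claim that a fixed, sufficiently large penalty constant gives (linear) local convergence, with the rate improving as c grows.

```python
def multiplier_method_demo(c=10.0, iters=25):
    """Method of multipliers for the toy problem
        minimize x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0,
    using the augmented Lagrangian
        L(x, lam) = x1^2 + x2^2 + lam*h(x) + (c/2)*h(x)^2.
    For this quadratic problem the inner minimization is exact:
    setting the gradient of L to zero gives x1 = x2 = (c - lam)/(2*(1 + c)).
    The known solution is x* = (0.5, 0.5) with multiplier lam* = -1.
    """
    lam = 0.0  # initial multiplier estimate
    for _ in range(iters):
        x1 = x2 = (c - lam) / (2.0 * (1.0 + c))  # exact inner minimizer
        h = x1 + x2 - 1.0                        # constraint violation
        lam = lam + c * h                        # multiplier (dual) update
    return (x1, x2), lam
```

With a fixed c = 10 the penalty constant is never driven to infinity, yet the iterates converge linearly: one can check algebraically that lam_{k+1} + 1 = (lam_k + 1)/(1 + c), so after 25 iterations the multiplier error is on the order of 11⁻²⁵.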
- Research Organization: Rice Univ., Houston, TX (USA). Dept. of Mathematical Sciences
- OSTI ID: 5632600
- Report Number(s): DOE/ER/05046-16; CONF-7906140-2
- Country of Publication: United States
- Language: English