An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints
Journal Article · Computers and Mathematics with Applications (Oxford)
- University of Texas, Austin, TX (United States)
- Brown University, Providence, RI (United States)
- Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States). Center for Design Optimization
- Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing
The primary goal of this paper is to provide an efficient solution algorithm, based on the augmented Lagrangian framework, for optimization problems with a stochastic objective function and deterministic constraints. Our main contribution is the combination of the augmented Lagrangian framework with adaptive sampling, resulting in an efficient optimization methodology validated on practical examples. To achieve this efficiency, we allow inexact solutions of the augmented Lagrangian subproblems and control the variance of the gradient estimates through an adaptive sampling mechanism. Furthermore, we analyze the theoretical performance of the proposed scheme by showing its equivalence to gradient descent on a Moreau envelope function, and we prove sublinear convergence for convex objectives and linear convergence for strongly convex objectives with affine equality constraints. The worst-case sample complexity of the resulting algorithm, for an arbitrary choice of the penalty parameter in the augmented Lagrangian function, is $\mathscr{O}(\epsilon^{-3-\delta})$, where $\epsilon > 0$ is the expected error of the solution and $\delta > 0$ is a user-defined parameter. If the penalty parameter is chosen to be $\mathscr{O}(\epsilon^{-1})$, we demonstrate that this can be improved to $\mathscr{O}(\epsilon^{-2})$, which is competitive with other methods in the literature. Moreover, if the objective function is strongly convex with affine equality constraints, we obtain $\mathscr{O}(\epsilon^{-1}\log(1/\epsilon))$ complexity. Finally, we empirically verify the performance of our adaptive sampling augmented Lagrangian framework on machine learning optimization and engineering design problems, including topology optimization of a heat sink under environmental uncertainty.
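The abstract does not give implementation details, but the loop it describes — an outer augmented Lagrangian multiplier update wrapped around inexact stochastic inner solves whose batch size is driven by an adaptive sampling test — can be sketched as follows. This is a minimal illustration under assumed interfaces (`grad_f_sample`, `c`, `jac_c`) and a norm-test-style variance rule; it is not the paper's exact algorithm, step sizes, or parameter choices.

```python
import numpy as np

def adaptive_sampling_al(grad_f_sample, c, jac_c, x0, lam0,
                         rho=10.0, theta=0.5, n0=8, n_max=4096,
                         inner_iters=50, outer_iters=20, step=1e-2):
    """Sketch of an adaptive-sampling augmented Lagrangian loop.

    grad_f_sample(x, n): (n, dim) array of per-sample gradients of the
                         stochastic objective at x.
    c(x), jac_c(x):      deterministic equality constraints and Jacobian.
    """
    x = np.asarray(x0, dtype=float).copy()
    lam = np.asarray(lam0, dtype=float).copy()
    n = n0
    for _ in range(outer_iters):
        for _ in range(inner_iters):              # inexact subproblem solve
            G = grad_f_sample(x, n)
            g_obj = G.mean(axis=0)                # mini-batch gradient estimate
            # Adaptive sampling: grow the batch when the sample variance of the
            # estimator is large relative to the gradient norm (norm-test style).
            var = G.var(axis=0).sum() / n
            if var > (theta * np.linalg.norm(g_obj)) ** 2 and n < n_max:
                n = min(2 * n, n_max)
            # Gradient of the augmented Lagrangian
            # L_rho(x, lam) = E[f(x, xi)] + lam^T c(x) + (rho/2) ||c(x)||^2.
            g = g_obj + jac_c(x).T @ (lam + rho * c(x))
            x = x - step * g
        lam = lam + rho * c(x)                    # first-order multiplier update
    return x, lam
```

For a concrete strongly convex instance, one could take f(x, ξ) = ½‖x − ξ‖² with random ξ and an affine constraint Ax = b, which is the setting in which the abstract reports linear convergence; the sketch above is only meant to make the interplay of the three ingredients (multiplier update, inexact inner solve, adaptive batch size) explicit.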
- Research Organization:
- Brown University, Providence, RI (United States); Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)
- Sponsoring Organization:
- USDOE Laboratory Directed Research and Development (LDRD) Program; USDOE National Nuclear Security Administration (NNSA); USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
- Grant/Contract Number:
- AC52-07NA27344; SC0024335
- OSTI ID:
- 2335998
- Alternate ID(s):
- OSTI ID: 2395941
- Report Number(s):
- LLNL-JRNL-848453; 1073522
- Journal Information:
- Computers and Mathematics with Applications (Oxford), Vol. 149; ISSN 0898-1221
- Publisher:
- Elsevier
- Country of Publication:
- United States
- Language:
- English
Similar Records
Short-term generation scheduling with transmission and environmental constraints using an augmented Lagrangian relaxation
Journal Article · 1995 · IEEE Transactions on Power Systems · OSTI ID: 163022
An Augmented Lagrangian Method for a Class of Inverse Quadratic Programming Problems
Journal Article · 2010 · Applied Mathematics and Optimization · OSTI ID: 21480277