An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints
- University of Texas, Austin, TX (United States)
- Brown University, Providence, RI (United States)
- Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States). Center for Design Optimization
- Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing
The primary goal of this paper is to provide an efficient solution algorithm, based on the augmented Lagrangian framework, for optimization problems with a stochastic objective function and deterministic constraints. Our main contribution is combining the augmented Lagrangian framework with adaptive sampling, resulting in an efficient optimization methodology validated on practical examples. To achieve this efficiency, we allow inexact solutions of the augmented Lagrangian subproblems and control the variance of the gradient estimates through an adaptive sampling mechanism. Furthermore, we analyze the theoretical performance of the proposed scheme by showing its equivalence to a gradient descent algorithm on a Moreau envelope function, and we prove sublinear convergence for convex objectives and linear convergence for strongly convex objectives with affine equality constraints. The worst-case sample complexity of the resulting algorithm, for an arbitrary choice of penalty parameter in the augmented Lagrangian function, is $\mathscr{O}(\epsilon^{-3-\delta})$, where $\epsilon > 0$ is the expected error of the solution and $\delta > 0$ is a user-defined parameter. If the penalty parameter is chosen to be $\mathscr{O}(\epsilon^{-1})$, we demonstrate that the result can be improved to $\mathscr{O}(\epsilon^{-2})$, which is competitive with other methods in the literature. Moreover, if the objective function is strongly convex with affine equality constraints, we obtain $\mathscr{O}(\epsilon^{-1}\log(1/\epsilon))$ complexity. Finally, we empirically verify the performance of our adaptive sampling augmented Lagrangian framework on machine learning optimization and engineering design problems, including topology optimization of a heat sink under environmental uncertainty.
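To make the abstract concrete, the following is a minimal sketch of an adaptive-sampling augmented Lagrangian loop for a problem of the form min_x E[f(x; ξ)] subject to c(x) = 0. It is not the authors' algorithm: the norm-test-style batch-growth rule, the plain stochastic gradient inner solver, the first-order multiplier update, and all function names and constants (grad_f_sample, jac_c, theta, mu, etc.) are illustrative assumptions intended only to show how inexact subproblem solves and variance-controlled gradient estimates fit together.

```python
# Hypothetical sketch (assumed interfaces, not the paper's method): adaptive sampling
# inside an augmented Lagrangian loop for  min_x E[f(x; xi)]  s.t.  c(x) = 0.
import numpy as np

def adaptive_sampling_al(grad_f_sample, c, jac_c, x0, lam0, mu=10.0,
                         outer_iters=50, inner_iters=100, step=1e-2,
                         theta=0.5, n0=8, n_max=4096, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x, lam, n = x0.copy(), lam0.copy(), n0
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            # Adaptive sampling: grow the batch until the estimated variance of the
            # averaged gradient is small relative to its squared norm.
            while True:
                G = np.stack([grad_f_sample(x, rng) for _ in range(n)])  # (n, dim)
                g_bar = G.mean(axis=0)
                var_of_mean = G.var(axis=0, ddof=1).sum() / n
                if var_of_mean <= theta * np.dot(g_bar, g_bar) or n >= n_max:
                    break
                n = min(2 * n, n_max)
            # Gradient of L(x, lam) = f(x) + lam.c(x) + (mu/2)||c(x)||^2, with the
            # stochastic gradient estimate g_bar standing in for grad f(x).
            J = jac_c(x)                              # constraint Jacobian, shape (m, dim)
            grad_L = g_bar + J.T @ (lam + mu * c(x))
            x = x - step * grad_L                     # inexact subproblem solve: SGD steps
        lam = lam + mu * c(x)                         # first-order multiplier update
    return x, lam
```

Tolerating inexact inner solves while tightening the sampling accuracy is the key trade-off the abstract describes; in the sketch above the inner loop simply runs a fixed number of stochastic gradient steps, whereas the paper controls this inexactness and the sampling error to obtain the stated complexity guarantees.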
- Research Organization:
- Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)
- Sponsoring Organization:
- USDOE National Nuclear Security Administration (NNSA); USDOE Laboratory Directed Research and Development (LDRD) Program
- Grant/Contract Number:
- AC52-07NA27344
- OSTI ID:
- 2335998
- Report Number(s):
- LLNL-JRNL-848453; 1073522
- Journal Information:
- Computers and Mathematics with Applications (Oxford), Vol. 149; ISSN 0898-1221
- Publisher:
- Elsevier
- Country of Publication:
- United States
- Language:
- English