A primal–dual algorithm for risk minimization
- Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
- Philipps-Univ. Marburg (Germany)
In this paper, we develop an algorithm to efficiently solve risk-averse optimization problems posed in reflexive Banach spaces. Such problems arise in many practical applications, e.g., optimization problems constrained by partial differential equations with uncertain inputs. Unfortunately, for many popular risk models, including the coherent risk measures, the resulting risk-averse objective function is nonsmooth. This lack of differentiability complicates both the numerical approximation of the objective function and the numerical solution of the optimization problem. To address these challenges, we propose a primal–dual algorithm for solving large-scale nonsmooth risk-averse optimization problems. The algorithm is motivated by the classical method of multipliers and by epigraphical regularization of risk measures. As a result, it solves a sequence of smooth optimization problems using derivative-based methods. We prove convergence of the algorithm even when the subproblems are solved inexactly and conclude with numerical examples demonstrating the efficiency of our method.
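The abstract's core idea, replacing a nonsmooth risk measure by a smooth (epigraphical/Moreau-type) regularization so that derivative-based solvers apply, can be illustrated on a toy example. The sketch below is not the paper's primal–dual algorithm: it smooths the conditional value-at-risk (CVaR) in its Rockafellar–Uryasev form and runs plain gradient descent; the loss `f(z, xi) = (z - xi)^2`, the sample distribution, and all parameter values are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): smooth the nonsmooth
# CVaR risk measure by replacing the plus function with its Moreau
# envelope, then minimize the smooth surrogate by gradient descent.
# The toy loss f(z, xi) = (z - xi)^2 and all parameters are assumptions.

rng = np.random.default_rng(0)
xi = rng.normal(1.0, 0.5, size=4000)   # samples of the uncertain input
alpha = 0.9                            # CVaR confidence level
eps = 0.1                              # smoothing parameter

def plus_eps(s):
    # Moreau envelope of s -> max(s, 0): C^1, quadratic near the kink
    return np.where(s <= 0, 0.0,
                    np.where(s < eps, s**2 / (2 * eps), s - eps / 2))

def dplus_eps(s):
    # Derivative of the smoothed plus function
    return np.clip(s / eps, 0.0, 1.0)

def smoothed_cvar_grad(z, t):
    # Rockafellar-Uryasev form: CVaR_a(f) = min_t t + E[(f - t)_+]/(1 - a),
    # with (.)_+ replaced by the smooth surrogate plus_eps.
    f = (z - xi) ** 2                   # per-sample loss
    w = dplus_eps(f - t) / (1 - alpha)  # smooth tail weights
    val = t + plus_eps(f - t).mean() / (1 - alpha)
    grad_z = (w * 2 * (z - xi)).mean()  # chain rule through f
    grad_t = 1.0 - w.mean()
    return val, grad_z, grad_t

# Joint gradient descent on the control z and the auxiliary variable t;
# smoothness of the surrogate is what makes this derivative-based loop valid.
z, t = 0.0, 0.0
for _ in range(3000):
    _, gz, gt = smoothed_cvar_grad(z, t)
    z -= 0.02 * gz
    t -= 0.02 * gt
```

At convergence, `t` approximates (a smoothed version of) the alpha-quantile of the loss and `z` the risk-averse control. The paper embeds this kind of regularization in a method-of-multipliers loop with inexactly solved smooth subproblems, which this sketch omits.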
- Research Organization:
- Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
- Sponsoring Organization:
- USDOE National Nuclear Security Administration (NNSA); USDOE Laboratory Directed Research and Development (LDRD) Program; Defense Advanced Research Projects Agency (DARPA); US Air Force Office of Scientific Research (AFOSR); German Research Foundation (DFG)
- Grant/Contract Number:
- AC04-94AL85000; NA0003525; 014150709; F4FGA09135G001; SU-963/1-1
- OSTI ID:
- 1765742
- Report Number(s):
- SAND-2021-0818J; 693611
- Journal Information:
- Mathematical Programming, Vol. 193, Issue 1; ISSN 0025-5610
- Publisher:
- Springer
- Country of Publication:
- United States
- Language:
- English
Similar Records
A globally convergent LCL method for nonlinear optimization
ALESQP: An Augmented Lagrangian Equality-Constrained SQP Method for Optimization with General Constraints