Benchmarking ADMM in Nonconvex NLPs
Abstract
Here, we study connections between the alternating direction method of multipliers (ADMM), the classical method of multipliers (MM), and progressive hedging (PH). The connections are used to derive benchmark metrics and strategies to monitor and accelerate convergence and to help explain why ADMM and PH are capable of solving complex nonconvex NLPs. Specifically, we observe that ADMM is an inexact version of MM and approaches its performance when multiple coordination steps are performed. In addition, we use the observation that PH is a specialization of ADMM and borrow the Lyapunov function and primal-dual feasibility metrics used in ADMM to explain why PH is capable of solving nonconvex NLPs. This analysis also highlights that specialized PH schemes can be derived to tackle a wider range of stochastic programs and even other problem classes. Our exposition is tutorial in nature and seeks to motivate algorithmic improvements and new decomposition strategies.
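To make the abstract's metrics concrete, below is a minimal, self-contained Python sketch of scaled-form consensus ADMM on a toy quadratic problem. The toy objective, the penalty parameter `rho`, and the `n_sweeps` knob are illustrative assumptions, not taken from the paper; the sketch only shows the primal and dual feasibility residuals used as benchmark metrics, and how repeating the coordination (x/z) passes before each multiplier update moves ADMM toward the classical method of multipliers (MM).

```python
# Illustrative sketch only -- a toy consensus ADMM, not the paper's implementation.
# Problem: min (x - 1)^2 + (z - 3)^2  s.t.  x = z   (optimum: x = z = 2).

def admm(rho=1.0, n_sweeps=1, max_iter=500, tol=1e-8):
    """Scaled-form consensus ADMM. `n_sweeps` coordination (x/z) passes are
    performed per multiplier update: n_sweeps = 1 is plain ADMM, while large
    n_sweeps jointly minimizes the augmented Lagrangian and so approaches
    the classical method of multipliers (MM)."""
    x = z = u = 0.0
    for k in range(max_iter):
        z_old = z
        for _ in range(n_sweeps):
            # x-update: argmin_x (x - 1)^2 + (rho/2)(x - z + u)^2
            x = (2.0 + rho * (z - u)) / (2.0 + rho)
            # z-update: argmin_z (z - 3)^2 + (rho/2)(x - z + u)^2
            z = (6.0 + rho * (x + u)) / (2.0 + rho)
        u += x - z                    # scaled multiplier (dual) update
        r = abs(x - z)                # primal feasibility residual
        s = rho * abs(z - z_old)      # dual feasibility residual
        if max(r, s) < tol:           # benchmark metric: both residuals small
            return x, z, k + 1
    return x, z, max_iter

for sweeps in (1, 5, 25):
    x, z, iters = admm(n_sweeps=sweeps)
    print(f"n_sweeps={sweeps:2d}: x={x:.6f} z={z:.6f} outer iterations={iters}")
```

Consistent with the inexact-MM view, fewer outer (multiplier) iterations are typically needed as `n_sweeps` grows; the Lyapunov-function monitoring discussed in the paper is omitted here for brevity.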
- Authors:
- Rodriguez, Jose S.; Nicholson, Bethany; Laird, Carl; Zavala, Victor M.
- Purdue Univ., West Lafayette, IN (United States). Chemical Engineering
- Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computer Research
- Univ. of Wisconsin, Madison, WI (United States). Dept. of Chemical and Biological Engineering
- Publication Date:
- 2018-08-27
- Research Org.:
- Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
- Sponsoring Org.:
- USDOE Office of Fossil Energy (FE); USDOE National Nuclear Security Administration (NNSA)
- OSTI Identifier:
- 1472254
- Report Number(s):
- SAND-2018-9574J
- Journal ID: ISSN 0098-1354; 667498
- Grant/Contract Number:
- AC04-94AL85000; NA0003525; SC0014114
- Resource Type:
- Accepted Manuscript
- Journal Name:
- Computers and Chemical Engineering
- Additional Journal Information:
- Journal Volume: 119; Journal ID: ISSN 0098-1354
- Publisher:
- Elsevier
- Country of Publication:
- United States
- Language:
- English
- Subject:
- 97 MATHEMATICS AND COMPUTING; 42 ENGINEERING; decomposition; augmented Lagrangian; ADMM; NLP; large-scale; coordination
Citation Formats
Rodriguez, Jose S., Nicholson, Bethany, Laird, Carl, and Zavala, Victor M. Benchmarking ADMM in Nonconvex NLPs. United States: N. p., 2018. Web. doi:10.1016/j.compchemeng.2018.08.036.
Rodriguez, Jose S., Nicholson, Bethany, Laird, Carl, & Zavala, Victor M. Benchmarking ADMM in Nonconvex NLPs. United States. https://doi.org/10.1016/j.compchemeng.2018.08.036
Rodriguez, Jose S., Nicholson, Bethany, Laird, Carl, and Zavala, Victor M. 2018. "Benchmarking ADMM in Nonconvex NLPs". United States. https://doi.org/10.1016/j.compchemeng.2018.08.036. https://www.osti.gov/servlets/purl/1472254.
@article{osti_1472254,
title = {Benchmarking ADMM in Nonconvex NLPs},
author = {Rodriguez, Jose S. and Nicholson, Bethany and Laird, Carl and Zavala, Victor M.},
abstractNote = {Here, we study connections between the alternating direction method of multipliers (ADMM), the classical method of multipliers (MM), and progressive hedging (PH). The connections are used to derive benchmark metrics and strategies to monitor and accelerate convergence and to help explain why ADMM and PH are capable of solving complex nonconvex NLPs. Specifically, we observe that ADMM is an inexact version of MM and approaches its performance when multiple coordination steps are performed. In addition, we use the observation that PH is a specialization of ADMM and borrow the Lyapunov function and primal-dual feasibility metrics used in ADMM to explain why PH is capable of solving nonconvex NLPs. This analysis also highlights that specialized PH schemes can be derived to tackle a wider range of stochastic programs and even other problem classes. Our exposition is tutorial in nature and seeks to motivate algorithmic improvements and new decomposition strategies.},
doi = {10.1016/j.compchemeng.2018.08.036},
journal = {Computers and Chemical Engineering},
volume = 119,
place = {United States},
year = {2018},
month = {aug}
}