U.S. Department of Energy
Office of Scientific and Technical Information

Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization

Journal Article · Mathematical Programming Computation
Bollapragada, Raghu [1]; Wild, Stefan M. [2]
  1. University of Texas at Austin, TX (United States)
  2. Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States)
Here, we consider unconstrained stochastic optimization problems with no available gradient information. Such problems arise in settings from derivative-free simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method where we estimate the gradients using finite differences of stochastic function evaluations within a common random number framework. We develop modified versions of a norm test and an inner product quasi-Newton test to control the sample sizes used in the stochastic approximations and provide global convergence results to the neighborhood of a locally optimal solution. We present numerical experiments on simulation optimization problems to illustrate the performance of the proposed algorithm. When compared with classical zeroth-order stochastic gradient methods, we observe that our strategies of adapting the sample sizes significantly improve performance in terms of the number of stochastic function evaluations required.
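
To make the abstract's main ideas concrete, here is a minimal Python sketch (not the authors' implementation): it estimates gradients by forward finite differences of noisy function evaluations under common random numbers, and it increases the per-iteration sample size when a simplified norm-test-style condition fails. The objective F, the helpers fd_gradient and adaptive_step, the constants theta and alpha, the identity quasi-Newton matrix H, and the sample-size cap are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def F(x, seed):
    """Noisy objective: a quadratic with seed-controlled additive noise
    (a stand-in for a stochastic simulation)."""
    rng = np.random.default_rng(seed)
    xi = 0.1 * rng.standard_normal(x.size)
    return 0.5 * np.dot(x, x) + np.dot(xi, x)

def fd_gradient(x, seeds, h=1e-3):
    """Forward-difference gradient estimates averaged over `seeds`.
    Common random numbers: each seed is reused for the base point and all
    perturbed points, so the noise is correlated across evaluations."""
    n = x.size
    grads = np.zeros((len(seeds), n))
    for k, s in enumerate(seeds):
        f0 = F(x, s)
        for i in range(n):
            e = np.zeros(n)
            e[i] = h
            grads[k, i] = (F(x + e, s) - f0) / h
    var_sum = grads.var(axis=0, ddof=1).sum() if len(seeds) > 1 else 0.0
    return grads.mean(axis=0), var_sum

def adaptive_step(x, H, sample_size, theta=0.9, alpha=0.1, rng=None):
    """One quasi-Newton-type step with a norm-test-style sample-size update.
    H stands in for an inverse-Hessian approximation (identity here); the
    test below is a simplified surrogate for the modified tests in the paper."""
    rng = rng if rng is not None else np.random.default_rng(0)
    seeds = rng.integers(0, 2**31 - 1, size=sample_size)
    g, var_sum = fd_gradient(x, seeds)
    gnorm2 = max(float(np.dot(g, g)), 1e-12)
    # Norm-test idea: the estimator's per-sample variance should be small
    # relative to ||g||^2; otherwise grow the sample size for the next iterate.
    if var_sum / sample_size > theta**2 * gnorm2:
        sample_size = min(int(np.ceil(var_sum / (theta**2 * gnorm2))), 64)
    return x - alpha * (H @ g), sample_size

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x, H, m = np.ones(5), np.eye(5), 2
    for _ in range(30):
        x, m = adaptive_step(x, H, m, rng=rng)
    print(f"||x|| = {np.linalg.norm(x):.3f}, final sample size = {m}")
```

Reusing the same seed for the base and perturbed evaluations correlates the noise in each finite difference, which is what keeps the gradient estimates usable; per the abstract, the paper's actual algorithm uses a quasi-Newton update rather than the fixed step and identity H above, and controls the sample sizes with modified norm and inner product quasi-Newton tests.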
Research Organization:
Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States)
Sponsoring Organization:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
Grant/Contract Number:
AC02-05CH11231
OSTI ID:
2001234
Journal Information:
Mathematical Programming Computation, Vol. 15, Issue 2; ISSN 1867-2949
Publisher:
Springer
Country of Publication:
United States
Language:
English

Similar Records

Derivative-free stochastic optimization via adaptive sampling strategies
Journal Article · 2025 · Optimization Methods and Software · OSTI ID: 2998214

A Structured Quasi-Newton Algorithm for Optimizing with Incomplete Hessian Information
Journal Article · 2019 · SIAM Journal on Optimization · OSTI ID: 1573037

A Structured Quasi-Newton Algorithm for Optimizing with Incomplete Hessian Information
Journal Article · 2019 · SIAM Journal on Optimization · OSTI ID: 1574637

Related Subjects