Derivative-free stochastic optimization via adaptive sampling strategies
Journal Article · Optimization Methods and Software
- Univ. of Texas, Austin, TX (United States)
- Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States); Northwestern Univ., Evanston, IL (United States)
In this paper, we present a derivative-free framework for solving unconstrained stochastic optimization problems. Many problems in fields ranging from simulation optimization to reinforcement learning to quantum computing provide only stochastic function values through a zeroth-order oracle; with no gradient information available, derivative-free optimization methods are required. Our approach estimates gradients from stochastic function evaluations and integrates adaptive sampling techniques to control the accuracy of these stochastic approximations. The framework encapsulates several gradient estimation techniques, including standard finite-difference, Gaussian smoothing, sphere smoothing, randomized coordinate finite-difference, and randomized subspace finite-difference methods. We provide theoretical convergence guarantees for the framework and analyze the worst-case iteration and sample complexities associated with each gradient estimation method. Finally, we demonstrate the empirical performance of the methods on logistic regression and nonlinear least squares problems.
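To illustrate one of the gradient estimation techniques the abstract lists, the sketch below shows a standard Gaussian smoothing zeroth-order gradient estimator: it averages forward differences of noisy function values along random Gaussian directions. This is a minimal illustration of the general technique, not the paper's algorithm; the function name, the fixed sample size, and the toy noisy quadratic oracle are all assumptions (an adaptive sampling scheme, as in the paper, would instead adjust the sample size to control the estimator's accuracy).

```python
import numpy as np

def gaussian_smoothing_gradient(f, x, mu=1e-2, num_samples=20, rng=None):
    """Zeroth-order gradient estimate via Gaussian smoothing.

    f           : stochastic zeroth-order oracle returning a noisy value f(x)
    mu          : smoothing radius for the forward difference
    num_samples : number of random directions averaged (a fixed stand-in for
                  the adaptive sample size used in adaptive sampling methods)
    """
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.size)           # random Gaussian direction
        g += (f(x + mu * u) - f(x)) / mu * u      # forward-difference estimate
    return g / num_samples

# Toy usage on a noisy quadratic: f(x) = ||x||^2 + noise, true gradient 2x.
noise_rng = np.random.default_rng(0)
f = lambda x: float(x @ x) + 1e-3 * noise_rng.standard_normal()
x = np.array([1.0, -2.0])
g_hat = gaussian_smoothing_gradient(f, x, mu=1e-3, num_samples=500, rng=1)
```

With enough samples, `g_hat` concentrates around the true gradient `2x = [2, -4]`; the variance of the estimator is what adaptive sampling strategies aim to control at each iteration.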
- Research Organization:
- Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States)
- Sponsoring Organization:
- National Science Foundation (NSF); USDOE Laboratory Directed Research and Development (LDRD) Program; USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR); USDOE Office of Science (SC), Basic Energy Sciences (BES). Scientific User Facilities (SUF)
- Grant/Contract Number:
- AC02-05CH11231; SC0019902
- OSTI ID:
- 2998214
- Journal Information:
- Optimization Methods and Software; ISSN 1055-6788; ISSN 1029-4937
- Publisher:
- Informa UK Limited
- Country of Publication:
- United States
- Language:
- English
Similar Records
Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
Stochastic Trust-Region Algorithm in Random Subspaces with Convergence and Expected Complexity Analyses
Journal Article · 2023 · Mathematical Programming Computation · OSTI ID: 2001234
Derivative-free optimization methods
Journal Article · 2019 · Acta Numerica · OSTI ID: 1545343
Stochastic Trust-Region Algorithm in Random Subspaces with Convergence and Expected Complexity Analyses
Journal Article · 2024 · SIAM Journal on Optimization · OSTI ID: 2480304