A randomized sketching trust-region secant method for low-memory dynamic optimization
- Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
The numerical solution of dynamic optimization problems is often limited by the memory required to store the state trajectory, which is used to evaluate the objective function and its derivatives. Recently, [R. Muthukumar et al., SIAM Journal on Optimization 31(2), pp. 1242–1275 (2021)] introduced a trust-region method for dynamic optimization that employs randomized sketching to compress the state trajectory, resulting in inexact derivative computations. By adaptively learning the sketch rank, the trust-region algorithm achieves rigorous convergence guarantees. Here, we extend this approach to use secant Hessian approximations. Due to the randomness introduced by the sketch, the traditional secant update formulae can produce poor Hessian approximations. In particular, the difference of two gradients, computed from two different sketches, may be inconsistent with the curvature of the objective along the step. To overcome this, we employ a sketched approximation of the Hessian application, in lieu of computing the gradient difference. We numerically demonstrate the improved stability of this approach on an example from PDE-constrained optimization.
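The phenomenon described in the abstract can be illustrated with a minimal, hypothetical sketch (this is not the paper's implementation, and the quadratic model, Gaussian test matrix, and rank are all assumptions for illustration): when two gradients are computed under two independent random sketches, their difference carries sketching noise proportional to the full iterate, whereas a single sketched Hessian application carries noise proportional only to the (small) step.

```python
import numpy as np

# Hypothetical illustration on a quadratic f(x) = 0.5 x^T A x,
# whose exact gradient is A x and exact Hessian is A.
rng = np.random.default_rng(0)
n, r = 50, 10                      # problem size, sketch rank (assumed)
A = rng.standard_normal((n, n))
A = A @ A.T / n                    # SPD "Hessian"

def sketched_matvec(A, v, rng, r):
    """Apply A through a rank-r randomized sketch, A @ (S S^T) v, with a
    Gaussian test matrix S scaled so that E[S S^T] = I. This stands in
    for the compression of the state trajectory; it is not the paper's
    specific sketch."""
    S = rng.standard_normal((A.shape[1], r)) / np.sqrt(r)
    return (A @ S) @ (S.T @ v)

x = rng.standard_normal(n)
s = 0.01 * rng.standard_normal(n)  # small trial step

# Secant vector from two gradients under two *different* sketches:
# the sketching error scales with ||x||, not ||s||.
g0 = sketched_matvec(A, x, rng, r)
g1 = sketched_matvec(A, x + s, rng, r)   # fresh, independent sketch
y_diff = g1 - g0

# Secant vector from one sketched Hessian application to the step:
# the sketching error scales with ||s|| only.
y_hess = sketched_matvec(A, s, rng, r)

exact = A @ s
err_diff = np.linalg.norm(y_diff - exact) / np.linalg.norm(exact)
err_hess = np.linalg.norm(y_hess - exact) / np.linalg.norm(exact)
print(f"gradient-difference error: {err_diff:.2e}")
print(f"sketched Hessian-apply error: {err_hess:.2e}")
```

Either secant vector could then be fed to a standard quasi-Newton update (e.g. BFGS); the point of the toy experiment is that the gradient-difference pair is dominated by inconsistent sketching noise, while the sketched Hessian application remains a usable approximation of the curvature along the step.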
- Research Organization:
- Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
- Sponsoring Organization:
- USDOE National Nuclear Security Administration (NNSA)
- Grant/Contract Number:
- NA0003525
- OSTI ID:
- 2999462
- Report Number(s):
- SAND--2025-12569J; 1742859
- Journal Information:
- Optimization Letters; ISSN 1862-4472; ISSN 1862-4480
- Publisher:
- Springer Nature
- Country of Publication:
- United States
- Language:
- English