Stochastic projective splitting
- Brookhaven National Laboratory (BNL), Upton, NY (United States)
- Rutgers University, Newark, NJ (United States)
- Abstract:
- Here, we present a new, stochastic variant of the projective splitting (PS) family of algorithms for inclusion problems involving the sum of any finite number of maximal monotone operators. This new variant uses a stochastic oracle to evaluate one of the operators, which is assumed to be Lipschitz continuous, and (deterministic) resolvents to process the remaining operators. Our proposal is the first version of PS with such stochastic capabilities. We envision the primary application being machine learning (ML) problems, with the method’s stochastic features facilitating “mini-batch” sampling of datasets. Since it uses a monotone operator formulation, the method can handle not only Lipschitz-smooth loss minimization, but also min–max and noncooperative game formulations, with better convergence properties than the gradient descent-ascent methods commonly applied in such settings. The proposed method can handle any number of constraints and nonsmooth regularizers via projection and proximal operators. We prove almost-sure convergence of the iterates to a solution and a convergence rate result for the expected residual, and close with numerical experiments on a distributionally robust sparse logistic regression problem.
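The abstract combines two standard ingredients: a stochastic (mini-batch) gradient oracle for the smooth loss and a proximal operator for the nonsmooth regularizer. The sketch below illustrates only those ingredients on a sparse logistic regression objective, using a plain proximal stochastic-gradient baseline; it is NOT the paper's projective splitting algorithm, and the function names and step-size choices are the sketch's own assumptions.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding);
    # this is the standard prox used for sparse (l1) regularizers.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def minibatch_logistic_grad(w, X, y, batch, rng):
    # Stochastic oracle: gradient of the logistic loss estimated
    # on a uniformly sampled mini-batch of the dataset.
    idx = rng.choice(X.shape[0], size=batch, replace=False)
    Xb, yb = X[idx], y[idx]
    p = 1.0 / (1.0 + np.exp(-Xb @ w))  # predicted probabilities
    return Xb.T @ (p - yb) / batch

def prox_sgd(X, y, lam=0.05, step=0.2, batch=8, iters=200, seed=0):
    # Hypothetical baseline, not the paper's method: proximal SGD for
    #   min_w  logistic_loss(w; X, y) + lam * ||w||_1
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        g = minibatch_logistic_grad(w, X, y, batch, rng)
        w = prox_l1(w - step * g, step * lam)  # gradient step, then prox
    return w
```

In the paper's setting the stochastic oracle evaluates a Lipschitz-continuous monotone operator (which need not be a gradient, enabling the min-max formulations mentioned above), while resolvents handle the remaining operators; the sketch shows only the minimization special case.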
- Research Organization:
- Brookhaven National Laboratory (BNL), Upton, NY (United States)
- Sponsoring Organization:
- USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
- Grant/Contract Number:
- SC0012704
- OSTI ID:
- 2007522
- Report Number(s):
- BNL--224851-2023-JAAM
- Journal Information:
- Computational Optimization and Applications, Vol. 87, Issue 2; ISSN 0926-6003
- Publisher:
- Springer
- Country of Publication:
- United States
- Language:
- English