A deterministic gradient-based approach to avoid saddle points
Abstract
Loss functions with a large number of saddle points are one of the major obstacles to training modern machine learning (ML) models efficiently. First-order methods such as gradient descent (GD) are usually the methods of choice for training ML models; however, these methods converge to saddle points for certain choices of initial guesses. In this paper, we propose a modification of the recently proposed Laplacian smoothing gradient descent (LSGD) [Osher et al., arXiv:1806.06317], called modified LSGD (mLSGD), and demonstrate its potential to avoid saddle points without sacrificing the convergence rate. Our analysis is based on the attraction region, formed by all starting points for which the considered numerical scheme converges to a saddle point. We investigate the attraction region's dimension both analytically and numerically. For a canonical class of quadratic functions, we show that the dimension of the attraction region for mLSGD is $\lfloor (n-1)/2 \rfloor$, and hence significantly smaller than that of GD, whose dimension is $n-1$.
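To make the baseline method concrete, below is a minimal sketch of the original LSGD scheme of Osher et al., in which the gradient is preconditioned by $(I - \sigma\Delta)^{-1}$, with $\Delta$ the 1D discrete Laplacian under periodic boundary conditions; since that operator is circulant, the solve can be done via the FFT. This is the base scheme only, not the authors' modified variant (mLSGD), whose exact modification is not described in the abstract; the step size `lr`, smoothing parameter `sigma`, and the toy quadratic saddle are illustrative assumptions.

```python
import numpy as np

def lsgd_step(x, grad_f, lr=0.1, sigma=1.0):
    """One step of Laplacian smoothing gradient descent (LSGD).

    The gradient is preconditioned by (I - sigma * Delta)^{-1}, where Delta
    is the 1D discrete Laplacian with periodic boundary conditions. Because
    the operator is circulant, the linear solve reduces to a division in
    Fourier space: its eigenvalues are 1 + 4*sigma*sin^2(pi*k/n).
    """
    g = grad_f(x)
    n = g.size
    # Eigenvalues of I - sigma * Delta in the discrete Fourier basis.
    eig = 1.0 + 4.0 * sigma * np.sin(np.pi * np.arange(n) / n) ** 2
    g_smooth = np.fft.ifft(np.fft.fft(g) / eig).real
    return x - lr * g_smooth

# Toy quadratic with a saddle at the origin: f(x) = 0.5 * x^T diag(D) x,
# where D has mixed signs, so the origin has a negative-curvature direction.
D = np.array([1.0, 1.0, -1.0])
grad_f = lambda x: D * x

x = np.array([1.0, 1.0, 0.5])
for _ in range(200):
    x = lsgd_step(x, grad_f)
print(x)  # the iterate grows along the negative-curvature direction, escaping the saddle
```

Plain GD is recovered by taking `sigma = 0`, which makes the Fourier-space eigenvalues identically one; the attraction-region results in the paper compare how the smoothing changes the set of initializations that still converge to the saddle.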
- Research Organization:
- Hysitron, Inc., Minneapolis, MN (United States); Purdue Univ., West Lafayette, IN (United States)
- Sponsoring Organization:
- USDOE Office of Science (SC)
- DOE Contract Number:
- SC0002722; SC0021142
- OSTI ID:
- 2419645
- Journal Information:
- European Journal of Applied Mathematics, Vol. 34, Issue 4; ISSN 0956-7925
- Publisher:
- Cambridge University Press
- Country of Publication:
- United States
- Language:
- English