Train Like a (Var)Pro: Efficient Training of Neural Networks with Variable Projection
Journal Article · SIAM Journal on Mathematics of Data Science
- Emory University, Atlanta, GA (United States)
- Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
Deep neural networks (DNNs) have achieved state-of-the-art performance across a variety of traditional machine learning tasks, e.g., speech recognition, image classification, and segmentation. The ability of DNNs to efficiently approximate high-dimensional functions has also motivated their use in scientific applications, e.g., to solve partial differential equations and to generate surrogate models. In this paper, we consider the supervised training of DNNs, which arises in many of the above applications. We focus on the central problem of optimizing the weights of the given DNN such that it accurately approximates the relation between observed input and target data. Devising effective solvers for this optimization problem is notoriously challenging due to the large number of weights, nonconvexity, data sparsity, and nontrivial choice of hyperparameters. To solve the optimization problem more efficiently, we propose the use of variable projection (VarPro), a method originally designed for separable nonlinear least-squares problems. Our main contribution is the Gauss--Newton VarPro method (GNvpro) that extends the reach of the VarPro idea to nonquadratic objective functions, most notably cross-entropy loss functions arising in classification. These extensions make GNvpro applicable to all training problems that involve a DNN whose last layer is an affine mapping, which is common in many state-of-the-art architectures. In our four numerical experiments from surrogate modeling, segmentation, and classification, GNvpro solves the optimization problem more efficiently than commonly used stochastic gradient descent (SGD) schemes. Finally, GNvpro finds solutions that generalize well to unseen data points, and in all but one example they generalize better than those obtained with well-tuned SGD methods.
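The separable structure the abstract refers to can be made concrete with a small sketch. Below is a minimal NumPy illustration of the classical VarPro idea for a network whose last layer is affine and whose loss is quadratic: for fixed hidden-layer weights theta, the optimal last-layer weights follow from a linear least-squares solve, and only the reduced objective in theta is optimized in the outer loop. The function names, the single tanh hidden layer, the ridge stabilization, and the finite-difference outer step are illustrative assumptions; this is not the paper's GNvpro implementation, which uses a Gauss--Newton scheme for the outer problem and also handles nonquadratic losses such as cross-entropy.

```python
import numpy as np

# Problem sizes (assumed, illustrative)
d_in, width, n = 2, 8, 200

def features(theta, X):
    """Nonlinear feature map phi(x; theta): one tanh hidden layer."""
    W1 = theta[:width * d_in].reshape(width, d_in)
    b1 = theta[width * d_in:].reshape(width, 1)
    return np.tanh(W1 @ X + b1)                         # shape (width, n_samples)

def reduced_loss(theta, X, Y):
    """VarPro reduced objective: eliminate the affine last layer in closed form."""
    Phi = features(theta, X)
    Phi1 = np.vstack([Phi, np.ones((1, X.shape[1]))])   # row of ones models the bias
    # Optimal last-layer weights for this theta (small ridge term for stability)
    W_opt = Y @ Phi1.T @ np.linalg.pinv(Phi1 @ Phi1.T + 1e-8 * np.eye(Phi1.shape[0]))
    R = W_opt @ Phi1 - Y                                 # residual of the projected problem
    return 0.5 * np.sum(R ** 2), W_opt

# Toy regression data
rng = np.random.default_rng(0)
X = rng.standard_normal((d_in, n))
Y = np.sin(X[:1, :]) + 0.1 * rng.standard_normal((1, n))

# Outer loop over the hidden weights only; a crude finite-difference gradient
# step stands in here for the Gauss--Newton step used by GNvpro.
theta = 0.1 * rng.standard_normal(width * d_in + width)
eps, lr = 1e-6, 1e-2
for _ in range(200):
    base, _ = reduced_loss(theta, X, Y)
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        t = theta.copy()
        t[i] += eps
        grad[i] = (reduced_loss(t, X, Y)[0] - base) / eps
    theta -= lr * grad

final_loss, W_final = reduced_loss(theta, X, Y)
print(f"reduced loss after training: {final_loss:.4f}")
```

Eliminating the linear weights exactly at every outer iteration shrinks the search space to the hidden-layer weights, which is the mechanism the abstract credits for solving the training problem more efficiently than SGD-type schemes.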
- Research Organization: Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
- Sponsoring Organization: USDOE National Nuclear Security Administration (NNSA)
- Grant/Contract Number: NA0003525
- OSTI ID: 1834344
- Report Number(s): SAND-2020-8481J; 689974
- Journal Information: SIAM Journal on Mathematics of Data Science, Vol. 3, Issue 4; ISSN 2577-0187
- Publisher: Society for Industrial and Applied Mathematics (SIAM)
- Country of Publication: United States
- Language: English
Similar Records
- Improving Deep Neural Networks’ Training for Image Classification With Nonlinear Conjugate Gradient-Style Adaptive Momentum · Journal Article · IEEE Transactions on Neural Networks and Learning Systems (2023) · OSTI ID: 2280651
- Multiobjective Hyperparameter Optimization for Deep Learning Interatomic Potential Training Using NSGA-II · Conference (2023) · OSTI ID: 1996670
- Training Spiking Neural Networks with Synaptic Plasticity under Integer Representation · Conference (2021) · OSTI ID: 1876349