U.S. Department of Energy
Office of Scientific and Technical Information

Layer-Parallel Training of Deep Residual Neural Networks

Journal Article · SIAM Journal on Mathematics of Data Science
DOI:https://doi.org/10.1137/19M1247620· OSTI ID:1618082
Günther, Stefanie [1]; Ruthotto, Lars [2]; Schroder, Jacob B. [3]; Cyr, Eric C. [4]; Gauger, Nicolas R. [1]
  1. Univ. of Kaiserslautern (Germany)
  2. Emory Univ., Atlanta, GA (United States)
  3. Univ. of New Mexico, Albuquerque, NM (United States)
  4. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
Residual neural networks (ResNets) are a promising class of deep neural networks that have shown excellent performance for a number of learning tasks, e.g., image classification and recognition. Mathematically, ResNet architectures can be interpreted as forward Euler discretizations of a nonlinear initial value problem whose time-dependent control variables represent the weights of the neural network. Hence, training a ResNet can be cast as an optimal control problem of the associated dynamical system. For similar time-dependent optimal control problems arising in engineering applications, parallel-in-time methods have shown notable improvements in scalability. This paper demonstrates the use of those techniques for efficient and effective training of ResNets. The proposed algorithms replace the classical (sequential) forward and backward propagation through the network layers with a parallel nonlinear multigrid iteration applied to the layer domain. This adds a new dimension of parallelism across layers that is attractive when training very deep networks. From this basic idea, we derive multiple layer-parallel methods. The most efficient version employs a simultaneous optimization approach where updates to the network parameters are based on inexact gradient information in order to speed up the training process. Finally, using numerical examples from supervised classification, we demonstrate that the new approach achieves a training performance similar to that of traditional methods, but enables layer-parallelism and thus provides speedup over layer-serial methods through greater concurrency.
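As a reading aid, the dynamical-systems interpretation sketched in the abstract can be written out as follows; the notation (hidden state u, layer weights θ, step size h, layer function F, loss L) is illustrative and assumed here rather than taken from the record itself:

\begin{align*}
  u_{l+1} &= u_l + h\,F(u_l, \theta_l), \qquad l = 0,\dots,L-1
    && \text{(ResNet block = one forward Euler step)}\\
  \dot{u}(t) &= F\bigl(u(t), \theta(t)\bigr), \qquad u(0) = u_{\mathrm{in}}
    && \text{(continuous-depth initial value problem)}\\
  \min_{\theta}\;& \mathcal{L}\bigl(u(T)\bigr)
    \quad \text{s.t.} \quad \dot{u} = F(u, \theta),\; u(0) = u_{\mathrm{in}}
    && \text{(training as optimal control)}
\end{align*}

The layer-parallel methods described in the abstract replace the sequential sweep over l = 0, ..., L-1 in the first line (and the corresponding backward sweep for the gradient) with a parallel nonlinear multigrid iteration over the layer index.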
Research Organization:
Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
Sponsoring Organization:
National Science Foundation (NSF); USDOE National Nuclear Security Administration (NNSA); USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
Grant/Contract Number:
AC04-94AL85000; NA0003525
OSTI ID:
1618082
Report Number(s):
SAND2019-12660J; 680497
Journal Information:
SIAM Journal on Mathematics of Data Science, Vol. 2, Issue 1; ISSN 2577-0187
Publisher:
Society for Industrial and Applied Mathematics (SIAM)
Country of Publication:
United States
Language:
English


Similar Records

TorchBraid: High-Performance Layer-Parallel Training of Deep Neural Networks with MPI and GPU Acceleration
Journal Article · 2025 · ACM Transactions on Mathematical Software · OSTI ID:3005462

Train Like a (Var)Pro: Efficient Training of Neural Networks with Variable Projection
Journal Article · 2021 · SIAM Journal on Mathematics of Data Science · OSTI ID:1834344

An introduction to neural networks: A tutorial
Conference · 1994 · OSTI ID:482047