OSTI.GOV U.S. Department of Energy
Office of Scientific and Technical Information

Title: Layer-Parallel Training of Deep Residual Neural Networks

Abstract

Residual neural networks (ResNets) are a promising class of deep neural networks that have shown excellent performance for a number of learning tasks, e.g., image classification and recognition. Mathematically, ResNet architectures can be interpreted as forward Euler discretizations of a nonlinear initial value problem whose time-dependent control variables represent the weights of the neural network. Hence, training a ResNet can be cast as an optimal control problem of the associated dynamical system. For similar time-dependent optimal control problems arising in engineering applications, parallel-in-time methods have shown notable improvements in scalability. This paper demonstrates the use of those techniques for efficient and effective training of ResNets. The proposed algorithms replace the classical (sequential) forward and backward propagation through the network layers with a parallel nonlinear multigrid iteration applied to the layer domain. This adds a new dimension of parallelism across layers that is attractive when training very deep networks. From this basic idea, we derive multiple layer-parallel methods. The most efficient version employs a simultaneous optimization approach where updates to the network parameters are based on inexact gradient information in order to speed up the training process. Finally, using numerical examples from supervised classification, we demonstrate that the new approach achieves a training performance similar to that of traditional methods, but enables layer-parallelism and thus provides speedup over layer-serial methods through greater concurrency.
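
As a concrete illustration of the ODE interpretation described in the abstract, the short Python sketch below (not from the paper; the tanh nonlinearity, layer width, and step size h are illustrative assumptions) shows how a ResNet forward pass is exactly forward Euler time stepping u_{k+1} = u_k + h * f(u_k, theta_k) of the initial value problem du/dt = f(u, theta(t)).

    # Minimal sketch, assuming a tanh layer nonlinearity and a fixed step size h
    # (illustrative choices, not the specific architecture used in the paper).
    import numpy as np

    def f(u, W, b):
        # Layer dynamics f(u, theta) with per-layer parameters theta = (W, b).
        return np.tanh(u @ W + b)

    def resnet_forward(u0, weights, biases, h=0.1):
        # Serial layer-by-layer propagation: u_{k+1} = u_k + h * f(u_k, W_k, b_k).
        # This is the sequential sweep that the paper's layer-parallel multigrid
        # iteration replaces with concurrent work across the layer (time) domain.
        u = u0
        for W, b in zip(weights, biases):
            u = u + h * f(u, W, b)   # one residual block == one forward Euler step
        return u

    # Example with illustrative sizes: 4 samples, feature width 8, 16 layers.
    rng = np.random.default_rng(0)
    u0 = rng.standard_normal((4, 8))
    weights = [0.1 * rng.standard_normal((8, 8)) for _ in range(16)]
    biases = [np.zeros(8) for _ in range(16)]
    features = resnet_forward(u0, weights, biases)

In the optimal control view, the per-layer parameters (W_k, b_k) play the role of the time-dependent control theta(t), so training becomes a control problem constrained by this discretized dynamical system; that structure is what makes parallel-in-time (multigrid) solvers applicable across the layer dimension.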

Authors:
Günther, Stefanie [1]; Ruthotto, Lars [2]; Schroder, Jacob B. [3]; Cyr, Eric C. [4]; Gauger, Nicolas R. [1]
  1. Univ. of Kaiserslautern (Germany)
  2. Emory Univ., Atlanta, GA (United States)
  3. Univ. of New Mexico, Albuquerque, NM (United States)
  4. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
Publication Date:
January 2020
Research Org.:
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR); USDOE National Nuclear Security Administration (NNSA); National Science Foundation (NSF)
OSTI Identifier:
1618082
Report Number(s):
SAND-2019-12660J
Journal ID: ISSN 2577-0187; 680497
Grant/Contract Number:  
AC04-94AL85000; NA0003525; DMS 1522599; DMS 1751636
Resource Type:
Journal Article: Accepted Manuscript
Journal Name:
SIAM Journal on Mathematics of Data Science
Additional Journal Information:
Journal Volume: 2; Journal Issue: 1; Journal ID: ISSN 2577-0187
Country of Publication:
United States
Language:
English
Subject:
deep learning; residual networks; supervised learning; optimal control; layer-parallelization; parallel-in-time; simultaneous optimization

Citation Formats

Günther, Stefanie, Ruthotto, Lars, Schroder, Jacob B., Cyr, Eric C., and Gauger, Nicolas R. Layer-Parallel Training of Deep Residual Neural Networks. United States: N. p., 2020. Web. doi:10.1137/19M1247620.
Günther, Stefanie, Ruthotto, Lars, Schroder, Jacob B., Cyr, Eric C., & Gauger, Nicolas R. Layer-Parallel Training of Deep Residual Neural Networks. United States. doi:10.1137/19M1247620.
Günther, Stefanie, Ruthotto, Lars, Schroder, Jacob B., Cyr, Eric C., and Gauger, Nicolas R. 2020. "Layer-Parallel Training of Deep Residual Neural Networks". United States. doi:10.1137/19M1247620.
@article{osti_1618082,
title = {Layer-Parallel Training of Deep Residual Neural Networks},
author = {Günther, Stefanie and Ruthotto, Lars and Schroder, Jacob B. and Cyr, Eric C. and Gauger, Nicolas R.},
abstractNote = {Residual neural networks (ResNets) are a promising class of deep neural networks that have shown excellent performance for a number of learning tasks, e.g., image classification and recognition. Mathematically, ResNet architectures can be interpreted as forward Euler discretizations of a nonlinear initial value problem whose time-dependent control variables represent the weights of the neural network. Hence, training a ResNet can be cast as an optimal control problem of the associated dynamical system. For similar time-dependent optimal control problems arising in engineering applications, parallel-in-time methods have shown notable improvements in scalability. This paper demonstrates the use of those techniques for efficient and effective training of ResNets. The proposed algorithms replace the classical (sequential) forward and backward propagation through the network layers with a parallel nonlinear multigrid iteration applied to the layer domain. This adds a new dimension of parallelism across layers that is attractive when training very deep networks. From this basic idea, we derive multiple layer-parallel methods. The most efficient version employs a simultaneous optimization approach where updates to the network parameters are based on inexact gradient information in order to speed up the training process. Finally, using numerical examples from supervised classification, we demonstrate that the new approach achieves a training performance similar to that of traditional methods, but enables layer-parallelism and thus provides speedup over layer-serial methods through greater concurrency.},
doi = {10.1137/19M1247620},
journal = {SIAM Journal on Mathematics of Data Science},
issn = {2577-0187},
number = 1,
volume = 2,
place = {United States},
year = {2020},
month = {1}
}

Journal Article:
Free Publicly Available Full Text
This content will become publicly available on January 1, 2021