Improving scalability of parallel CNN training by adaptively adjusting parameter update frequency
Journal Article · Journal of Parallel and Distributed Computing
Northwestern Univ., Evanston, IL (United States)
Synchronous SGD with data parallelism, the most popular parallelization strategy for CNN training, suffers from the expensive communication cost of averaging gradients among all workers. SGD's iterative parameter updates cause frequent communication, which becomes the performance bottleneck. In this paper, we propose a lazy parameter update algorithm that adaptively adjusts the parameter update frequency to address this communication cost. Our algorithm keeps accumulating gradients as long as the difference between the accumulated gradients and the latest gradients is sufficiently small; the less frequent parameter updates reduce the per-iteration communication cost while maintaining the model accuracy. Our experimental results demonstrate that the lazy update method markedly improves scalability while maintaining model accuracy. For ResNet50 training on ImageNet, the proposed algorithm achieves a significantly higher speedup (739.6 on 2048 Cori KNL nodes) than vanilla synchronous SGD (276.6), while the model accuracy is almost unaffected (<0.2% difference).
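The decision rule described in the abstract can be sketched as follows. This is a minimal single-worker sketch, not the paper's implementation: the function name `lazy_update_step`, the normalized-difference criterion, and the `threshold` parameter are all assumptions for illustration. In a real data-parallel run, a "flush" would trigger the expensive gradient allreduce, while a "defer" keeps accumulating locally and skips that communication.

```python
import numpy as np

def lazy_update_step(accumulated, latest, threshold):
    """Decide whether to defer the communication-heavy parameter update.

    If the latest gradient points in nearly the same direction as the
    running accumulation (normalized difference below `threshold`), keep
    accumulating locally and skip the allreduce this iteration; otherwise
    flush the accumulated gradients in one global update.
    """
    new_accum = accumulated + latest
    # Normalized distance between the accumulated direction and the
    # latest gradient direction (the exact criterion is an assumption).
    diff = np.linalg.norm(
        accumulated / max(1e-12, np.linalg.norm(accumulated))
        - latest / max(1e-12, np.linalg.norm(latest))
    )
    if diff < threshold:
        return new_accum, False          # defer: no communication
    return np.zeros_like(accumulated), True  # flush: allreduce + update
```

When gradients are stable between iterations, the difference stays small and updates are deferred, which is where the reduction in per-iteration communication cost comes from.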
- Research Organization:
- Northwestern Univ., Evanston, IL (United States)
- Sponsoring Organization:
- USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
- Grant/Contract Number:
- SC0019358; SC0021399
- OSTI ID:
- 1864734
- Alternate ID(s):
- OSTI ID: 1823492
- Journal Information:
- Journal of Parallel and Distributed Computing, Vol. 159; ISSN 0743-7315
- Publisher:
- Elsevier
- Country of Publication:
- United States
- Language:
- English
Similar Records
Scaling Deep Learning on GPU and Knights Landing clusters
Journal Article · 2017 · International Conference for High Performance Computing, Networking, Storage and Analysis (Online) · OSTI ID: 1398518
Scaling deep learning on GPU and knights landing clusters
Journal Article · 2016 · International Conference for High Performance Computing, Networking, Storage and Analysis · OSTI ID: 1439212