OSTI.GOV | U.S. Department of Energy
Office of Scientific and Technical Information

Title: SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

Abstract

Going deeper and wider in neural architectures improves their accuracy, but the limited GPU DRAM places an undesired restriction on the network design domain. Deep learning (DL) practitioners must either switch to less desired network architectures or nontrivially dissect a network across multiple GPUs. Both workarounds distract DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime that enables network training far beyond the GPU DRAM capacity. SuperNeurons features three memory optimizations, Liveness Analysis, Unified Tensor Pool, and Cost-Aware Recomputation; together they effectively reduce the network-wide peak memory usage down to the maximal memory usage among layers. We also address the performance issues in these memory-saving techniques. Given the limited GPU DRAM, SuperNeurons not only provisions the necessary memory for training, but also dynamically allocates memory for convolution workspaces to achieve high performance. Evaluations against Caffe, Torch, MXNet, and TensorFlow demonstrate that SuperNeurons trains at least 3.2432× deeper networks than current ones with the leading performance. In particular, SuperNeurons can train ResNet2500, which has 10^4 basic network layers, on a 12GB K40c.
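To make the abstract's central claim concrete, the short Python sketch below simulates, with hypothetical per-layer footprints, how the peak GPU memory drops from the sum of all layers' tensors to roughly the largest single-layer footprint once forward outputs are released or offloaded outside their GPU-resident use, the combined effect the abstract attributes to Liveness Analysis, the Unified Tensor Pool, and Cost-Aware Recomputation. This is an illustrative sketch, not the SuperNeurons runtime; the layer sizes and the offload policy are hypothetical stand-ins for the runtime's decisions.

# Toy illustration only, not the SuperNeurons implementation.

def peak_keep_all(layer_mb):
    # Baseline: every forward output stays GPU-resident until its backward
    # step, so the peak equals the sum of all per-layer footprints.
    return sum(layer_mb)

def peak_with_offload(layer_mb):
    # Offload each forward output to host memory right after it is produced
    # and bring it back just before its backward step; at any instant only
    # the tensor of the layer currently executing is GPU-resident.
    peak = 0
    for mb in layer_mb:                # forward pass
        peak = max(peak, mb)
    for mb in reversed(layer_mb):      # backward pass, tensors fetched back one at a time
        peak = max(peak, mb)
    return peak

if __name__ == "__main__":
    # Hypothetical 500-layer network, footprint in MB per layer.
    layers = [800, 600, 600, 400, 200] * 100
    print("peak, keep everything resident:", peak_keep_all(layers), "MB")
    print("peak, offload between passes:  ", peak_with_offload(layers), "MB")

In this toy setting the keep-everything policy peaks at 260,000 MB while the offload policy peaks at 800 MB, mirroring the reduction of the network-wide peak memory to the maximal per-layer usage described above.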

Authors:
 Wang, Linnan [1]; Ye, Jinmian [2]; Zhao, Yiyang [2]; Wu, Wei [3]; Li, Ang [4]; Song, Shuaiwen [4]; Xu, Zenglin [2]; Kraska, Tim [5]
  1. Brown University
  2. University of Electronic Science and Technology of China
  3. Los Alamos National Laboratory
  4. Pacific Northwest National Laboratory (PNNL)
  5. Massachusetts Institute of Technology
Publication Date:
February 2018
Research Org.:
Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1525779
Report Number(s):
PNNL-SA-143407
DOE Contract Number:  
AC05-76RL01830
Resource Type:
Conference
Resource Relation:
Conference: Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP 2018), February 24-28, 2018, Vienna, Austria
Country of Publication:
United States
Language:
English

Citation Formats

Wang, Linnan, Ye, Jinmian, Zhao, Yiyang, Wu, Wei, Li, Ang, Song, Shuaiwen, Xu, Zenglin, and Kraska, Tim. SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks. United States: N. p., 2018. Web. doi:10.1145/3178487.3178491.
Wang, Linnan, Ye, Jinmian, Zhao, Yiyang, Wu, Wei, Li, Ang, Song, Shuaiwen, Xu, Zenglin, & Kraska, Tim. SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks. United States. doi:10.1145/3178487.3178491.
Wang, Linnan, Ye, Jinmian, Zhao, Yiyang, Wu, Wei, Li, Ang, Song, Shuaiwen, Xu, Zenglin, and Kraska, Tim. 2018. "SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks". United States. doi:10.1145/3178487.3178491.
@inproceedings{osti_1525779,
title = {SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks},
author = {Wang, Linnan and Ye, Jinmian and Zhao, Yiyang and Wu, Wei and Li, Ang and Song, Shuaiwen and Xu, Zenglin and Kraska, Tim},
abstractNote = {Going deeper and wider in neural architectures improves their accuracy, but the limited GPU DRAM places an undesired restriction on the network design domain. Deep learning (DL) practitioners must either switch to less desired network architectures or nontrivially dissect a network across multiple GPUs. Both workarounds distract DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime that enables network training far beyond the GPU DRAM capacity. SuperNeurons features three memory optimizations, Liveness Analysis, Unified Tensor Pool, and Cost-Aware Recomputation; together they effectively reduce the network-wide peak memory usage down to the maximal memory usage among layers. We also address the performance issues in these memory-saving techniques. Given the limited GPU DRAM, SuperNeurons not only provisions the necessary memory for training, but also dynamically allocates memory for convolution workspaces to achieve high performance. Evaluations against Caffe, Torch, MXNet, and TensorFlow demonstrate that SuperNeurons trains at least 3.2432× deeper networks than current ones with the leading performance. In particular, SuperNeurons can train ResNet2500, which has 10^4 basic network layers, on a 12GB K40c.},
doi = {10.1145/3178487.3178491},
booktitle = {Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP 2018)},
place = {United States},
year = {2018},
month = {2}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
