OSTI.GOV | U.S. Department of Energy
Office of Scientific and Technical Information

Title: Scaling Deep Learning Workloads: NVIDIA DGX-1/Pascal and Intel Knights Landing

Abstract

Deep Learning (DL) algorithms have become ubiquitous in data analytics. As a result, major computing vendors, including NVIDIA, Intel, AMD, and IBM, have architectural roadmaps influenced by DL workloads, and several vendors have recently advertised new computing products as accelerating DL workloads. Unfortunately, it is difficult for data scientists to quantify the potential of these different products. This paper provides a performance and power analysis of important DL workloads on two major parallel architectures: NVIDIA DGX-1 (eight Pascal P100 GPUs interconnected with NVLink) and Intel Knights Landing (KNL) CPUs interconnected with Intel Omni-Path. Our evaluation consists of a cross-section of convolutional neural network workloads: the CifarNet, CaffeNet, AlexNet, and GoogLeNet topologies using the Cifar10 and ImageNet datasets. The workloads are vendor-optimized for each architecture. Our analysis indicates that although GPUs provide the highest overall raw performance, the gap can close for some convolutional networks, and KNL can be competitive when considering performance per watt. Furthermore, NVLink is critical to GPU scaling.
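The performance-per-watt and scaling comparisons in the abstract reduce to two simple ratios. As a minimal sketch (not the paper's actual tooling; the function names and example numbers below are hypothetical placeholders, not results from the paper), such metrics can be derived from measured training throughput and average board power as follows:

# Illustrative metrics for the comparison described in the abstract.
# All names and numbers are hypothetical placeholders, not results
# reported in the paper.

def images_per_sec_per_watt(images_per_sec: float, avg_power_watts: float) -> float:
    """Energy efficiency: training throughput normalized by average power."""
    return images_per_sec / avg_power_watts

def scaling_efficiency(throughput_one: float, throughput_n: float, n_devices: int) -> float:
    """Fraction of ideal linear speedup achieved when scaling to n_devices."""
    return (throughput_n / throughput_one) / n_devices

if __name__ == "__main__":
    # Hypothetical throughputs (images/sec) and average power (watts).
    print(images_per_sec_per_watt(1500.0, 300.0))  # 5.0 images/sec/W
    print(scaling_efficiency(1500.0, 9600.0, 8))   # 0.8, i.e., 80% of linear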

Authors:
Gawande, Nitin A.; Landwehr, Joshua B.; Daily, Jeffrey A.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.
Publication Date:
July 3, 2017
Research Org.:
Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1373860
Report Number(s):
PNNL-SA-124349
KJ0402000
DOE Contract Number:  
AC05-76RL01830
Resource Type:
Conference
Resource Relation:
Conference: IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW 2017), May 29-June 2, 2017, Orlando, Florida, pp. 399-408
Country of Publication:
United States
Language:
English

Citation Formats

Gawande, Nitin A., Landwehr, Joshua B., Daily, Jeffrey A., Tallent, Nathan R., Vishnu, Abhinav, and Kerbyson, Darren J. Scaling Deep Learning Workloads: NVIDIA DGX-1/Pascal and Intel Knights Landing. United States: N. p., 2017. Web. doi:10.1109/IPDPSW.2017.36.
Gawande, Nitin A., Landwehr, Joshua B., Daily, Jeffrey A., Tallent, Nathan R., Vishnu, Abhinav, & Kerbyson, Darren J. Scaling Deep Learning Workloads: NVIDIA DGX-1/Pascal and Intel Knights Landing. United States. doi:10.1109/IPDPSW.2017.36.
Gawande, Nitin A., Landwehr, Joshua B., Daily, Jeffrey A., Tallent, Nathan R., Vishnu, Abhinav, and Kerbyson, Darren J. 2017. "Scaling Deep Learning Workloads: NVIDIA DGX-1/Pascal and Intel Knights Landing". United States. doi:10.1109/IPDPSW.2017.36.
@inproceedings{osti_1373860,
title = {Scaling Deep Learning Workloads: NVIDIA DGX-1/Pascal and Intel Knights Landing},
author = {Gawande, Nitin A. and Landwehr, Joshua B. and Daily, Jeffrey A. and Tallent, Nathan R. and Vishnu, Abhinav and Kerbyson, Darren J.},
abstractNote = {Deep Learning (DL) algorithms have become ubiquitous in data analytics. As a result, major computing vendors, including NVIDIA, Intel, AMD, and IBM, have architectural roadmaps influenced by DL workloads, and several vendors have recently advertised new computing products as accelerating DL workloads. Unfortunately, it is difficult for data scientists to quantify the potential of these different products. This paper provides a performance and power analysis of important DL workloads on two major parallel architectures: NVIDIA DGX-1 (eight Pascal P100 GPUs interconnected with NVLink) and Intel Knights Landing (KNL) CPUs interconnected with Intel Omni-Path. Our evaluation consists of a cross-section of convolutional neural network workloads: the CifarNet, CaffeNet, AlexNet, and GoogLeNet topologies using the Cifar10 and ImageNet datasets. The workloads are vendor-optimized for each architecture. Our analysis indicates that although GPUs provide the highest overall raw performance, the gap can close for some convolutional networks, and KNL can be competitive when considering performance per watt. Furthermore, NVLink is critical to GPU scaling.},
booktitle = {IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW 2017)},
doi = {10.1109/IPDPSW.2017.36},
place = {United States},
year = {2017},
month = {jul}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
