Scaling Deep Learning Workloads: NVIDIA DGX-1/Pascal and Intel Knights Landing
Deep Learning (DL) algorithms have become ubiquitous in data analytics. As a result, major computing vendors --- including NVIDIA, Intel, AMD, and IBM --- have architectural roadmaps influenced by DL workloads. Furthermore, several vendors have recently advertised new computing products as accelerating DL workloads. Unfortunately, it is difficult for data scientists to quantify the potential of these different products. This paper provides a performance and power analysis of important DL workloads on two major parallel architectures: NVIDIA DGX-1 (eight Pascal P100 GPUs interconnected with NVLink) and Intel Knights Landing (KNL) CPUs interconnected with Intel Omni-Path. Our evaluation consists of a cross section of convolutional neural network workloads: the CifarNet, CaffeNet, AlexNet, and GoogleNet topologies using the Cifar10 and ImageNet datasets. The workloads are vendor-optimized for each architecture. Our analysis indicates that although GPUs provide the highest overall raw performance, the gap narrows for some convolutional networks, and KNL can be competitive when considering performance per watt. Furthermore, NVLink is critical to GPU scaling.
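As an illustrative sketch (not taken from the paper), a performance-per-watt comparison like the one described above reduces to simple arithmetic over measured training throughput and average power draw. The platform names mirror the two systems evaluated; the numeric values are hypothetical placeholders, not measurements reported in the paper:

```python
# Hypothetical throughput (images/sec) and average power (watts) per platform.
# These numbers are illustrative placeholders, NOT results from the paper.
measurements = {
    "DGX-1 (8x P100)": {"images_per_sec": 5000.0, "power_w": 3200.0},
    "KNL cluster": {"images_per_sec": 1800.0, "power_w": 900.0},
}

def perf_per_watt(images_per_sec: float, power_w: float) -> float:
    """Images processed per second per watt of average power drawn."""
    return images_per_sec / power_w

for name, m in measurements.items():
    ppw = perf_per_watt(m["images_per_sec"], m["power_w"])
    print(f"{name}: {ppw:.2f} images/s/W")
```

With these placeholder numbers the KNL cluster comes out ahead on images/s/W despite lower raw throughput, which is the kind of trade-off the paper's energy-efficiency analysis examines.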
- Research Organization:
- Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
- Sponsoring Organization:
- USDOE
- DOE Contract Number:
- AC05-76RL01830
- OSTI ID:
- 1373860
- Report Number(s):
- PNNL-SA-124349; KJ0402000
- Resource Relation:
- Conference: IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW 2017), May 29 - June 2, 2017, Orlando, Florida, 399-408
- Country of Publication:
- United States
- Language:
- English