OSTI.GOV | U.S. Department of Energy
Office of Scientific and Technical Information

Title: Evaluating Modern GPU Interconnect: PCIe, NVLink, NV-SLI, NVSwitch and GPUDirect

Abstract

High-performance multi-GPU computing has become an inevitable trend due to the ever-increasing demand for computation capability in emerging domains such as deep learning, big data, and planet-scale simulations. However, the lack of a deep understanding of how modern GPUs can be connected, and of the real impact of state-of-the-art interconnect technology on multi-GPU application performance, has become a hurdle. In this paper, we fill the gap by conducting a thorough evaluation of five of the latest types of modern GPU interconnect: PCIe, NVLink-V1, NVLink-V2, NVLink-SLI, and NVSwitch, across six high-end servers and HPC platforms: the NVIDIA P100-DGX-1, V100-DGX-1, and DGX-2, OLCF’s SummitDev and Summit supercomputers, and an SLI-linked system with two NVIDIA Turing RTX-2080 GPUs. Based on the empirical evaluation, we observe four new types of GPU communication network NUMA effects: three are triggered by NVLink’s topology, connectivity, and routing, while one is caused by a PCIe chipset design issue. These observations indicate that, for a multi-GPU application running in a multi-GPU node, choosing the right GPU combination can have a considerable impact on GPU communication efficiency, as well as on the application’s overall performance. Our evaluation can be leveraged in building practical multi-GPU performance models, which are vital for GPU task allocation, scheduling, and migration in a shared environment (e.g., AI clouds and HPC centers), as well as for communication-oriented performance tuning.
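
To make the measurement methodology concrete, below is a minimal CUDA sketch of a peer-to-peer bandwidth probe in the spirit of this kind of evaluation; it is not the authors' actual benchmark harness. It times cudaMemcpyPeerAsync transfers between every GPU pair, so pair-dependent (NUMA-like) bandwidth differences surface directly in the output. The 64 MiB payload and 20 repetitions are illustrative assumptions, not values taken from the paper.

#include <cstdio>
#include <cuda_runtime.h>

#define CHECK(call) do { cudaError_t e_ = (call); if (e_ != cudaSuccess) { \
    fprintf(stderr, "CUDA error: %s (%s:%d)\n", cudaGetErrorString(e_), __FILE__, __LINE__); \
    return 1; } } while (0)

int main() {
    const size_t bytes = 64ull << 20;  /* 64 MiB payload (illustrative choice) */
    const int reps = 20;               /* repetitions per pair (illustrative choice) */

    int ngpus = 0;
    CHECK(cudaGetDeviceCount(&ngpus));

    for (int src = 0; src < ngpus; ++src) {
        for (int dst = 0; dst < ngpus; ++dst) {
            if (src == dst) continue;

            /* Can src reach dst's memory directly (NVLink/NVSwitch/PCIe P2P)?
               If not, the copy is staged through host memory, which is exactly
               the kind of pair-dependent asymmetry the paper measures. */
            int peer = 0;
            CHECK(cudaDeviceCanAccessPeer(&peer, src, dst));

            void *sbuf = NULL, *dbuf = NULL;
            CHECK(cudaSetDevice(dst));
            CHECK(cudaMalloc(&dbuf, bytes));
            CHECK(cudaSetDevice(src));
            CHECK(cudaMalloc(&sbuf, bytes));
            if (peer) CHECK(cudaDeviceEnablePeerAccess(dst, 0));

            cudaEvent_t t0, t1;
            CHECK(cudaEventCreate(&t0));
            CHECK(cudaEventCreate(&t1));

            /* Warm up once, then time reps back-to-back async copies. */
            CHECK(cudaMemcpyPeerAsync(dbuf, dst, sbuf, src, bytes, 0));
            CHECK(cudaEventRecord(t0, 0));
            for (int r = 0; r < reps; ++r)
                CHECK(cudaMemcpyPeerAsync(dbuf, dst, sbuf, src, bytes, 0));
            CHECK(cudaEventRecord(t1, 0));
            CHECK(cudaEventSynchronize(t1));

            float ms = 0.0f;
            CHECK(cudaEventElapsedTime(&ms, t0, t1));
            /* bytes*reps over ms milliseconds -> GB/s */
            printf("GPU%d -> GPU%d  peer=%d  %.1f GB/s\n",
                   src, dst, peer, (double)bytes * reps / (ms * 1e6));

            if (peer) CHECK(cudaDeviceDisablePeerAccess(dst));
            CHECK(cudaEventDestroy(t0));
            CHECK(cudaEventDestroy(t1));
            CHECK(cudaFree(sbuf));
            CHECK(cudaFree(dbuf));
        }
    }
    return 0;
}

On a DGX-1-class node, pairs connected by NVLink should report noticeably higher bandwidth than pairs that fall back to PCIe; that pair-to-pair asymmetry is the NUMA effect the abstract describes.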

Authors:
 Li, Ang [1]; Song, Shuaiwen [1]; Chen, Jieyang [1]; Li, Jiajia [1]; Liu, Xu [2]; Tallent, Nathan R. [1]; Barker, Kevin J. [1]
  1. Battelle (Pacific Northwest National Laboratory)
  2. College of William and Mary
Publication Date:
January 2020
Research Org.:
Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1598812
Report Number(s):
PNNL-SA-141707
DOE Contract Number:  
AC05-76RL01830
Resource Type:
Journal Article
Journal Name:
IEEE Transactions on Parallel and Distributed Systems
Additional Journal Information:
Journal Volume: 31; Journal Issue: 1
Country of Publication:
United States
Language:
English
Subject:
Performance Evaluation, GPU, Interconnect, NUMA, PCIe, NVLink, NVSwitch, SLI, GPUDirect, RDMA

Citation Formats

MLA: Li, Ang, Song, Shuaiwen, Chen, Jieyang, Li, Jiajia, Liu, Xu, Tallent, Nathan R., and Barker, Kevin J. Evaluating Modern GPU Interconnect: PCIe, NVLink, NV-SLI, NVSwitch and GPUDirect. United States: N. p., 2020. Web. doi:10.1109/TPDS.2019.2928289.
APA: Li, Ang, Song, Shuaiwen, Chen, Jieyang, Li, Jiajia, Liu, Xu, Tallent, Nathan R., & Barker, Kevin J. (2020). Evaluating Modern GPU Interconnect: PCIe, NVLink, NV-SLI, NVSwitch and GPUDirect. United States. doi:10.1109/TPDS.2019.2928289.
Chicago: Li, Ang, Song, Shuaiwen, Chen, Jieyang, Li, Jiajia, Liu, Xu, Tallent, Nathan R., and Barker, Kevin J. 2020. "Evaluating Modern GPU Interconnect: PCIe, NVLink, NV-SLI, NVSwitch and GPUDirect". United States. doi:10.1109/TPDS.2019.2928289.
@article{osti_1598812,
title = {Evaluating Modern GPU Interconnect: PCIe, NVLink, NV-SLI, NVSwitch and GPUDirect},
author = {Li, Ang and Song, Shuaiwen and Chen, Jieyang and Li, Jiajia and Liu, Xu and Tallent, Nathan R. and Barker, Kevin J.},
abstractNote = {High-performance multi-GPU computing has become an inevitable trend due to the ever-increasing demand for computation capability in emerging domains such as deep learning, big data, and planet-scale simulations. However, the lack of a deep understanding of how modern GPUs can be connected, and of the real impact of state-of-the-art interconnect technology on multi-GPU application performance, has become a hurdle. In this paper, we fill the gap by conducting a thorough evaluation of five of the latest types of modern GPU interconnect: PCIe, NVLink-V1, NVLink-V2, NVLink-SLI, and NVSwitch, across six high-end servers and HPC platforms: the NVIDIA P100-DGX-1, V100-DGX-1, and DGX-2, OLCF’s SummitDev and Summit supercomputers, and an SLI-linked system with two NVIDIA Turing RTX-2080 GPUs. Based on the empirical evaluation, we observe four new types of GPU communication network NUMA effects: three are triggered by NVLink’s topology, connectivity, and routing, while one is caused by a PCIe chipset design issue. These observations indicate that, for a multi-GPU application running in a multi-GPU node, choosing the right GPU combination can have a considerable impact on GPU communication efficiency, as well as on the application’s overall performance. Our evaluation can be leveraged in building practical multi-GPU performance models, which are vital for GPU task allocation, scheduling, and migration in a shared environment (e.g., AI clouds and HPC centers), as well as for communication-oriented performance tuning.},
doi = {10.1109/TPDS.2019.2928289},
journal = {IEEE Transactions on Parallel and Distributed Systems},
number = 1,
volume = 31,
place = {United States},
year = {2020},
month = {1}
}