U.S. Department of Energy
Office of Scientific and Technical Information

Accelerating matrix-centric graph processing on GPUs through bit-level optimizations

Journal Article · Journal of Parallel and Distributed Computing
Author affiliations:
  1. North Carolina State Univ., Raleigh, NC (United States)
  2. Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

Even though it is well known that binary values are common in graph applications (e.g., in adjacency matrices), how to leverage this property for efficiency has not yet been adequately explored. This paper presents a systematic study of how to unlock the potential of bit-level optimizations for graph computations that involve binary values. It proposes a two-level representation named Bit-Block Compressed Sparse Row (B2SR) and presents a series of optimizations of graph operations on B2SR using the intrinsics of modern GPUs. It further introduces Deep Reinforcement Learning (DRL) as an efficient way to configure the bit-level optimizations on the fly; the resulting DQN-based adaptive tile-size selector, with dedicated model training, reaches 68% prediction accuracy. Evaluations on NVIDIA Pascal and Volta GPUs show that the optimizations bring up to 40× and 6555× speedups to the essential GraphBLAS kernels SpMV and SpGEMM, respectively, accelerating GraphBLAS-based BFS by up to 433×, SSSP, PR, and CC by up to 35×, and TC by up to 52×.
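The abstract does not spell out the implementation, but the core bit-level idea (binary tiles packed into machine words so that multiply-accumulate over a binary matrix reduces to bitwise AND plus popcount) can be illustrated with a minimal CUDA sketch. The tile layout, the 8×8 tile size, and all names below (Tile, b2sr_spmv_kernel, x_bits) are assumptions made for illustration, not the paper's actual B2SR format or kernels.

```cuda
// Minimal sketch (assumed layout, not the paper's B2SR implementation):
// each nonzero 8x8 tile of a binary adjacency matrix is packed into one
// 64-bit word, and binary SpMV becomes bitwise AND + popcount per tile row.
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

constexpr int TILE = 8;  // 8x8 bits per tile -> one uint64_t

struct Tile {
    int block_row;   // coarse-level tile row index (CSR-like)
    int block_col;   // coarse-level tile column index
    uint64_t bits;   // bit (r*8 + c) is set if A[r][c] == 1 within this tile
};

// y[block_row*8 + r] += popcount(tile row r AND the x segment of this tile column)
__global__ void b2sr_spmv_kernel(const Tile* tiles, int num_tiles,
                                 const uint8_t* x_bits,  // 8 bits of x per tile column
                                 int* y) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= num_tiles) return;
    Tile tile = tiles[t];
    uint64_t xseg = x_bits[tile.block_col];
    for (int r = 0; r < TILE; ++r) {
        uint64_t row_bits = (tile.bits >> (r * TILE)) & 0xFFull;
        int partial = __popcll(row_bits & xseg);          // AND + popcount
        atomicAdd(&y[tile.block_row * TILE + r], partial);
    }
}

int main() {
    // Toy example: one 8x8 identity tile multiplied by an all-ones x segment.
    Tile h_tile{0, 0, 0};
    for (int r = 0; r < TILE; ++r) h_tile.bits |= (1ull << (r * TILE + r));
    uint8_t h_x = 0xFF;
    int h_y[TILE] = {0};

    Tile* d_tiles; uint8_t* d_x; int* d_y;
    cudaMalloc((void**)&d_tiles, sizeof(Tile));
    cudaMalloc((void**)&d_x, sizeof(uint8_t));
    cudaMalloc((void**)&d_y, sizeof(h_y));
    cudaMemcpy(d_tiles, &h_tile, sizeof(Tile), cudaMemcpyHostToDevice);
    cudaMemcpy(d_x, &h_x, sizeof(uint8_t), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, sizeof(h_y), cudaMemcpyHostToDevice);

    b2sr_spmv_kernel<<<1, 32>>>(d_tiles, 1, d_x, d_y);
    cudaMemcpy(h_y, d_y, sizeof(h_y), cudaMemcpyDeviceToHost);
    for (int r = 0; r < TILE; ++r) printf("y[%d] = %d\n", r, h_y[r]);  // each row sums to 1
    cudaFree(d_tiles); cudaFree(d_x); cudaFree(d_y);
    return 0;
}
```

The sketch only shows why packing binary tiles pays off: one 64-bit AND plus a popcount replaces up to 64 scalar multiply-adds; the paper's actual optimizations, intrinsic choices, and the DRL-driven tile-size selection are described in the full article.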

Research Organization:
Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
Sponsoring Organization:
National Science Foundation (NSF); USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
Grant/Contract Number:
AC05-76RL01830; SC0021293; EE0009357
OSTI ID:
1968852
Alternate ID(s):
OSTI ID: 1962681
Report Number(s):
PNNL-SA-179122
Journal Information:
Journal of Parallel and Distributed Computing, Vol. 177; ISSN 0743-7315
Publisher:
Elsevier
Country of Publication:
United States
Language:
English

Similar Records

Bit-GraphBLAS: Bit-Level Optimizations of Matrix-Centric Graph Processing on GPU
Conference · July 2022 · OSTI ID: 1888819

A Pattern Based Algorithmic Autotuner for Graph Processing on GPUs
Conference · February 2019 · OSTI ID: 1765323

GPU-Centric Communication on NVIDIA GPU Clusters with InfiniBand: A Case Study with OpenSHMEM
Conference · November 2017 · OSTI ID: 1427708