U.S. Department of Energy
Office of Scientific and Technical Information

Sparse Deep Neural Network Inference using different Programming Models

Conference
Sparse deep neural networks have recently gained attention for speeding up inference while reducing memory footprints. However, few real-world applications ship with sparsity-specific optimizations, and a wide variety of DNN tasks remain dense, leaving the advantages of network sparsity unexploited. Recent work presented at the MIT/IEEE/Amazon GraphChallenge has demonstrated significant speedups and a range of techniques. Still, we find limited investigation of how different Python- and C/C++-based programming models perform in the general case. In this work, we evaluate sparse inference implementations written with CuPy, cuSPARSE, and OpenMP, and discuss the advantages and disadvantages of each on single and multiple GPUs of NVIDIA DGX-A100 40GB/80GB platforms.
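The core kernel in this style of sparse DNN inference is a sparse-matrix-times-dense-matrix product (SpMM) per layer, followed by a bias shift and ReLU, as in the GraphChallenge benchmark. A minimal CPU-side sketch using SciPy's CSR format (the same layout CuPy and cuSPARSE consume on GPU) is shown below; the layer sizes, density, and bias value are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)

# Hypothetical layer size, batch size, and weight density for illustration.
n_neurons, batch, density = 64, 8, 0.1

# Sparse weight matrix stored in CSR, the layout cuSPARSE/CuPy SpMM routines use.
W = sparse_random(n_neurons, n_neurons, density=density,
                  format="csr", random_state=0)
bias = -0.3  # illustrative constant bias, as in GraphChallenge-style layers

def sparse_dnn_layer(Y, W, bias):
    """One inference layer: SpMM (dense activations x sparse weights),
    then bias shift and ReLU."""
    Z = (W.T @ Y.T).T + bias   # sparse @ dense yields a dense ndarray
    return np.maximum(Z, 0.0)  # ReLU

Y = rng.random((batch, n_neurons))   # dense activation batch
out = sparse_dnn_layer(Y, W, bias)
print(out.shape)  # (8, 64)
```

On a GPU, the same computation maps almost line-for-line onto `cupyx.scipy.sparse`, which is one reason the paper compares CuPy against hand-written cuSPARSE and OpenMP implementations.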
Research Organization:
Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
Sponsoring Organization:
USDOE
DOE Contract Number:
AC05-76RL01830
OSTI ID:
1903281
Report Number(s):
PNNL-SA-175380
Country of Publication:
United States
Language:
English

Similar Records

Accelerating GNNs on GPU Sparse Tensor Cores through N:M Sparsity-Oriented Graph Reordering
Conference · 2025 · OSTI ID: 2524569

Generic, Sparse Tensor Core for Neural Networks
Conference · 2020 · OSTI ID: 1788374

Fast and Scalable Sparse Triangular Solver for Multi-GPU Based HPC Architectures
Conference · 2021 · OSTI ID: 1830211
