OSTI.GOV — U.S. Department of Energy
Office of Scientific and Technical Information

Title: Sparse Deep Neural Network Inference using different Programming Models

Conference paper

Sparse deep neural networks have recently gained attention for achieving inference speedups with reduced memory footprints. However, few real-world applications ship with sparsity-specialized optimizations, and a wide variety of DNN tasks remain dense, leaving the advantages of network sparsity unexploited. Recent work presented by the MIT/IEEE/Amazon GraphChallenge has demonstrated significant speedups and a variety of techniques. Still, the impact of different Python- and C/C++-based programming models remains underexplored in the general case. In this work, we evaluate the performance of sparse implementations built with CuPy, cuSPARSE, and OpenMP, and discuss their advantages and disadvantages on single and multiple GPUs of NVIDIA DGX-A100 40GB/80GB platforms.
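At its core, GraphChallenge-style sparse DNN inference is a repeated sparse matrix multiply followed by a uniform bias, ReLU, and a clamp on activations. A minimal CPU sketch using SciPy (standing in for the CuPy/cuSPARSE GPU implementations the paper evaluates; the bias and clamp values here are illustrative, not the benchmark's) might look like:

```python
import numpy as np
import scipy.sparse as sp

def sparse_dnn_inference(Y0, weights, bias=-0.3, ymax=32.0):
    """Feed-forward inference with sparse weight matrices.

    Y0      : dense input feature matrix (converted to CSR internally)
    weights : list of scipy.sparse CSR weight matrices, one per layer
    bias    : uniform (negative) bias; because it is negative, entries
              that are structurally zero stay zero after ReLU, so it is
              enough to add it to the stored nonzeros only
    ymax    : clamp on activations to keep values bounded
    """
    Y = sp.csr_matrix(Y0)
    for W in weights:
        Z = Y @ W                          # sparse-sparse matmul
        Z.data += bias                     # bias on stored entries only
        Z.data = np.maximum(Z.data, 0.0)   # ReLU
        Z.data = np.minimum(Z.data, ymax)  # clamp
        Z.eliminate_zeros()                # keep the representation sparse
        Y = Z
    return Y
```

A CuPy version is structurally identical, with `cupyx.scipy.sparse` matrices replacing the SciPy ones so the multiply runs on the GPU.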

Research Organization:
Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
Resource Relation:
Conference: IEEE High Performance Extreme Computing Conference (HPEC 2022), September 19-23, 2022, Waltham, MA
Country of Publication:
United States