OSTI.GOV
U.S. Department of Energy
Office of Scientific and Technical Information

Title: Sparse Matrix-Matrix Multiplication on Multilevel Memory Architectures: Algorithms and Experiments

Abstract

Architectures with multiple classes of memory media are becoming a common part of mainstream supercomputer deployments. So-called multi-level memories offer differing characteristics for each memory component, including variation in bandwidth, latency, and capacity. This paper investigates the performance of sparse matrix-matrix multiplication kernels on two leading high-performance computing architectures: Intel's Knights Landing processor and NVIDIA's Pascal GPU. We describe a data placement method and a chunking-based algorithm for our kernels that exploit the existence of multiple memory spaces in each hardware platform. We evaluate the performance of these methods with respect to standard algorithms that use the auto-caching mechanisms. Our results show that standard algorithms that exploit cache reuse performed as well as multi-memory-aware algorithms on architectures such as KNLs, where the memory subsystems have similar latencies. However, on architectures such as GPUs, where the memory subsystems differ significantly in both bandwidth and latency, multi-memory-aware methods are crucial for good performance. In addition, our new approaches permit the user to run problems that require larger capacities than the fastest memory of each compute node, without depending on software-managed cache mechanisms.
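To make the chunking idea concrete, below is a minimal sketch (not the report's code) of a chunked Gustavson SpGEMM in plain C++14. Everything in it is an assumption for illustration: the CSR layout, the stage_rows/spgemm_chunk helpers, and the one-row chunk size. The staging copy stands in for an explicit placement into the fast memory space, e.g., MCDRAM on Knights Landing or device memory on a Pascal GPU; ordinary heap allocations keep the sketch portable.

// Hypothetical illustration of a chunking-based SpGEMM, C = A * B, that
// processes A in row blocks sized to a fast-memory budget. Not the report's
// implementation. Requires C++14.
#include <algorithm>
#include <cstdio>
#include <vector>

// Compressed Sparse Row storage.
struct CSR {
    int rows = 0, cols = 0;
    std::vector<int>    rowptr;  // size rows + 1
    std::vector<int>    colidx;  // size nnz
    std::vector<double> val;     // size nnz
};

// Copy rows [r0, r1) of A into a fresh buffer. On multilevel-memory hardware
// this allocation would target the fast memory space.
CSR stage_rows(const CSR& A, int r0, int r1) {
    CSR S;
    S.rows = r1 - r0;
    S.cols = A.cols;
    S.rowptr.push_back(0);
    for (int i = r0; i < r1; ++i) {
        for (int k = A.rowptr[i]; k < A.rowptr[i + 1]; ++k) {
            S.colidx.push_back(A.colidx[k]);
            S.val.push_back(A.val[k]);
        }
        S.rowptr.push_back((int)S.colidx.size());
    }
    return S;
}

// Gustavson's row-by-row SpGEMM for one staged chunk, appending rows to C.
// 'acc' is a dense accumulator; 'marker' tracks which columns are live.
void spgemm_chunk(const CSR& S, const CSR& B, CSR& C,
                  std::vector<double>& acc, std::vector<int>& marker) {
    std::fill(marker.begin(), marker.end(), -1);
    for (int i = 0; i < S.rows; ++i) {
        std::vector<int> pattern;  // columns of C touched by this row
        for (int k = S.rowptr[i]; k < S.rowptr[i + 1]; ++k) {
            int    a_col = S.colidx[k];
            double a_val = S.val[k];
            for (int j = B.rowptr[a_col]; j < B.rowptr[a_col + 1]; ++j) {
                int c = B.colidx[j];
                if (marker[c] != i) {  // first touch of column c in this row
                    marker[c] = i;
                    acc[c] = 0.0;
                    pattern.push_back(c);
                }
                acc[c] += a_val * B.val[j];
            }
        }
        for (int c : pattern) {
            C.colidx.push_back(c);
            C.val.push_back(acc[c]);
        }
        C.rowptr.push_back((int)C.colidx.size());
    }
}

int main() {
    // 2x3 matrix A = [[1 0 2], [0 3 0]] times 3x2 B = [[1 0], [0 1], [4 0]].
    CSR A{2, 3, {0, 2, 3}, {0, 2, 1}, {1.0, 2.0, 3.0}};
    CSR B{3, 2, {0, 1, 2, 3}, {0, 1, 0}, {1.0, 1.0, 4.0}};

    // Illustrative chunk size; a real version would derive it from the
    // fast-memory capacity, accounting for accumulator space as well.
    const int chunk_rows = 1;

    CSR C;
    C.rows = A.rows; C.cols = B.cols;
    C.rowptr.push_back(0);
    std::vector<double> acc(B.cols);
    std::vector<int>    marker(B.cols);
    for (int r0 = 0; r0 < A.rows; r0 += chunk_rows) {
        int r1 = std::min(r0 + chunk_rows, A.rows);
        CSR S = stage_rows(A, r0, r1);       // "move" the chunk to fast memory
        spgemm_chunk(S, B, C, acc, marker);  // compute out of the staged chunk
    }
    std::printf("C has %d stored entries\n", (int)C.colidx.size());
    // Expected: row 0 -> (0, 1*1 + 2*4 = 9), row 1 -> (1, 3): 2 entries.
    return 0;
}

The dense accumulator and marker arrays model the per-row working set that a data placement scheme would keep in fast memory; sizing chunk_rows to the fast-memory capacity is what lets problems larger than that capacity run without relying on software-managed caching.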

Authors:
Deveci, Mehmet [1]; Hammond, Simon David [1]; Wolf, Michael M. [1]; Rajamanickam, Sivasankaran [1]
  1. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
Publication Date:
April 2, 2018
Research Org.:
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
Sponsoring Org.:
USDOE National Nuclear Security Administration (NNSA); USDOE Laboratory Directed Research and Development (LDRD) Program
OSTI Identifier:
1435688
Report Number(s):
SAND2018-3428R
662552
DOE Contract Number:  
AC04-94AL85000; NA-0003525
Resource Type:
Technical Report
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING

Citation Formats

Deveci, Mehmet, Hammond, Simon David, Wolf, Michael M., and Rajamanickam, Sivasankaran. Sparse Matrix-Matrix Multiplication on Multilevel Memory Architectures: Algorithms and Experiments. United States: N. p., 2018. Web. doi:10.2172/1435688.
Deveci, Mehmet, Hammond, Simon David, Wolf, Michael M., & Rajamanickam, Sivasankaran. Sparse Matrix-Matrix Multiplication on Multilevel Memory Architectures: Algorithms and Experiments. United States. doi:10.2172/1435688.
Deveci, Mehmet, Hammond, Simon David, Wolf, Michael M., and Rajamanickam, Sivasankaran. 2018. "Sparse Matrix-Matrix Multiplication on Multilevel Memory Architectures: Algorithms and Experiments". United States. doi:10.2172/1435688. https://www.osti.gov/servlets/purl/1435688.
@techreport{osti_1435688,
  title = {Sparse Matrix-Matrix Multiplication on Multilevel Memory Architectures: Algorithms and Experiments},
  author = {Deveci, Mehmet and Hammond, Simon David and Wolf, Michael M. and Rajamanickam, Sivasankaran},
  abstractNote = {Architectures with multiple classes of memory media are becoming a common part of mainstream supercomputer deployments. So-called multi-level memories offer differing characteristics for each memory component, including variation in bandwidth, latency, and capacity. This paper investigates the performance of sparse matrix-matrix multiplication kernels on two leading high-performance computing architectures: Intel's Knights Landing processor and NVIDIA's Pascal GPU. We describe a data placement method and a chunking-based algorithm for our kernels that exploit the existence of multiple memory spaces in each hardware platform. We evaluate the performance of these methods with respect to standard algorithms that use the auto-caching mechanisms. Our results show that standard algorithms that exploit cache reuse performed as well as multi-memory-aware algorithms on architectures such as KNLs, where the memory subsystems have similar latencies. However, on architectures such as GPUs, where the memory subsystems differ significantly in both bandwidth and latency, multi-memory-aware methods are crucial for good performance. In addition, our new approaches permit the user to run problems that require larger capacities than the fastest memory of each compute node, without depending on software-managed cache mechanisms.},
  doi = {10.2172/1435688},
  number = {SAND2018-3428R},
  institution = {Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)},
  place = {United States},
  year = {2018},
  month = {apr}
}

Technical Report:
https://www.osti.gov/servlets/purl/1435688