OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Compiler-based code generation and autotuning for geometric multigrid on GPU-accelerated supercomputers

Abstract

GPUs, with their high bandwidths and computational capabilities, are an increasingly popular target for scientific computing. Unfortunately, to date, harnessing the power of the GPU has required use of a GPU-specific programming model like CUDA, OpenCL, or OpenACC. Thus, in order to deliver portability across CPU-based and GPU-accelerated supercomputers, programmers are forced to write and maintain two versions of their applications or frameworks. In this paper, we explore the use of a compiler-based autotuning framework based on CUDA-CHiLL to deliver not only portability, but also performance portability across CPU- and GPU-accelerated platforms for the geometric multigrid linear solvers found in many scientific applications. We also show that with autotuning we can attain near-Roofline (a performance bound for a computation and target architecture) performance across the key operations in the miniGMG benchmark for both CPU- and GPU-based architectures, as well as for multiple stencil discretizations and smoothers. We show that our technology is readily interoperable with MPI, resulting in performance at scale equal to that obtained via a hand-optimized MPI+CUDA implementation.
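The abstract's Roofline bound can be stated compactly: attainable throughput is the minimum of peak compute and the product of a kernel's arithmetic intensity with peak memory bandwidth. Below is a minimal sketch of that formula; the peak numbers and the stencil intensity are illustrative assumptions, not measurements from the paper.

```python
# Roofline model sketch: attainable GFLOP/s is bounded by
#   min(peak compute, arithmetic intensity * peak DRAM bandwidth).
# All machine numbers here are hypothetical, for illustration only.

def roofline_gflops(arith_intensity, peak_gflops, peak_gbs):
    """Attainable GFLOP/s for a kernel with the given arithmetic
    intensity (FLOPs per byte of DRAM traffic)."""
    return min(peak_gflops, arith_intensity * peak_gbs)

# A low-intensity stencil smoother (e.g. ~0.25 FLOPs/byte) is
# bandwidth-bound on a machine with 1000 GFLOP/s peak and 200 GB/s:
print(roofline_gflops(0.25, 1000.0, 200.0))  # -> 50.0 (bandwidth-bound)

# A high-intensity kernel instead hits the compute ceiling:
print(roofline_gflops(8.0, 1000.0, 200.0))   # -> 1000.0 (compute-bound)
```

Comparing a kernel's measured GFLOP/s against this bound is how the paper's "near Roofline" claim is typically evaluated: performance close to the minimum of the two ceilings indicates the generated code is limited by the hardware, not by missed optimizations.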

Authors:
 Basu, Protonu [1]; Williams, Samuel [1]; Van Straalen, Brian [1]; Oliker, Leonid [1]; Colella, Phillip [1]; Hall, Mary [2]
  1. Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
  2. Univ. of Utah, Salt Lake City, UT (United States). School of Computing
Publication Date:
2017-04-05
Research Org.:
Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
OSTI Identifier:
1379823
Alternate Identifier(s):
OSTI ID: 1397648
Grant/Contract Number:  
AC02-05CH11231; AC05-00OR22725
Resource Type:
Journal Article: Accepted Manuscript
Journal Name:
Parallel Computing
Additional Journal Information:
Journal Volume: 64; Journal Issue: C; Journal ID: ISSN 0167-8191
Publisher:
Elsevier
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; GPU; Compiler; Autotuning; Multigrid

Citation Formats

Basu, Protonu, Williams, Samuel, Van Straalen, Brian, Oliker, Leonid, Colella, Phillip, and Hall, Mary. Compiler-based code generation and autotuning for geometric multigrid on GPU-accelerated supercomputers. United States: N. p., 2017. Web. doi:10.1016/j.parco.2017.04.002.
Basu, Protonu, Williams, Samuel, Van Straalen, Brian, Oliker, Leonid, Colella, Phillip, & Hall, Mary. Compiler-based code generation and autotuning for geometric multigrid on GPU-accelerated supercomputers. United States. doi:10.1016/j.parco.2017.04.002.
Basu, Protonu, Williams, Samuel, Van Straalen, Brian, Oliker, Leonid, Colella, Phillip, and Hall, Mary. 2017. "Compiler-based code generation and autotuning for geometric multigrid on GPU-accelerated supercomputers". United States. doi:10.1016/j.parco.2017.04.002. https://www.osti.gov/servlets/purl/1379823.
@article{osti_1379823,
title = {Compiler-based code generation and autotuning for geometric multigrid on GPU-accelerated supercomputers},
author = {Basu, Protonu and Williams, Samuel and Van Straalen, Brian and Oliker, Leonid and Colella, Phillip and Hall, Mary},
abstractNote = {GPUs, with their high bandwidths and computational capabilities, are an increasingly popular target for scientific computing. Unfortunately, to date, harnessing the power of the GPU has required use of a GPU-specific programming model like CUDA, OpenCL, or OpenACC. Thus, in order to deliver portability across CPU-based and GPU-accelerated supercomputers, programmers are forced to write and maintain two versions of their applications or frameworks. In this paper, we explore the use of a compiler-based autotuning framework based on CUDA-CHiLL to deliver not only portability, but also performance portability across CPU- and GPU-accelerated platforms for the geometric multigrid linear solvers found in many scientific applications. We also show that with autotuning we can attain near-Roofline (a performance bound for a computation and target architecture) performance across the key operations in the miniGMG benchmark for both CPU- and GPU-based architectures, as well as for multiple stencil discretizations and smoothers. We show that our technology is readily interoperable with MPI, resulting in performance at scale equal to that obtained via a hand-optimized MPI+CUDA implementation.},
doi = {10.1016/j.parco.2017.04.002},
journal = {Parallel Computing},
number = {C},
volume = {64},
place = {United States},
year = {2017},
month = {apr}
}

