DOE PAGES
U.S. Department of Energy
Office of Scientific and Technical Information

Title: Hierarchical Roofline analysis for GPUs: Accelerating performance optimization for the NERSC‐9 Perlmutter system

Abstract

The Roofline performance model provides an intuitive and insightful approach to identifying performance bottlenecks and guiding performance optimization. In preparation for the next‐generation supercomputer Perlmutter at NERSC, this paper presents a methodology to construct a hierarchical Roofline on NVIDIA GPUs and extends it to support reduced precision and Tensor Cores. The hierarchical Roofline incorporates L1, L2, device memory, and system memory bandwidths into one single figure, and it offers more profound insights into performance analysis than the traditional DRAM‐only Roofline. We use our Roofline methodology to analyze three proxy applications: GPP from BerkeleyGW, HPGMG from AMReX, and conv2d from TensorFlow. In doing so, we demonstrate the ability of our methodology to readily understand various aspects of performance and performance bottlenecks on NVIDIA GPUs and motivate code optimizations.
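The hierarchical Roofline bound the abstract describes can be sketched in a few lines: at each memory level, attainable performance is the minimum of the compute ceiling and that level's bandwidth times the kernel's arithmetic intensity (FLOPs per byte moved through that level). The peak and bandwidth figures below are rough V100-class placeholders, not measured values from the paper, and the kernel numbers are hypothetical.

```python
# Minimal sketch of the hierarchical Roofline bound (not the paper's tooling).
# Attainable GFLOP/s at a memory level = min(peak, AI * bandwidth), where
# AI (arithmetic intensity) = FLOPs / bytes moved through that level.
# Peak and bandwidths are illustrative V100-class placeholders.

PEAK_GFLOPS = 7000.0  # assumed double-precision compute ceiling, GFLOP/s

BANDWIDTHS_GBS = {    # assumed per-level bandwidths, GB/s
    "L1": 14000.0,
    "L2": 2500.0,
    "HBM": 900.0,
}

def attainable_gflops(ai, bandwidth_gbs, peak=PEAK_GFLOPS):
    """Classic Roofline bound: min(compute ceiling, AI * bandwidth)."""
    return min(peak, ai * bandwidth_gbs)

def hierarchical_roofline(flops, bytes_per_level):
    """Return (AI, bound) for one kernel at each memory level."""
    return {
        level: (flops / nbytes,
                attainable_gflops(flops / nbytes, BANDWIDTHS_GBS[level]))
        for level, nbytes in bytes_per_level.items()
    }

# Hypothetical kernel: 1e12 FLOPs, with traffic shrinking down the hierarchy
# (L1 sees the most bytes, HBM the fewest). The tightest bound here is HBM,
# i.e., the kernel is device-memory bound in this sketch.
bounds = hierarchical_roofline(1e12, {"L1": 1e12, "L2": 5e11, "HBM": 2e11})
for level, (ai, gflops) in bounds.items():
    print(f"{level}: AI = {ai:.1f} FLOP/byte, bound = {gflops:.0f} GFLOP/s")
```

Plotting all levels' bounds on one log-log figure, as the paper does, makes it immediately visible which level of the memory hierarchy limits a kernel.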

Authors:
Yang, Charlene [1]; Kurth, Thorsten [1]; Williams, Samuel [2]
  1. National Energy Research Scientific Computing Center (NERSC) Lawrence Berkeley National Laboratory Berkeley California
  2. Computational Research Division (CRD) Lawrence Berkeley National Laboratory Berkeley California
Publication Date:
November 12, 2019
Sponsoring Org.:
USDOE
OSTI Identifier:
1574050
Grant/Contract Number:  
AC02-05CH11231
Resource Type:
Publisher's Accepted Manuscript
Journal Name:
Concurrency and Computation: Practice and Experience
Additional Journal Information:
Journal Volume: 32; Journal Issue: 20; Journal ID: ISSN 1532-0626
Publisher:
Wiley Blackwell (John Wiley & Sons)
Country of Publication:
United Kingdom
Language:
English

Citation Formats

Yang, Charlene, Kurth, Thorsten, and Williams, Samuel. Hierarchical Roofline analysis for GPUs: Accelerating performance optimization for the NERSC‐9 Perlmutter system. United Kingdom: N. p., 2019. Web. doi:10.1002/cpe.5547.
Yang, Charlene, Kurth, Thorsten, & Williams, Samuel. Hierarchical Roofline analysis for GPUs: Accelerating performance optimization for the NERSC‐9 Perlmutter system. United Kingdom. https://doi.org/10.1002/cpe.5547
Yang, Charlene, Kurth, Thorsten, and Williams, Samuel. 2019. "Hierarchical Roofline analysis for GPUs: Accelerating performance optimization for the NERSC‐9 Perlmutter system". United Kingdom. https://doi.org/10.1002/cpe.5547.
@article{osti_1574050,
title = {Hierarchical Roofline analysis for GPUs: Accelerating performance optimization for the NERSC‐9 Perlmutter system},
author = {Yang, Charlene and Kurth, Thorsten and Williams, Samuel},
abstractNote = {The Roofline performance model provides an intuitive and insightful approach to identifying performance bottlenecks and guiding performance optimization. In preparation for the next‐generation supercomputer Perlmutter at NERSC, this paper presents a methodology to construct a hierarchical Roofline on NVIDIA GPUs and extends it to support reduced precision and Tensor Cores. The hierarchical Roofline incorporates L1, L2, device memory, and system memory bandwidths into one single figure, and it offers more profound insights into performance analysis than the traditional DRAM‐only Roofline. We use our Roofline methodology to analyze three proxy applications: GPP from BerkeleyGW, HPGMG from AMReX, and conv2d from TensorFlow. In doing so, we demonstrate the ability of our methodology to readily understand various aspects of performance and performance bottlenecks on NVIDIA GPUs and motivate code optimizations.},
doi = {10.1002/cpe.5547},
journal = {Concurrency and Computation: Practice and Experience},
number = 20,
volume = 32,
place = {United Kingdom},
year = {2019},
month = {nov}
}

Journal Article:
Free Publicly Available Full Text
Publisher's Version of Record
https://doi.org/10.1002/cpe.5547

Citation Metrics:
Cited by: 29 works
Citation information provided by
Web of Science

Works referenced in this record:

An Empirical Roofline Methodology for Quantitatively Assessing Performance Portability
conference, November 2018

  • Yang, Charlene; Gayatri, Rahulkumar; Kurth, Thorsten
  • 2018 IEEE/ACM International Workshop on Performance, Portability and Productivity in HPC (P3HPC)
  • DOI: 10.1109/P3HPC.2018.00005

Deep Residual Learning for Image Recognition
conference, June 2016

  • He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing
  • 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • DOI: 10.1109/CVPR.2016.90

Evaluating and Optimizing the NERSC Workload on Knights Landing
conference, November 2016

  • Barnes, Taylor; Cook, Brandon; Deslippe, Jack
  • 2016 7th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS)
  • DOI: 10.1109/PMBS.2016.010

Roofline: an insightful visual performance model for multicore architectures
journal, April 2009

  • Williams, Samuel; Waterman, Andrew; Patterson, David
  • Communications of the ACM, Vol. 52, Issue 4
  • DOI: 10.1145/1498765.1498785

Electron self-energy calculation using a general multi-pole approximation
journal, April 2003

  • Soininen, J. A.; Rehr, J. J.; Shirley, Eric L.
  • Journal of Physics: Condensed Matter, Vol. 15, Issue 17
  • DOI: 10.1088/0953-8984/15/17/312

Demystifying Parallel and Distributed Deep Learning: An In-depth Concurrency Analysis
journal, August 2019

  • Ben-Nun, Tal; Hoefler, Torsten
  • ACM Computing Surveys, Vol. 52, Issue 4
  • DOI: 10.1145/3320060