DOE PAGES: U.S. Department of Energy
Office of Scientific and Technical Information

Title: Targeting GPUs with OpenMP directives on Summit: A simple and effective Fortran experience

Abstract

We use OpenMP to target hardware accelerators (GPUs) on Summit, a newly deployed supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), demonstrating simplified access to GPU devices for users of our astrophysics code GenASiS and useful speedup on a sample fluid dynamics problem. We modify our workhorse class for data storage to include members and methods that significantly streamline the persistent allocation of and association to GPU memory. Users offload computational kernels with OpenMP target directives that are rather similar to constructs already familiar from multi-core parallelization. In this initial example we ask, “With a given number of Summit nodes, how fast can we compute with and without GPUs?”, and find total wall time speedups of ~ 12X. We also find reasonable weak scaling up to 8000 GPUs (1334 Summit nodes). We make available the source code from this work at https://github.com/GenASiS/GenASiS_Basics.
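To illustrate the approach the abstract describes, here is a minimal Fortran sketch of an OpenMP target offload: a persistent device allocation with "target enter data", followed by a kernel offloaded with "target teams distribute parallel do", structurally similar to the multi-core directives the abstract says users already know. The subroutine and variable names (AddUpdate, U, dU, dt) are illustrative assumptions, not code from the paper; the actual GenASiS implementation is in the repository linked above.

  ! Illustrative sketch only; not the paper's actual code.
  subroutine AddUpdate ( U, dU, dt )

    integer, parameter :: KDR = kind ( 1.0d0 )  ! double-precision real kind
    real ( KDR ), dimension ( : ), intent ( inout ) :: U   ! solution values
    real ( KDR ), dimension ( : ), intent ( in )    :: dU  ! computed update
    real ( KDR ), intent ( in ) :: dt                      ! time step

    integer :: iV, nV

    nV = size ( U )

    ! Device allocation and host-to-device copy that outlives any single
    ! target construct; the abstract describes wrapping this bookkeeping
    ! in a data-storage class.
    !$OMP target enter data map ( to: U, dU )

    ! Offloaded kernel; compare the familiar multi-core construct
    ! "!$OMP parallel do".
    !$OMP target teams distribute parallel do
    do iV = 1, nV
      U ( iV ) = U ( iV ) + dt * dU ( iV )
    end do
    !$OMP end target teams distribute parallel do

    ! Copy the result back to the host and release device memory.
    !$OMP target exit data map ( from: U ) map ( delete: dU )

  end subroutine AddUpdate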

Authors:
 Budiardja, Reuben D. [1]; Cardall, Christian Y. [2]
  1. Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). National Center for Computational Sciences; Univ. of Tennessee, Knoxville, TN (United States). Dept. of Physics and Astronomy
  2. Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Physics Division; Univ. of Tennessee, Knoxville, TN (United States). Dept. of Physics and Astronomy
Publication Date:
August 20, 2019
Research Org.:
Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
OSTI Identifier:
1569391
Grant/Contract Number:  
AC05-00OR22725
Resource Type:
Accepted Manuscript
Journal Name:
Parallel Computing
Additional Journal Information:
Journal Volume: 88; Journal Issue: C; Journal ID: ISSN 0167-8191
Publisher:
Elsevier
Country of Publication:
United States
Language:
English

Citation Formats

Budiardja, Reuben D., and Cardall, Christian Y. Targeting GPUs with OpenMP directives on Summit: A simple and effective Fortran experience. United States: N. p., 2019. Web. doi:10.1016/j.parco.2019.102544.
Budiardja, Reuben D., & Cardall, Christian Y. Targeting GPUs with OpenMP directives on Summit: A simple and effective Fortran experience. United States. doi:10.1016/j.parco.2019.102544.
Budiardja, Reuben D., and Cardall, Christian Y. 2019. "Targeting GPUs with OpenMP directives on Summit: A simple and effective Fortran experience". United States. doi:10.1016/j.parco.2019.102544.
@article{osti_1569391,
title = {Targeting GPUs with OpenMP directives on Summit: A simple and effective Fortran experience},
author = {Budiardja, Reuben D. and Cardall, Christian Y.},
abstractNote = {We use OpenMP to target hardware accelerators (GPUs) on Summit, a newly deployed supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), demonstrating simplified access to GPU devices for users of our astrophysics code GenASiS and useful speedup on a sample fluid dynamics problem. We modify our workhorse class for data storage to include members and methods that significantly streamline the persistent allocation of and association to GPU memory. Users offload computational kernels with OpenMP target directives that are rather similar to constructs already familiar from multi-core parallelization. In this initial example we ask, “With a given number of Summit nodes, how fast can we compute with and without GPUs?”, and find total wall time speedups of ~ 12X. We also find reasonable weak scaling up to 8000 GPUs (1334 Summit nodes). We make available the source code from this work at https://github.com/GenASiS/GenASiS_Basics.},
doi = {10.1016/j.parco.2019.102544},
journal = {Parallel Computing},
number = {C},
volume = {88},
place = {United States},
year = {2019},
month = {8}
}

Journal Article:
Free Publicly Available Full Text
This content will become publicly available on August 20, 2020
Publisher's Version of Record
