
Application Experiences on a GPU-Accelerated Arm-based HPC Testbed

Conference paper
Author affiliations:
  1. ORNL
  2. Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Germany
  3. Helmholtz-Institut für Strahlen- und Kernphysik, Universität Bonn, Germany
  4. Georgia Tech
  5. Georgia Institute of Technology, Atlanta
  6. University of Delaware
  7. Universität Basel
  8. Universität Basel, Switzerland
  9. NVIDIA, Santa Clara, CA
  10. University of Illinois at Urbana-Champaign
  11. Swiss National Supercomputer Centre (CSCS)
  12. Sandia National Laboratories (SNL)

This paper assesses and reports the experience of ten teams working to port, validate, and benchmark several High Performance Computing applications on a novel GPU-accelerated Arm testbed system. The testbed consists of eight NVIDIA Arm HPC Developer Kit systems, each equipped with a server-class Arm CPU from Ampere Computing and two data center GPUs from NVIDIA Corp., connected by an InfiniBand interconnect. The selected applications and mini-apps are written in several programming languages and use multiple accelerator programming models for GPUs, such as CUDA, OpenACC, and OpenMP offloading. Porting these applications requires a robust and easy-to-access programming environment, including a variety of compilers and optimized scientific libraries. The goal of this work is to evaluate platform readiness and assess the effort required from developers to deploy well-established scientific workloads on current and future generations of Arm-based, GPU-accelerated HPC systems. The reported case studies demonstrate that the current level of maturity and diversity of software and tools is already adequate for large-scale production deployments.
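
As a concrete illustration of one of the programming models named above, the sketch below shows a vector-addition loop offloaded to a GPU with OpenMP target directives. It is a minimal, illustrative example and is not taken from the paper; the file name, kernel, and compiler invocation are assumptions added here for clarity.

/* vadd.c -- minimal OpenMP offloading sketch (illustrative, not from the paper).
 * One plausible build on the NVIDIA Arm HPC Developer Kit with the NVIDIA
 * HPC SDK: nvc -mp=gpu vadd.c */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int n = 1 << 20;
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    double *c = malloc(n * sizeof *c);
    for (int i = 0; i < n; ++i) { a[i] = 1.0; b[i] = 2.0; }

    /* Offload the loop to the device; the map() clauses move data between
     * the Arm host and the GPU. */
    #pragma omp target teams distribute parallel for \
            map(to: a[0:n], b[0:n]) map(from: c[0:n])
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];

    printf("c[0] = %f\n", c[0]); /* expect 3.000000 */
    free(a); free(b); free(c);
    return 0;
}

An analogous OpenACC version would replace the directive with "#pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])"; keeping the same source buildable under either model is representative of the portability the testbed exercises.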

Research Organization:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Organization:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
DOE Contract Number:
AC05-00OR22725
OSTI ID:
1960691
Resource Relation:
Conference: HPC ASIA 2023: International Conference on High Performance Computing in Asia-Pacific Region Workshops, Singapore, 27 February–2 March 2023
Country of Publication:
United States
Language:
English
