DOE PAGES
U.S. Department of Energy, Office of Scientific and Technical Information

Title: Towards performance portability in the Spark astrophysical magnetohydrodynamics solver in the Flash-X simulation framework

Journal Article · Parallel Computing

Simulations of core-collapse supernovae and other astrophysical phenomena are quintessential extreme-scale computing challenges. For core-collapse supernova simulations to be carried out by the ExaStar project under the Exascale Computing Project umbrella, a robust, efficient, and state-of-the-art magnetohydrodynamics solver is a critical requirement. In Flash-X, the primary software instrument for ExaStar, a new magnetohydrodynamics solver has been designed and implemented from the ground up to achieve accuracy and efficiency for simulations of complex astrophysical flows. This new solver, dubbed Spark, uses high-order spatial reconstruction, Runge-Kutta time integration, and an efficient cell-centered approach to satisfying the divergence-free condition for the magnetic fields. Spark was written to be optimized for data locality in the cache hierarchy of CPUs. Since data locality optimizations for cache hierarchies are not directly compatible with those of accelerators, we have taken the approach of using program synthesis to avoid the massive code replication that would be necessary if we were to maintain two different versions of the solver. Our program synthesis relies on a simple key-dictionary approach, implemented in Python, that enables us to assemble the version of the solver suitable for the target hardware from code fragments identified by specific keys. In this work, we describe the data locality optimizations of the solver for CPUs and accelerators and the program synthesis tools that enable this portability. We also detail the parallel performance of Spark for both CPUs and accelerators.
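The key-dictionary idea can be illustrated with a minimal sketch. All names below (the fragment keys, recipe structure, and placeholder syntax) are hypothetical illustrations of the general technique, not the actual Flash-X/Spark tooling: code fragments live in a dictionary under string keys, a per-target recipe selects which fragment fills each slot, and a template with placeholders is expanded into the target-specific source.

```python
# Illustrative sketch of key-dictionary program synthesis (hypothetical
# names; the real Flash-X tooling differs in detail). Fragments are stored
# under string keys; a recipe maps template slots to fragment keys for a
# given hardware target.

FRAGMENTS = {
    "loop_header_cpu": "! blocked tile loop tuned for the CPU cache hierarchy",
    "loop_header_gpu": "!$omp target teams distribute parallel do collapse(3)",
}

RECIPES = {
    "cpu": {"LOOP_HEADER": "loop_header_cpu"},
    "gpu": {"LOOP_HEADER": "loop_header_gpu"},
}

TEMPLATE = """\
@LOOP_HEADER@
! ... per-cell reconstruction and Riemann solve ...
"""


def synthesize(template, target):
    """Assemble target-specific source text by substituting each @SLOT@
    placeholder with the fragment selected by the target's recipe."""
    source = template
    for slot, key in RECIPES[target].items():
        source = source.replace(f"@{slot}@", FRAGMENTS[key])
    return source


cpu_source = synthesize(TEMPLATE, "cpu")
gpu_source = synthesize(TEMPLATE, "gpu")
```

A single maintained template thus yields both variants; only the small fragment dictionary carries hardware-specific code.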

Research Organization:
Argonne National Laboratory (ANL), Argonne, IL (United States)
Sponsoring Organization:
USDOE Office of Science (SC); USDOE Exascale Computing Project (ECP); USDOE National Nuclear Security Administration (NNSA)
Grant/Contract Number:
AC02-06CH11357; SC0017955
OSTI ID:
1962609
Journal Information:
Parallel Computing, Vol. 108; ISSN 0167-8191
Publisher:
Elsevier
Country of Publication:
United States
Language:
English
