DOE PAGES — U.S. Department of Energy, Office of Scientific and Technical Information

Title: Harnessing billions of tasks for a scalable portable hydrodynamic simulation of the merger of two stars

Abstract

We propose a highly scalable demonstration of a portable asynchronous many-task programming model and runtime system applied to a grid-based adaptive mesh refinement hydrodynamic simulation of a double white dwarf merger with 14 levels of refinement that spans 17 orders of magnitude in astrophysical densities. The code uses the portable C++ parallel programming model that is embodied in the HPX library and being incorporated into the ISO C++ standard. The model reflects a significant shift from existing bulk synchronous parallel programming models under consideration for exascale systems. Through the use of the Futurization technique, seemingly sequential code is transformed into wait-free asynchronous tasks. We demonstrate the potential of our model by showing results from strong scaling runs on National Energy Research Scientific Computing Center's Cori system (658,784 Intel Knights Landing cores) that achieve a parallel efficiency of 96.8% using billions of asynchronous tasks.

Authors:
 Heller, Thomas [1]; Lelbach, Bryce Adelstein [2]; Huck, Kevin A. [3]; Biddiscombe, John [4]; Grubel, Patricia [5]; Koniges, Alice E. [6]; Kretz, Matthias [7]; Marcello, Dominic [8]; Pfander, David [9]; Serio, Adrian [8]; Frank, Juhan [8]; Clayton, Geoffrey C. [8]; Pflüger, Dirk [9]; Eder, David [10]; Kaiser, Hartmut [8]
  1. Univ. of Erlangen-Nuremberg (FAU), Bavaria (Germany)
  2. NVIDIA, Santa Clara, CA (United States)
  3. Univ. of Oregon, Eugene, OR (United States)
  4. National Supercomputing Centre, Lugano (Switzerland)
  5. Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
  6. Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
  7. GSI-Helmholtzzentrum für Schwerionenforschung, Darmstadt (Germany)
  8. Louisiana State Univ., Baton Rouge, LA (United States)
  9. Univ. of Stuttgart, Baden-Württemberg (Germany)
  10. Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Publication Date:
February 2019
Research Org.:
Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
Contributing Org.:
The STE||AR Group
OSTI Identifier:
1524389
Report Number(s):
LA-UR-17-31311
Journal ID: ISSN 1094-3420
Grant/Contract Number:  
89233218CNA000001; AC02-05CH11231; SC0008638; SC0008714; AC52-07NA27344
Resource Type:
Accepted Manuscript
Journal Name:
International Journal of High Performance Computing Applications
Additional Journal Information:
Journal Volume: 33; Journal Issue: 4; Journal ID: ISSN 1094-3420
Publisher:
SAGE
Country of Publication:
United States
Language:
English
Subject:
79 ASTRONOMY AND ASTROPHYSICS; 97 MATHEMATICS AND COMPUTING; parallel runtime; binary star merger; asynchronous tasks; HPX; C++

Citation Formats

Heller, Thomas, Lelbach, Bryce Adelstein, Huck, Kevin A., Biddiscombe, John, Grubel, Patricia, Koniges, Alice E., Kretz, Matthias, Marcello, Dominic, Pfander, David, Serio, Adrian, Frank, Juhan, Clayton, Geoffrey C., Pflüger, Dirk, Eder, David, and Kaiser, Hartmut. Harnessing billions of tasks for a scalable portable hydrodynamic simulation of the merger of two stars. United States: N. p., 2019. Web. doi:10.1177/1094342018819744.
Heller, Thomas, Lelbach, Bryce Adelstein, Huck, Kevin A., Biddiscombe, John, Grubel, Patricia, Koniges, Alice E., Kretz, Matthias, Marcello, Dominic, Pfander, David, Serio, Adrian, Frank, Juhan, Clayton, Geoffrey C., Pflüger, Dirk, Eder, David, & Kaiser, Hartmut. Harnessing billions of tasks for a scalable portable hydrodynamic simulation of the merger of two stars. United States. doi:10.1177/1094342018819744.
Heller, Thomas, Lelbach, Bryce Adelstein, Huck, Kevin A., Biddiscombe, John, Grubel, Patricia, Koniges, Alice E., Kretz, Matthias, Marcello, Dominic, Pfander, David, Serio, Adrian, Frank, Juhan, Clayton, Geoffrey C., Pflüger, Dirk, Eder, David, and Kaiser, Hartmut. "Harnessing billions of tasks for a scalable portable hydrodynamic simulation of the merger of two stars". United States. doi:10.1177/1094342018819744. https://www.osti.gov/servlets/purl/1524389.
@article{osti_1524389,
title = {Harnessing billions of tasks for a scalable portable hydrodynamic simulation of the merger of two stars},
author = {Heller, Thomas and Lelbach, Bryce Adelstein and Huck, Kevin A. and Biddiscombe, John and Grubel, Patricia and Koniges, Alice E. and Kretz, Matthias and Marcello, Dominic and Pfander, David and Serio, Adrian and Frank, Juhan and Clayton, Geoffrey C. and Pflüger, Dirk and Eder, David and Kaiser, Hartmut},
abstractNote = {We propose a highly scalable demonstration of a portable asynchronous many-task programming model and runtime system applied to a grid-based adaptive mesh refinement hydrodynamic simulation of a double white dwarf merger with 14 levels of refinement that spans 17 orders of magnitude in astrophysical densities. The code uses the portable C++ parallel programming model that is embodied in the HPX library and being incorporated into the ISO C++ standard. The model reflects a significant shift from existing bulk synchronous parallel programming models under consideration for exascale systems. Through the use of the Futurization technique, seemingly sequential code is transformed into wait-free asynchronous tasks. We demonstrate the potential of our model by showing results from strong scaling runs on National Energy Research Scientific Computing Center's Cori system (658,784 Intel Knights Landing cores) that achieve a parallel efficiency of 96.8% using billions of asynchronous tasks.},
doi = {10.1177/1094342018819744},
journal = {International Journal of High Performance Computing Applications},
number = 4,
volume = 33,
place = {United States},
year = {2019},
month = {2}
}

Journal Article:
Free Publicly Available Full Text
Publisher's Version of Record

Works referenced in this record:

Preparing NERSC users for Cori, a Cray XC40 system with Intel many integrated cores
journal, August 2017

  • He, Yun; Cook, Brandon; Deslippe, Jack
  • Concurrency and Computation: Practice and Experience, Vol. 30, Issue 1
  • DOI: 10.1002/cpe.4291

Continuation-passing, closure-passing style
conference, January 1989

  • Appel, A. W.; Jim, T.
  • Proceedings of the 16th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '89
  • DOI: 10.1145/75277.75303

The Tau Parallel Performance System
journal, May 2006

  • Shende, Sameer S.; Malony, Allen D.
  • The International Journal of High Performance Computing Applications, Vol. 20, Issue 2
  • DOI: 10.1177/1094342006064482

X10: an object-oriented approach to non-uniform cluster computing
journal, October 2005

  • Charles, Philippe; Grothoff, Christian; Saraswat, Vijay
  • ACM SIGPLAN Notices, Vol. 40, Issue 10
  • DOI: 10.1145/1103845.1094852

Angular momentum preserving cell-centered Lagrangian and Eulerian schemes on arbitrary grids
journal, June 2015


Vc: A C++ library for explicit vectorization
journal, December 2011

  • Kretz, Matthias; Lindenstruth, Volker
  • Software: Practice and Experience, Vol. 42, Issue 11
  • DOI: 10.1002/spe.1149

Legion: Expressing locality and independence with logical regions
conference, November 2012

  • Bauer, Michael; Treichler, Sean; Slaughter, Elliott
  • 2012 SC - International Conference for High Performance Computing, Networking, Storage and Analysis, 2012 International Conference for High Performance Computing, Networking, Storage and Analysis
  • DOI: 10.1109/SC.2012.71

New High-Resolution Central Schemes for Nonlinear Conservation Laws and Convection–Diffusion Equations
journal, May 2000

  • Kurganov, Alexander; Tadmor, Eitan
  • Journal of Computational Physics, Vol. 160, Issue 1
  • DOI: 10.1006/jcph.2000.6459

Parallel Programmability and the Chapel Language
journal, August 2007

  • Chamberlain, B. L.; Callahan, D.; Zima, H. P.
  • The International Journal of High Performance Computing Applications, Vol. 21, Issue 3
  • DOI: 10.1177/1094342007078442
