OSTI.GOV · U.S. Department of Energy, Office of Scientific and Technical Information

Title: A survey of MPI usage in the US exascale computing project

Journal Article · Concurrency and Computation: Practice and Experience
DOI: https://doi.org/10.1002/cpe.4851 · OSTI ID: 1477440
Bernholdt, David E. [1]; Boehm, Swen [1]; Bosilca, George [2]; Gorentla Venkata, Manjunath [1]; Grant, Ryan E. [3]; Naughton, Thomas [1]; Pritchard, Howard P. [4]; Schulz, Martin [5]; Vallee, Geoffroy R. [1]
  1. Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
  2. Univ. of Tennessee, Knoxville, TN (United States)
  3. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
  4. Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
  5. Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Technical Univ. of Munich, Munich (Germany)

The Exascale Computing Project (ECP) is currently the primary effort in the United States focused on developing "exascale" levels of computing capabilities, including hardware, software, and applications. In order to obtain a more thorough understanding of how the software projects under the ECP are using, and planning to use, the Message Passing Interface (MPI), and to help guide the work of our own project within the ECP, we created a survey. Of the 97 ECP projects active at the time the survey was distributed, we received 77 responses, 56 of which reported that their projects were using MPI. This paper reports the results of that survey for the benefit of the broader community of MPI developers.
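For readers unfamiliar with MPI, the interface the survey covers, the following is a minimal sketch of basic MPI usage in C. It is an illustration added here, not code from the article or from any surveyed project: each rank initializes the runtime, queries its rank and the communicator size, prints a line, and finalizes.

    /* Minimal illustrative MPI program (not part of the surveyed codes). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
        printf("Rank %d of %d\n", rank, size);
        MPI_Finalize();                         /* shut down MPI */
        return 0;
    }

With a typical MPI implementation this would be compiled with its wrapper compiler (e.g., mpicc) and launched across processes with mpirun.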

Research Organization:
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Organization:
USDOE National Nuclear Security Administration (NNSA)
Grant/Contract Number:
AC04-94AL85000; AC05-00OR22725; NA0003525; FC02-06ER25750; AC52-07NA27344
OSTI ID:
1477440
Alternate ID(s):
OSTI ID: 1474751; OSTI ID: 1560504
Report Number(s):
SAND-2018-6513J; 664505
Journal Information:
Concurrency and Computation: Practice and Experience, Vol. 32, Issue 3; ISSN 1532-0626
Publisher:
Wiley
Country of Publication:
United States
Language:
English
Citation Metrics:
Cited by: 36 works (citation information provided by Web of Science)

References (7)

Enabling communication concurrency through flexible MPI endpoints journal September 2014
GPU-Centric Communication on NVIDIA GPU Clusters with InfiniBand: A Case Study with OpenSHMEM conference December 2017
A Survey of MPI Usage in the U.S. Exascale Computing Project report June 2018
Post-failure recovery of MPI communication capability: Design and rationale journal June 2013
Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation book January 2004
MPI Sessions: Leveraging Runtime Infrastructure to Increase Scalability of Applications at Exascale conference January 2016
Why is MPI so slow?: analyzing the fundamental limits in implementing MPI-3.1 conference January 2017
  • Raffenetti, Ken; Blocksome, Michael; Si, Min
  • Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '17), https://doi.org/10.1145/3126908.3126963

Cited By (2)

Application health monitoring for extreme‐scale resiliency using cooperative fault management journal July 2019
Foreword to the Special Issue of the Workshop on Exascale MPI (ExaMPI 2017) journal July 2019
  • Skjellum, Anthony; Bangalore, Purushotham V.; Grant, Ryan E.
  • Concurrency and Computation: Practice and Experience, Vol. 32, Issue 3, https://doi.org/10.1002/cpe.5459

Similar Records

A Survey of MPI Usage in the U.S. Exascale Computing Project
Technical Report · June 2018

Understanding the use of message passing interface in exascale proxy applications
Journal Article · August 2020 · Concurrency and Computation: Practice and Experience

A survey of software implementations used by application codes in the Exascale Computing Project
Journal Article · June 2021 · International Journal of High Performance Computing Applications