U.S. Department of Energy
Office of Scientific and Technical Information

A survey of MPI usage in the US exascale computing project

Journal Article · Concurrency and Computation: Practice and Experience
DOI: https://doi.org/10.1002/cpe.4851 · OSTI ID: 1477440
Author affiliations:
  1. Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
  2. Univ. of Tennessee, Knoxville, TN (United States)
  3. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
  4. Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
  5. Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Technical Univ. of Munich, Munich (Germany)
The Exascale Computing Project (ECP) is currently the primary effort in the United States focused on developing "exascale" levels of computing capability, spanning hardware, software, and applications. To better understand how the software projects under the ECP are using, and planning to use, the Message Passing Interface (MPI), and to help guide the work of our own project within the ECP, we created a survey. Of the 97 ECP projects active at the time the survey was distributed, we received 77 responses, 56 of which reported that their projects use MPI. This paper reports the results of that survey for the benefit of the broader community of MPI developers.
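The abstract concerns survey results about MPI usage across ECP software projects. As an illustration only, and not drawn from the paper or the survey itself, the following minimal C program sketches the kind of basic MPI usage (initialization, point-to-point, and collective communication) that such projects build on; the specific values and messages are hypothetical.

/* Illustrative sketch only, not from the surveyed paper: a minimal MPI
 * program exercising point-to-point and collective communication.
 * Compile with an MPI compiler wrapper, e.g. mpicc, and run with mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Point-to-point: rank 0 sends an integer to rank 1, if it exists. */
    int value = (rank == 0) ? 42 : 0;
    if (size > 1) {
        if (rank == 0) {
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
    }

    /* Collective: sum the rank numbers across all processes. */
    int sum = 0;
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d of %d: value=%d, sum of ranks=%d\n",
           rank, size, value, sum);

    MPI_Finalize();
    return 0;
}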
Research Organization:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States); Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
Sponsoring Organization:
USDOE National Nuclear Security Administration (NNSA)
Grant/Contract Number:
AC04-94AL85000; AC05-00OR22725; AC52-07NA27344; FC02-06ER25750; NA0003525
OSTI ID:
1477440
Alternate ID(s):
OSTI ID: 1560504
OSTI ID: 1474751
Report Number(s):
SAND-2018-6513J; 664505
Journal Information:
Concurrency and Computation: Practice and Experience, Vol. 32, Issue 3; ISSN 1532-0626
Publisher:
Wiley
Country of Publication:
United States
Language:
English

References (8)

Exploring OpenSHMEM Model to Program GPU-based Extreme-Scale Systems book January 2015
Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation book January 2004
GPU-Centric Communication on NVIDIA GPU Clusters with InfiniBand: A Case Study with OpenSHMEM conference December 2017
MPI Sessions: Leveraging Runtime Infrastructure to Increase Scalability of Applications at Exascale conference January 2016
Why is MPI so slow? Analyzing the fundamental limits in implementing MPI-3.1 conference January 2017
  • Raffenetti, Ken; Blocksome, Michael; Si, Min
  • Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '17). https://doi.org/10.1145/3126908.3126963
Post-failure recovery of MPI communication capability: Design and rationale journal June 2013
Enabling communication concurrency through flexible MPI endpoints journal September 2014
A Survey of MPI Usage in the U.S. Exascale Computing Project report June 2018

Cited By (2)

Application health monitoring for extreme‐scale resiliency using cooperative fault management journal July 2019
Foreword to the Special Issue of the Workshop on Exascale MPI (ExaMPI 2017) journal July 2019
  • Skjellum, Anthony; Bangalore, Purushotham V.; Grant, Ryan E.
  • Concurrency and Computation: Practice and Experience, Vol. 32, Issue 3. https://doi.org/10.1002/cpe.5459

Similar Records

A Survey of MPI Usage in the U.S. Exascale Computing Project
Technical Report · June 2018 · OSTI ID: 1462877

Understanding the use of message passing interface in exascale proxy applications
Journal Article · August 2020 · Concurrency and Computation: Practice and Experience · OSTI ID: 1860774