DOE PAGES
U.S. Department of Energy, Office of Scientific and Technical Information

Title: A survey of MPI usage in the US exascale computing project

Abstract

The Exascale Computing Project (ECP) is currently the primary effort in the United States focused on developing “exascale” levels of computing capabilities, including hardware, software, and applications. To obtain a more thorough understanding of how the software projects under the ECP are using, and planning to use, the Message Passing Interface (MPI), and to help guide the work of our own project within the ECP, we created a survey. Of the 97 ECP projects active at the time the survey was distributed, we received 77 responses, 56 of which reported that their projects were using MPI. This paper reports the results of that survey for the benefit of the broader community of MPI developers.
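
The survey concerns codes built on the MPI programming interface. For readers less familiar with MPI, the following is a minimal, illustrative C sketch (not drawn from any surveyed project) showing the basic usage pattern the survey asks about: initialization, rank and size queries, a collective operation, and finalization.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank = 0, size = 0, sum = 0;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

    /* A simple collective: sum the rank values across all processes. */
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d of %d: sum of ranks = %d\n", rank, size, sum);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}

A program like this would typically be compiled with an MPI wrapper compiler (for example, mpicc) and launched with mpirun or a system-specific job launcher.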

Authors:
Bernholdt, David E. [1]; Boehm, Swen [1]; Bosilca, George [2]; Venkata, Manjunath Gorentla [1]; Grant, Ryan E. [3]; Naughton, Thomas [1]; Pritchard, Howard P. [4]; Schulz, Martin [5]; Vallee, Geoffroy R. [1]
  1. Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
  2. Univ. of Tennessee, Knoxville, TN (United States)
  3. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
  4. Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
  5. Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Technical Univ. of Munich, Munich (Germany)
Publication Date:
September 27, 2018
Research Org.:
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Org.:
USDOE National Nuclear Security Administration (NNSA)
OSTI Identifier:
1477440
Alternate Identifier(s):
OSTI ID: 1474751; OSTI ID: 1560504
Report Number(s):
SAND-2018-6513J
Journal ID: ISSN 1532-0626; 664505
Grant/Contract Number:  
AC04-94AL85000; AC05-00OR22725; NA0003525; FC02-06ER25750; AC52-07NA27344
Resource Type:
Accepted Manuscript
Journal Name:
Concurrency and Computation: Practice and Experience
Additional Journal Information:
Journal Volume: 32; Journal Issue: 3; Journal ID: ISSN 1532-0626
Publisher:
Wiley
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; exascale; MPI

Citation Formats

Bernholdt, David E., Boehm, Swen, Bosilca, George, Venkata, Manjunath Gorentla, Grant, Ryan E., Naughton, Thomas, Pritchard, Howard P., Schulz, Martin, and Vallee, Geoffroy R. A survey of MPI usage in the US exascale computing project. United States: N. p., 2018. Web. doi:10.1002/cpe.4851.
Bernholdt, David E., Boehm, Swen, Bosilca, George, Venkata, Manjunath Gorentla, Grant, Ryan E., Naughton, Thomas, Pritchard, Howard P., Schulz, Martin, & Vallee, Geoffroy R. A survey of MPI usage in the US exascale computing project. United States. https://doi.org/10.1002/cpe.4851
Bernholdt, David E., Boehm, Swen, Bosilca, George, Venkata, Manjunath Gorentla, Grant, Ryan E., Naughton, Thomas, Pritchard, Howard P., Schulz, Martin, and Vallee, Geoffroy R. Thu Sep 27, 2018. "A survey of MPI usage in the US exascale computing project". United States. https://doi.org/10.1002/cpe.4851. https://www.osti.gov/servlets/purl/1477440.
@article{osti_1477440,
title = {A survey of MPI usage in the US exascale computing project},
author = {Bernholdt, David E. and Boehm, Swen and Bosilca, George and Venkata, Manjunath Gorentla and Grant, Ryan E. and Naughton, Thomas and Pritchard, Howard P. and Schulz, Martin and Vallee, Geoffroy R.},
abstractNote = {The Exascale Computing Project (ECP) is currently the primary effort in the United States focused on developing “exascale” levels of computing capabilities, including hardware, software, and applications. To obtain a more thorough understanding of how the software projects under the ECP are using, and planning to use, the Message Passing Interface (MPI), and to help guide the work of our own project within the ECP, we created a survey. Of the 97 ECP projects active at the time the survey was distributed, we received 77 responses, 56 of which reported that their projects were using MPI. This paper reports the results of that survey for the benefit of the broader community of MPI developers.},
doi = {10.1002/cpe.4851},
journal = {Concurrency and Computation: Practice and Experience},
number = 3,
volume = 32,
place = {United States},
year = {2018},
month = {sep}
}

Journal Article:
Free Publicly Available Full Text (Publisher's Version of Record)

Citation Metrics:
Cited by: 36 works (citation information provided by Web of Science)

Works referenced in this record:

Enabling communication concurrency through flexible MPI endpoints
journal, September 2014

  • Dinan, James; Grant, Ryan E.; Balaji, Pavan
  • The International Journal of High Performance Computing Applications, Vol. 28, Issue 4
  • DOI: 10.1177/1094342014548772

GPU-Centric Communication on NVIDIA GPU Clusters with InfiniBand: A Case Study with OpenSHMEM
conference, December 2017

  • Potluri, Sreeram; Goswami, Anshuman; Rossetti, Davide
  • 2017 IEEE 24th International Conference on High Performance Computing (HiPC)
  • DOI: 10.1109/HiPC.2017.00037

A Survey of MPI Usage in the U.S. Exascale Computing Project
report, June 2018

Post-failure recovery of MPI communication capability: Design and rationale
journal, June 2013

  • Bland, Wesley; Bouteiller, Aurelien; Herault, Thomas
  • The International Journal of High Performance Computing Applications, Vol. 27, Issue 3
  • DOI: 10.1177/1094342013488238

Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation
book, January 2004

  • Gabriel, Edgar; Fagg, Graham E.; Bosilca, George
  • Recent Advances in Parallel Virtual Machine and Message Passing Interface
  • DOI: 10.1007/978-3-540-30218-6_19

MPI Sessions: Leveraging Runtime Infrastructure to Increase Scalability of Applications at Exascale
conference, January 2016

  • Holmes, Daniel; Mohror, Kathryn; Grant, Ryan E.
  • Proceedings of the 23rd European MPI Users' Group Meeting (EuroMPI 2016)
  • DOI: 10.1145/2966884.2966915

Why is MPI so slow?: analyzing the fundamental limits in implementing MPI-3.1
conference, January 2017

  • Raffenetti, Ken; Blocksome, Michael; Si, Min
  • Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '17)
  • DOI: 10.1145/3126908.3126963

Works referencing / citing this record:

Application health monitoring for extreme‐scale resiliency using cooperative fault management
journal, July 2019

  • Agarwal, Pratul K.; Naughton, Thomas; Park, Byung H.
  • Concurrency and Computation: Practice and Experience, Vol. 32, Issue 2
  • DOI: 10.1002/cpe.5449

Foreword to the Special Issue of the Workshop on Exascale MPI (ExaMPI 2017)
journal, July 2019

  • Skjellum, Anthony; Bangalore, Purushotham V.; Grant, Ryan E.
  • Concurrency and Computation: Practice and Experience, Vol. 32, Issue 3
  • DOI: 10.1002/cpe.5459