
Designing and prototyping extensions to the Message Passing Interface in MPICH

Journal Article · International Journal of High Performance Computing Applications
As HPC system architectures and the applications running on them continue to evolve, the MPI standard itself must evolve. The trend in current and future HPC systems toward powerful nodes with multiple CPU cores and multiple GPU accelerators makes efficient support for hybrid programming critical for applications to achieve high performance. However, support for hybrid programming in the MPI standard has not kept pace with these trends. The MPICH implementation of MPI provides a platform for implementing and experimenting with new proposals and extensions to fill this gap and to gain valuable experience and feedback before the MPI Forum can consider them for standardization. In this work, we detail six extensions implemented in MPICH to increase MPI interoperability with other runtimes, with a specific focus on heterogeneous architectures. First, the extension to MPI generalized requests lets applications integrate asynchronous tasks into MPI’s progress engine. Second, the iovec extension to datatypes lets applications use MPI datatypes as a general-purpose data layout API beyond just MPI communications. Third, a new MPI object, MPIX_Stream, can be used by applications to identify execution contexts beyond MPI processes, including threads and GPU streams. MPIX Stream communicators can be created to make existing MPI functions thread-aware and GPU-aware, thus providing applications with explicit ways to achieve higher performance. Fourth, MPIX Streams are extended to support enqueue semantics for offloading MPI communications onto a GPU stream context. Fifth, thread communicators allow MPI communicators to be constructed from individual threads, thus providing a new level of interoperability between MPI and on-node runtimes such as OpenMP. Lastly, we present an extension to invoke MPI progress, which lets users spawn progress threads with fine-grained control to adapt communication performance to their application designs. We describe the design and implementation of these extensions, provide usage examples, and highlight their expected benefits with performance results.
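
To illustrate the MPIX_Stream and enqueue extensions summarized in the abstract, the sketch below binds a CUDA stream to an MPIX_Stream, derives a stream communicator from MPI_COMM_WORLD, and enqueues a send onto the GPU stream. This is a minimal sketch based on the MPIX extensions prototyped in recent MPICH releases; the info keys ("type", "value"), MPIX_Info_set_hex, and the exact enqueue signatures are assumptions that may differ between MPICH versions, and the helper function send_on_stream is hypothetical.

    /* Sketch only: assumes the MPIX_Stream prototype APIs in recent MPICH
     * (MPIX_Stream_create, MPIX_Stream_comm_create, MPIX_Send_enqueue,
     * MPIX_Info_set_hex); exact names and signatures may differ. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    /* Hypothetical helper: send 'count' floats from a GPU buffer to 'peer',
     * ordered with respect to kernels already enqueued on a CUDA stream. */
    void send_on_stream(const void *gpu_buf, int count, int peer)
    {
        cudaStream_t cuda_stream;
        cudaStreamCreate(&cuda_stream);

        /* Describe the CUDA stream to MPI through info hints (assumed keys). */
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "type", "cudaStream_t");
        MPIX_Info_set_hex(info, "value", &cuda_stream, sizeof(cuda_stream));

        /* Create an MPIX stream bound to the CUDA stream, then derive a
         * stream communicator from MPI_COMM_WORLD. */
        MPIX_Stream mpi_stream;
        MPIX_Stream_create(info, &mpi_stream);
        MPI_Info_free(&info);

        MPI_Comm stream_comm;
        MPIX_Stream_comm_create(MPI_COMM_WORLD, mpi_stream, &stream_comm);

        /* The send is enqueued onto the CUDA stream: it runs after previously
         * enqueued GPU work and does not block the host thread. */
        MPIX_Send_enqueue(gpu_buf, count, MPI_FLOAT, peer, /* tag */ 0, stream_comm);

        /* Synchronize the GPU stream before reusing the buffer. */
        cudaStreamSynchronize(cuda_stream);

        MPI_Comm_free(&stream_comm);
        MPIX_Stream_free(&mpi_stream);
        cudaStreamDestroy(cuda_stream);
    }

A local MPIX_Stream created with MPI_INFO_NULL instead of the CUDA hints serializes operations from a single thread onto the communicator, which is how the thread-aware stream communicators described above avoid internal locking.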
Research Organization:
Argonne National Laboratory (ANL), Argonne, IL (United States)
Sponsoring Organization:
USDOE National Nuclear Security Administration (NNSA); USDOE Office of Science (SC)
Grant/Contract Number:
AC02-06CH11357
OSTI ID:
2571429
Journal Information:
International Journal of High Performance Computing Applications, Vol. 38; ISSN 1094-3420; ISSN 1741-2846
Publisher:
SAGE
Country of Publication:
United States
Language:
English

Similar Records

Preparing MPICH for exascale
Journal Article · January 2025 · International Journal of High Performance Computing Applications · OSTI ID: 2506860

Qthreads Support for MPICH
Software · November 2023 · OSTI ID: code-128134