OSTI.GOV | U.S. Department of Energy
Office of Scientific and Technical Information

Title: Enabling communication concurrency through flexible MPI endpoints

Abstract

MPI defines a one-to-one relationship between MPI processes and ranks. This model captures many use cases effectively; however, it also limits communication concurrency and interoperability between MPI and programming models that utilize threads. This paper describes the MPI endpoints extension, which relaxes the longstanding one-to-one relationship between MPI processes and ranks. Using endpoints, an MPI implementation can map separate communication contexts to threads, allowing them to drive communication independently. Endpoints also enable threads to be addressable in MPI operations, enhancing interoperability between MPI and other programming models. These characteristics are illustrated through several examples and an empirical study that contrasts current multithreaded communication performance with the need for high degrees of communication concurrency to achieve peak communication performance.
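The endpoints interface described in the paper can be sketched as follows. This is illustrative pseudocode, not compilable code: MPI_Comm_create_endpoints is the extension proposed by the authors and is not part of the released MPI standard, so no stock MPI implementation provides it. The hybrid MPI+OpenMP structure around it is the usage pattern the paper targets.

```c
/* Illustrative pseudocode based on the endpoints extension proposed in
 * this paper. MPI_Comm_create_endpoints is NOT part of the MPI standard;
 * this sketch will not compile against a stock MPI implementation. */
#include <mpi.h>
#include <omp.h>

#define NUM_EP 4  /* endpoints (and threads) per MPI process */

int main(int argc, char **argv) {
    MPI_Comm ep_comms[NUM_EP];
    int provided;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* Proposed call: each MPI process creates NUM_EP endpoints in a new
     * communicator. Every endpoint receives its own rank, relaxing the
     * one-to-one mapping between processes and ranks. */
    MPI_Comm_create_endpoints(MPI_COMM_WORLD, NUM_EP, MPI_INFO_NULL,
                              ep_comms);

    #pragma omp parallel num_threads(NUM_EP)
    {
        int tid = omp_get_thread_num();
        int ep_rank;

        /* Each thread drives communication through its own endpoint
         * handle, so it is independently addressable in MPI operations
         * and can make communication progress independently. */
        MPI_Comm_rank(ep_comms[tid], &ep_rank);

        /* ... point-to-point or collective calls on ep_comms[tid],
         *     addressing peers by their endpoint ranks ... */

        MPI_Comm_free(&ep_comms[tid]);
    }

    MPI_Finalize();
    return 0;
}
```

The key design point the sketch illustrates is that concurrency comes from giving each thread a separate communication context (its own rank and endpoint), rather than serializing all threads through the single rank of their parent process.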

Authors:
Dinan, James; Grant, Ryan E.; Balaji, Pavan; Goodell, David; Miller, Douglas; Snir, Marc; Thakur, Rajeev
Publication Date:
2014-09
Research Org.:
Argonne National Lab. (ANL), Argonne, IL (United States)
Sponsoring Org.:
USDOE Office of Science (SC)
OSTI Identifier:
1392394
DOE Contract Number:  
AC02-06CH11357
Resource Type:
Journal Article
Journal Name:
International Journal of High Performance Computing Applications
Additional Journal Information:
Journal Volume: 28; Journal Issue: 4; Journal ID: ISSN 1094-3420
Publisher:
SAGE
Country of Publication:
United States
Language:
English
Subject:
Communication; Endpoints; Hybrid Parallel Programming; Interoperability; MPI

Citation Formats

Dinan, James, Grant, Ryan E., Balaji, Pavan, Goodell, David, Miller, Douglas, Snir, Marc, and Thakur, Rajeev. Enabling communication concurrency through flexible MPI endpoints. United States: N. p., 2014. Web. doi:10.1177/1094342014548772.
Dinan, James, Grant, Ryan E., Balaji, Pavan, Goodell, David, Miller, Douglas, Snir, Marc, & Thakur, Rajeev. Enabling communication concurrency through flexible MPI endpoints. United States. doi:10.1177/1094342014548772.
Dinan, James, Grant, Ryan E., Balaji, Pavan, Goodell, David, Miller, Douglas, Snir, Marc, and Thakur, Rajeev. 2014. "Enabling communication concurrency through flexible MPI endpoints". United States. doi:10.1177/1094342014548772.
@article{osti_1392394,
title = {Enabling communication concurrency through flexible MPI endpoints},
author = {Dinan, James and Grant, Ryan E. and Balaji, Pavan and Goodell, David and Miller, Douglas and Snir, Marc and Thakur, Rajeev},
abstractNote = {MPI defines a one-to-one relationship between MPI processes and ranks. This model captures many use cases effectively; however, it also limits communication concurrency and interoperability between MPI and programming models that utilize threads. This paper describes the MPI endpoints extension, which relaxes the longstanding one-to-one relationship between MPI processes and ranks. Using endpoints, an MPI implementation can map separate communication contexts to threads, allowing them to drive communication independently. Endpoints also enable threads to be addressable in MPI operations, enhancing interoperability between MPI and other programming models. These characteristics are illustrated through several examples and an empirical study that contrasts current multithreaded communication performance with the need for high degrees of communication concurrency to achieve peak communication performance.},
doi = {10.1177/1094342014548772},
journal = {International Journal of High Performance Computing Applications},
issn = {1094-3420},
number = {4},
volume = {28},
place = {United States},
year = {2014},
month = {9}
}