
DOE PAGES

Title: Enabling communication concurrency through flexible MPI endpoints

MPI defines a one-to-one relationship between MPI processes and ranks. This model captures many use cases effectively; however, it also limits communication concurrency and interoperability between MPI and programming models that utilize threads. Our paper describes the MPI endpoints extension, which relaxes this longstanding one-to-one relationship between MPI processes and ranks. Using endpoints, an MPI implementation can map separate communication contexts to threads, allowing them to drive communication independently. Endpoints also make threads addressable in MPI operations, enhancing interoperability between MPI and other programming models. These characteristics are illustrated through several examples and an empirical study showing that current multithreaded communication performance falls short of the high degrees of communication concurrency needed to achieve peak communication performance.
Authors:
 [1]; [2]; [3]; [4]; [5]; [3]; [3]
  1. Intel Corporation, Hudson, MA (United States)
  2. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
  3. Argonne National Lab. (ANL), Argonne, IL (United States)
  4. Cisco Systems Inc., San Jose, CA (United States)
  5. International Business Machines Corporation, Rochester, MN (United States)
Publication Date:
OSTI Identifier:
1140752
Report Number(s):
SAND2014-0614J
Journal ID: ISSN 1094-3420; 498469
Grant/Contract Number:
AC04-94AL85000
Type:
Accepted Manuscript
Journal Name:
International Journal of High Performance Computing Applications
Additional Journal Information:
Journal Volume: 28; Journal Issue: 4; Journal ID: ISSN 1094-3420
Publisher:
SAGE
Research Org:
Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
Sponsoring Org:
USDOE National Nuclear Security Administration (NNSA)
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; MPI; endpoints; hybrid parallel programming; interoperability; communication concurrency