Enabling communication concurrency through flexible MPI endpoints
Journal Article · International Journal of High Performance Computing Applications
- Intel Corporation, Hudson, MA (United States)
- Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
- Argonne National Lab. (ANL), Argonne, IL (United States)
- Cisco Systems Inc., San Jose, CA (United States)
- International Business Machines Corporation, Rochester, MN (United States)
MPI defines a one-to-one relationship between MPI processes and ranks. This model captures many use cases effectively; however, it also limits communication concurrency and interoperability between MPI and programming models that utilize threads. Our paper describes the MPI endpoints extension, which relaxes this longstanding one-to-one relationship between MPI processes and ranks. Using endpoints, an MPI implementation can map separate communication contexts to threads, allowing them to drive communication independently. Endpoints also make threads individually addressable in MPI operations, enhancing interoperability between MPI and other programming models. We illustrate these characteristics through several examples and an empirical study showing that current multithreaded MPI communication falls well short of the degree of communication concurrency needed to reach peak communication performance.
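The abstract's core idea can be sketched in C. The sketch below uses `MPI_Comm_create_endpoints`, the interface proposed in the endpoints papers cited by this record; it is a proposed extension, not part of the MPI standard, so it will not compile against a stock MPI library, and the exact signature shown is an assumption based on the proposal. Each process collectively requests several endpoints, receiving one communicator handle per endpoint; each thread then attaches to its own handle, becoming an independently addressable rank with its own communication context.

```c
/* Sketch only: MPI_Comm_create_endpoints is the PROPOSED extension described
 * by this paper, not a standard MPI call, so this does not compile against
 * an unmodified MPI implementation. */
#include <mpi.h>
#include <omp.h>

#define NUM_EP 4  /* endpoints (and threads) per process; illustrative choice */

int main(int argc, char **argv) {
    int provided;
    MPI_Comm ep_comm[NUM_EP];

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* Proposed collective call: each process in MPI_COMM_WORLD creates
     * NUM_EP endpoint ranks, returning one communicator handle per rank. */
    MPI_Comm_create_endpoints(MPI_COMM_WORLD, NUM_EP, MPI_INFO_NULL, ep_comm);

    #pragma omp parallel num_threads(NUM_EP)
    {
        int tid = omp_get_thread_num();
        int ep_rank;

        /* Each thread attaches to its own endpoint: it now has a distinct
         * rank, can be named as a message target, and can drive its own
         * communication context without contending with other threads. */
        MPI_Comm_rank(ep_comm[tid], &ep_rank);
        MPI_Barrier(ep_comm[tid]);
        MPI_Comm_free(&ep_comm[tid]);
    }

    MPI_Finalize();
    return 0;
}
```

For contrast, the closest approximation in standard MPI-3 is `MPI_THREAD_MULTIPLE` with a per-thread `MPI_Comm_dup`: that separates message matching across threads but still leaves every thread sharing the parent process's single rank, which is precisely the limitation the endpoints extension removes.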
- Research Organization:
- Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
- Sponsoring Organization:
- USDOE National Nuclear Security Administration (NNSA)
- Grant/Contract Number:
- AC04-94AL85000
- OSTI ID:
- 1140752
- Alternate ID(s):
- OSTI ID: 1392394
- Report Number(s):
- SAND2014-0614J; 498469
- Journal Information:
- International Journal of High Performance Computing Applications, Vol. 28, Issue 4; ISSN 1094-3420
- Publisher:
- SAGE
- Country of Publication:
- United States
- Language:
- English
Works referenced in this record:
- MPI-2: Extending the message-passing interface (book, January 1996)
- Dynamic Communicators in MPI (book, January 2009)
- Compact and Efficient Implementation of the MPI Group Operations (book, January 2010)
- Enabling Concurrent Multithreaded MPI Communication on Multicore Petascale Systems (book, January 2010)
- Leveraging MPI’s One-Sided Communication Interface for Shared-Memory Programming (book, January 2012)
- MVAPICH2-GPU: optimized GPU to GPU communication for InfiniBand clusters (journal, April 2011)
- MPI + MPI: a new hybrid approach to parallel programming with MPI plus shared memory (journal, May 2013)
- Generalized communicators in the message passing interface (journal, June 2001)
- Multi-threaded UPC runtime with network endpoints: Design alternatives and evaluation on multi-core architectures (conference, December 2011)
- Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation (conference, September 2012)
- PAMI: A Parallel Active Message Interface for the Blue Gene/Q Supercomputer (conference, May 2012)
- FG-MPI: Fine-grain MPI for multicore and clusters (conference, April 2010)
- Hybrid MPI/OpenMP Parallel Programming on Clusters of Multi-Core SMP Nodes (conference, February 2009)
- Network Endpoints for Clusters of SMPs (conference, October 2012)
- Evaluating NIC hardware requirements to achieve high message rate PGAS support on multi-core processors (conference, January 2007)
- Hybrid parallel programming with MPI and unified parallel C (conference, January 2010)
- Hybrid PGAS runtime support for multicore nodes (conference, January 2010)
- Unifying UPC and MPI runtimes: experience with MVAPICH (conference, January 2010)
- Extending MPI to accelerators (conference, January 2011)
- Ownership passing: efficient distributed memory programming on multi-core systems (conference, January 2013)
- Enabling MPI interoperability through flexible communication endpoints (conference, January 2013)
- The impact of hybrid-core processors on MPI message rate (conference, January 2013)
- NUMA-aware shared-memory collective communication for MPI (conference, January 2013)
- Portable, MPI-interoperable Coarray Fortran (conference, January 2014)
- Development of Mixed Mode MPI / OpenMP Applications (journal, January 2001)