Accelerating the Global Arrays ComEx Runtime using Multiple Progress Ranks
- BATTELLE (PACIFIC NW LAB)
Partitioned Global Address Space (PGAS) models are part of the system software being designed to support communication runtimes for exascale applications. MPI has been shown to be a viable, scalable PGAS communication subsystem owing to its standardization and high performance. We use MPI two-sided semantics with a combination of automatic and user-defined splitting of MPI communicators to achieve asynchronous progress. Our implementation can use multiple asynchronous progress ranks (PRs) per node, which can be mapped to the compute architecture of each node in a distributed cluster. We demonstrate a significant speedup of over 2.0X and scaling of a communication-bound computational chemistry application over 1024 nodes of state-of-the-art HPC clusters. Our results show that, when running a communication-bound application workload on a given number of cluster nodes, an optimal number of ranks can be dedicated to communication to achieve asynchronous communication progress and obtain the highest performance.
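The abstract describes dedicating some ranks on each node as asynchronous progress ranks (PRs) via user-defined splitting of MPI communicators. As a rough illustration of that partitioning idea (not the paper's actual implementation), the sketch below computes, for a hypothetical consecutive node-by-node rank layout, which ranks would serve as PRs and the color each rank would pass to an `MPI_Comm_split`-style call; the layout policy, function names, and parameters are all illustrative assumptions.

```python
# Hypothetical sketch: partition a set of MPI ranks into compute ranks
# and dedicated progress ranks (PRs), per node. This mirrors the kind of
# user-defined communicator splitting described in the abstract, but the
# placement policy and names are assumptions, not the paper's code.

def partition_ranks(world_size, ranks_per_node, prs_per_node):
    """Return (compute_ranks, progress_ranks), assuming ranks are laid
    out consecutively on each node and the last `prs_per_node` ranks on
    every node are reserved for asynchronous communication progress."""
    assert world_size % ranks_per_node == 0
    assert 0 < prs_per_node < ranks_per_node
    compute, progress = [], []
    for rank in range(world_size):
        local = rank % ranks_per_node          # rank's index within its node
        if local >= ranks_per_node - prs_per_node:
            progress.append(rank)              # dedicated to communication
        else:
            compute.append(rank)               # runs application work
    return compute, progress

def split_color(rank, ranks_per_node, prs_per_node):
    """Color a rank would pass to an MPI_Comm_split-style call:
    0 = compute communicator, 1 = progress communicator."""
    return int(rank % ranks_per_node >= ranks_per_node - prs_per_node)

# Example: 2 nodes, 4 ranks per node, 1 progress rank per node.
compute, progress = partition_ranks(world_size=8, ranks_per_node=4, prs_per_node=1)
print(compute)   # -> [0, 1, 2, 4, 5, 6]
print(progress)  # -> [3, 7]
```

Varying `prs_per_node` for a fixed node count corresponds to the abstract's observation that an optimal number of communication-dedicated ranks can be chosen per workload.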
- Research Organization:
- Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
- Sponsoring Organization:
- USDOE
- DOE Contract Number:
- AC05-76RL01830
- OSTI ID:
- 1598881
- Report Number(s):
- PNNL-SA-143812
- Country of Publication:
- United States
- Language:
- English
Similar Records
- On the Suitability of MPI as a PGAS Runtime. Conference, 2014. OSTI ID: 1194324
- Designing and prototyping extensions to the Message Passing Interface in MPICH. Journal Article, International Journal of High Performance Computing Applications, 2024. OSTI ID: 2571429
- The uintah framework: a unified heterogeneous task scheduling and runtime system. Conference, 2012 SC Companion: High Performance Computing, Networking Storage and Analysis, 10-16 Nov. 2012, Salt Lake City, UT, USA. OSTI ID: 1567606