Tuning collective communication for Partitioned Global Address Space programming models
Journal Article · Parallel Computing
- Univ. of California, Berkeley, CA (United States)
- Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
Partitioned Global Address Space (PGAS) languages offer programmers the convenience of a shared-memory programming style combined with the locality control necessary to run on large-scale distributed-memory systems. Even within a PGAS language, programmers often need to perform global communication operations such as broadcasts or reductions, which are best expressed as collective operations in which a group of threads works together to perform the operation. In this study we consider the problem of implementing collective communication within PGAS languages and explore some of the design trade-offs in both the interface and the implementation. In particular, PGAS collectives raise semantic issues different from those in send–receive style message-passing programs, and admit implementation approaches that take advantage of the one-sided communication style of these languages. We present an implementation framework for PGAS collectives as part of the GASNet communication layer, which supports shared-memory, distributed-memory, and hybrid systems. The framework supports a broad set of algorithms for each collective, over which the implementation may be automatically tuned. We demonstrate the benefit of optimized GASNet collectives using application benchmarks written in UPC, and show that the GASNet collectives deliver scalable performance on a variety of state-of-the-art parallel machines, including a Cray XT4, an IBM BlueGene/P, and a Sun Constellation system with an InfiniBand interconnect.
- Research Organization:
- Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States)
- Sponsoring Organization:
- National Science Foundation (NSF) (United States); USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
- Grant/Contract Number:
- AC02-05CH11231; AC02-06CH11357; AC05-00OR22725; FC02-07ER25799; FC03-01ER25509
- OSTI ID:
- 1407106
- Journal Information:
- Parallel Computing, Vol. 37, Issue 9; ISSN 0167-8191
- Publisher:
- Elsevier
- Country of Publication:
- United States
- Language:
- English
| Parallel and scalable short-read alignment on multi-core clusters using UPC++ | text | January 2016 |
| GASNet-EX: A High-Performance, Portable Communication Library for Exascale | book | November 2019 |
| A view of programming scalable data analysis: from clouds to exascale | journal | February 2019 |
| GASNet-EX: A High-Performance, Portable Communication Library for Exascale | report | October 2018 |
Similar Records
Porting GASNet to Portals: Partitioned Global Address Space (PGAS) Language Support for the Cray XT
Conference · May 2009 · OSTI ID: 1407075
Optimized collectives for PGAS languages with one-sided communication (Poster)
Conference · November 2006 · OSTI ID: 1511298
Global-Address Space Networking for Exascale
Software · September 2018 · OSTI ID: code-18015