Scalable PGAS Metadata Management on Extreme Scale Systems
Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to the processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) relative to application data. We present an evaluation of several strategies for managing PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
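The space/time tradeoff described above can be illustrated with a minimal sketch (not taken from the paper; all names are hypothetical). For a block-distributed one-dimensional array, a runtime can either store an explicit per-element owner table, whose size grows linearly with the array, or recompute the owner on demand from a closed-form distribution rule, keeping metadata at O(1) per array at the cost of a little arithmetic on every lookup:

```python
# Hypothetical sketch of two PGAS metadata strategies for locating the
# owner of a global array element under a simple block distribution.
# These functions are illustrative assumptions, not the paper's runtime.

def build_owner_table(n_elems, n_procs):
    """O(n_elems) metadata: an explicit table mapping index -> owner."""
    block = (n_elems + n_procs - 1) // n_procs  # ceiling division
    return [i // block for i in range(n_elems)]

def owner_closed_form(index, n_elems, n_procs):
    """O(1) metadata: recompute the owner from the distribution rule."""
    block = (n_elems + n_procs - 1) // n_procs
    return index // block

# Both strategies agree on every index; they differ only in how much
# memory the runtime dedicates to metadata versus per-lookup compute.
table = build_owner_table(1000, 8)
assert all(table[i] == owner_closed_form(i, 1000, 8) for i in range(1000))
```

Real distributions (block-cyclic, irregular, multi-dimensional) make the closed-form case more involved, which is why runtimes weigh such strategies against lookup cost, as the evaluation in this work does.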
- Research Organization: Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
- Sponsoring Organization: USDOE
- DOE Contract Number: AC05-76RL01830
- OSTI ID: 1089070
- Report Number(s): PNNL-SA-93214
- Resource Relation: Conference: 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid'13), May 13-16, 2013, Delft, Netherlands, 103-111
- Country of Publication: United States
- Language: English