U.S. Department of Energy
Office of Scientific and Technical Information

A Global View Programming Abstraction for Transitioning MPI Codes to PGAS Languages

Conference
Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
The multicore generation of scientific high-performance computing has provided a platform for the realization of exascale computing, and has also underscored the need for new paradigms for coding parallel applications. The current standard for writing parallel applications requires programmers to use languages designed for sequential execution, whose abstractions allow programmers to operate only on a process-centric, local view of data. To provide languages better suited to parallel execution, many research efforts have designed languages based on the Partitioned Global Address Space (PGAS) programming model. Chapel is one of the more recent languages developed using this model; it supports multithreaded execution with high-level abstractions for parallelism. With Chapel in mind, we have developed a set of directives that serve as intermediate expressions for transitioning scientific applications from languages designed for sequential execution to PGAS languages like Chapel, which are being developed with parallelism in mind.
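The local-view versus global-view distinction the abstract draws can be illustrated with a minimal sketch. This is not code from the paper: it simulates, in plain Python, how an MPI-style local view forces the programmer to translate global indices into (rank, offset) pairs by hand, whereas a PGAS-style global view (as in Chapel) presents one logical array and leaves the distribution to the runtime. All names (`owner_and_offset`, `local_read`, `global_read`) are illustrative.

```python
N = 12          # global array size
P = 4           # number of processes (simulated here, not real MPI ranks)
block = N // P  # block size per process, assuming N divides evenly by P

# Local view: each rank stores only its own slice, so reading a global
# index requires manual translation into an owning rank and local offset.
def owner_and_offset(global_i):
    return global_i // block, global_i % block

local_data = [[rank * block + off for off in range(block)]
              for rank in range(P)]

def local_read(global_i):
    rank, off = owner_and_offset(global_i)
    # In a real MPI code, a non-owning rank would need explicit
    # message passing here; the indexing arithmetic is the point.
    return local_data[rank][off]

# Global view: one logical array indexed directly; in a PGAS language
# the runtime hides which process actually owns each element.
global_data = list(range(N))

def global_read(global_i):
    return global_data[global_i]

# Both views expose the same logical data.
assert all(local_read(i) == global_read(i) for i in range(N))
```

The directives described in the paper aim to bridge exactly this gap: letting code written against the local view be expressed in global-view terms before a full port to Chapel.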
Research Organization:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Organization:
USDOE Office of Science; USDOE
OSTI ID:
1567508
Country of Publication:
United States
Language:
English


Similar Records

A Global View Programming Abstraction for Transitioning MPI Codes to PGAS Languages
Conference · December 31, 2013 · OSTI ID: 1122698

Integrating PGAS and MPI-based Graph Analysis
Technical Report · October 1, 2021 · OSTI ID: 1832085

Porting GASNet to Portals: Partitioned Global Address Space (PGAS) Language Support for the Cray XT
Conference · May 4, 2009 · OSTI ID: 1407075