OSTI.GOV — U.S. Department of Energy
Office of Scientific and Technical Information

Title: MPI as a coordination layer for communicating HPF tasks

Abstract

Data-parallel languages such as High Performance Fortran (HPF) present a simple execution model in which a single thread of control performs high-level operations on distributed arrays. These languages can greatly ease the development of parallel programs. Yet there are large classes of applications for which a mixture of task and data parallelism is most appropriate. Such applications can be structured as collections of data-parallel tasks that communicate by using explicit message passing. Because the Message Passing Interface (MPI) defines standardized, familiar mechanisms for this communication model, the authors propose that HPF tasks communicate by making calls to a coordination library that provides an HPF binding for MPI. The semantics of a communication interface for sequential languages can be ambiguous when the interface is invoked from a parallel language; they show how these ambiguities can be resolved by describing one possible HPF binding for MPI. They then present the design of a library that implements this binding, discuss issues that influenced the design decisions, and evaluate the performance of a prototype HPF/MPI library using a communications microbenchmark and application kernel. Finally, they discuss how MPI features might be incorporated into the design framework.

Authors:
Foster, I T; Kohr, Jr, D R [1]; Krishnaiyer, R; Choudhary, A [2]
  1. Argonne National Lab., IL (United States). Mathematics and Computer Science Div.
  2. Syracuse Univ., NY (United States)
Publication Date:
1996
Research Org.:
Argonne National Lab. (ANL), Argonne, IL (United States)
Sponsoring Org.:
National Science Foundation, Washington, DC (United States); USDOE Office of Energy Research, Washington, DC (United States)
OSTI Identifier:
418494
Report Number(s):
ANL/MCS-P-597-0596; CONF-9607124-5
ON: DE97000693; TRN: AHC29702%%110
DOE Contract Number:  
W-31109-ENG-38
Resource Type:
Conference
Resource Relation:
Conference: 1996 Message Passing Interface (MPI) developers conference, Notre Dame, IN (United States), 1-2 Jul 1996; Other Information: PBD: [1996]
Country of Publication:
United States
Language:
English
Subject:
99 MATHEMATICS, COMPUTERS, INFORMATION SCIENCE, MANAGEMENT, LAW, MISCELLANEOUS; PARALLEL PROCESSING; PROGRAMMING LANGUAGES; DATA TRANSMISSION; TASK SCHEDULING; IMPLEMENTATION; PERFORMANCE

Citation Formats

Foster, I T, Kohr, Jr, D R, Krishnaiyer, R, and Choudhary, A. MPI as a coordination layer for communicating HPF tasks. United States: N. p., 1996. Web.
Foster, I T, Kohr, Jr, D R, Krishnaiyer, R, & Choudhary, A. MPI as a coordination layer for communicating HPF tasks. United States.
Foster, I T, Kohr, Jr, D R, Krishnaiyer, R, and Choudhary, A. 1996. "MPI as a coordination layer for communicating HPF tasks". United States. https://www.osti.gov/servlets/purl/418494.
@article{osti_418494,
title = {MPI as a coordination layer for communicating HPF tasks},
author = {Foster, I T and Kohr, Jr, D R and Krishnaiyer, R and Choudhary, A},
abstractNote = {Data-parallel languages such as High Performance Fortran (HPF) present a simple execution model in which a single thread of control performs high-level operations on distributed arrays. These languages can greatly ease the development of parallel programs. Yet there are large classes of applications for which a mixture of task and data parallelism is most appropriate. Such applications can be structured as collections of data-parallel tasks that communicate by using explicit message passing. Because the Message Passing Interface (MPI) defines standardized, familiar mechanisms for this communication model, the authors propose that HPF tasks communicate by making calls to a coordination library that provides an HPF binding for MPI. The semantics of a communication interface for sequential languages can be ambiguous when the interface is invoked from a parallel language; they show how these ambiguities can be resolved by describing one possible HPF binding for MPI. They then present the design of a library that implements this binding, discuss issues that influenced the design decisions, and evaluate the performance of a prototype HPF/MPI library using a communications microbenchmark and application kernel. Finally, they discuss how MPI features might be incorporated into the design framework.},
url = {https://www.osti.gov/biblio/418494},
place = {United States},
year = {1996}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
