U.S. Department of Energy
Office of Scientific and Technical Information

MPI as a coordination layer for communicating HPF tasks

Conference
OSTI ID: 418494
  1. Argonne National Lab., IL (United States). Mathematics and Computer Science Div.
  2. Syracuse Univ., NY (United States)

Data-parallel languages such as High Performance Fortran (HPF) present a simple execution model in which a single thread of control performs high-level operations on distributed arrays. These languages can greatly ease the development of parallel programs. Yet there are large classes of applications for which a mixture of task and data parallelism is most appropriate. Such applications can be structured as collections of data-parallel tasks that communicate by using explicit message passing. Because the Message Passing Interface (MPI) defines standardized, familiar mechanisms for this communication model, the authors propose that HPF tasks communicate by making calls to a coordination library that provides an HPF binding for MPI. The semantics of a communication interface designed for sequential languages can be ambiguous when the interface is invoked from a parallel language; they show how these ambiguities can be resolved by describing one possible HPF binding for MPI. They then present the design of a library that implements this binding, discuss issues that influenced the design decisions, and evaluate the performance of a prototype HPF/MPI library using a communications microbenchmark and an application kernel. Finally, they discuss how MPI features might be incorporated into the design framework.
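As a rough illustration of the communication model sketched in the abstract, the Fortran fragment below shows two tasks exchanging an array through standard MPI point-to-point calls (MPI_SEND/MPI_RECV). It is a minimal sketch only, not the paper's actual HPF binding: in the HPF/MPI library the paper describes, each task would itself be a data-parallel HPF program running on several processors, and the coordination library, rather than the user, would move the distributed array between task groups. The HPF DISTRIBUTE directive appears as a comment and is ignored by an ordinary Fortran compiler.

    program hpf_mpi_sketch
      ! Minimal sketch (illustrative only): two tasks exchange an array
      ! using standard MPI point-to-point calls.
      use mpi
      implicit none
      integer, parameter :: n = 1024
      real :: a(n)
    !HPF$ DISTRIBUTE a(BLOCK)   ! HPF directive; a plain Fortran compiler sees a comment
      integer :: rank, ierr, status(MPI_STATUS_SIZE)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

      if (rank == 0) then
        a = 1.0
        ! Task 0 sends the array to task 1 with message tag 99.
        call MPI_SEND(a, n, MPI_REAL, 1, 99, MPI_COMM_WORLD, ierr)
      else if (rank == 1) then
        ! Task 1 receives the array from task 0.
        call MPI_RECV(a, n, MPI_REAL, 0, 99, MPI_COMM_WORLD, status, ierr)
      end if

      call MPI_FINALIZE(ierr)
    end program hpf_mpi_sketch

Compiled with an MPI Fortran compiler and run on two processes, the sketch exchanges a whole array; in the HPF/MPI setting described above, the same call names would instead be resolved by the coordination library, which must handle data that is physically distributed across each task's processors.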

Research Organization:
Argonne National Lab., IL (United States)
Sponsoring Organization:
National Science Foundation, Washington, DC (United States); USDOE Office of Energy Research, Washington, DC (United States)
DOE Contract Number:
W-31-109-ENG-38
OSTI ID:
418494
Report Number(s):
ANL/MCS-P-597-0596; CONF-9607124-5; ON: DE97000693
Country of Publication:
United States
Language:
English

Similar Records

Double standards: bringing task parallelism to HPF via the message passing interface
Conference · 1996 · OSTI ID:469060

MPI nuts and bolts and more [Slides]
Technical Report · Jun 25, 2024 · OSTI ID:2386887

Redistribution of block-cyclic data distributions using MPI
Technical Report · Jun 1, 1995 · OSTI ID:106487