Open issues in MPI implementation.
MPI (the Message Passing Interface) continues to be the dominant programming model for parallel machines of all sizes, from small Linux clusters to the largest parallel supercomputers such as IBM Blue Gene/L and Cray XT3. Although the MPI standard was released more than 10 years ago and a number of implementations of MPI are available from both vendors and research groups, MPI implementations still need improvement in many areas. In this paper, we discuss several such areas, including performance, scalability, fault tolerance, support for debugging and verification, topology awareness, collective communication, derived datatypes, and parallel I/O. We also present results from experiments with several MPI implementations (MPICH2, Open MPI, Sun, IBM) on a number of platforms (Linux clusters, Sun and IBM SMPs) that demonstrate the need for performance improvement in one-sided communication and support for multithreaded programs.
- Research Organization:
- Argonne National Lab. (ANL), Argonne, IL (United States)
- Sponsoring Organization:
- USDOE Office of Science (SC)
- DOE Contract Number:
- DE-AC02-06CH11357
- OSTI ID:
- 971149
- Report Number(s):
- ANL/MCS/CP-59524; TRN: US201003%%597
- Resource Relation:
- Conference: 12th Asia-Pacific Computer Systems Architecture Conference (ACSAC 2007); Aug. 23, 2007 - Aug. 25, 2007; Seoul, Korea
- Country of Publication:
- United States
- Language:
- English