Fine-grained multithreading support for hybrid threaded MPI programming.
- Mathematics and Computer Science
As high-end computing systems continue to grow in scale, recent advances in multi- and many-core architectures have pushed such growth toward denser architectures, that is, more processing elements per physical node rather than more physical nodes themselves. Although a large number of scientific applications have so far relied on an MPI-everywhere model for programming high-end parallel systems, this model may not be sufficient for future machines, given their physical constraints such as decreasing amounts of memory per processing element and shared caches. As a result, application and computer scientists are exploring alternative programming models that involve using MPI between address spaces and some other threaded model, such as OpenMP, Pthreads, or Intel TBB, within an address space. Such hybrid models require efficient support from an MPI implementation for MPI messages sent from multiple threads simultaneously. In this paper, we explore the issues involved in designing such an implementation. We present four approaches to building a fully thread-safe MPI implementation, with decreasing levels of critical-section granularity (from coarse-grain locks to fine-grain locks to lock-free operations) and correspondingly increasing levels of complexity. We present experimental results that demonstrate the performance implications of the different approaches.
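The hybrid model described in the abstract has each thread within an MPI process issue its own MPI calls, which requires the implementation to provide the MPI_THREAD_MULTIPLE level of thread support. The following is a minimal sketch of that usage pattern (MPI + OpenMP), illustrating the programming model whose support the paper studies; it is not the paper's implementation, and the message pattern and tags are chosen only for illustration.

```c
/*
 * Minimal hybrid MPI + OpenMP sketch: every OpenMP thread issues its own
 * MPI calls, so the MPI library must provide MPI_THREAD_MULTIPLE.
 * Illustrative only; not the implementation described in the paper.
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not supported\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each thread exchanges a message with the same thread on the
     * neighboring rank, using the thread id as the tag so that the
     * concurrent sends and receives are matched independently. */
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        int peer = (rank + 1) % size;
        int from = (rank + size - 1) % size;
        int sendbuf = rank * 100 + tid, recvbuf = -1;

        MPI_Sendrecv(&sendbuf, 1, MPI_INT, peer, tid,
                     &recvbuf, 1, MPI_INT, from, tid,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d thread %d received %d\n", rank, tid, recvbuf);
    }

    MPI_Finalize();
    return 0;
}
```

Built with, for example, `mpicc -fopenmp`, this program exercises exactly the case the paper examines: multiple threads of one process inside the MPI library at the same time, where the granularity of the library's internal critical sections determines how much of that concurrency is preserved.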
- Research Organization:
- Argonne National Laboratory (ANL)
- Sponsoring Organization:
- USDOE Office of Science (SC)
- DOE Contract Number:
- AC02-06CH11357
- OSTI ID:
- 1015545
- Report Number(s):
- ANL/MCS/JA-64365
- Journal Information:
- Int. J. High Perform. Comput. Appl., Vol. 24, Issue 1; Feb. 2010; ISSN 1094-3420
- Country of Publication:
- United States
- Language:
- English