
Message passing and shared address space parallelism on an SMP cluster

Journal Article · Parallel Computing
OSTI ID: 825127
Currently, message passing (MP) and shared address space (SAS) are the two leading parallel programming paradigms. MP has been standardized with MPI, and is the more common and mature approach; however, code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of, and the programming effort required for, six applications under both programming models on a 32-processor PC-SMP cluster, a platform that is becoming increasingly attractive for high-end scientific computing. Our application suite consists of codes that typically do not exhibit scalable performance under shared-memory programming due to their high communication-to-computation ratios and/or complex communication patterns. Results indicate that SAS achieves about half the parallel efficiency of MPI for most of our applications, while being competitive for the others. A hybrid MPI + SAS strategy shows only a small performance advantage over pure MPI in some cases. Finally, improved implementations of two MPI collective operations on PC-SMP clusters are presented.
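As an illustrative sketch only (the paper's applications are full scientific codes on an SMP cluster, not shown here), the contrast between the two paradigms can be mimicked with Python threads: in the message-passing version every data exchange is explicit, while the shared-address-space version simply reads and writes one shared variable and instead needs explicit synchronization. All names below are invented for this toy example.

```python
import threading
import queue

def msg_passing_sum(data, nthreads=4):
    # Message-passing style: each worker owns a private chunk and
    # communicates its partial sum to the coordinator over an
    # explicit channel (the queue plays the role of MPI messages).
    q = queue.Queue()
    chunks = [data[i::nthreads] for i in range(nthreads)]
    threads = [threading.Thread(target=lambda c=c: q.put(sum(c)))
               for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Coordinator reduces the received partial sums.
    return sum(q.get() for _ in range(nthreads))

def shared_address_sum(data, nthreads=4):
    # Shared-address-space style: all workers update one shared
    # accumulator directly; no messages, but a lock is required
    # to make the concurrent updates correct.
    total = [0]
    lock = threading.Lock()

    def worker(chunk):
        s = sum(chunk)        # private computation
        with lock:            # synchronized shared update
            total[0] += s

    chunks = [data[i::nthreads] for i in range(nthreads)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

if __name__ == "__main__":
    data = list(range(1000))
    print(msg_passing_sum(data))      # 499500
    print(shared_address_sum(data))   # 499500
```

The ease-of-programming argument in the abstract is visible even at this scale: the SAS version is the more direct transcription of the serial reduction, while the MP version must restructure the computation around explicit sends and receives.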
Research Organization:
Ernest Orlando Lawrence Berkeley National Laboratory, Berkeley, CA (US)
Sponsoring Organization:
USDOE Office of Science, Office of Computational and Technology Research; National Science Foundation Grant ESS-9806751; PECASE; Sloan Research Fellowship (US)
DOE Contract Number:
AC03-76SF00098
Report Number(s):
LBNL-53114
Journal Information:
Parallel Computing, Vol. 29; CODEN: PACOEJ; ISSN: 0167-8191
Country of Publication:
United States
Language:
English