U.S. Department of Energy
Office of Scientific and Technical Information

MPI on millions of cores

Journal Article · Parallel Processing Letters
Petascale parallel computers with more than a million processing cores are expected to be available in a couple of years. Although MPI is the dominant programming interface today for large-scale systems, whose largest instances already have close to 300,000 processors, a challenging question for both researchers and users is whether MPI will scale to processor and core counts in the millions. In this paper, we examine the scalability of MPI to very large systems. We first examine the MPI specification itself, discuss areas with scalability concerns, and show how they can be overcome. We then investigate issues that an MPI implementation must address in order to be scalable. To illustrate these issues, we ran a number of simple experiments measuring MPI memory consumption at scales up to 131,072 processes, or 80% of the IBM Blue Gene/P system at Argonne National Laboratory. Based on the results, we identified nonscalable aspects of the MPI implementation and found ways to tune it to reduce its memory footprint. We also briefly discuss application scalability at large process counts and features of MPI that enable the use of other techniques to alleviate scalability limitations in applications.
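The memory-consumption experiments the abstract describes can be pictured with a short sketch. The following is a minimal illustration, not the authors' actual test harness: each process samples its resident set size before and after MPI_Init, and the maximum growth is reduced to rank 0. It assumes a Linux-style /proc filesystem, and read_vm_rss_kb is a hypothetical helper defined here for the example.

/* Minimal sketch of a memory-footprint-at-scale experiment (illustrative;
 * not the paper's harness). Assumes a Linux-style /proc filesystem. */
#include <mpi.h>
#include <stdio.h>

/* Hypothetical helper: parse VmRSS (kB) from /proc/self/status;
 * returns 0 if the file or field is unavailable. */
static long read_vm_rss_kb(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    long kb = 0;
    if (!f) return 0;
    while (fgets(line, sizeof line, f))
        if (sscanf(line, "VmRSS: %ld kB", &kb) == 1)
            break;
    fclose(f);
    return kb;
}

int main(int argc, char **argv)
{
    long before = read_vm_rss_kb();          /* footprint before MPI starts */
    MPI_Init(&argc, &argv);
    long delta = read_vm_rss_kb() - before;  /* growth due to MPI_Init */
    long max_delta = 0;
    int rank, size;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* If per-process MPI state grows with the number of ranks (e.g.,
     * O(P) connection or rank-mapping tables), the maximum growth
     * increases as the job is scaled up. */
    MPI_Reduce(&delta, &max_delta, 1, MPI_LONG, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("%d processes: max MPI_Init footprint growth = %ld kB\n",
               size, max_delta);

    MPI_Finalize();
    return 0;
}

Run at increasing process counts, a plot of max_delta against the number of processes makes any O(P) per-process state in the MPI library directly visible, which is the kind of nonscalable behavior the paper identifies and tunes away.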
Research Organization: Argonne National Laboratory (ANL)
Sponsoring Organization: USDOE Office of Science - Office of Advanced Scientific Computing Research; National Science Foundation (NSF)
DOE Contract Number: AC02-06CH11357
OSTI ID: 1395013
Journal Information: Parallel Processing Letters, Vol. 21, Issue 01; ISSN 0129-6264
Publisher: World Scientific
Country of Publication: United States
Language: English

Similar Records

MPI-hybrid Parallelism for Volume Rendering on Large, Multi-core Systems
Conference · March 2010 · OSTI ID: 983174

Petascale Parallelization of the Gyrokinetic Toroidal Code
Conference · May 2010 · OSTI ID: 1032521

Improving Multi-Million Virtual Rank MPI Execution in
Conference · December 2010 · OSTI ID: 1022648