Summary: Building MPI for Multi-Programming Systems using Implicit Information
Frederick C. Wong 1, Andrea C. Arpaci-Dusseau 2, and David E. Culler 1
1 Computer Science Division, University of California, Berkeley
2 Computer Systems Laboratory, Stanford University
Abstract. With the growing importance of fast system area networks in the parallel community, it is becoming common for message-passing programs to run in multiprogramming environments. Competing sequential and parallel jobs can distort the global coordination of communicating processes. In this paper, we describe our implementation of MPI using implicit information for global coscheduling. Our results show that MPI program performance is, indeed, sensitive to local scheduling variations. Further, the integration of implicit coscheduling with the MPI runtime system achieves robust performance in a multiprogramming environment, without compromising performance in dedicated use.
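Implicit coscheduling typically replaces pure busy-waiting at a receive with a two-phase wait: spin briefly (cheap when the communicating peers happen to be scheduled together), then block and yield the processor to competing jobs. The sketch below illustrates the idea only; the function name, the Event-based arrival signal, and the spin limit are illustrative assumptions, not the paper's actual MPI runtime implementation.

```python
import time
import threading

def two_phase_wait(arrived, spin_limit_s=0.0005):
    """Wait for a message using a two-phase spin-then-block strategy.

    Phase 1 spins, polling for arrival: this is cheap if the sender's
    process is coscheduled and the message lands quickly.
    Phase 2 blocks, releasing the CPU to competing sequential or
    parallel jobs.  `arrived` is a threading.Event set by the
    communication layer when the message arrives (an assumption made
    for this sketch).  Returns which phase observed the arrival.
    """
    deadline = time.monotonic() + spin_limit_s
    # Phase 1: spin until the spin budget is exhausted.
    while True:
        if arrived.is_set():
            return "spin"       # arrived while we were still spinning
        if time.monotonic() >= deadline:
            break
    # Phase 2: block; the scheduler may now run other jobs.
    arrived.wait()
    return "block"
```

A message that arrives within the spin window is consumed without a context switch, while a late one costs only the spin budget before the waiter yields, which is the trade-off that keeps dedicated-mode performance intact.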
With the emergence of fast system area networks and low-overhead communication interfaces, it is becoming common for parallel MPI programs to run in cluster environments that offer both high-performance communication and multiprogramming.