OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Evaluating the Performance Impact of Xen on MPI and Process Execution For HPC Systems

Abstract

Virtualization has become increasingly popular for enabling full system isolation, load balancing, and hardware multiplexing for high-end server systems. Virtualizing software has the potential to benefit HPC systems similarly by facilitating efficient cluster management, application isolation, full-system customization, and process migration. However, virtualizing software is not currently employed in HPC environments due to its perceived overhead. In this work, we investigate the overhead imposed by the popular open-source Xen virtualization system on performance-critical HPC kernels and applications. We empirically evaluate the impact of Xen on both communication and computation and compare its use to that of a customized kernel using HPC cluster resources at Lawrence Livermore National Laboratory (LLNL). We also employ statistically sound methods to compare the performance of a paravirtualized kernel against three popular Linux kernel configurations: Red Hat Enterprise Linux 4 (RHEL4) with kernel builds 2.6.9 and 2.6.12, and the LLNL CHAOS kernel, a specialized version of RHEL4. Our results indicate that Xen is very efficient and practical for HPC systems.
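The abstract notes that the study uses statistically sound methods to compare benchmark timings across kernels. As a minimal illustrative sketch (not the authors' actual methodology, and using made-up timing numbers), one common approach is to compare two samples of per-run wall-clock times with a Welch-style confidence interval for the difference of means:

```python
import math
import statistics

def welch_ci_diff(a, b, z=1.96):
    """Approximate 95% confidence interval for mean(a) - mean(b),
    using a normal (Welch-style) approximation with unequal variances.
    Hypothetical helper for illustration; not from the report."""
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    # Standard error of the difference, allowing unequal variances.
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    diff = mean_a - mean_b
    return diff - z * se, diff + z * se

# Hypothetical per-run wall-clock times (seconds) for the same HPC kernel
# under a Xen guest and under a native (e.g., CHAOS) kernel.
xen_times   = [10.31, 10.28, 10.35, 10.30, 10.33]
chaos_times = [10.29, 10.27, 10.32, 10.28, 10.30]

lo, hi = welch_ci_diff(xen_times, chaos_times)
# If the interval contains 0, the observed difference between the two
# kernels is not statistically distinguishable at this confidence level.
print(f"95% CI for mean difference: [{lo:.4f}, {hi:.4f}] s")
```

With these (invented) samples the interval straddles zero, which is the kind of evidence behind a claim that virtualization overhead is statistically insignificant for a given workload.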

Authors:
Youseff, L; Wolski, R; Gorda, B; Krintz, C
Publication Date:
December 21, 2006
Research Org.:
Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
897944
Report Number(s):
UCRL-TR-226980
TRN: US200705%%580
DOE Contract Number:
W-7405-ENG-48
Resource Type:
Technical Report
Country of Publication:
United States
Language:
English
Subject:
99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; COMMUNICATIONS; KERNELS; LAWRENCE LIVERMORE NATIONAL LABORATORY; MANAGEMENT; PERFORMANCE

Citation Formats

Youseff, L, Wolski, R, Gorda, B, and Krintz, C. Evaluating the Performance Impact of Xen on MPI and Process Execution For HPC Systems. United States: N. p., 2006. Web. doi:10.2172/897944.
Youseff, L, Wolski, R, Gorda, B, & Krintz, C. Evaluating the Performance Impact of Xen on MPI and Process Execution For HPC Systems. United States. doi:10.2172/897944.
Youseff, L, Wolski, R, Gorda, B, and Krintz, C. 2006. "Evaluating the Performance Impact of Xen on MPI and Process Execution For HPC Systems". United States. doi:10.2172/897944. https://www.osti.gov/servlets/purl/897944.
@article{osti_897944,
title = {Evaluating the Performance Impact of Xen on MPI and Process Execution For HPC Systems},
author = {Youseff, L and Wolski, R and Gorda, B and Krintz, C},
abstractNote = {Virtualization has become increasingly popular for enabling full system isolation, load balancing, and hardware multiplexing for high-end server systems. Virtualizing software has the potential to benefit HPC systems similarly by facilitating efficient cluster management, application isolation, full-system customization, and process migration. However, virtualizing software is not currently employed in HPC environments due to its perceived overhead. In this work, we investigate the overhead imposed by the popular open-source Xen virtualization system on performance-critical HPC kernels and applications. We empirically evaluate the impact of Xen on both communication and computation and compare its use to that of a customized kernel using HPC cluster resources at Lawrence Livermore National Laboratory (LLNL). We also employ statistically sound methods to compare the performance of a paravirtualized kernel against three popular Linux kernel configurations: Red Hat Enterprise Linux 4 (RHEL4) with kernel builds 2.6.9 and 2.6.12, and the LLNL CHAOS kernel, a specialized version of RHEL4. Our results indicate that Xen is very efficient and practical for HPC systems.},
doi = {10.2172/897944},
place = {United States},
year = {2006},
month = {12}
}
