OSTI.GOV | U.S. Department of Energy
Office of Scientific and Technical Information

Title: Scientific Application Performance on Leading Scalar and Vector Supercomputing Platforms

Abstract

The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors for building high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing: significantly larger systems and greater application scalability are required than peak performance figures imply in order to achieve the desired performance. The latest generation of custom-built parallel vector systems has the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: magnetic fusion (GTC), plasma physics (LBMHD3D), astrophysics (Cactus), and materials science (PARATEC). We compare the performance of the vector-based Cray X1 and X1E, the Earth Simulator, and the NEC SX-8 with that of three leading commodity-based superscalar platforms built on the IBM Power3, Intel Itanium2, and AMD Opteron processors. Our work makes several significant contributions: a new data-decomposition scheme for GTC that, for the first time, breaks through the Teraflop barrier; a new three-dimensional Lattice Boltzmann magneto-hydrodynamic implementation, used to study the onset evolution of plasma turbulence, that achieves over 26 Tflop/s on 4800 Earth Simulator processors; the highest per-processor performance (by far) achieved by the full-production version of Cactus ADM-BSSN; and the largest PARATEC cell-size atomistic simulation to date. Overall, the results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.
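The abstract's headline figure invites a quick sanity check: 26 Tflop/s sustained on 4800 Earth Simulator processors. Assuming the commonly cited peak of 8 Gflop/s per Earth Simulator vector processor (a figure not stated in this record), the sustained-to-peak ratio works out as follows; this is a minimal back-of-the-envelope sketch, not a computation from the paper:

```python
# Sanity check of the sustained-vs-peak ratio for the LBMHD3D run cited in
# the abstract. Assumption (not in this record): 8 Gflop/s peak per
# Earth Simulator vector processor, the commonly cited figure.
processors = 4800
peak_per_proc_gflops = 8.0
sustained_tflops = 26.0  # from the abstract

peak_tflops = processors * peak_per_proc_gflops / 1000.0  # 38.4 Tflop/s
fraction_of_peak = sustained_tflops / peak_tflops

print(f"Aggregate peak:  {peak_tflops:.1f} Tflop/s")
print(f"Sustained/peak:  {fraction_of_peak:.0%}")
```

A sustained fraction of roughly two-thirds of peak is far above what the abstract's "growing gap" describes for superscalar systems, which is the paper's central point about vector architectures on regular numerical kernels.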

Authors:
Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Ethier, Stephane
Publication Date:
2007
Research Org.:
Ernest Orlando Lawrence Berkeley National Laboratory, Berkeley, CA (US)
Sponsoring Org.:
USDOE Director, Office of Science, Advanced Scientific Computing Research
OSTI Identifier:
925423
Report Number(s):
LBNL-60800
R&D Project: K11121; BnR: KJ0101030; TRN: US200807%%348
DOE Contract Number:  
DE-AC02-05CH11231
Resource Type:
Journal Article
Resource Relation:
Journal Name: International Journal of High Performance Computing Applications; Journal Volume: 22; Journal Issue: 1; Related Information: Journal Publication Date: 2008
Country of Publication:
United States
Language:
English
Subject:
42; ALGORITHMS; ASTROPHYSICS; IMPLEMENTATION; MICROPROCESSORS; PERFORMANCE; PHYSICS; PLASMA; PROLIFERATION; SCALARS; SIMULATION; SUPERCOMPUTERS; TURBULENCE; VECTORS

Citation Formats

Oliker, Leonid, Canning, Andrew, Carter, Jonathan, Shalf, John, and Ethier, Stephane. Scientific Application Performance on Leading Scalar and Vector Supercomputing Platforms. United States: N. p., 2007. Web.
Oliker, Leonid, Canning, Andrew, Carter, Jonathan, Shalf, John, & Ethier, Stephane. Scientific Application Performance on Leading Scalar and Vector Supercomputing Platforms. United States.
Oliker, Leonid, Canning, Andrew, Carter, Jonathan, Shalf, John, and Ethier, Stephane. 2007. "Scientific Application Performance on Leading Scalar and Vector Supercomputing Platforms". United States. https://www.osti.gov/servlets/purl/925423.
@article{osti_925423,
title = {Scientific Application Performance on Leading Scalar and Vector Supercomputing Platforms},
author = {Oliker, Leonid and Canning, Andrew and Carter, Jonathan and Shalf, John and Ethier, Stephane},
abstractNote = {The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors for building high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing: significantly larger systems and greater application scalability are required than peak performance figures imply in order to achieve the desired performance. The latest generation of custom-built parallel vector systems has the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: magnetic fusion (GTC), plasma physics (LBMHD3D), astrophysics (Cactus), and materials science (PARATEC). We compare the performance of the vector-based Cray X1 and X1E, the Earth Simulator, and the NEC SX-8 with that of three leading commodity-based superscalar platforms built on the IBM Power3, Intel Itanium2, and AMD Opteron processors. Our work makes several significant contributions: a new data-decomposition scheme for GTC that, for the first time, breaks through the Teraflop barrier; a new three-dimensional Lattice Boltzmann magneto-hydrodynamic implementation, used to study the onset evolution of plasma turbulence, that achieves over 26 Tflop/s on 4800 Earth Simulator processors; the highest per-processor performance (by far) achieved by the full-production version of Cactus ADM-BSSN; and the largest PARATEC cell-size atomistic simulation to date. Overall, the results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.},
doi = {},
journal = {International Journal of High Performance Computing Applications},
number = 1,
volume = 22,
place = {United States},
year = {2007},
month = {jan}
}