OSTI.GOV · U.S. Department of Energy
Office of Scientific and Technical Information

Title: Harnessing the power of the new SMP cluster architecture

Conference · OSTI ID: 11985

In 1993, members of our team collaborated with Silicon Graphics to perform the first full-scale demonstration of the computational power of the SMP cluster supercomputer architecture. That demonstration involved the simulation of homogeneous, compressible turbulence on a uniform grid of a billion cells, using our PPM gas dynamics code. This computation was embarrassingly parallel, the ideal test case, and it achieved only 4.9 Gflop/s, slightly over half the performance achievable by this application on the most expensive supercomputers of that day. After four to five solid days of computation, when the prototype machine had to be dismantled, the simulation was only about 20% complete. Nevertheless, this computation gave us important new insights into compressible turbulence and also into a powerful new mode of cost-effective, commercially sustainable supercomputing [5].

In the intervening six years, the SMP cluster architecture has become a fundamental strategy for several large supercomputer centers in the US, including the DOE's ASCI centers at Los Alamos National Laboratory and Lawrence Livermore National Laboratory and the NSF's NCSA center at the University of Illinois. This SMP cluster architecture now underlies product offerings at the high end of performance from SGI, IBM, and HP, among others. Nevertheless, despite many successes, it is our opinion that the computational science community is only now beginning to exploit the full promise of these new computing platforms. In this paper, we will briefly discuss two key architectural issues, vector computing and the flat multiprocessor architecture, which continue to drive spirited discussions among computational scientists, and then we will describe the hierarchical shared memory programming paradigm that we feel is best suited to the creative use of SMP cluster systems. Finally, we will give examples of recent large-scale simulations carried out by our team on these kinds of systems and point toward the still more challenging work which we foresee in the near future.
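The abstract names, but does not spell out, the hierarchical shared memory programming paradigm. As a rough illustration only, the two parallelism levels of an SMP cluster (message passing between nodes, shared-memory threads within a node) can be sketched as a two-level domain decomposition. In the sketch below, both levels are emulated in a single Python process; the node and thread counts and the simple averaging update are hypothetical stand-ins, not the authors' PPM code:

```python
# Illustrative sketch only: the two parallelism levels of an SMP cluster
# (message passing across nodes, shared-memory threads within a node) are
# emulated here in one Python process. NODES, THREADS_PER_NODE, and the
# smoothing update are hypothetical stand-ins, not the authors' PPM code.
from concurrent.futures import ThreadPoolExecutor

NODES = 4              # stand-in for the cluster's SMP nodes (MPI ranks in practice)
THREADS_PER_NODE = 2   # stand-in for processors sharing memory within one node

def smooth(grid):
    """One explicit sweep over a 1-D grid: each interior cell becomes the
    average of itself and its two neighbors (a toy hydro-like update)."""
    n = len(grid)
    new = grid[:]  # in a real code this buffer would live in node-shared memory

    def update_block(lo, hi):
        # Shared-memory work unit: one thread's slice of a node's block.
        for i in range(max(lo, 1), min(hi, n - 1)):
            new[i] = (grid[i - 1] + grid[i] + grid[i + 1]) / 3.0

    node_size = (n + NODES - 1) // NODES
    for node in range(NODES):                         # outer level: across nodes
        lo, hi = node * node_size, min((node + 1) * node_size, n)
        sub = (hi - lo + THREADS_PER_NODE - 1) // THREADS_PER_NODE
        with ThreadPoolExecutor(THREADS_PER_NODE) as pool:  # inner level: threads
            list(pool.map(
                lambda t: update_block(lo + t * sub, min(lo + (t + 1) * sub, hi)),
                range(THREADS_PER_NODE)))
    return new

print(smooth([0, 0, 0, 3, 0, 0, 0]))  # the central spike spreads to its neighbors
```

The point of the hierarchy is that the outer loop maps onto coarse-grained, latency-tolerant communication between nodes, while the inner loop exploits fast shared memory within a node; in a production code the outer level would be message passing and the inner level compiler-directed threading.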

Research Organization:
Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Sponsoring Organization:
USDOE Office of Defense Programs (DP) (US)
DOE Contract Number:
W-7405-ENG-48
Report Number(s):
UCRL-JC-134547; DP0101031; TRN: AH200119%%228
Resource Relation:
Conference: High Performance Computing on Hewlett-Packard Systems, Tromsoe (NO), 06/27/1999--06/30/1999; Other Information: PBD: 16 Jun 1999
Country of Publication:
United States
Language:
English