OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Liquid Cooling in Data Centers

Abstract

Semiconductor manufacturers have aggressively attacked the problem of escalating microprocessor power consumption. Today, server manufacturers can purchase microprocessors with power consumption capped at a maximum of 100 W. However, total server power levels continue to increase, with the additional power consumption coming from the supporting chipsets, memory, and other components. In turn, full-rack heat loads are climbing aggressively as well, making it increasingly difficult and cost-prohibitive for facility owners to cool these high-power racks. As a result, facility owners are turning to alternative, more energy-efficient cooling solutions that deploy liquids in one form or another. The paper discusses the growing adoption of liquid cooling in high performance computing centers. An overview of the following competing rack-based liquid-cooling technologies is provided: in-row, above-rack, refrigerated/enclosed rack, rear door heat exchanger, and device-level (i.e., chip-level). Preparation for a liquid-cooled data center, both retrofit and greenfield (new), is discussed, with a focus on the key issues that are common to all liquid-cooling technologies that depend upon the delivery of water to the rack (or, in some deployments, to a Coolant Distribution Unit). The paper then discusses, in some detail, the actual implementation and deployment of a device-level liquid-cooled (spray-cooled) supercomputer at the Pacific Northwest National Laboratory. Initial results from a successful 30-day compliance test show excellent hardware stability, operating system (OS) and software stack stability, application stability and performance, and an availability level that exceeded expectations at 99.94%. The liquid-cooled supercomputer achieved a peak performance of 9.287 TeraFlops, which placed it at number 101 in the June 2007 Top500 list of the fastest supercomputers worldwide. Long-term performance and energy efficiency testing is currently underway, and detailed results will be reported in upcoming publications.
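
For context on the reported availability figure, 99.94% availability over the 30-day compliance test corresponds to roughly 26 minutes of cumulative downtime. The short sketch below is not from the paper; only the 30-day window and the 99.94% figure are taken from the abstract, and the rest is illustrative arithmetic.

# Rough downtime estimate implied by the reported availability figure.
# Assumes a 30-day test window and 99.94% availability, as stated in the abstract.

TEST_DAYS = 30
AVAILABILITY = 0.9994

total_minutes = TEST_DAYS * 24 * 60          # 43,200 minutes in the test window
downtime_minutes = (1 - AVAILABILITY) * total_minutes

print(f"Implied downtime: {downtime_minutes:.1f} minutes "
      f"out of {total_minutes} ({AVAILABILITY:.2%} availability)")
# -> Implied downtime: 25.9 minutes out of 43200 (99.94% availability)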

Authors:
Cader, Tahir; Sorell, Vali; Westra, Levi; Marquez, Andres
Publication Date:
2009-05-01
Research Org.:
Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
974961
Report Number(s):
PNNL-SA-58737
Journal ID: ISSN 0001-2505; ASHTAG; DP1501000; TRN: US201007%%820
DOE Contract Number:  
AC05-76RL01830
Resource Type:
Journal Article
Journal Name:
ASHRAE Transactions, 115(pt. 1):231-241
Additional Journal Information:
Journal Volume: 115; Journal Issue: 1; Journal ID: ISSN 0001-2505
Country of Publication:
United States
Language:
English
Subject:
32 ENERGY CONSERVATION, CONSUMPTION, AND UTILIZATION; INFORMATION CENTERS; MICROPROCESSORS; HEAT GAIN; COOLING SYSTEMS; ENERGY EFFICIENCY; SUPERCOMPUTERS; HEAT EXCHANGERS; COOLANTS; PERFORMANCE; Liquid Cooling, Data Center, Plumbing, Greenfield

Citation Formats

Cader, Tahir, Sorell, Vali, Westra, Levi, and Marquez, Andres. Liquid Cooling in Data Centers. United States: N. p., 2009. Web.
Cader, Tahir, Sorell, Vali, Westra, Levi, & Marquez, Andres. Liquid Cooling in Data Centers. United States.
Cader, Tahir, Sorell, Vali, Westra, Levi, and Marquez, Andres. 2009. "Liquid Cooling in Data Centers". United States.
@article{osti_974961,
title = {Liquid Cooling in Data Centers},
author = {Cader, Tahir and Sorell, Vali and Westra, Levi and Marquez, Andres},
abstractNote = {Semiconductor manufacturers have aggressively attacked the problem of escalating microprocessor power consumption. Today, server manufacturers can purchase microprocessors with power consumption capped at a maximum of 100 W. However, total server power levels continue to increase, with the additional power consumption coming from the supporting chipsets, memory, and other components. In turn, full-rack heat loads are climbing aggressively as well, making it increasingly difficult and cost-prohibitive for facility owners to cool these high-power racks. As a result, facility owners are turning to alternative, more energy-efficient cooling solutions that deploy liquids in one form or another. The paper discusses the growing adoption of liquid cooling in high performance computing centers. An overview of the following competing rack-based liquid-cooling technologies is provided: in-row, above-rack, refrigerated/enclosed rack, rear door heat exchanger, and device-level (i.e., chip-level). Preparation for a liquid-cooled data center, both retrofit and greenfield (new), is discussed, with a focus on the key issues that are common to all liquid-cooling technologies that depend upon the delivery of water to the rack (or, in some deployments, to a Coolant Distribution Unit). The paper then discusses, in some detail, the actual implementation and deployment of a device-level liquid-cooled (spray-cooled) supercomputer at the Pacific Northwest National Laboratory. Initial results from a successful 30-day compliance test show excellent hardware stability, operating system (OS) and software stack stability, application stability and performance, and an availability level that exceeded expectations at 99.94%. The liquid-cooled supercomputer achieved a peak performance of 9.287 TeraFlops, which placed it at number 101 in the June 2007 Top500 list of the fastest supercomputers worldwide. Long-term performance and energy efficiency testing is currently underway, and detailed results will be reported in upcoming publications.},
doi = {},
url = {https://www.osti.gov/biblio/974961},
journal = {ASHRAE Transactions},
pages = {231-241},
issn = {0001-2505},
number = 1,
volume = 115,
place = {United States},
year = {2009},
month = {5}
}