U.S. Department of Energy
Office of Scientific and Technical Information

On the energy footprint of I/O management in Exascale HPC systems

Journal Article · Future Generation Computer Systems
 [1];  [2];  [2];  [3];  [2]
  1. École Normale Supérieure de Rennes (ENS), Rennes (France); Argonne National Lab. (ANL), Argonne, IL (United States)
  2. Inria, Rennes (France)
  3. Centre National de la Recherche Scientifique (CNRS), Annecy-le-Vieux (France). Research Inst. of Computer Science and Random Systems (IRISA)

The advent of unprecedentedly scalable yet energy-hungry Exascale supercomputers poses a major challenge in sustaining a high performance-per-watt ratio. With I/O management acquiring a crucial role in supporting scientific simulations, various I/O management approaches have been proposed to achieve high performance and scalability. However, how these approaches affect energy consumption has not yet been studied. This paper therefore explores how much energy a supercomputer consumes while running scientific simulations under various I/O management approaches. In particular, we closely examine three radically different I/O schemes: time partitioning, dedicated cores, and dedicated nodes. To accomplish this, we implement the three approaches within the Damaris I/O middleware and perform extensive experiments with one of the target HPC applications of the Blue Waters sustained-petaflop supercomputer project: the CM1 atmospheric model. Our experimental results, obtained on the French Grid'5000 platform, highlight the differences among these three approaches and illustrate how various configurations of the application and of the system can impact performance and energy consumption. Moreover, we propose and validate a mathematical model that estimates the energy consumption of an HPC simulation under different I/O approaches. This model helps pre-select the most energy-efficient I/O approach for a particular simulation on a particular HPC system and thus provides a step towards energy-efficient HPC simulations on Exascale systems. To the best of our knowledge, our work provides the first in-depth look into the energy-performance tradeoffs of I/O management approaches.
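The paper's actual mathematical model is not reproduced in this record. As a rough illustration of the underlying idea, the sketch below estimates energy as node power multiplied by phase duration for each of the three I/O schemes. All parameter names and values (node power draw, phase durations, node counts, the `io_slowdown` factor) are hypothetical placeholders, not the authors' formulation.

```python
# Illustrative energy estimates for three I/O schemes (hypothetical
# parameters, not the paper's validated model). Energy = nodes x power x time.

def energy_time_partitioning(n_nodes, p_node, t_compute, t_io):
    # All nodes alternate between compute and I/O phases serially,
    # so both phase durations contribute to the run time of every node.
    return n_nodes * p_node * (t_compute + t_io)

def energy_dedicated_cores(n_nodes, p_node, t_compute, io_slowdown=1.1):
    # I/O overlaps compute on cores reserved inside each node; compute
    # runs somewhat longer because fewer cores remain for the simulation.
    return n_nodes * p_node * (t_compute * io_slowdown)

def energy_dedicated_nodes(n_sim, n_io, p_node, t_compute):
    # Extra I/O nodes stay powered for the whole run alongside the
    # simulation nodes, which keep their full compute capacity.
    return (n_sim + n_io) * p_node * t_compute

if __name__ == "__main__":
    joules_to_kwh = 1 / 3.6e6
    e_tp = energy_time_partitioning(64, 200.0, 3600.0, 600.0)
    e_dc = energy_dedicated_cores(64, 200.0, 3600.0)
    e_dn = energy_dedicated_nodes(64, 4, 200.0, 3600.0)
    for name, e in [("time partitioning", e_tp),
                    ("dedicated cores", e_dc),
                    ("dedicated nodes", e_dn)]:
        print(f"{name:18s} {e * joules_to_kwh:.2f} kWh")
```

Which scheme wins depends on the relative magnitudes of the I/O time, the compute slowdown, and the extra-node overhead; the paper's model captures exactly this kind of tradeoff with measured, per-platform parameters.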

Research Organization:
Argonne National Lab. (ANL), Argonne, IL (United States)
Sponsoring Organization:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
Grant/Contract Number:
AC02-06CH11357
OSTI ID:
1390829
Journal Information:
Future Generation Computer Systems, Vol. 62, Issue C; ISSN 0167-739X
Publisher:
Elsevier
Country of Publication:
United States
Language:
English

References (10)

Enabling high-speed asynchronous data extraction and transfer using DART · journal · January 2010
Damaris/Viz: A nonintrusive, adaptable and user-friendly in situ visualization framework · conference · October 2013
A performance and energy analysis of I/O management approaches for exascale systems · conference · June 2014
Evaluation of Collective I/O Implementations on Parallel Architectures · journal · August 2001
Governing energy consumption in Hadoop through CPU frequency scaling: An analysis · journal · January 2016
High-level buffering for hiding periodic output cost in scientific simulations · journal · March 2006
ACPI thermal sensing and control in the PC · conference · September 1998
A survey on techniques for improving the energy efficiency of large-scale distributed systems · journal · April 2014
A Benchmark Simulation for Moist Nonhydrostatic Numerical Models · journal · December 2002
Energy efficiency in high-performance computing with and without knowledge of applications and services · journal · July 2013

Similar Records

Damaris: Addressing performance variability in data management for post-petascale simulations
Journal Article · October 2016 · ACM Transactions on Parallel Computing · OSTI ID: 1346736

Data Management Challenges of Exascale Scientific Simulations: A Case Study with the Gyrokinetic Toroidal Code and ADIOS
Conference · July 2019 · OSTI ID: 1558473

Characterization and identification of HPC applications at leadership computing facility
Conference · June 2020 · OSTI ID: 1649007