OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: On the energy footprint of I/O management in Exascale HPC systems

Journal Article · Future Generation Computer Systems
Authors: [1]; [2]; [2]; [3]; [2]
  1. Ecole Normale Supérieure of Rennes (ENS) (France); Argonne National Lab. (ANL), Argonne, IL (United States)
  2. Inria, Rennes (France)
  3. Centre National de la Recherche Scientifique (CNRS), Annecy-le-Vieux (France). Research Institute of Computer Science and Random Systems (IRISA)

The advent of unprecedentedly scalable yet energy-hungry Exascale supercomputers poses a major challenge in sustaining a high performance-per-watt ratio. With I/O management acquiring a crucial role in supporting scientific simulations, various I/O management approaches have been proposed to achieve high performance and scalability. However, how these approaches affect energy consumption has not yet been studied. Therefore, this paper explores how much energy a supercomputer consumes while running scientific simulations under various I/O management approaches. In particular, we closely examine three radically different I/O schemes: time partitioning, dedicated cores, and dedicated nodes. To do so, we implement the three approaches within the Damaris I/O middleware and perform extensive experiments with one of the target HPC applications of the Blue Waters sustained-petaflop supercomputer project: the CM1 atmospheric model. Our experimental results, obtained on the French Grid'5000 platform, highlight the differences among these three approaches and illustrate how various configurations of the application and of the system can impact performance and energy consumption. Moreover, we propose and validate a mathematical model that estimates the energy consumption of an HPC simulation under different I/O approaches. This model helps pre-select the most energy-efficient I/O approach for a particular simulation on a particular HPC system, and therefore provides a step towards energy-efficient HPC simulations on Exascale systems. To the best of our knowledge, our work provides the first in-depth look into the energy-performance tradeoffs of I/O management approaches.
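The record does not reproduce the paper's energy model, but the intuition behind comparing the three I/O schemes can be illustrated with a simplified, hypothetical cost model: energy is accumulated as per-node power multiplied by phase duration, with time partitioning serializing compute and I/O phases while dedicated cores and dedicated nodes overlap them. The Python sketch below is a minimal illustration under these assumptions only; the Platform class, power values, and formulas are hypothetical and are neither the paper's actual model nor the Damaris API.

# Hypothetical sketch: a simplified energy model for comparing I/O approaches.
# Assumption: energy = sum over phases of (nodes) x (per-node power) x (duration).
from dataclasses import dataclass

@dataclass
class Platform:                 # hypothetical platform description
    nodes: int                  # total number of nodes in the allocation
    p_busy: float               # assumed per-node power while computing or writing (W)
    p_idle: float               # assumed per-node power while idle (W)

def energy_time_partitioning(pf: Platform, t_compute: float, t_io: float) -> float:
    """All cores compute, then all cores write: the two phases are serialized."""
    return pf.nodes * pf.p_busy * (t_compute + t_io)

def energy_dedicated_cores(pf: Platform, t_compute: float, t_io: float) -> float:
    """One core per node performs I/O asynchronously: compute and I/O overlap,
    so the assumed runtime is max(t_compute, t_io) with every node busy."""
    return pf.nodes * pf.p_busy * max(t_compute, t_io)

def energy_dedicated_nodes(pf: Platform, t_compute: float, t_io: float,
                           io_nodes: int) -> float:
    """A subset of nodes is reserved for I/O: compute nodes simulate for
    t_compute while I/O nodes write for t_io and idle for the remainder."""
    runtime = max(t_compute, t_io)
    compute_nodes = pf.nodes - io_nodes
    e_compute = compute_nodes * (pf.p_busy * t_compute + pf.p_idle * (runtime - t_compute))
    e_io = io_nodes * (pf.p_busy * t_io + pf.p_idle * (runtime - t_io))
    return e_compute + e_io

if __name__ == "__main__":
    pf = Platform(nodes=64, p_busy=220.0, p_idle=110.0)   # made-up figures
    t_compute, t_io = 1000.0, 120.0                        # seconds per iteration
    print("time partitioning:", energy_time_partitioning(pf, t_compute, t_io), "J")
    print("dedicated cores:  ", energy_dedicated_cores(pf, t_compute, t_io), "J")
    print("dedicated nodes:  ", energy_dedicated_nodes(pf, t_compute, t_io, io_nodes=4), "J")

Under such a toy model, overlapping I/O reduces runtime (and hence energy) when the I/O phase is significant, while dedicated nodes trade a smaller compute allocation for extra always-on nodes; the paper's validated model refines this intuition with measurements on the Grid'5000 platform.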

Research Organization:
Argonne National Laboratory (ANL), Argonne, IL (United States)
Sponsoring Organization:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
Grant/Contract Number:
AC02-06CH11357
OSTI ID:
1390829
Journal Information:
Future Generation Computer Systems, Vol. 62, Issue C; ISSN 0167-739X
Publisher:
Elsevier
Country of Publication:
United States
Language:
English
Citation Metrics:
Cited by: 5 works (citation information provided by Web of Science)
