
DOE PAGES

Title: Improving parallel I/O autotuning with performance modeling

Various layers of the parallel I/O subsystem offer tunable parameters for improving I/O performance on large-scale computers. However, searching through such a large parameter space is challenging. We are working towards an autotuning framework for determining the parallel I/O parameters that can achieve good I/O performance for different data write patterns. In this paper, we characterize parallel I/O and discuss the development of predictive models for use in effectively reducing the parameter space. Furthermore, applying our technique to tune an I/O kernel derived from a large-scale simulation code shows that the search time can be reduced from 12 hours to 2 hours, while achieving a 54X I/O performance speedup.
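The abstract describes pruning a large, multi-layer I/O parameter space with predictive performance models so that only the most promising configurations are benchmarked. Below is a minimal Python sketch of that idea; the parameter names (Lustre stripe count/size, MPI-IO collective buffering nodes, HDF5 alignment) and the toy analytic cost model are illustrative assumptions, not the models developed in the paper.

```python
# Hypothetical sketch of model-guided pruning of a parallel I/O parameter space.
# The parameters and cost model are illustrative assumptions only.

import itertools

# Candidate values for a few common parallel I/O tuning parameters.
PARAMETER_SPACE = {
    "stripe_count": [4, 8, 16, 32, 64],   # Lustre stripe count
    "stripe_size_mb": [1, 4, 16, 64],     # Lustre stripe size (MiB)
    "cb_nodes": [4, 8, 16, 32],           # MPI-IO collective buffering aggregators
    "alignment_mb": [1, 4, 16],           # HDF5 file alignment (MiB)
}

def predicted_write_time(config, write_size_gb=512.0):
    """Toy analytic model of write time (seconds) for one configuration.

    A real predictive model would be fit to benchmark measurements; this one
    only captures the rough trend that more stripes and more collective
    buffering aggregators add bandwidth up to a point, with per-stripe overhead.
    """
    stripes = config["stripe_count"]
    aggregators = min(config["cb_nodes"], stripes)
    bandwidth_gbps = 0.5 * aggregators * (1.0 - 0.01 * stripes)  # diminishing returns
    overhead = 0.05 * stripes + 0.02 * config["alignment_mb"]
    return write_size_gb / max(bandwidth_gbps, 0.1) + overhead

def top_k_configurations(k=10):
    """Rank every configuration by the model and keep the k most promising,
    which would then be benchmarked for real instead of the full space."""
    names = list(PARAMETER_SPACE)
    configs = [dict(zip(names, values))
               for values in itertools.product(*PARAMETER_SPACE.values())]
    configs.sort(key=predicted_write_time)
    return configs[:k]

if __name__ == "__main__":
    total = len(list(itertools.product(*PARAMETER_SPACE.values())))
    print(f"Full space: {total} configurations")
    for cfg in top_k_configurations(k=5):
        print(cfg, f"predicted {predicted_write_time(cfg):.1f} s")
```

In this sketch the model replaces most of the exhaustive benchmarking: only the few top-ranked configurations would be measured on the actual machine, which is the mechanism by which search time shrinks from hours of runs to a small benchmarked subset.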
Authors:
 [1] ;  [2] ;  [3] ;  [2] ;  [3]
  1. Univ. of Illinois, Urbana-Champaign, IL (United States)
  2. Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
  3. Argonne National Lab. (ANL), Argonne, IL (United States)
Publication Date:
OSTI Identifier:
1311632
Report Number(s):
LBNL-1005955
Journal ID: ISSN 1063-9635; ir:1005955
Grant/Contract Number:
AC02-05CH11231; AC02-06CH11357
Type:
Accepted Manuscript
Journal Name:
Proceedings of the ACM/IEEE Supercomputing Conference
Additional Journal Information:
Journal Volume: 2014; Conference: HPDC'14, 23rd International Symposium on High-Performance Parallel and Distributed Computing, Vancouver, BC (Canada), 23-27 Jun 2014; Journal ID: ISSN 1063-9635
Publisher:
ACM/IEEE
Research Org:
Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States)
Sponsoring Org:
Computational Research Division, National Energy Research Scientific Computing Division; USDOE
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; parallel I/O; autotuning; performance optimization; performance modeling