Machine Learning Based Parallel I/O Predictive Modeling: A Case Study on Lustre File Systems
Parallel I/O hardware and software infrastructure is a key contributor to performance variability for applications running on large-scale HPC systems. This variability confounds efforts to predict application performance for characterization, modeling, optimization, and job scheduling. We propose a modeling approach that improves predictive ability by explicitly treating the variability and by leveraging the sensitivity of performance to application parameters in order to group applications with similar characteristics. We develop a Gaussian process-based machine learning algorithm to model I/O performance and its variability as a function of application and file system characteristics. We demonstrate the effectiveness of the proposed approach using data collected from the Edison system at the National Energy Research Scientific Computing Center. The results show that the proposed sensitivity-based models yield better predictions than application-partitioned or unpartitioned models. We highlight modeling techniques that are robust to the outliers that can occur in production parallel file systems. Using the developed metrics and modeling approach, we provide insights into the file system metrics that have a significant impact on I/O performance.
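The abstract describes Gaussian process regression as the core modeling tool, since a GP yields both a predicted mean and a predictive variance, which matches the paper's goal of modeling I/O performance *and* its variability. The following is a minimal, self-contained sketch of that idea, not the authors' implementation: the kernel choice, noise level, and the two illustrative input features (transfer size and stripe count, normalized) are assumptions for illustration only.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel between rows of A and rows of B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return variance * np.exp(-0.5 * sq_dists / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """GP posterior mean and variance at X_test given noisy observations."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - (v ** 2).sum(axis=0)  # predictive variance
    return mean, var

# Toy data: a hypothetical I/O bandwidth response over two normalized
# features (e.g., transfer size and stripe count) with observation noise.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 2))
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(40)

X_query = np.array([[0.5, 0.5]])
mu, var = gp_predict(X, y, X_query)
# mu is the predicted bandwidth; var quantifies expected variability,
# which is the quantity the paper's approach models explicitly.
```

The predictive variance is what distinguishes this family of models from point-estimate regressors: a scheduler or optimizer can act on both the expected I/O throughput and its uncertainty.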
- Research Organization: Argonne National Laboratory (ANL), Argonne, IL (United States)
- Sponsoring Organization: USDOE Office of Science - Office of Advanced Scientific Computing Research (ASCR)
- DOE Contract Number: AC02-06CH11357
- OSTI ID: 1491024
- Resource Relation: Conference: 2018 ISC High Performance, 06/24/18 - 06/28/18, Frankfurt, DE
- Country of Publication: United States
- Language: English
Similar Records
Automatic and Transparent Resource Contention Mitigation for Improving Large-Scale Parallel File System Performance