@article{osti_2544251,
author = {Lewis, Noah and Bez, Jean Luca and Byna, Surendra},
title = {I/O in Machine Learning Applications on HPC Systems: A 360-degree Survey},
annote = {Growing interest in Artificial Intelligence (AI) has resulted in a surge in demand for faster methods of Machine Learning (ML) model training and inference. This demand for speed has prompted the use of high performance computing (HPC) systems that excel in managing distributed workloads. Because data is the main fuel for AI applications, the performance of the storage and I/O subsystem of HPC systems is critical. In the past, HPC applications accessed large portions of data written by simulations or experiments or ingested data for visualizations or analysis tasks. ML workloads perform small reads spread across a large number of random files. This shift of I/O access patterns poses several challenges to modern parallel storage systems. In this paper, we survey I/O in ML applications on HPC systems, and target literature within a 6-year time window from 2019 to 2024. We define the scope of the survey, provide an overview of the common phases of ML, review available profilers and benchmarks, examine the I/O patterns encountered during offline data preparation, training, and inference, and explore I/O optimizations utilized in modern ML frameworks and proposed in recent literature. Lastly, we seek to expose research gaps that could spawn further R&D.},
doi = {10.1145/3722215},
url = {https://www.osti.gov/biblio/2544251},
journal = {ACM Computing Surveys},
issn = {0360-0300},
number = {10},
volume = {57},
place = {United States},
publisher = {Association for Computing Machinery (ACM)},
year = {2025},
month = mar}