OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Challenges and Opportunities of User-Level File Systems for HPC

Abstract

The performance gap between magnetic disks and data processing on HPC systems has grown so large that efficient data processing can only be achieved by introducing non-volatile memory (NVRAM) as a new storage tier. Although the benefits of hierarchical storage have been demonstrated convincingly enough that the newest leadership-class HPC systems will employ burst buffers, critical questions remain for supporting hierarchical storage systems, including: How should we present hierarchical storage systems to user applications so that they are easy to use and application code is portable across systems? How should we manage data movement through a storage hierarchy for the best performance and resilience of data? How do particular I/O use cases dictate the way we manage data?

There have been many efforts to explore this space in the form of file systems, with increasingly more implemented at the user level. This is because it is relatively easy to swap in new, specialized user-level file systems for use by applications on a case-by-case basis, as opposed to the current mainstream approach of using general-purpose, system-level file systems, which may not be optimized for HPC workloads and must be installed by administrators. In contrast, file systems at the user level can be tailored to specific HPC workloads for high performance and can be used by applications without administrator intervention. Many user-level file system developers have found themselves “having to reinvent the wheel” to implement various optimizations in their file systems. Thus, a main goal of this meeting was to bring together experts in I/O performance, file systems, and storage to collectively explore the space of current and future problems and solutions for I/O on hierarchical storage systems, in order to begin a community effort toward enabling user-level file system support for HPC systems.

We had a lively week of learning about each other’s approaches, as well as about unique I/O use cases that can influence the design of community-driven file and storage system standards. The agenda for this meeting contained talks from participants on the following high-level topics: HPC storage and I/O support today; what HPC users need for I/O; existing user-level file system efforts; object stores and other alternative storage systems; and components for building user-level file systems. The talks were short and intended as conversation starters for more in-depth discussions with the whole group. The participants engaged in lengthy discussions on various questions that arose from the talks, including: Are we ready to program to a memory hierarchy rather than to block devices? Are the needs of HPC users reflected in our existing file systems and storage systems? Should we drop or keep POSIX moving forward? What do we mean when we say “user-level file system”, and do we all mean the same thing? How should the IO 500 benchmark be defined so that it is fair and useful? And how are stage-in and stage-out actually going to work?

The report for this seminar contains a record of the talks from the participants as well as the resulting discussions. Our hope is that the effort initiated during this seminar will result in long-term collaborations that will benefit the HPC community as a whole.
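The report does not tie the discussion to any single mechanism, but the most common way to run a file system without administrator intervention is FUSE (Filesystem in Userspace), which lets an unprivileged process answer the kernel's VFS requests. As a minimal sketch of what "user-level" means in practice, the hypothetical read-only file system below (C, libfuse 3) serves a single file from process memory; every name in it is illustrative rather than taken from the report.

/* Hypothetical illustration, not from the report: a minimal read-only
 * user-level file system built on FUSE (libfuse 3). It exposes one
 * file, /hello, whose contents live in this unprivileged process. */
#define FUSE_USE_VERSION 31
#include <fuse3/fuse.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>

static const char *hello_path = "/hello";
static const char *hello_data = "served entirely from user space\n";

/* Report file metadata: "/" is a directory, "/hello" a read-only file. */
static int hello_getattr(const char *path, struct stat *st,
                         struct fuse_file_info *fi)
{
    (void) fi;
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, hello_path) == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = (off_t) strlen(hello_data);
    } else {
        return -ENOENT;
    }
    return 0;
}

/* List the single entry in the root directory. */
static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                         off_t offset, struct fuse_file_info *fi,
                         enum fuse_readdir_flags flags)
{
    (void) offset; (void) fi; (void) flags;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0, 0);
    filler(buf, "..", NULL, 0, 0);
    filler(buf, hello_path + 1, NULL, 0, 0); /* skip leading '/' */
    return 0;
}

/* Allow read-only opens of /hello. */
static int hello_open(const char *path, struct fuse_file_info *fi)
{
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    if ((fi->flags & O_ACCMODE) != O_RDONLY)
        return -EACCES;
    return 0;
}

/* Copy the requested byte range out of the in-memory buffer. */
static int hello_read(const char *path, char *buf, size_t size, off_t offset,
                      struct fuse_file_info *fi)
{
    (void) fi;
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    size_t len = strlen(hello_data);
    if ((size_t) offset >= len)
        return 0;
    if (offset + size > len)
        size = len - (size_t) offset;
    memcpy(buf, hello_data + offset, size);
    return (int) size;
}

static const struct fuse_operations hello_ops = {
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .open    = hello_open,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    /* fuse_main() parses the mount point from argv and runs the event loop. */
    return fuse_main(argc, argv, &hello_ops, NULL);
}

Assuming libfuse 3 is installed, this would build with gcc hellofs.c -o hellofs $(pkg-config --cflags --libs fuse3) and mount with ./hellofs /tmp/mnt; applications then read /tmp/mnt/hello through ordinary POSIX calls, with each request routed to this unprivileged process. That per-application, no-administrator swap-in is precisely the property of user-level file systems the abstract highlights.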

Authors:
 Brinkmann, A. [1];  Mohror, K. [2];  Yu, W. [3]
  1. Johannes Gutenberg Univ., Mainz (Germany)
  2. Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
  3. Florida State Univ., Tallahassee, FL (United States)
Publication Date:
2017-08-23
Research Org.:
Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1424647
Report Number(s):
LLNL-TR-737379
DOE Contract Number:  
AC52-07NA27344
Resource Type:
Technical Report
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING

Citation Formats

Brinkmann, A., Mohror, K., and Yu, W. Challenges and Opportunities of User-Level File Systems for HPC. United States: N. p., 2017. Web. doi:10.2172/1424647.
Brinkmann, A., Mohror, K., & Yu, W. Challenges and Opportunities of User-Level File Systems for HPC. United States. doi:10.2172/1424647.
Brinkmann, A., Mohror, K., and Yu, W. 2017. "Challenges and Opportunities of User-Level File Systems for HPC". United States. doi:10.2172/1424647. https://www.osti.gov/servlets/purl/1424647.
@techreport{osti_1424647,
title = {Challenges and Opportunities of User-Level File Systems for HPC},
author = {Brinkmann, A. and Mohror, K. and Yu, W.},
doi = {10.2172/1424647},
number = {LLNL-TR-737379},
institution = {Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)},
place = {United States},
year = {2017},
month = {aug}
}
