OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: End-to-end I/O portfolio for the summit supercomputing ecosystem

Abstract

The I/O subsystem for the Summit supercomputer, No. 1 on the Top500 list, and its ecosystem of analysis platforms is composed of two distinct layers, namely the in-system layer and the center-wide parallel file system (PFS) layer, Spider 3. The in-system layer uses node-local SSDs and provides 26.7 TB/s for reads, 9.7 TB/s for writes, and 4.6 billion IOPS to Summit. The Spider 3 PFS layer uses IBM's Spectrum Scale™ and provides 2.5 TB/s and 2.6 million IOPS to Summit and other systems. While deploying them as two distinct layers was operationally efficient, it also presented usability challenges in terms of multiple mount points and a lack of transparency in data movement. To address these challenges, we have developed novel end-to-end I/O solutions for the concerted use of the two storage layers. We present the I/O subsystem architecture, the end-to-end I/O solution space, their design considerations, and our deployment experience.
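The staging pattern behind this two-layer design can be made concrete with a short sketch. The Python snippet below is not from the paper; the mount points (BB_DIR, PFS_DIR) and the drain logic are illustrative assumptions about how an application might write at node-local SSD speed and then move data to the center-wide PFS in the background.

import os
import shutil
import tempfile
import threading

# Illustrative mount points; on a real deployment these would be the
# node-local SSD and the center-wide PFS. Both defaults are assumptions.
BB_DIR = os.environ.get("BB_DIR", os.path.join(tempfile.gettempdir(), "bb"))
PFS_DIR = os.environ.get("PFS_DIR", os.path.join(tempfile.gettempdir(), "pfs"))

def write_checkpoint(step: int, payload: bytes) -> str:
    """Write at in-system (SSD) speed; the compute loop waits only on this tier."""
    os.makedirs(BB_DIR, exist_ok=True)
    path = os.path.join(BB_DIR, f"ckpt_{step:06d}.bin")
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # make the local copy durable before returning
    return path

def drain_to_pfs(local_path: str) -> None:
    """Copy the checkpoint to the PFS in the background, then free SSD space."""
    os.makedirs(PFS_DIR, exist_ok=True)
    shutil.copy2(local_path, PFS_DIR)
    os.remove(local_path)

if __name__ == "__main__":
    for step in range(3):
        ckpt = write_checkpoint(step, os.urandom(1 << 20))  # 1 MiB dummy payload
        # Overlap the slow PFS transfer with the next compute phase.
        threading.Thread(target=drain_to_pfs, args=(ckpt,)).start()

The usability challenges named in the abstract fall directly out of this pattern: the application must know both mount points and orchestrate the transfer itself, which is exactly what the paper's end-to-end solutions aim to make transparent.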

Authors:
Oral, Sarp [1]; Vazhkudai, Sudharshan [1]; Wang, Feiyi [1]; Zimmer, Christopher [1]; Brumgard, Christopher [1]; Hanley, Jesse A. [1]; Markomanolis, George [1]; Miller, Ross [1]; Leverman, Dustin B. [1]; Atchley, Scott [1]; Melesse Vergara, Veronica [1]
  1. ORNL
Publication Date: November 2019
Research Org.:
Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
OSTI Identifier:
1619016
DOE Contract Number:  
AC05-00OR22725
Resource Type:
Conference
Resource Relation:
Conference: The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC19) - Denver, Colorado, United States of America - November 17-22, 2019
Country of Publication:
United States
Language:
English

Citation Formats

Oral, Sarp, Vazhkudai, Sudharshan, Wang, Feiyi, Zimmer, Christopher, Brumgard, Christopher, Hanley, Jesse A., Markomanolis, George, Miller, Ross, Leverman, Dustin B., Atchley, Scott, and Melesse Vergara, Veronica. End-to-end I/O portfolio for the summit supercomputing ecosystem. United States: N. p., 2019. Web. doi:10.1145/3295500.3356157.
Oral, Sarp, Vazhkudai, Sudharshan, Wang, Feiyi, Zimmer, Christopher, Brumgard, Christopher, Hanley, Jesse A., Markomanolis, George, Miller, Ross, Leverman, Dustin B., Atchley, Scott, & Melesse Vergara, Veronica. End-to-end I/O portfolio for the summit supercomputing ecosystem. United States. doi:10.1145/3295500.3356157.
Oral, Sarp, Vazhkudai, Sudharshan, Wang, Feiyi, Zimmer, Christopher, Brumgard, Christopher, Hanley, Jesse A., Markomanolis, George, Miller, Ross, Leverman, Dustin B., Atchley, Scott, and Melesse Vergara, Veronica. 2019. "End-to-end I/O portfolio for the summit supercomputing ecosystem". United States. doi:10.1145/3295500.3356157. https://www.osti.gov/servlets/purl/1619016.
@inproceedings{osti_1619016,
title = {End-to-end I/O portfolio for the summit supercomputing ecosystem},
author = {Oral, Sarp and Vazhkudai, Sudharshan and Wang, Feiyi and Zimmer, Christopher and Brumgard, Christopher and Hanley, Jesse A. and Markomanolis, George and Miller, Ross and Leverman, Dustin B. and Atchley, Scott and Melesse Vergara, Veronica},
abstractNote = {The I/O subsystem for the Summit supercomputer, No. 1 on the Top500 list, and its ecosystem of analysis platforms is composed of two distinct layers, namely the in-system layer and the center-wide parallel file system (PFS) layer, Spider 3. The in-system layer uses node-local SSDs and provides 26.7 TB/s for reads, 9.7 TB/s for writes, and 4.6 billion IOPS to Summit. The Spider 3 PFS layer uses IBM's Spectrum Scale™ and provides 2.5 TB/s and 2.6 million IOPS to Summit and other systems. While deploying them as two distinct layers was operationally efficient, it also presented usability challenges in terms of multiple mount points and a lack of transparency in data movement. To address these challenges, we have developed novel end-to-end I/O solutions for the concerted use of the two storage layers. We present the I/O subsystem architecture, the end-to-end I/O solution space, their design considerations, and our deployment experience.},
doi = {10.1145/3295500.3356157},
booktitle = {The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC19)},
place = {United States},
year = {2019},
month = {11}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
