U.S. Department of Energy
Office of Scientific and Technical Information

Layout-Aware I/O Scheduling for Terabits Data Movement

Conference · OSTI ID: 1097485
Many science facilities, such as the Department of Energy's Leadership Computing Facilities and experimental facilities including the Spallation Neutron Source, the Stanford Linear Accelerator Center, and the Advanced Photon Source, produce massive amounts of experimental and simulation data. These data are often shared among the facilities and with collaborating institutions. Moving large datasets over the wide-area network (WAN) is a major problem inhibiting collaboration. Next-generation terabit networks will help alleviate the problem; however, the parallel storage systems on the end-system hosts at these institutions can become a bottleneck for terabit data movement. The parallel file system (PFS) is shared by simulation systems, experimental systems, and analysis and visualization clusters, in addition to wide-area data movers. These competing uses often induce temporary but significant I/O load imbalances on the storage system, which impact the performance of all users. The problem is a serious concern because some resources are more expensive (e.g., supercomputers) or have time-critical deadlines (e.g., experimental data from a light source), yet parallel file systems handle all requests fairly even when some storage servers are under heavy load. This paper investigates the problem of competing workloads accessing the parallel file system and how the performance of wide-area data movement can be improved in these environments. First, we study I/O load imbalance using actual I/O performance data collected from the Spider storage system at the Oak Ridge Leadership Computing Facility. Second, we present layout-aware I/O optimization solutions on end-system hosts for bulk data movement. Our evaluation shows that these I/O optimization techniques can avoid congested disk groups, improving storage I/O times on parallel storage systems for terabit data movement.
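The core idea of layout awareness can be illustrated with a minimal sketch. Assuming a file striped round-robin across storage servers (as in a Lustre-style PFS such as Spider) and a per-server load estimate available on the end-system host, a data mover can reorder its chunk reads so that chunks resident on lightly loaded servers are fetched first and congested disk groups are deferred. The function and parameter names below are hypothetical, not the paper's actual implementation:

```python
def schedule_chunks(num_chunks, stripe_count, server_load):
    """Order chunk reads by the load of the server each chunk lands on.

    Assumes round-robin striping: chunk i of the file lives on server
    i % stripe_count. server_load maps server index -> current load
    estimate (lower is better). Chunks on lightly loaded servers come
    first; ties are broken by chunk index to preserve sequentiality.
    """
    return sorted(range(num_chunks),
                  key=lambda i: (server_load[i % stripe_count], i))


# Example: 8 chunks striped over 4 servers, where server 0 is congested.
order = schedule_chunks(8, 4, {0: 0.9, 1: 0.1, 2: 0.5, 3: 0.2})
print(order)  # [1, 5, 3, 7, 2, 6, 0, 4] -- server 0's chunks read last
```

A real data mover would refresh the load estimates during the transfer and overlap reads with WAN sends; this sketch only shows the ordering decision that lets the transfer sidestep temporarily congested disk groups.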
Research Organization:
Oak Ridge National Laboratory (ORNL); Center for Computational Sciences
Sponsoring Organization:
USDOE Office of Science (SC)
DOE Contract Number:
AC05-00OR22725
Country of Publication:
United States
Language:
English

Similar Records

Layout-aware I/O Scheduling for terabits data movement
Journal Article · October 2013 · OSTI ID: 1567556

Optimizing End-to-End Big Data Transfers over Terabits Network Infrastructure
Journal Article · April 2016 · IEEE Transactions on Parallel and Distributed Systems · OSTI ID: 1361284

End-to-end data movement using MPI-IO over routed terabits infrastructures. In: NDM '13 Proceedings of the Third International Workshop on Network-Aware Data Management, Article No. 9
Conference · December 2012 · OSTI ID: 1567633