OSTI.GOV | U.S. Department of Energy
Office of Scientific and Technical Information

Title: Planning of distributed data production for High Energy and Nuclear Physics

Abstract

Modern experiments in High Energy and Nuclear Physics rely heavily on distributed computation across multiple facilities around the world. One essential type of such computation is distributed data production, where petabytes of raw files from a single source have to be processed once (per production campaign) using thousands of CPUs at distant locations, and the output has to be transferred back to that source. The distribution of data over a large system does not necessarily match the distribution of storage, network, and CPU capacity, so bottlenecks may appear, leading to increased latency and degraded performance. In this paper we propose a new scheduling approach for distributed data production based on a network flow maximization model. In our approach, a central planner determines how much input and output data should be transferred over each network link in order to maximize the computational throughput. Such plans are created periodically for a fixed planning time interval using up-to-date information on network, storage, and CPU resources, and are executed in a distributed manner by dedicated services running at the participating sites. Our simulations, based on log records from the data production framework of the STAR experiment (Solenoidal Tracker at RHIC), show that the proposed model systematically provides better performance than the simulated traditional techniques.
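The core idea in the abstract — compute per-link transfer volumes that maximize total processing throughput — can be illustrated with a standard max-flow computation. The sketch below is purely illustrative and is not the authors' planner: the site names, capacities, and the use of the textbook Edmonds-Karp algorithm are all assumptions for the example. A site's CPU capacity is modeled as a node capacity by splitting each site into an "in" and an "out" node, a common max-flow modeling trick.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow on an adjacency-dict capacity graph."""
    # Residual capacities, initialized from the input graph; reverse
    # edges start at zero so augmentation can be undone.
    residual = {u: dict(vs) for u, vs in capacity.items()}
    for u, vs in capacity.items():
        for v in vs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow, residual  # no augmenting path left: flow is maximal
        # Recover the path and its bottleneck capacity.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Toy instance (all numbers are invented, in data units per planning
# interval): raw files sit at a central source; two remote sites have
# limited CPU throughput; links have limited bandwidth.
g = {
    "src":       {"siteA_in": 8, "siteB_in": 5},  # input-transfer links
    "siteA_in":  {"siteA_out": 6},                # site A CPU capacity
    "siteB_in":  {"siteB_out": 4},                # site B CPU capacity
    "siteA_out": {"sink": 10},                    # output transfer back
    "siteB_out": {"sink": 10},
}
total, residual = max_flow(g, "src", "sink")
print(total)  # → 10
```

The residual graph afterwards tells the planner how much data to route over each link: the flow actually placed on an edge is its original capacity minus its residual capacity. In the paper's setting such a plan would be recomputed every planning interval from fresh resource measurements.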

Authors:
Makatun, Dzmitry [1]; Lauret, Jérôme [2]; Rudová, Hana [3]
  1. Czech Technical Univ. in Prague, Prague (Czech Republic); Nuclear Physics Institute of the Czech Academy of Sciences, Prague (Czech Republic)
  2. Brookhaven National Lab. (BNL), Upton, NY (United States)
  3. Masaryk Univ., Brno (Czech Republic)
Publication Date:
August 2018
Research Org.:
Brookhaven National Lab. (BNL), Upton, NY (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Nuclear Physics (NP) (SC-26)
OSTI Identifier:
1480983
Report Number(s):
BNL-209348-2018-JAAM
Journal ID: ISSN 1386-7857
Grant/Contract Number:  
SC0012704
Resource Type:
Journal Article: Accepted Manuscript
Journal Name:
Cluster Computing
Additional Journal Information:
Journal Volume: 21; Journal Issue: 4; Journal ID: ISSN 1386-7857
Publisher:
Springer
Country of Publication:
United States
Language:
English
Subject:
73 NUCLEAR PHYSICS AND RADIATION PHYSICS; Load balancing; Job scheduling; Planning; Network flow; Distributed computing; Large scale computing; Grid; Data intensive applications; Data production; Big data

Citation Formats

Makatun, Dzmitry, Lauret, Jérôme, and Rudová, Hana. Planning of distributed data production for High Energy and Nuclear Physics. United States: N. p., 2018. Web. doi:10.1007/s10586-018-2834-3.
Makatun, Dzmitry, Lauret, Jérôme, & Rudová, Hana. Planning of distributed data production for High Energy and Nuclear Physics. United States. doi:10.1007/s10586-018-2834-3.
Makatun, Dzmitry, Lauret, Jérôme, and Rudová, Hana. 2018. "Planning of distributed data production for High Energy and Nuclear Physics". United States. doi:10.1007/s10586-018-2834-3. https://www.osti.gov/servlets/purl/1480983.
@article{osti_1480983,
title = {Planning of distributed data production for High Energy and Nuclear Physics},
author = {Makatun, Dzmitry and Lauret, Jérôme and Rudová, Hana},
abstractNote = {Modern experiments in High Energy and Nuclear Physics rely heavily on distributed computation across multiple facilities around the world. One essential type of such computation is distributed data production, where petabytes of raw files from a single source have to be processed once (per production campaign) using thousands of CPUs at distant locations, and the output has to be transferred back to that source. The distribution of data over a large system does not necessarily match the distribution of storage, network, and CPU capacity, so bottlenecks may appear, leading to increased latency and degraded performance. In this paper we propose a new scheduling approach for distributed data production based on a network flow maximization model. In our approach, a central planner determines how much input and output data should be transferred over each network link in order to maximize the computational throughput. Such plans are created periodically for a fixed planning time interval using up-to-date information on network, storage, and CPU resources, and are executed in a distributed manner by dedicated services running at the participating sites. Our simulations, based on log records from the data production framework of the STAR experiment (Solenoidal Tracker at RHIC), show that the proposed model systematically provides better performance than the simulated traditional techniques.},
doi = {10.1007/s10586-018-2834-3},
journal = {Cluster Computing},
issn = {1386-7857},
number = 4,
volume = 21,
place = {United States},
year = {2018},
month = {8}
}

Journal Article:
Free Publicly Available Full Text
Publisher's Version of Record
