DOE PAGES — U.S. Department of Energy
Office of Scientific and Technical Information

Title: The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC

Abstract

The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as the central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop ecosystem to provide the necessary flexibility to reliably process, store, and aggregate $\mathcal{O}$(1M) documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, and the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
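The daily aggregation step described in the abstract can be sketched in plain Python. The field names below (`site`, `exit_code`, `cpu_time`) are illustrative placeholders, not the actual FWJR/WMArchive schema, and the production pipeline runs as Spark jobs over HDFS rather than in-memory loops:

```python
from collections import defaultdict

def aggregate_job_reports(docs):
    """Group job-report-like documents by site and compute per-site
    job counts, failure counts, and total CPU time.

    Field names are hypothetical stand-ins for the FWJR schema.
    """
    stats = defaultdict(lambda: {"jobs": 0, "failures": 0, "cpu_time": 0.0})
    for doc in docs:
        site = stats[doc["site"]]
        site["jobs"] += 1
        site["cpu_time"] += doc.get("cpu_time", 0.0)
        if doc.get("exit_code", 0) != 0:
            site["failures"] += 1
    return dict(stats)

# Example documents (hypothetical values):
reports = [
    {"site": "T1_US_FNAL", "exit_code": 0, "cpu_time": 120.0},
    {"site": "T1_US_FNAL", "exit_code": 1, "cpu_time": 30.0},
    {"site": "T2_DE_DESY", "exit_code": 0, "cpu_time": 45.5},
]
metrics = aggregate_job_reports(reports)
```

Grouping by a key and reducing per group is exactly the shape of the Spark `reduceByKey`-style aggregation the paper's pipeline would perform at scale; the in-memory version here only illustrates the metric computation.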

Authors:
Kuznetsov, Valentin [1]; Fischer, Nils Leif [2]; Guo, Yuyi [3]
  1. Cornell Univ., Ithaca, NY (United States)
  2. Heidelberg Univ. (Germany)
  3. Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
Publication Date:
March 19, 2018
Research Org.:
Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States)
Sponsoring Org.:
USDOE Office of Science (SC), High Energy Physics (HEP)
OSTI Identifier:
1437402
Report Number(s):
arXiv:1801.03872; FERMILAB-PUB-18-074-CD
Journal ID: ISSN 2510-2036; 1647570; TRN: US1900324
Grant/Contract Number:  
AC02-07CH11359
Resource Type:
Accepted Manuscript
Journal Name:
Computing and Software for Big Science
Additional Journal Information:
Journal Volume: 2; Journal Issue: 1; Journal ID: ISSN 2510-2036
Publisher:
Springer
Country of Publication:
United States
Language:
English
Subject:
72 PHYSICS OF ELEMENTARY PARTICLES AND FIELDS

Citation Formats

Kuznetsov, Valentin, Fischer, Nils Leif, and Guo, Yuyi. The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC. United States: N. p., 2018. Web. doi:10.1007/s41781-018-0005-0.
Kuznetsov, Valentin, Fischer, Nils Leif, & Guo, Yuyi. The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC. United States. https://doi.org/10.1007/s41781-018-0005-0
Kuznetsov, Valentin, Fischer, Nils Leif, and Guo, Yuyi. 2018. "The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC". United States. https://doi.org/10.1007/s41781-018-0005-0. https://www.osti.gov/servlets/purl/1437402.
@article{osti_1437402,
title = {The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC},
author = {Kuznetsov, Valentin and Fischer, Nils Leif and Guo, Yuyi},
abstractNote = {The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate $\mathcal{O}$(1M) documents on a daily basis. We describe the data transformation, the short and long term storage layers, the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.},
doi = {10.1007/s41781-018-0005-0},
journal = {Computing and Software for Big Science},
number = 1,
volume = 2,
place = {United States},
year = {2018},
month = {mar}
}

Journal Article: freely available full text (publisher's version of record)

Figures / Tables:

Fig. 1: WMArchive architecture. MongoDB represents the short-term storage (STS) and HDFS the long-term storage (LTS), as discussed in the text.


Works referenced in this record:

Using the glideinWMS System as a Common Resource Provisioning Layer in CMS
journal, December 2015


The CMS Data Management System
journal, June 2014


The Pilot Way to Grid Resources Using glideinWMS
conference, March 2009

  • Sfiligoi, Igor; Bradley, Daniel C.; Holzman, Burt
  • 2009 WRI World Congress on Computer Science and Information Engineering
  • DOI: 10.1109/CSIE.2009.950

CMS computing operations during run 1
journal, June 2014


Distributed computing in practice: the Condor experience
journal, January 2005

  • Thain, Douglas; Tannenbaum, Todd; Livny, Miron
  • Concurrency and Computation: Practice and Experience, Vol. 17, Issue 2-4, p. 323-356
  • DOI: 10.1002/cpe.938


Figures/Tables have been extracted from DOE-funded journal article accepted manuscripts.