The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC
- Cornell Univ., Ithaca, NY (United States)
- Heidelberg Univ. (Germany)
- Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as the central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop ecosystem to provide the flexibility needed to reliably process, store, and aggregate $\mathcal{O}(1\mathrm{M})$ documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, the query language, and the aggregation pipeline developed to visualize various performance metrics that assist CMS data operators in assessing the performance of the CMS computing system.
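As a rough illustration of the kind of aggregation such a pipeline performs, the sketch below uses PySpark to read framework-job-report-like JSON documents from the long-term HDFS layer and sum per-site CPU time per day. The HDFS path and the field names (`site`, `cpu_time`, `timestamp`) are hypothetical placeholders for illustration only and do not reflect the actual WMArchive document schema or code.

```python
# Hedged sketch: aggregate per-site job metrics from FWJR-like JSON on HDFS.
# The path and field names (site, cpu_time, timestamp) are illustrative only
# and are not taken from the actual WMArchive schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fwjr-aggregation-sketch").getOrCreate()

# Long-term storage layer: job report documents flushed to HDFS (placeholder path).
fwjr = spark.read.json("hdfs:///path/to/fwjr/2018/01/*")

daily_site_metrics = (
    fwjr
    .withColumn("day", F.to_date(F.from_unixtime(F.col("timestamp"))))
    .groupBy("day", "site")
    .agg(
        F.count("*").alias("jobs"),
        F.sum("cpu_time").alias("total_cpu_seconds"),
    )
)

daily_site_metrics.show(20, truncate=False)
```

In such a setup the aggregated results, rather than the raw documents, would feed the dashboards used by data operators; the raw documents remain queryable on HDFS for ad hoc studies.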
- Research Organization:
- Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States)
- Sponsoring Organization:
- USDOE Office of Science (SC), High Energy Physics (HEP)
- Grant/Contract Number:
- AC02-07CH11359
- OSTI ID:
- 1437402
- Report Number(s):
- arXiv:1801.03872; FERMILAB-PUB-18-074-CD; 1647570; TRN: US1900324
- Journal Information:
- Computing and Software for Big Science, Vol. 2, Issue 1; ISSN 2510-2036
- Publisher:
- Springer
- Country of Publication:
- United States
- Language:
- English