OSTI.GOV · U.S. Department of Energy
Office of Scientific and Technical Information

Title: Extending the distributed computing infrastructure of the CMS experiment with HPC resources

Conference · J.Phys.Conf.Ser.
Author affiliations:
  1. Fermilab
  2. INFN, Pisa
  3. KIT, Karlsruhe, IKP
  4. Madrid, CIEMAT
  5. Madrid, CIEMAT; PIC, Bellaterra
  6. DESY
  7. Wisconsin U., Madison
  8. CERN
  9. INFN, Perugia

Particle accelerators are an important tool for studying the fundamental properties of elementary particles. The highest-energy accelerator currently in operation is the LHC at CERN in Geneva, Switzerland. Each of its four major detectors, including the CMS detector, produces dozens of petabytes of data per year, to be analyzed by a large international collaboration. The processing is carried out on the Worldwide LHC Computing Grid, which spans more than 170 computing centers around the world and is used by a number of particle physics experiments. Recently the LHC experiments were encouraged to make increasing use of HPC resources. While Grid resources are homogeneous with respect to the Grid middleware used, HPC installations can differ greatly in their setup. Integrating HPC resources into the highly automated processing setups of the CMS experiment therefore requires a number of challenges to be addressed. Processing requires access to primary data and metadata, as well as access to the experiment software. At Grid sites, all of this is provided through a set of services offered by each center. At HPC sites, however, many of these capabilities cannot be easily provided and have to be enabled in user space or by other means. HPC centers also often restrict network access to remote services, which is a further severe limitation. This paper discusses a number of solutions and recent experiences of the CMS experiment in including HPC resources in its processing campaigns.

Research Organization:
Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
Sponsoring Organization:
USDOE Office of Science (SC), High Energy Physics (HEP)
Contributing Organization:
CMS
DOE Contract Number:
AC02-07CH11359
OSTI ID:
1958511
Report Number(s):
FERMILAB-CONF-23-081-CMS; oai:inspirehep.net:2633529
Journal Information:
J.Phys.Conf.Ser., Vol. 2438, Issue 1
Country of Publication:
United States
Language:
English

