OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Large scale and low latency analysis facilities for the CMS experiment: Development and operational aspects

Conference · J.Phys.Conf.Ser.

While the majority of CMS data analysis activities rely on the distributed computing infrastructure of the WLCG Grid, dedicated local computing facilities have been deployed to address particular requirements in terms of latency and scale. The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analyses requiring fast turnaround. To meet the goal of fast turnaround, the Workload Management group has designed a CRABServer-based system to satisfy two main needs: to provide a simple, familiar interface to the user (as used in the CRAB Analysis Tool [7]) and to allow an easy transition to the Tier-0 system. While the CRABServer component was initially designed for Grid analysis by CMS end-users, with a few modifications it also turned out to be a very powerful service for managing and monitoring local submissions on the CAF. The transition to Tier-0 is guaranteed by the use of WMCore, a library developed by CMS to serve as the common core of its workload management tools, for handling data-driven workflow dependencies. This system is now handling its first use cases, and important operational experience is being acquired. In addition to the CERN CAF, FNAL hosts CMS-dedicated analysis resources at the FNAL LHC Physics Center (LPC). In the first few years of data collection, FNAL has been able to accept a large fraction of CMS data. The remote centre is not well suited to the extremely low-latency work expected of the CAF, but the presence of substantial analysis resources, a large resident community, and a large fraction of the data make the LPC a strong facility for resource-intensive analysis.
We present the building, commissioning and operation of these dedicated analysis facilities during the first year of LHC collisions, as well as the specific software developments needed to support these computing facilities in the special use case of fast-turnaround analyses.
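The data-driven workflow handling mentioned above can be illustrated with a minimal, hypothetical Python sketch: tasks declare the datasets they consume and produce, and a task becomes runnable only once all of its input datasets are available. This is an assumption-laden illustration of the general technique, not WMCore's actual API; all class and function names here are invented for the example.

```python
# Hypothetical sketch of data-driven workflow dependency handling:
# a task runs only after the datasets it depends on have been produced.
# This is NOT WMCore's real interface, only an illustration of the idea.
from collections import deque


class Task:
    def __init__(self, name, inputs, outputs):
        self.name = name              # task identifier
        self.inputs = set(inputs)     # datasets required before running
        self.outputs = set(outputs)   # datasets produced on completion


def run_workflow(tasks, available):
    """Run every task whose input datasets are available, adding its
    outputs to the pool; return the task names in execution order."""
    available = set(available)
    pending = deque(tasks)
    order = []
    while pending:
        progressed = False
        for _ in range(len(pending)):
            task = pending.popleft()
            if task.inputs <= available:      # all inputs present?
                order.append(task.name)
                available |= task.outputs     # outputs unblock later tasks
                progressed = True
            else:
                pending.append(task)          # try again next pass
        if not progressed:
            raise RuntimeError("unsatisfiable dependencies: "
                               + ", ".join(t.name for t in pending))
    return order


# Example: calibration must precede reconstruction, which precedes analysis.
tasks = [
    Task("analysis", ["reco-data"], ["plots"]),
    Task("reco", ["raw-data", "calib-constants"], ["reco-data"]),
    Task("calib", ["raw-data"], ["calib-constants"]),
]
print(run_workflow(tasks, ["raw-data"]))  # ['calib', 'reco', 'analysis']
```

The key property, shared with the system described above, is that execution order is derived from dataset availability rather than being hard-coded, so the same machinery serves both user submissions and Tier-0-style chained processing.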

Research Organization:
Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
Sponsoring Organization:
USDOE Office of Science (SC), High Energy Physics (HEP)
DOE Contract Number:
AC02-07CH11359
OSTI ID:
1433876
Report Number(s):
FERMILAB-CONF-11-866-CD; 1111385
Journal Information:
J.Phys.Conf.Ser., Vol. 331; Conference: 18th International Conference on Computing in High Energy and Nuclear Physics, Taipei, Taiwan, 18-22 October 2010
Country of Publication:
United States
Language:
English

Similar Records

Connecting Restricted, High-Availability, or Low-Latency Resources to a Seamless Global Pool for CMS
Journal Article · 2017 · Journal of Physics. Conference Series

Pooling the resources of the CMS Tier-1 sites
Journal Article · 2015 · Journal of Physics. Conference Series

Dynamic Distribution of High-Rate Data Processing from CERN to Remote HPC Data Centers
Journal Article · 2021 · Computing and Software for Big Science