CMS distributed analysis infrastructure and operations: Experience with the first LHC data
The CMS distributed analysis infrastructure represents a heterogeneous pool of resources distributed across several continents. The resources are harnessed using gLite- and glidein-based workload management systems (WMS). We describe the operational experience of running analysis workflows through CRAB-based servers interfaced with the underlying WMS; the automated interaction of the server with the WMS yields a robust analysis workflow. We present this operational experience as well as the methods used in CMS to analyze the LHC data. The interaction with the CMS Run Registry for run and luminosity-block selection via CRAB is discussed. The variations of different workflows during the LHC data-taking period and the lessons drawn from this experience are also outlined.
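To illustrate the run and luminosity-block selection mentioned above: the Run Registry exports a certification ("lumi mask") JSON that maps each run number to its list of good luminosity-section ranges, and CRAB uses such a file to restrict jobs to certified data. The sketch below shows the general shape of that check; the run numbers and ranges are invented for illustration and do not come from any real certification file.

```python
import json

# Illustrative excerpt in the lumi-mask JSON layout:
# run number (as a string) -> list of [first, last] good
# luminosity-section ranges. Values here are made up.
lumi_mask = json.loads("""
{
  "146511": [[1, 104], [110, 155]],
  "146514": [[1, 18]]
}
""")

def is_certified(run, lumi, mask):
    """Return True if luminosity section `lumi` of `run` lies in a good range."""
    for first, last in mask.get(str(run), []):
        if first <= lumi <= last:
            return True
    return False

# Example checks against the illustrative mask above:
print(is_certified(146511, 50, lumi_mask))   # inside [1, 104]
print(is_certified(146511, 107, lumi_mask))  # in the gap between ranges
```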
- Research Organization:
- Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
- Sponsoring Organization:
- USDOE Office of Science (SC), High Energy Physics (HEP)
- DOE Contract Number:
- AC02-07CH11359
- OSTI ID:
- 1437414
- Report Number(s):
- FERMILAB-CONF-11-876-CD; 1089583
- Journal Information:
- J. Phys. Conf. Ser., Vol. 331; Conference: 18th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2010), Taipei, Taiwan, 18-22 October 2010
- Country of Publication:
- United States
- Language:
- English