
SciTech Connect

Title: Pooling the resources of the CMS Tier-1 sites

The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity and to archive its data. During the first run of the LHC, these two functions were tightly coupled, as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in increased latency in the delivery of results to the physics community.

The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed, breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but is triggered externally through the WLCG transfer management systems.

With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before being archived permanently, and temporary data formats can be produced without wasting valuable tape resources. Lastly, the data hosted on disk at Tier-1s can now also be made available for user analysis, since there is no longer any risk of triggering chaotic staging from tape.

In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.
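The decoupled storage model described above can be pictured with a minimal toy sketch. Everything here is invented for illustration (the endpoint and dataset names, and the `transfer` helper); in the real system, data movement is requested through the WLCG transfer management systems, not through code like this. The point the sketch captures is that nothing moves between tape and disk on its own: every copy is an explicit, externally triggered request.

```python
# Toy model of decoupled disk and tape endpoints at a Tier-1 site.
# All names (T1_XX_Disk, dataset paths, transfer()) are hypothetical;
# real CMS transfers go through the WLCG transfer management systems.

class Endpoint:
    """A storage endpoint holding a set of dataset names."""
    def __init__(self, name):
        self.name = name
        self.datasets = set()

def transfer(dataset, source, destination):
    """Explicitly requested copy between endpoints; nothing moves automatically."""
    if dataset not in source.datasets:
        raise ValueError(f"{dataset} not hosted at {source.name}")
    destination.datasets.add(dataset)

# One disk and one tape endpoint per site, managed independently.
t1_disk = Endpoint("T1_XX_Disk")
t1_tape = Endpoint("T1_XX_Tape")
t1_tape.datasets.add("/RAW/RunA")

# Operations pre-stages input data to disk before processing starts...
transfer("/RAW/RunA", t1_tape, t1_disk)

# ...and archives validated output explicitly, never as a side effect.
t1_disk.datasets.add("/AOD/RunA-v1")
transfer("/AOD/RunA-v1", t1_disk, t1_tape)

print(sorted(t1_disk.datasets))  # both datasets now on disk
print(sorted(t1_tape.datasets))  # raw input plus archived output
```

Because staging is always an explicit request, the disk endpoint can also serve user analysis without the risk of chaotic, implicit recalls from tape.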
 [1] ;  [2] ;  [3] ;  [4] ;  [3] ;  [3] ;  [5] ;  [3] ;  [3] ;  [6] ;  [7] ;  [8] ;  [9] ;  [10]
  1. Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)
  2. Univ. de Los Andes, Bogota (Colombia)
  3. Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
  4. Centre de Calcul de l'Institut National de Physique Nucleaire et de Physique des Particules (IN2P3), Villeurbanne Cedex (France)
  5. Rutherford Appleton Lab., Didcot (United Kingdom)
  6. CIEMAT, Madrid (Spain)
  7. Karlsruhe Institut fuer Technologie, Karlsruhe (Germany)
  8. Univ. di Bologna, Bologna (Italy)
  9. Cukurova Univ., Adana (Turkey)
  10. DESY, Hamburg (Germany)
Publication Date:
OSTI Identifier:
Report Number(s):
Journal ID: ISSN 1742-6588; 1413883
Grant/Contract Number:
Type: Accepted Manuscript
Journal Name: Journal of Physics. Conference Series
Additional Journal Information: Journal Volume: 664; Journal Issue: 4; Conference: 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa (Japan), 13-17 Apr 2015; Journal ID: ISSN 1742-6588
Publisher: IOP Publishing
Research Org: Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States)
Sponsoring Org: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25)
Country of Publication: United States