
Title: Using the GlideinWMS System as a Common Resource Provisioning Layer in CMS

CMS will require access to more than 125k processor cores at the beginning of Run 2 in 2015 to carry out its ambitious physics program with more numerous and more complex events. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources, and HPC supercomputing centers were made available to CMS, which further complicated the operation of the submission infrastructure. In this presentation we will discuss the CMS effort to adopt and deploy the glideinWMS system as a common resource provisioning layer for grid, cloud, local batch, and opportunistic resources and sites. We will address the challenges associated with integrating the various types of resources, describe the efficiency gains and simplifications that come from using a common resource provisioning layer, and discuss the solutions found. We will finish with an outlook on future plans for how CMS is moving forward on resource provisioning for more heterogeneous architectures and services.
  1. Vilnius U.
  2. Trieste U.
  3. Nebraska U.
  4. Imperial Coll., London
  5. Fermilab
  6. Quaid-i-Azam U.
  7. UC, San Diego
  8. Milan Bicocca U.
  9. Brown U.
  10. DESY
Journal Name: J. Phys. Conf. Ser.; Journal Volume: 664; Journal Issue: 6; Conference: 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13-17 April 2015
Research Org:
Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
Sponsoring Org:
USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25)
Country of Publication:
United States