
Title: Pushing HTCondor and glideinWMS to 200K+ Jobs in a Global Pool for CMS before Run 2

The CMS experiment at the LHC relies on HTCondor as its primary batch system and on glideinWMS for pilot-based Grid provisioning. Until now we have been running several independent resource pools, but we are working on unifying them all in order to reduce the operational load and to share resources more effectively between the various activities in CMS. The major challenge of this unification is scale. The combined pool size is expected to reach 200K job slots, significantly larger than any other multi-user HTCondor-based system currently in production. To get there, we have studied the scaling limitations of our existing pools, the biggest of which tops out at about 70K slots, and provided valuable feedback to the development communities, who have responded by delivering improvements that have helped us reach ever higher scales with greater stability. We have also worked on improving the organization and support model for this critical service during Run 2 of the LHC. This contribution presents the results of the scale testing and experiences from the first months of running the Global Pool.
Affiliations:
  1. Vilnius U.
  2. Trieste U.
  3. Nebraska U.
  4. Fermilab
  5. Quaid-i-Azam U.
  6. UC, San Diego
  7. Milan Bicocca U.
Journal: J.Phys.Conf.Ser., Volume 664, Issue 6
Conference: 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP 2015), Okinawa, Japan, 13-17 April 2015
Research Org: Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
Sponsoring Org: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25)
Country of Publication: United States