OSTI.GOV · U.S. Department of Energy
Office of Scientific and Technical Information

Title: Virtual machine provisioning, code management, and data movement design for the Fermilab HEPCloud Facility

Journal Article · Journal of Physics: Conference Series
Author affiliations:
  1. Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
  2. Illinois Inst. of Technology, Chicago, IL (United States)
  3. Korea Inst. of Science and Technology, Daejeon (Korea, Republic of)

The Fermilab HEPCloud Facility Project aims to extend the current Fermilab facility interface to provide transparent access to disparate resources, including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity in response to peaks of demand without overprovisioning local resources. Full-scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics experiments, CMS and NOνA, at the scale of 58,000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontier-squid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load-testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEPCloud Facility.
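The Decision Engine described above can be sketched as a cost-minimization over candidate (availability zone, instance type) pairs. The sketch below is a minimal illustration of that idea, not the paper's implementation: all zone names, prices, core counts, and interruption probabilities are made-up assumptions, and the interruption penalty model (inflating the effective price by the preemption probability) is a simplification chosen for clarity.

```python
# Hypothetical sketch of the Decision Engine concept: choose the
# (availability zone, instance type) with the lowest expected cost per
# useful core-hour, penalizing spot-instance interruption risk.
# All values below are illustrative assumptions, not data from the paper.
from dataclasses import dataclass


@dataclass
class Candidate:
    zone: str              # AWS availability zone
    instance_type: str     # EC2 instance type
    cores: int             # usable cores per instance
    spot_price: float      # assumed $ per instance-hour
    interrupt_prob: float  # assumed chance of preemption within an hour


def expected_cost_per_core_hour(c: Candidate) -> float:
    # A preempted hour wastes in-progress work; model that crudely by
    # inflating the effective price by the interruption probability.
    effective_price = c.spot_price / (1.0 - c.interrupt_prob)
    return effective_price / c.cores


def choose(candidates: list[Candidate]) -> Candidate:
    # Pick the candidate minimizing expected cost per useful core-hour.
    return min(candidates, key=expected_cost_per_core_hour)


candidates = [
    Candidate("us-east-1a", "m4.4xlarge", 16, 0.20, 0.05),
    Candidate("us-east-1b", "m4.4xlarge", 16, 0.15, 0.30),
    Candidate("us-west-2a", "c4.8xlarge", 36, 0.50, 0.10),
]
best = choose(candidates)
print(best.zone, best.instance_type)  # → us-east-1a m4.4xlarge
```

In this toy example, the cheaper us-east-1b spot price loses to us-east-1a once its much higher interruption probability is factored in, which is the kind of trade-off the prototype Decision Engine is described as making.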

Research Organization:
Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
Sponsoring Organization:
USDOE Office of Science (SC), High Energy Physics (HEP)
Grant/Contract Number:
AC02-07CH11359
OSTI ID:
1423235
Report Number(s):
FERMILAB-CONF-17-641-CD; 1638496
Journal Information:
Journal of Physics: Conference Series, Vol. 898, Issue 5; Conference: 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2016), San Francisco, CA, October 10-14, 2016; ISSN 1742-6588
Publisher:
IOP Publishing
Country of Publication:
United States
Language:
English

Figures / Tables (7)

