Evolution of HEP Processing Frameworks
- Fermilab
- LBL, Berkeley
HEP data-processing software must support the disparate physics needs of many experiments. In both collider and neutrino environments, experiments typically rely on data-processing frameworks to manage the computational complexities of their large-scale data processing. These frameworks now face new challenges: the computing landscape has shifted from three decades of homogeneous, single-core x86 batch jobs running on grid sites to a heterogeneous mixture of platforms (multi-core machines, different CPU architectures, and computational accelerators) and computing sites (grid, cloud, and high-performance computing). We describe these challenges in more detail and how frameworks may confront them. Given their historic success, frameworks will continue to be critical software systems that enable HEP experiments to meet their computing needs. Frameworks have weathered computing revolutions in the past; they will do so again with support from the HEP community.
- Research Organization: Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States)
- Sponsoring Organization: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25)
- DOE Contract Number: AC02-07CH11359
- OSTI ID: 1865342
- Report Number(s): FERMILAB-CONF-22-308-SCD; arXiv:2203.14345; oai:inspirehep.net:2058929
- Country of Publication: United States
- Language: English