OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: BigData and Computing Challenges in High Energy and Nuclear Physics

Authors:
Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.
Publication Date:
2017
Research Org.:
Brookhaven National Laboratory (BNL), Upton, NY (United States)
Sponsoring Org.:
USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25)
OSTI Identifier:
1394756
Report Number(s):
BNL-114326-2017-CP
Journal ID: ISSN 1748-0221
DOE Contract Number:
SC0012704
Resource Type:
Conference
Resource Relation:
Journal Name: Journal of Instrumentation; Journal Volume: 12; Journal Issue: 06; Conference: International Conference on Instrumentation for Colliding Beam Physics, Budker Institute of Nuclear Physics, Novosibirsk, Russia, 27 February through 3 March 2017
Country of Publication:
United States
Language:
English
Subject:
24 POWER TRANSMISSION AND DISTRIBUTION; Computing; Data Processing Methods; Software Architectures

Citation Formats

Klimentov, A., Grigorieva, M., Kiryanov, A., and Zarochentsev, A. BigData and Computing Challenges in High Energy and Nuclear Physics. United States: N. p., 2017. Web. doi:10.1088/1748-0221/12/06/C06044.
Klimentov, A., Grigorieva, M., Kiryanov, A., & Zarochentsev, A. BigData and Computing Challenges in High Energy and Nuclear Physics. United States. doi:10.1088/1748-0221/12/06/C06044.
Klimentov, A., Grigorieva, M., Kiryanov, A., and Zarochentsev, A. 2017. "BigData and Computing Challenges in High Energy and Nuclear Physics". United States. doi:10.1088/1748-0221/12/06/C06044. https://www.osti.gov/servlets/purl/1394756.
@article{osti_1394756,
title = {BigData and Computing Challenges in High Energy and Nuclear Physics},
author = {Klimentov, A. and Grigorieva, M. and Kiryanov, A. and Zarochentsev, A.},
abstractNote = {},
doi = {10.1088/1748-0221/12/06/C06044},
journal = {Journal of Instrumentation},
number = {06},
volume = {12},
place = {United States},
year = {2017},
month = {feb}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.

Similar Records:
  • Historically, progress in high-energy physics has largely been determined by the development of more capable particle accelerators. This trend continues today with the imminent commissioning of the Large Hadron Collider at CERN and the worldwide development effort toward the International Linear Collider. Looking ahead, there are two scientific areas ripe for further exploration: the energy frontier and the precision frontier. To explore the energy frontier, two approaches toward multi-TeV beams are being studied: an electron-positron linear collider based on a novel two-beam powering system (CLIC), and a Muon Collider. Work on the precision frontier involves accelerators with very high intensity, including a Super-B Factory and a muon-based Neutrino Factory. Without question, one of the most promising approaches is the development of muon-beam accelerators. Such machines have very high scientific potential and would substantially advance the state of the art in accelerator design. The challenges of the new generation of accelerators, and how these can be accommodated in the accelerator design, are described. To reap their scientific benefits, all of these frontier accelerators will require sophisticated instrumentation to characterize the beam and control it with unprecedented precision.
  • The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. Achieving these goals in today's world requires investments not only in the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR's mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities (the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and the Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories), and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for the SC science programs whose researchers need to use high-performance computing and data systems effectively. Numerous significant modifications to today's tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015-2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.
  • We present a new software system, UFMulti II, which provides tools for distributing HEP applications across multiple Unix workstations in order to take advantage of parallel computing. It is designed for high energy physics applications such as event reconstruction, simulation, and physics analysis. We discuss here a particular component of UFMulti called NetQueues (Network Queues), which permits the fast exchange of data between groups of processes. The NetQueue system is optimized for high data transfer rates across Ethernet and FDDI networks and can accommodate both event and non-event records in a flexible way. The results shown in this talk are based on tests on a workstation farm at the University of Florida. Measurements of CPU and I/O performance during these tests, including some interesting effects caused by the I/O loading of the CPU, are presented. (A minimal sketch of the network-queue idea appears after this list.)
  • Many members of large science collaborations already have specialized grids available to advance their research, and the need for more computing resources for data analysis has pushed the Collider Detector at Fermilab (CDF) collaboration to move beyond dedicated resources and start exploiting Grid resources. The CDF experiment now relies increasingly on glidein-based computing pools for data reconstruction, Monte Carlo production, and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) on top of the Condor batch system and the CDF Grid infrastructure. Condor has a distributed architecture, and its glidein mechanism of pilot jobs is ideal for abstracting Grid computing by building a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), an implementation of the pilot mechanism built on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for data reconstruction on the FNAL campus Grid and for user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and setup, including the CDF-specific configuration within glideinWMS, which provides powerful scalability and makes Grid computing behave like a local batch environment, with the ability to handle more than 10,000 running jobs at a time. (A toy sketch of the pilot pull model also appears after this list.)
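
The NetQueue description above lends itself to a small illustration. The following Python sketch is not the UFMulti II / NetQueue API (which is not reproduced in this record); under assumed names such as send_record and recv_record and an invented length-prefixed framing, it only shows the general idea of streaming event and non-event records between processes over a TCP connection.

"""Minimal sketch of a network queue for exchanging records between processes.

This is NOT the UFMulti II / NetQueue API; it only illustrates streaming
length-prefixed records between two endpoints over TCP.
"""
import pickle
import socket
import struct

HEADER = struct.Struct("!I")  # 4-byte, network-byte-order record length prefix


def send_record(sock: socket.socket, record: object) -> None:
    """Serialize one record and write it with a length prefix."""
    payload = pickle.dumps(record)
    sock.sendall(HEADER.pack(len(payload)) + payload)


def recv_record(sock: socket.socket) -> object:
    """Read one length-prefixed record; raises ConnectionError on EOF."""
    (length,) = HEADER.unpack(_recv_exact(sock, HEADER.size))
    return pickle.loads(_recv_exact(sock, length))


def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf.extend(chunk)
    return bytes(buf)


if __name__ == "__main__":
    # Loopback demo: one "producer" sends an event record to one "consumer".
    server = socket.create_server(("127.0.0.1", 0))  # port chosen by the OS
    port = server.getsockname()[1]

    producer = socket.create_connection(("127.0.0.1", port))
    consumer, _ = server.accept()

    send_record(producer, {"event_id": 1, "hits": [0.1, 0.2, 0.3]})
    print(recv_record(consumer))  # {'event_id': 1, 'hits': [0.1, 0.2, 0.3]}

    producer.close()
    consumer.close()
    server.close()

In a real workstation farm the pickle serialization would be replaced by the experiment's record format, with buffering and multiplexing added; only the framing idea is being illustrated here.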
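Similarly, the glidein model in the last abstract can be caricatured in a few lines. glideinWMS itself is configured through Condor/HTCondor factory and frontend services rather than code like this; the sketch below only illustrates the pilot "pull" pattern the abstract describes, with invented task names and a hypothetical run_task stand-in for real CDF jobs.

"""Toy illustration of the pilot ("glidein") pull model, not glideinWMS itself.

A central queue holds user tasks; each pilot process claims a worker-node slot
and keeps pulling tasks until the queue is drained, which is the essence of
building a virtual private pool on top of heterogeneous Grid resources.
"""
import multiprocessing as mp
import queue
import time


def run_task(task):
    """Stand-in for a real CDF job (reconstruction, Monte Carlo, analysis)."""
    time.sleep(0.1)
    return f"{task} done"


def pilot(pilot_id, tasks, results):
    """One pilot = one leased worker-node slot that pulls jobs until idle."""
    while True:
        try:
            task = tasks.get(timeout=1.0)  # pull the next user job, if any
        except queue.Empty:
            return  # no more work: the pilot releases the slot and exits
        results.put((pilot_id, run_task(task)))


if __name__ == "__main__":
    tasks = mp.Queue()
    results = mp.Queue()
    for i in range(8):
        tasks.put(f"mc-production-{i}")

    # Launch a handful of pilots, as a glidein factory would on Grid sites.
    pilots = [mp.Process(target=pilot, args=(p, tasks, results)) for p in range(3)]
    for p in pilots:
        p.start()
    for p in pilots:
        p.join()

    while not results.empty():
        print(results.get())

The design point being illustrated is that users submit to one central queue while pilots lease heterogeneous slots and pull work, so the Grid behaves like a single local batch pool.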