OSTI.GOV
U.S. Department of Energy
Office of Scientific and Technical Information

Title: Report of the Community Review of EIC Accelerator R&D for the Office of Nuclear Physics

Abstract

The Nuclear Science Advisory Committee (NSAC) of the Department of Energy (DOE) Office of Nuclear Physics (NP) recommended in the 2015 Long Range Plan (LRP) for Nuclear Science that the proposed Electron Ion Collider (EIC) be the highest priority for new construction. This report noted that, at that time, two independent designs for such a facility had evolved in the United States, each of which proposed using infrastructure already available in the U.S. nuclear science community.

Authors:
None listed
Publication Date:
February 13, 2017
Research Org.:
US Department of Energy, Washington, DC (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Nuclear Physics (NP) (SC-26)
OSTI Identifier:
1367855
Resource Type:
Technical Report
Country of Publication:
United States
Language:
English
Subject:
43 PARTICLE ACCELERATORS; NUCLEAR PHYSICS; REVIEWS; ACCELERATORS; ADVISORY COMMITTEES

Citation Formats

None, None. Report of the Community Review of EIC Accelerator R&D for the Office of Nuclear Physics. United States: N. p., 2017. Web. doi:10.2172/1367855.
None, None. Report of the Community Review of EIC Accelerator R&D for the Office of Nuclear Physics. United States. doi:10.2172/1367855.
None, None. 2017. "Report of the Community Review of EIC Accelerator R&D for the Office of Nuclear Physics". United States. doi:10.2172/1367855. https://www.osti.gov/servlets/purl/1367855.
@techreport{osti_1367855,
title = {Report of the Community Review of EIC Accelerator R\&D for the Office of Nuclear Physics},
author = {None, None},
abstractNote = {The Nuclear Science Advisory Committee (NSAC) of the Department of Energy (DOE) Office of Nuclear Physics (NP) recommended in the 2015 Long Range Plan (LRP) for Nuclear Science that the proposed Electron Ion Collider (EIC) be the highest priority for new construction. This report noted that, at that time, two independent designs for such a facility had evolved in the United States, each of which proposed using infrastructure already available in the U.S. nuclear science community.},
doi = {10.2172/1367855},
institution = {US Department of Energy, Washington, DC (United States)},
place = {United States},
year = {2017},
month = {feb}
}

Technical Report: https://www.osti.gov/servlets/purl/1367855

Similar Records:
  • Imagine being able to predict, with unprecedented accuracy and precision, the structure of the proton and neutron, and the forces between them, directly from the dynamics of quarks and gluons, and then using this information in calculations of the structure and reactions of atomic nuclei and of the properties of dense neutron stars (NSs). Also imagine discovering new and exotic states of matter, and new laws of nature, by being able to collect more experimental data than we dream possible today, analyzing it in real time to feed back into an experiment, and curating the data with full tracking capabilities and with fully distributed data-mining capabilities. Making this vision a reality would improve basic scientific understanding, enabling us to precisely calculate, for example, the spectrum of gravity waves emitted during NS coalescence, and would have important societal applications in nuclear energy research, stockpile stewardship, and other areas. This review presents the components and characteristics of the exascale computing ecosystems necessary to realize this vision.
  • The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. Achieving these goals in today's world requires investment not only in the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR's mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science; three high-performance computing (HPC) facilities, namely the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and the Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories; and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for SC science programs that need to use high-performance computing and data systems effectively. Numerous significant modifications to today's tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and to determine the requirements for the exascale ecosystem needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments, will help the ASCR Facility Division establish partnerships with Office of Science stakeholders, and will inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.
  • The goal, objectives, and requirements (GOR) presented in this document define a framework for describing research directed specifically by the Ground-based Nuclear Detonation Detection (GNDD) Team of the National Nuclear Security Administration (NNSA). The intent of this document is to provide a communication tool for the GNDD Team with NNSA management and with its stakeholder community. It describes the GNDD expectation that much of the improvement in the proficiency of nuclear explosion monitoring will come from a better understanding of the science behind the generation, propagation, recording, and interpretation of seismic, infrasound, hydroacoustic, and radionuclide signals, and from the development of "game-changer" advances in science and technology.
  • During the 1990s, we focused our Accelerator Physics program on research and development of TeV polarized proton beams using Siberian snakes (a Siberian snake is a device that forces an accelerator ring's depolarizing fields to cancel themselves by rotating each proton's spin by 180° on each turn around the ring; a toy numerical illustration of this cancellation is sketched at the end of this page): (1) Siberian snake experiments at the IUCF Cooler ring; (2) design of polarized beam capability for the SSC; (3) design of polarized beam capability for the Main Injector and Tevatron (funded by Fermilab); and (4) design of polarized beam capability for HERA (funded by DESY). During FY 1994 to 1997, our Siberian snake experiments at IUCF continued to be unexpectedly successful. Their data have helped us to design polarized proton beam capability for Fermilab's Tevatron and Main Injector, and now for DESY's HERA.
  • ILC work at Illinois has concentrated primarily on technical issues relating to the design of the accelerator. Because many of the problems to be resolved require a working knowledge of classical mechanics and electrodynamics, most of our research projects lend themselves well to the participation of undergraduate research assistants. The undergraduates in the group are scientists, not technicians, and find solutions to problems that, for example, have stumped PhD-level staff elsewhere. The ILC Reference Design Report calls for 6.7 km circumference damping rings (which prepare the beams for focusing) using “conventional” stripline kickers driven by fast HV pulsers. Our primary goal was to determine the suitability of the 16 MeV electron beam in the AØ region at Fermilab for precision kicker studies. We found that the low beam energy and lack of redundancy in the beam position monitor system complicated the analysis of our data. In spite of these issues we concluded that the precision we could obtain, namely 0.5%, was adequate to measure the performance and stability of a production module of an ILC kicker. We concluded that the kicker was stable to an accuracy of ~2.0% and that we could measure this stability to a precision of ~0.5%. As a result, a low energy beam like that at AØ could be used as a rapid-turnaround facility for testing ILC production kicker modules. The ILC timing precision for arrival of bunches at the collision point is required to be 0.1 picosecond or better. We studied the bunch-to-bunch timing accuracy of a “phase detector” installed in AØ in order to determine its suitability as an ILC bunch timing device. A phase detector is an RF structure excited by the passage of a bunch. Its signal is fed through a 1240 MHz high-Q resonant circuit and then down-mixed with the AØ 1300 MHz accelerator RF. We used a kind of autocorrelation technique to compare the phase detector signal with a reference signal obtained from the phase detector’s response to an event at the beginning of the run. We determined that the device installed in our beam, which was instrumented with an 8-bit 500 MHz ADC, could measure the beam timing to an accuracy of 0.4 picoseconds (a toy version of this correlation-based timing estimate appears at the end of this page). Simulations of the device showed that an increase in ADC clock rate to 2 GHz would improve measurement precision by the required factor of four. As a result, we felt that a device of this sort, assuming matters concerning dynamic range and long-term stability can be addressed successfully, would work at the ILC. Cost-effective operation of the ILC will demand highly reliable, fault-tolerant, and adaptive solutions for both hardware and software. The large number of subsystems, and the large multiplicity of modules within those subsystems, means that even strong unit-level reliability can compound into an unacceptable level of system availability. An evaluation effort is underway to assess standards associated with high availability and to guide ILC development with standard practices and well-supported commercial solutions. One area of evaluation involves the Advanced Telecom Computing Architecture (ATCA) hardware and software. We worked with an ATCA crate, processor monitors, and a small number of ATCA circuit boards in order to develop a backplane “spy” board that would let us watch the ATCA backplane communications, and to pursue development of an inexpensive processor monitor that could be used as a physics-driven component of the crate-level controls system.
We made good progress and felt that we had determined a productive direction in which to extend this work. We felt that we had learned enough to begin designing a workable processor-monitor chip if sufficient interest in ATCA were shown by the ILC community. Fault recognition is a challenging issue in crafting a high-reliability controls system. With tens of thousands of independent processors running hundreds of thousands of critical processes, how can the system identify that a problem has arisen and determine the appropriate steps to take to correct, or compensate for, the failure? One possible solution might come through the use of the OpenClovis supervisory system, which runs on Linux processors and allows a select set of processors to monitor the behavior of individual processes and processors in a large, distributed controls network (a generic heartbeat-watchdog sketch of this idea appears at the end of this page). We found that OpenClovis exhibited an irritating sensitivity to the exact version of the Linux kernel running on the processors, and that it was poorly equipped to help us sort through problems arising from conflicts so deep in the processors' operating systems. But once this issue was addressed, we found that it performed as expected, recognizing crashes and process (and processor) failures.
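
For readers unfamiliar with the snake mechanism mentioned in the Siberian-snake abstract above, the following toy spin-tracking sketch shows the cancellation numerically. It is an illustration only, not a model of the IUCF, Tevatron, or HERA lattices: the spin is a 3-vector that precesses once per turn about the vertical guide field, receives a small depolarizing kick about the radial axis, and optionally gets a 180° snake rotation about the beam axis. The kick size and turn count are invented for the example.

import numpy as np

def rot(axis, angle):
    # Rotation matrix about a unit 3-vector axis (Rodrigues' formula).
    x, y, z = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def vertical_polarization(n_turns, spin_tune, kick, snake):
    spin = np.array([0.0, 1.0, 0.0])                    # y = vertical; start fully polarized
    precession = rot(np.array([0.0, 1.0, 0.0]), 2.0 * np.pi * spin_tune)
    error_kick = rot(np.array([1.0, 0.0, 0.0]), kick)   # small radial-field error, once per turn
    snake_flip = rot(np.array([0.0, 0.0, 1.0]), np.pi)  # 180-degree rotation about the beam axis
    for _ in range(n_turns):
        spin = error_kick @ precession @ spin
        if snake:
            spin = snake_flip @ spin
    return spin[1]

# Sit exactly on an imperfection resonance (integer spin tune), where the
# per-turn kicks add coherently unless the snake flips the spin between them.
print("no snake:  ", vertical_polarization(2000, 1.0, 1e-3, snake=False))  # ~cos(2.0) ~ -0.42
print("with snake:", vertical_polarization(2000, 1.0, 1e-3, snake=True))   # ~1.0

Without the snake the kicks add turn after turn and the vertical polarization collapses; with the snake, each pair of turns cancels exactly in this toy. (A single real snake actually makes the stable spin direction horizontal rather than vertical; the toy, which reports the spin after an even number of turns, glosses over that detail.)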
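
The correlation-based timing estimate described in the ILC abstract also lends itself to a short sketch. The code below is an illustration under invented assumptions (tone shape, decay time, noise level, record length), not the actual AØ processing chain: it models an 8-bit ADC digitizing the down-mixed tone (1300 MHz accelerator RF minus the 1240 MHz resonator gives a 60 MHz beat), cross-correlates each record against a reference record, and refines the correlation peak by parabolic interpolation. Running it at 500 MHz and at 2 GHz sample rates shows qualitatively why a faster ADC clock tightens the timing estimate.

import numpy as np

F_TONE = 60e6   # down-mixed beat: 1300 MHz accelerator RF - 1240 MHz resonator

def digitize(t0, fs, n_samples, rng, bits=8, noise=0.01):
    # Simulate a bits-bit ADC record of a decaying tone whose carrier phase
    # encodes the arrival time t0 (the envelope start is not shifted; toy model).
    t = np.arange(n_samples) / fs
    wave = np.exp(-t / 2e-7) * np.sin(2 * np.pi * F_TONE * (t - t0))
    wave = wave + rng.normal(0.0, noise, n_samples)
    levels = 2.0 ** (bits - 1)
    return np.round(np.clip(wave, -1.0, 1.0) * levels) / levels

def arrival_time(record, reference, fs):
    # Cross-correlation peak, refined by three-point parabolic interpolation.
    corr = np.correlate(record, reference, mode="full")
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return (k + frac - (len(reference) - 1)) / fs

rng = np.random.default_rng(1)
for fs in (500e6, 2e9):   # the installed 8-bit 500 MHz ADC vs a 2 GHz clock
    n = int(fs * 1e-6)    # 1 microsecond record
    reference = digitize(0.0, fs, n, rng)
    trials = [arrival_time(digitize(3e-9, fs, n, rng), reference, fs)
              for _ in range(50)]
    print(f"fs = {fs/1e6:.0f} MHz: timing jitter = {np.std(trials)*1e12:.2f} ps")

The absolute picosecond numbers printed by this toy depend entirely on the invented noise level; only the relative improvement with sample rate is the point.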
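
Finally, the fault-recognition question posed at the end of the ILC abstract can be made concrete with a minimal heartbeat watchdog. This is a generic sketch of the supervisory idea, not OpenClovis or its API; the process names, timeouts, and recovery hook are invented.

import threading
import time

class Watchdog:
    # Supervisor: each monitored process posts heartbeats; any process whose
    # heartbeat goes stale is reported to a recovery callback (restart, fail
    # over, raise an alarm, ...).
    def __init__(self, timeout, on_failure):
        self.timeout = timeout
        self.on_failure = on_failure
        self._beats = {}
        self._lock = threading.Lock()

    def heartbeat(self, name):
        # Called periodically by each monitored process.
        with self._lock:
            self._beats[name] = time.monotonic()

    def run(self, poll=0.2):
        while True:
            now = time.monotonic()
            with self._lock:
                stale = [n for n, t in self._beats.items() if now - t > self.timeout]
                for name in stale:
                    del self._beats[name]   # forget it until it reports in again
            for name in stale:
                self.on_failure(name)
            time.sleep(poll)

# Hypothetical usage: one controls process beats every 0.1 s, then goes silent;
# the supervisor declares it lost after 1 s without a heartbeat.
dog = Watchdog(timeout=1.0, on_failure=lambda name: print(f"lost {name}: initiating recovery"))
threading.Thread(target=dog.run, daemon=True).start()
for _ in range(10):
    dog.heartbeat("ioc-01")
    time.sleep(0.1)
time.sleep(2.0)   # heartbeats stop; the watchdog fires once for "ioc-01"

At ILC scale the same pattern would be hierarchical, with crate-level monitors reporting to higher-level supervisors, which is essentially the role the abstract assigns to OpenClovis.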