OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Pelegant: a parallel accelerator simulation code for electron generation and tracking

Abstract

elegant is a general-purpose code for electron accelerator simulation that has a worldwide user base. Recently, many of the time-intensive elements were parallelized using MPI. Development has used modest Linux clusters and the BlueGene/L supercomputer at Argonne National Laboratory. This has provided very good performance for some practical simulations, such as multiparticle tracking with synchrotron radiation and emittance blow-up in the vertical rf kick scheme. The effort began with development of a concept that allowed for gradual parallelization of the code, using the existing beamline-element classification table in elegant. This was crucial as it allowed parallelization without major changes in code structure and without major conflicts with the ongoing evolution of elegant. Because of rounding error and finite machine precision, validating a parallel program against a uniprocessor program with the requirement of bitwise identical results is notoriously difficult. We will report validating simulation results of parallel elegant against those of serial elegant by applying Kahan's algorithm to improve accuracy dramatically for both versions. The quality of random numbers in a parallel implementation is very important for some simulations. Some practical experience with generating parallel random numbers by offsetting the seed of each random sequence according to the processor ID will be reported.
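To illustrate the gradual-parallelization concept described above, the sketch below (plain C, with invented names; it is not taken from the elegant source) shows how a classification table can mark each element type as parallel-capable or serial-only, so that unported elements can still be handled by gathering all particles to one processor first. DRIF, CSBEND, and WATCH are real elegant element type names, but their classification here is purely illustrative.

/* Hypothetical sketch of a beamline-element classification table.
   Parallel-capable elements track the locally owned particles on each
   processor; serial-only elements force a gather to rank 0 first. */
#include <stdio.h>

typedef enum { SERIAL_ONLY, PARALLEL_CAPABLE } elem_class_t;

typedef struct {
    const char  *type;    /* element type name */
    elem_class_t eclass;  /* how this element may be tracked */
} elem_entry_t;

static const elem_entry_t table[] = {
    { "DRIF",   PARALLEL_CAPABLE },  /* illustrative assignments only */
    { "CSBEND", PARALLEL_CAPABLE },
    { "WATCH",  SERIAL_ONLY      },
};

int main(void) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        printf("%-6s -> %s\n", table[i].type,
               table[i].eclass == PARALLEL_CAPABLE
                   ? "track distributed particles"
                   : "gather particles to rank 0, track serially");
    return 0;
}

Because the dispatch happens per element type, new element classes can be flipped from serial-only to parallel-capable one at a time, which is what let the parallelization proceed without restructuring the rest of the code.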
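The "Kahan's algorithm" referred to is compensated summation: a running correction term recovers the low-order bits lost in each floating-point addition. A minimal stand-alone demonstration in C (not code from elegant) follows.

/* Kahan compensated summation: after 1e7 additions of 0.1, the naive
   total drifts in its low-order digits, while the compensated total
   is correct to full double precision. */
#include <stdio.h>

int main(void) {
    const double term = 0.1;    /* not exactly representable in binary */
    double naive = 0.0;         /* ordinary accumulation */
    double sum = 0.0, c = 0.0;  /* compensated total and its correction */

    for (long i = 0; i < 10000000L; i++) {
        naive += term;

        double y = term - c;    /* apply the running correction */
        double t = sum + y;     /* low-order bits of y are lost here... */
        c = (t - sum) - y;      /* ...and recovered into c */
        sum = t;
    }
    printf("naive: %.8f\nkahan: %.8f\n", naive, sum);
    return 0;
}

Applying the same compensated accumulation in both the serial and parallel versions makes their totals agree far more closely, which is what permits meaningful validation even though a parallel run adds its partial results in a different order.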
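The seed-offsetting scheme for parallel random numbers can be sketched in a few lines of MPI C. The base seed, the offset formula, and the use of the C library generator here are all illustrative assumptions, not the generator elegant actually uses.

/* Each MPI rank offsets a shared base seed by its processor ID so that
   ranks draw from different streams. Illustrative sketch only. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const long base_seed = 12345L;        /* hypothetical user-supplied seed */
    srand((unsigned)(base_seed + rank));  /* offset the seed by processor ID */

    printf("rank %d: first three draws %d %d %d\n",
           rank, rand(), rand(), rand());

    MPI_Finalize();
    return 0;
}

The caveat the abstract raises is stream quality: for some generators, seeds that differ only by small offsets can produce noticeably correlated sequences, so the choice of generator and offset must be checked against the needs of the simulation.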

Authors:
Wang, Y.; Borland, M. D.; Accelerator Systems Division [1]
  1. Advanced Photon Source (APS), Argonne National Laboratory
Publication Date:
2006
Research Org.:
Argonne National Lab. (ANL), Argonne, IL (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
973755
Report Number(s):
ANL/AES/CP-119207
TRN: US1002023
DOE Contract Number:
DE-AC02-06CH11357
Resource Type:
Conference
Resource Relation:
Conference: Advanced Accelerator Concepts (AAC06); Jul. 10, 2006 - Jul. 15, 2006; Lake Geneva, WI
Country of Publication:
United States
Language:
English
Subject:
43 PARTICLE ACCELERATORS; ACCELERATORS; ACCURACY; ALGORITHMS; ANL; CLASSIFICATION; ELECTRONS; IMPLEMENTATION; PERFORMANCE; SIMULATION; SUPERCOMPUTERS; SYNCHROTRON RADIATION

Citation Formats

Wang, Y., Borland, M. D., and Accelerator Systems Division. Pelegant: a parallel accelerator simulation code for electron generation and tracking. United States: N. p., 2006. Web. doi:10.1063/1.2409141.
Wang, Y., Borland, M. D., & Accelerator Systems Division. Pelegant: a parallel accelerator simulation code for electron generation and tracking. United States. doi:10.1063/1.2409141.
Wang, Y., Borland, M. D., and Accelerator Systems Division. 2006. "Pelegant: a parallel accelerator simulation code for electron generation and tracking". United States. doi:10.1063/1.2409141.
@article{osti_973755,
title = {Pelegant: a parallel accelerator simulation code for electron generation and tracking},
author = {Wang, Y. and Borland, M. D. and Accelerator Systems Division},
abstractNote = {elegant is a general-purpose code for electron accelerator simulation that has a worldwide user base. Recently, many of the time-intensive elements were parallelized using MPI. Development has used modest Linux clusters and the BlueGene/L supercomputer at Argonne National Laboratory. This has provided very good performance for some practical simulations, such as multiparticle tracking with synchrotron radiation and emittance blow-up in the vertical rf kick scheme. The effort began with development of a concept that allowed for gradual parallelization of the code, using the existing beamline-element classification table in elegant. This was crucial as it allowed parallelization without major changes in code structure and without major conflicts with the ongoing evolution of elegant. Because of rounding error and finite machine precision, validating a parallel program against a uniprocessor program with the requirement of bitwise identical results is notoriously difficult. We will report validating simulation results of parallel elegant against those of serial elegant by applying Kahan's algorithm to improve accuracy dramatically for both versions. The quality of random numbers in a parallel implementation is very important for some simulations. Some practical experience with generating parallel random numbers by offsetting the seed of each random sequence according to the processor ID will be reported.},
doi = {10.1063/1.2409141},
place = {United States},
year = {2006}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.

Similar Records:
  • The PATRICIA particle tracking program has been used to study chromatic effects in the Brookhaven CBA (Colliding Beam Accelerator). The short-term behavior of particles in the CBA has been followed for particle histories of 300 turns. Contributions from magnet multipoles characteristic of superconducting magnets and closed-orbit errors have been included in determining the dynamic aperture of the CBA for on- and off-momentum particles. The width of the third-integer stopband produced by the temperature dependence of magnetization-induced sextupoles in the CBA cable dipoles is evaluated for helium distribution systems having periodicity of one and six. The stopband width at a tune of 68/3 is naturally zero for the system having a periodicity of six and is approximately 10^-4 for the system having a periodicity of one. Results from theory are compared with results obtained with PATRICIA; the results agree within a factor of slightly more than two.
  • We develop a 3D simulation code for the interaction between a protoplanetary disk and embedded protoplanets. The protoplanetary disk is treated as a three-dimensional (3D), self-gravitating gas whose motion is described by the locally isothermal Navier-Stokes equations in spherical coordinates centered on the star. The differential equations for the disk are similar to those given in Kley et al. (2009), with a different gravitational potential that is defined in Nelson et al. (2000). The equations are solved by a directional-split Godunov method for the inviscid Euler equations plus an operator-split method for the viscous source terms. We use a sub-cycling technique for the azimuthal sweep to alleviate the time-step restriction. We also extend the FARGO scheme of Masset (2000), as modified in Li et al. (2001), to our 3D code to accelerate the transport in the azimuthal direction. Furthermore, we have implemented a reduced 2D (r, θ) and a fully 3D self-gravity solver on our uniform disk grid, which extends our 2D method (Li, Buoni, & Li 2008) to 3D. This solver uses a mode cut-off strategy and combines FFT in the azimuthal direction with direct summation in the radial and meridional directions. An initial axisymmetric equilibrium disk is generated via iteration between the disk density profile and the 2D disk self-gravity. We do not need any softening in the disk self-gravity calculation, as we have used a shifted-grid method (Li et al. 2008) to calculate the potential. The motion of the planet is limited to the mid-plane, and the equations are the same as given in D'Angelo et al. (2005), which we adapted to polar coordinates with a fourth-order Runge-Kutta solver. The disk gravitational force on the planet is assumed to evolve linearly with time between two hydrodynamics time steps. The planetary potential acting on the disk is calculated accurately with a small softening given by a cubic-spline form (Kley et al. 2009). Since the torque is extremely sensitive to the position of the planet, we adopt a corotating frame that allows the planet to move only in the radial direction if only one planet is present. This code has been extensively tested on a number of problems. For an earth-mass planet with constant aspect ratio h = 0.05, the torque calculated using our code matches quite well with the 3D linear theory results of Tanaka et al. (2002). The code is fully parallelized via the message-passing interface (MPI) and has very high parallel efficiency. Several numerical examples for both fixed and moving planets are provided to demonstrate the efficacy of the numerical method and code.
  • CERBERUS, a six-equation parallel thermal-hydraulic system simulation code, has been developed at the Idaho National Engineering Laboratory (INEL). The starting point of CERBERUS is the RELAP5/MOD2.5 [4] system simulation code, which was developed for single-CPU, shared-memory, Single Instruction Multiple Data (SIMD) computer architectures. The near-term development targets for CERBERUS are shared-memory, Multiple Instruction Multiple Data (MIMD), four- to sixteen-CPU machines such as the CRAY X-MP/48 and Y-MP/832, which are, respectively, four- and eight-CPU computers. In this paper we summarize the major features of the CERBERUS Ver. 02 code and our experiences with it on a CRAY Y-MP8/8-128.
  • Coherent synchrotron radiation (CSR) is of great interest to those designing accelerators as drivers for free-electron lasers (FELs). Although experimental evidence is incomplete, CSR is predicted to have potentially severe effects on the emittance of high-brightness electron beams. The performance of an FEL depends critically on the emittance, current, and energy spread of the beam. Attempts to increase the current through magnetic bunch compression can lead to increased emittance and energy spread due to CSR in the dipoles of such a compressor. The code elegant was used for design and simulation of the bunch compressor for the Low-Energy Undulator Test Line (LEUTL) FEL at the Advanced Photon Source (APS). In order to facilitate this design, a fast algorithm was developed based on the 1-D formalism of Saldin and coworkers. In addition, a plausible method of including CSR effects in drift spaces following the chicane magnets was developed and implemented. The algorithm is fast enough to permit running hundreds of tolerance simulations including CSR for 50 thousand particles. This article describes the details of the implementation and shows results for the APS bunch compressor.