National Library of Energy BETA

Sample records for monte carlo mc

  1. Monte Carlo Benchmark

    Energy Science and Technology Software Center (OSTI)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
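    The phases this summary lists (creation, tracking, tallying, destruction) can be illustrated with a toy analog transport loop. The slab geometry, cross section, and absorption probability below are illustrative assumptions, not parameters of the MCB benchmark:

```python
import math
import random

def run_toy_transport(n_particles, sigma_t=1.0, absorb_prob=0.3,
                      slab_length=5.0, seed=0):
    """Toy analog Monte Carlo transport of forward-streaming particles
    through a 1D slab (all parameters are illustrative)."""
    rng = random.Random(seed)
    collisions = 0                       # tally: total collisions
    leaked = 0                           # tally: particles escaping the slab
    for _ in range(n_particles):         # particle creation
        x = 0.0
        while True:                      # particle tracking
            # sample a free-flight distance from the exponential pdf
            x += -math.log(rng.random()) / sigma_t
            if x > slab_length:
                leaked += 1              # destruction by leakage
                break
            collisions += 1              # tally the collision
            if rng.random() < absorb_prob:
                break                    # destruction by absorption
    return collisions, leaked
```

    In the MCB itself, such histories would additionally be traded among MPI ranks; that communication layer is omitted here.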

  2. Monte Carlo Example Programs

    Energy Science and Technology Software Center (OSTI)

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground state energy of the hydrogen atom.
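    For a flavor of what such a program computes, here is a minimal variational Monte Carlo sketch for the hydrogen ground state in atomic units. This is a hypothetical Python analogue, not the VARHATOM/DMCATOM FORTRAN source: with trial wave function psi(r) = exp(-alpha*r), the local energy is E_L = -alpha^2/2 + (alpha - 1)/r, which equals exactly -1/2 hartree at alpha = 1.

```python
import math
import random

def vmc_hydrogen(alpha=1.0, n_steps=20000, step=0.5, seed=1):
    """Variational Monte Carlo energy of hydrogen (atomic units) with
    trial wave function psi(r) = exp(-alpha * r).  The local energy is
    E_L(r) = -alpha**2 / 2 + (alpha - 1) / r, exactly -0.5 at alpha=1."""
    rng = random.Random(seed)
    x, y, z, r = 1.0, 0.0, 0.0, 1.0
    e_sum = 0.0
    for _ in range(n_steps):
        # Metropolis step targeting |psi|^2 ~ exp(-2 alpha r)
        xn = x + rng.uniform(-step, step)
        yn = y + rng.uniform(-step, step)
        zn = z + rng.uniform(-step, step)
        rn = math.sqrt(xn * xn + yn * yn + zn * zn)
        if rng.random() < math.exp(-2.0 * alpha * (rn - r)):
            x, y, z, r = xn, yn, zn, rn
        e_sum += -0.5 * alpha * alpha + (alpha - 1.0) / r
    return e_sum / n_steps
```

    Varying alpha away from 1 raises the estimated energy, as the variational principle requires.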

  3. SU-E-T-584: Commissioning of the MC2 Monte Carlo Dose Computation Engine

    SciTech Connect (OSTI)

    Titt, U; Mirkovic, D; Liu, A; Ciangaru, G; Mohan, R; Anand, A; Perles, L

    2014-06-01

    Purpose: An automated system, MC2, was developed to convert DICOM proton therapy treatment plans into a sequence of MCNPX input files and submit these to a computing cluster. MC2 converts the results into DICOM format, so any treatment planning system can import the data for comparison with conventional dose predictions. This work describes the data and the efforts made to validate the MC2 system against measured dose profiles, and how the system was calibrated to predict the correct number of monitor units (MUs) to deliver the prescribed dose. Methods: A set of simulated lateral and longitudinal profiles was compared to data measured for commissioning purposes and during annual quality assurance efforts. Acceptance criteria were relative dose differences smaller than 3% and differences in range (in water) of less than 2 mm. For two out of three double-scattering beam lines, validation results were already published; spot checks were performed to assure proper performance. For the small snout, all available measurements were used for validation against simulated data. To calibrate the dose per MU, the energy deposition per source proton at the center of the spread-out Bragg peaks (SOBPs) was recorded for a set of SOBPs from each option. These were then scaled to the results of dose-per-MU determination based on published methods. The simulations of the doses in the magnetically scanned beam line were also validated against measured longitudinal and lateral profiles, and the source parameters were fine-tuned to achieve maximum agreement with measured data. The dosimetric calibration was performed by scoring energy deposition per proton and scaling the results to a standard dose measurement of a 10 x 10 x 10 cm3 volume irradiation using 100 MU. Results: All simulated data passed the acceptance criteria. Conclusion: MC2 is fully validated and ready for clinical application.
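    The dose-per-MU calibration step described above amounts to a simple scaling. A hedged sketch with placeholder numbers follows; the energy-deposition score, reference dose, and MU count are illustrative, not values from the abstract:

```python
def protons_per_mu(sim_gy_per_proton, measured_dose_gy, delivered_mu):
    """Number of source protons corresponding to one monitor unit,
    obtained by scaling a simulated energy-deposition score
    (Gy per source proton) to a reference measurement.
    All inputs are placeholders, not values from the paper."""
    return measured_dose_gy / (delivered_mu * sim_gy_per_proton)

def predicted_dose_gy(mu, sim_gy_per_proton, cal_protons_per_mu):
    """Absolute dose predicted for a segment delivering `mu` MU."""
    return mu * cal_protons_per_mu * sim_gy_per_proton
```

    For example, a score of 1e-11 Gy/proton together with a 2 Gy reference delivery over 100 MU yields a calibration of 2e9 protons per MU.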

  4. The LSNDMC Monte Carlo

    SciTech Connect (OSTI)

    McIlhany, K.; Whitehouse, D.; Smith, D.; Eisner, A.M.; Wang, Y.X.

    1994-12-31

    A Monte Carlo program describing the response of the Liquid Scintillation Neutrino Detector (LSND) at the Los Alamos Meson Physics Facility (LAMPF) was written using the GEANT geometry and simulation package. Neutrino interactions were simulated in the detector through the production of Cerenkov and scintillation light in the range of 2-3 eV. Since GEANT does not normally track photons to electron-volt energies, the tracking program (TRAK) was modified to produce both Cerenkov and scintillator light, the latter being simulated using the Birks equation. The LSND Monte Carlo program was used to predict the quantity of scintillator (b-PBD) in the mineral oil needed to provide a light-output ratio of roughly 4:1 between scintillation and Cerenkov light.

  5. Quantum Monte Carlo for the Electronic Structure of Atoms and Molecules

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Quantum Monte Carlo for the Electronic Structure of Atoms and Molecules. Brian Austin, Lester Group, U.C. Berkeley. BES Requirements Workshop, Rockville, MD, February 9, 2010. Outline: applying QMC to diverse chemical systems; selecting systems with high interest and impact (phenol: bond dissociation energy; retinal: excitation energy); algorithmic details (parallel strategy; wave function evaluation). O-H bond dissociation energy of phenol: Ph-OH → Ph-O• + H• (36 valence electrons).

  6. MCMini: Monte Carlo on GPGPU

    SciTech Connect (OSTI)

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  7. SU-E-T-71: Commissioning and Acceptance Testing of a Commercial Monte Carlo Electron Dose Calculation Model (eMC) for TrueBeam

    SciTech Connect (OSTI)

    Sheu, R; Tseng, T; Powers, A; Lo, Y

    2014-06-01

    Purpose: To provide commissioning and acceptance test data of the Varian Eclipse electron Monte Carlo model (eMC v.11) for the TrueBeam linac. We also investigated the uncertainties in beam model parameters and dose calculation results for different geometric configurations. Methods: For beam commissioning, a PTW CC13 thimble chamber and an IBA Blue Phantom2 were used to collect PDD and dose profiles in air. Cone factors were measured with a parallel plate chamber (PTW N23342) in solid water. GafChromic EBT3 films were used for dose calculation verification against parallel plate chamber results in the following test geometries: oblique incidence, extended distance, small cutouts, elongated cutouts, irregular surface, and heterogeneous layers. Results: Four electron energies (6e, 9e, 12e, and 15e) and five cones (6×6, 10×10, 15×15, 20×20, and 25×25) with standard cutouts were calculated for different grid sizes (1, 1.5, 2, and 2.5 mm) and compared with chamber measurements. The results showed that calculations performed with a coarse grid size underestimated the absolute dose, and the underestimation decreased as energy increased. For 6e, the underestimation (max 3.3%) was greater than the statistical uncertainty level (3%) and was systematically observed for all cone sizes. By using a 1 mm grid size, all the calculation results agreed with measurements within 5% for all test configurations. The calculations took 21 s and 46 s for 6e and 15e (2.5 mm grid size), respectively, distributed on 4 calculation servers. Conclusion: In general, commissioning the eMC dose calculation model on TrueBeam is straightforward, and the dose calculation is in good agreement with measurements for all test cases. Monte Carlo dose calculation provides more accurate results, which improves treatment planning quality. However, the normally acceptable grid size (2.5 mm) would cause systematic underestimation in absolute dose calculation for lower energies, such as 6e. Users need to be cautious in this

  8. Monte Carlo simulation for the transport beamline

    SciTech Connect (OSTI)

    Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.

    2013-07-26

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.

  9. Isotropic Monte Carlo Grain Growth

    Energy Science and Technology Software Center (OSTI)

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
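    A minimal Potts-model sketch conveys the algorithm class the summary describes: grain orientations on a hexagonal grid (axial-coordinate neighbours, periodic boundaries) with an isotropic boundary energy. This is an illustrative toy, not the IMCGG code:

```python
import random

def mc_grain_growth(n=32, q=16, n_sweeps=20, seed=0):
    """Zero-temperature Potts-model grain growth on a hexagonal grid
    (axial coordinates) with periodic boundaries.  Returns the total
    grain-boundary energy before and after the sweeps.
    (Illustrative sketch, not the IMCGG implementation.)"""
    rng = random.Random(seed)
    spins = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]
    # the six hexagonal neighbour offsets in axial coordinates
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

    def site_energy(i, j, s):
        # isotropic energy: number of unlike neighbours
        return sum(s != spins[(i + di) % n][(j + dj) % n] for di, dj in nbrs)

    def total_energy():
        return sum(site_energy(i, j, spins[i][j])
                   for i in range(n) for j in range(n))

    e_before = total_energy()
    for _ in range(n_sweeps * n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        new = rng.randrange(q)
        # accept any reorientation that does not raise the boundary energy
        if site_energy(i, j, new) <= site_energy(i, j, spins[i][j]):
            spins[i][j] = new
    return e_before, total_energy()
```

    Accepting equal-energy flips lets boundaries migrate, which is what drives grain coarsening in this class of model.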

  10. Exact Monte Carlo for molecules

    SciTech Connect (OSTI)

    Lester, W.A. Jr.; Reynolds, P.J.

    1985-03-01

    A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H2, and the singlet-triplet splitting in methylene are presented and discussed. 17 refs.

  11. Monte Carlo Simulations of APEX

    SciTech Connect (OSTI)

    Xu, G.

    1995-10-01

    Monte Carlo simulations of the APEX apparatus, a spectrometer designed to measure positron-electron pairs produced in heavy-ion collisions, carried out using GEANT are reported. The results of these simulations are compared with data from measurements of conversion electron, positron and pair-emitting sources as well as with the results of in-beam measurements of positrons and electrons. The overall description of the performance of the apparatus is excellent.

  12. Optimized nested Markov chain Monte Carlo sampling: theory (Conference...

    Office of Scientific and Technical Information (OSTI)

    Optimized nested Markov chain Monte Carlo sampling: theory. Metropolis Monte ...

  13. The Monte Carlo Independent Column Approximation Model Intercomparison

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Project (McMIP). Participants: Barker, Howard (Meteorological Service of Canada); Cole, Jason (Meteorological Service of Canada); Raisanen, Petri (Finnish Meteorological Institute); Pincus, Robert (NOAA-CIRES Climate Diagnostics Center); Morcrette, Jean-Jacques (European Centre for Medium-Range Weather Forecasts); Li, Jiangnan (Canadian Center for Climate Modelling); Stephens, Graeme (Colorado State University); Vaillancourt, Paul

  14. Energy Monte Carlo (EMCEE) | Open Energy Information

    Open Energy Info (EERE)

    with a specific set of distributions. Both programs run as spreadsheet workbooks in Microsoft Excel. EMCEE and Emc2 require Crystal Ball, a commercially available Monte Carlo...

  15. Monte%20Carlo.jpg | OSTI, US Dept of Energy Office of Scientific and

    Office of Scientific and Technical Information (OSTI)

    Technical Information Monte%20Carlo

  16. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    SciTech Connect (OSTI)

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  17. Element Agglomeration Algebraic Multilevel Monte-Carlo Library

    Energy Science and Technology Software Center (OSTI)

    2015-02-19

    ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski and currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.

  18. A hybrid Monte Carlo method for equilibrium equation of state...

    Office of Scientific and Technical Information (OSTI)

    MONTE CARLO SIMULATION METHODS Benchmark for perturbation theory methods NPT, NVT - single ... EXPLOSIVES; MIXTURES; MONTE CARLO METHOD; PERTURBATION THEORY; SHOCK WAVES; SIMULATION

  19. Monte-Carlo particle dynamics in a variable specific impulse...

    Office of Scientific and Technical Information (OSTI)

    Monte-Carlo particle dynamics in a variable specific impulse magnetoplasma rocket

  20. Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator...

    Office of Scientific and Technical Information (OSTI)

    Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator Physics

  1. Fundamentals of Monte Carlo (Technical Report) | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    Fundamentals of Monte Carlo. Authors: Wollaber, Allan Benton, Los Alamos ...

  2. Monte Carlo Ion Transport Analysis Code.

    Energy Science and Technology Software Center (OSTI)

    2009-04-15

    Version: 00. TRIPOS is a versatile Monte Carlo ion transport analysis code. It has been applied to the treatment of both surface and bulk radiation effects. The media considered are multilayer polyatomic materials.

  3. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments [OSTI]

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  4. Quantitative Monte Carlo-based holmium-166 SPECT reconstruction

    SciTech Connect (OSTI)

    Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de; Viergever, Max A.

    2013-11-15

    Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 ({sup 166}Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative {sup 166}Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum.Methods: A fast Monte Carlo (MC) simulator was developed for simulation of {sup 166}Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full {sup 166}Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (A{sup est}) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six {sup 166}Ho RE patients.Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC). ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80

  5. Optimal sampling efficiency in Monte Carlo sampling with an approximat...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Optimal sampling efficiency in Monte Carlo sampling with an approximate potential

  6. Quantum Monte Carlo by message passing

    SciTech Connect (OSTI)

    Bonca, J.; Gubernatis, J.E.

    1993-01-01

    We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.

  7. Quantum Monte Carlo by message passing

    SciTech Connect (OSTI)

    Bonca, J.; Gubernatis, J.E.

    1993-05-01

    We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.

  8. MC

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    MC performance study for an early νμ charged-current inclusive analysis with MicroBooNE MICROBOONE-NOTE-1004-PUB The MicroBooNE Collaboration November 6, 2015 Abstract This note describes an analysis performed on Monte Carlo data to evaluate the sensitivity of MicroBooNE for an early νμ charged-current inclusive cross section measurement. Such an analysis is intended to be done using the first three months of Booster Neutrino Beam data. The event selection is entirely based on an automated

  9. Four decades of implicit Monte Carlo

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Wollaber, Allan B.

    2016-04-25

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  10. Status of Monte-Carlo Event Generators

    SciTech Connect (OSTI)

    Hoeche, Stefan; /SLAC

    2011-08-11

    Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state partons and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.

  11. A Monte Carlo algorithm for degenerate plasmas

    SciTech Connect (OSTI)

    Turrell, A.E.; Sherlock, M.; Rose, S.J.

    2013-09-15

    A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi-Dirac distribution function, and scattering is via a Pauli-blocked binary collision approximation. The algorithm is tested against degenerate electron-ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
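    The "initialise according to the Fermi-Dirac distribution" step can be sketched with rejection sampling. The chemical potential, temperature, and energy cut-off below are illustrative choices, not values from the paper:

```python
import math
import random

def sample_fermi_dirac(n, mu=1.0, temp=0.1, e_max=3.0, seed=0):
    """Rejection-sample particle energies from
    f(E) ~ sqrt(E) / (exp((E - mu)/temp) + 1)  for 0 <= E <= e_max,
    i.e. a Fermi-Dirac occupation times a 3D density of states.
    Units and parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    bound = math.sqrt(e_max)   # envelope: f(E) <= sqrt(E) <= sqrt(e_max)
    samples = []
    while len(samples) < n:
        e = rng.uniform(0.0, e_max)
        f = math.sqrt(e) / (math.exp((e - mu) / temp) + 1.0)
        if rng.random() * bound < f:
            samples.append(e)
    return samples
```

    In the strongly degenerate limit the mean sampled energy approaches 3/5 of the chemical potential, a quick sanity check on the sampler.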

  12. Quantum Monte Carlo methods for nuclear physics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.

    2014-10-19

    Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  13. Quantum Monte Carlo methods for nuclear physics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  14. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect (OSTI)

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  15. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect (OSTI)

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.

    2014-10-19

    Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  16. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    SciTech Connect (OSTI)

    WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.

    2007-01-10

    MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  17. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect (OSTI)

    Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.

    2014-10-01

    We present a multilevel Monte Carlo numerical method, new to plasma physics, that is highly efficient for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε{sup −2}) or O(ε{sup −2}(lnε){sup 2}), depending on whether the underlying discretization is Milstein or Euler–Maruyama, respectively. This is to be contrasted with a cost of O(ε{sup −3}) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε=10{sup −5}. We discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
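    The multilevel idea of combining coupled coarse and fine timestep solutions can be shown on a generic stochastic differential equation. The sketch below is a standard MLMC estimator for geometric Brownian motion with Euler-Maruyama, not the Coulomb-collision code of the paper; level l uses 2**l timesteps and the coarse and fine paths share Brownian increments so the level corrections have small variance:

```python
import math
import random

def mlmc_gbm(n_levels=4, n0=2000, T=1.0, s0=1.0, mu=0.05, sigma=0.2, seed=0):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian
    motion dS = mu*S*dt + sigma*S*dW via Euler-Maruyama (generic MLMC
    sketch; all parameters are illustrative)."""
    rng = random.Random(seed)

    def level_estimator(l, n):
        m_f = 2 ** l              # fine timesteps on level l
        dt_f = T / m_f
        acc = 0.0
        for _ in range(n):
            s_f, s_c, dw_c = s0, s0, 0.0
            for k in range(m_f):
                dw = rng.gauss(0.0, math.sqrt(dt_f))
                s_f += mu * s_f * dt_f + sigma * s_f * dw
                if l > 0:
                    dw_c += dw
                    if k % 2 == 1:   # one coarse step per two fine steps
                        s_c += mu * s_c * (2.0 * dt_f) + sigma * s_c * dw_c
                        dw_c = 0.0
            acc += s_f - (s_c if l > 0 else 0.0)
        return acc / n

    # spend most samples on the cheap coarse levels, few on fine ones
    return sum(level_estimator(l, max(n0 // 2 ** l, 100)) for l in range(n_levels))
```

    The telescoping sum of level corrections reproduces the finest-level expectation at a fraction of the cost of sampling the fine discretization directly, which is the source of the O(ε^-2) versus O(ε^-3) scaling quoted above.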

  18. Cluster expansion modeling and Monte Carlo simulation of alnico...

    Office of Scientific and Technical Information (OSTI)

    Accepted Manuscript: Cluster expansion modeling and Monte Carlo simulation of alnico 5-7 permanent magnets This content will become publicly available on March 5, 2016 Prev Next...

  19. Evaluation of Monte Carlo Electron-Transport Algorithms in the...

    Office of Scientific and Technical Information (OSTI)

    Series Codes for Stochastic-Media Simulations. Citation Details In-Document Search Title: Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series ...

  20. Quantum Monte Carlo Calculations of Light Nuclei Using Chiral...

    Office of Scientific and Technical Information (OSTI)

    Title: Quantum Monte Carlo Calculations of Light Nuclei Using Chiral Potentials Authors: Lynn, J. E. ; Carlson, J. ; Epelbaum, E. ; Gandolfi, S. ; Gezerlis, A. ; Schwenk, A. ...

  1. Molecular Monte Carlo Simulations Using Graphics Processing Units...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors....

  2. HILO: Quasi Diffusion Accelerated Monte Carlo on Hybrid Architectures

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    fidelity simulation of a diverse range of kinetic systems. Available for thumbnail of Feynman Center (505) 665-9090 Email HILO: Quasi Diffusion Accelerated Monte Carlo on Hybrid...

  3. Fuel temperature reactivity coefficient calculation by Monte Carlo perturbation techniques

    SciTech Connect (OSTI)

    Shim, H. J.; Kim, C. H.

    2013-07-01

    We present an efficient method to estimate the fuel temperature reactivity coefficient (FTC) by the Monte Carlo adjoint-weighted correlated sampling method. In this method, a fuel temperature change is regarded as variations of the microscopic cross sections and the temperature in the free gas model which is adopted to correct the asymptotic double differential scattering kernel. The effectiveness of the new method is examined through the continuous energy MC neutronics calculations for PWR pin cell problems. The isotope-wise and reaction-type-wise contributions to the FTCs are investigated for two free gas models - the constant scattering cross section model and the exact model. It is shown that the proposed method can efficiently predict the reactivity change due to the fuel temperature variation. (authors)

  4. Exploring theory space with Monte Carlo reweighting

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
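The reweighting trick described above amounts to attaching importance weights w(x) = p_new(x)/p_old(x) to events simulated once under a benchmark model. A minimal sketch, with toy exponential densities standing in for the matrix-element ratios of a real analysis:

```python
import numpy as np

# Simulate events ONCE under a benchmark model p0 = Exp(lam0).
rng = np.random.default_rng(1)
lam0, lam1 = 1.0, 2.0
events = rng.exponential(1.0 / lam0, 100000)

def reweight_mean(x, lam_new, lam_old):
    """Self-normalized estimate of E[x] under Exp(lam_new), using
    samples drawn from Exp(lam_old) and weights w = p_new/p_old."""
    w = (lam_new * np.exp(-lam_new * x)) / (lam_old * np.exp(-lam_old * x))
    return np.sum(w * x) / np.sum(w)

# Reuse the benchmark sample for a different model point:
est = reweight_mean(events, lam1, lam0)   # true mean under Exp(2) is 0.5
```

The same sample can thus serve an entire scan over model points, provided the new and benchmark densities overlap enough that the weights stay well behaved.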

  5. Exploring theory space with Monte Carlo reweighting

    SciTech Connect (OSTI)

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  6. Exploring theory space with Monte Carlo reweighting

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. In particular, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  7. Monte Carlo Implementation Of Up- Or Down-Scattering Due To Collisions...

    Office of Scientific and Technical Information (OSTI)

    Monte Carlo Implementation Of Up- Or Down-Scattering Due To Collisions With Material At Finite Temperature Citation Details In-Document Search Title: Monte Carlo Implementation Of ...

  8. Recent advances and future prospects for Monte Carlo

    SciTech Connect (OSTI)

    Brown, Forrest B

    2010-01-01

The history of Monte Carlo methods is closely linked to that of computers: The first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.

  9. Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media

    SciTech Connect (OSTI)

Müller, Florian; Jenny, Patrick; Meyer, Daniel W.

    2013-10-01

Monte Carlo (MC) is a well known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.

  10. Monte Carlo Hauser-Feshbach Calculations of Prompt Fission Neutrons...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Monte Carlo Hauser-Feshbach Calculations of Prompt Fission Neutrons and Gamma Rays: Application to Thermal Neutron-Induced Fission Reactions on U-235 and Pu-239 ...

  11. Generalizing the self-healing diffusion Monte Carlo approach...

    Office of Scientific and Technical Information (OSTI)

    Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: A path for the optimization of low-energy many-body bases Citation Details In-Document Search ...

  12. Efficient Monte Carlo Simulations of Gas Molecules Inside Porous...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Efficient Monte Carlo Simulations of Gas Molecules Inside Porous Materials J. Kim and B. Smit, J. Chem. Theory Comput. 8 (7), 2336 (2012) DOI: 10.1021/ct3003699 ...

  13. Fast Monte Carlo for radiation therapy: the PEREGRINE Project (Conference)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Fast Monte Carlo for radiation therapy: the PEREGRINE Project Citation Details In-Document Search Title: Fast Monte Carlo for radiation therapy: the PEREGRINE Project

  14. Monte Carlo Hybrid Applied to Binary Stochastic Mixtures

    Energy Science and Technology Software Center (OSTI)

    2008-08-11

The purpose of this set of codes is to use an inexpensive, approximate deterministic flux distribution to generate weight windows, which will then be used to bound particle weights for the Monte Carlo code run. The process is not automated; the user must run the deterministic code and use the output file as a command-line argument for the Monte Carlo code. Two sets of text input files are included as test problems/templates.
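A minimal sketch of the weight-window bounding described above, with a hypothetical `apply_weight_window` helper (the split and Russian-roulette rules are the standard textbook ones, not this package's exact logic): particles above the window are split, particles below it play roulette, and expected weight is conserved either way.

```python
import numpy as np

def apply_weight_window(weight, w_low, w_high, rng):
    """Return the list of particle weights that replace one particle
    so that every surviving weight lies inside [w_low, w_high]."""
    if weight > w_high:                          # too heavy: split
        n = int(np.ceil(weight / w_high))
        return [weight / n] * n                  # n lighter copies, weight conserved
    if weight < w_low:                           # too light: Russian roulette
        survive_w = (w_low + w_high) / 2.0       # weight if the particle survives
        if rng.random() < weight / survive_w:    # survival odds preserve the mean
            return [survive_w]
        return []                                # killed
    return [weight]                              # inside the window: unchanged
```

Splitting spends more histories where the deterministic estimate says they matter, while roulette trims histories with negligible weight without biasing the tally mean.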

  15. Multiscale Monte Carlo equilibration: Pure Yang-Mills theory

    SciTech Connect (OSTI)

    Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.

    2015-12-29

    In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.

  16. Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics.

    SciTech Connect (OSTI)

    Seker, V.; Thomas, J. W.; Downar, T. J.; Purdue Univ.

    2007-01-01

A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code DeCART. Good agreement in the k-eff and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high fidelity modeling of nuclear reactor cores during the last several years. High-fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport method to CFD and their application to practical reactor analysis problems. One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic neutron

  17. Quantum Process Matrix Computation by Monte Carlo

    Energy Science and Technology Software Center (OSTI)

    2012-09-11

    The software package, processMC, is a python script that allows for the rapid modeling of small , noisy quantum systems and the computation of the averaged quantum evolution map.

  18. Quantum Monte Carlo for electronic structure: Recent developments and applications

    SciTech Connect (OSTI)

    Rodriquez, M. M.S.

    1995-04-01

Quantum Monte Carlo (QMC) methods have been found to give excellent results when applied to chemical systems. The main goal of the present work is to use QMC to perform electronic structure calculations. In QMC, a Monte Carlo simulation is used to solve the Schroedinger equation, taking advantage of its analogy to a classical diffusion process with branching. In the present work the author focuses on how to extend the usefulness of QMC to more meaningful molecular systems. This study is aimed at questions concerning polyatomic and large atomic number systems. The accuracy of the solution obtained is determined by the accuracy of the trial wave function's nodal structure. Efforts in the group have given great emphasis to finding optimized wave functions for the QMC calculations. Little work had been done by systematically looking at a family of systems to see how the best wave functions evolve with system size. In this work the author presents a study of trial wave functions for C, CH, C₂H and C₂H₂. The goal is to study how to build wave functions for larger systems by accumulating knowledge from the wave functions of its fragments as well as gaining some knowledge on the usefulness of multi-reference wave functions. In a MC calculation of a heavy atom, for reasonable time steps most moves for core electrons are rejected. For this reason true equilibration is rarely achieved. A method proposed by Batrouni and Reynolds modifies the way the simulation is performed without altering the final steady-state solution. It introduces an acceleration matrix chosen so that all coordinates (i.e., of core and valence electrons) propagate at comparable speeds. A study of the results obtained using their proposed matrix suggests that it may not be the optimum choice. In this work the author has found that the desired mixing of coordinates between core and valence electrons is not achieved when using this matrix. A bibliography of 175 references is included.

  19. Capabilities, Implementation, and Benchmarking of Shift, a Massively Parallel Monte Carlo Radiation Transport Code

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Pandya, Tara M; Johnson, Seth R; Evans, Thomas M; Davidson, Gregory G; Hamilton, Steven P; Godfrey, Andrew T

    2016-01-01

This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  20. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-12-21

This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  1. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    SciTech Connect (OSTI)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  2. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    SciTech Connect (OSTI)

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin

    2015-12-31

The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.

  3. Bayesian Monte Carlo Method for Nuclear Data Evaluation

    SciTech Connect (OSTI)

    Koning, A.J.

    2015-01-15

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment based weight.
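The weighting idea can be sketched with a toy linear model standing in for TALYS (the model, data, and parameter names here are illustrative assumptions): draw random parameter sets from a prior, score each against pseudo-experimental data with χ², and weight by exp(−χ²/2).

```python
import numpy as np

rng = np.random.default_rng(7)

# Pseudo-experiment: y = a*x with true a = 1.5 and 5% absolute noise.
x = np.linspace(0.0, 1.0, 20)
true_a = 1.5
sigma = 0.05
data = true_a * x + rng.normal(0.0, sigma, x.size)

# "Random files": 5000 parameter sets drawn from a flat prior on a.
samples = rng.uniform(0.0, 3.0, 5000)
chi2 = np.array([np.sum((a * x - data) ** 2) / sigma ** 2 for a in samples])

# Experiment-based weights; subtracting the minimum avoids exp underflow.
weights = np.exp(-0.5 * (chi2 - chi2.min()))
a_post = np.sum(weights * samples) / np.sum(weights)  # posterior mean of a
```

The weighted ensemble of parameter sets plays the role of the abstract's collection of random files with experiment-based weights; weighted moments over it give the evaluated central value and covariance.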

  4. Monte Carlo event generators for hadron-hadron collisions

    SciTech Connect (OSTI)

    Knowles, I.G.; Protopopescu, S.D.

    1993-06-01

    A brief review of Monte Carlo event generators for simulating hadron-hadron collisions is presented. Particular emphasis is placed on comparisons of the approaches used to describe physics elements and identifying their relative merits and weaknesses. This review summarizes a more detailed report.

  5. Monte-Carlo simulation of noise in hard X-ray Transmission Crystal...

    Office of Scientific and Technical Information (OSTI)

    Monte-Carlo simulation of noise in hard X-ray Transmission Crystal Spectrometers: ... Title: Monte-Carlo simulation of noise in hard X-ray Transmission Crystal Spectrometers: ...

  6. DOE Science Showcase - Monte Carlo Methods | OSTI, US Dept of Energy Office

    Office of Scientific and Technical Information (OSTI)

of Scientific and Technical Information Monte Carlo Methods Monte Carlo calculation methods are algorithms for solving various kinds of computational problems by using (pseudo)random numbers. Developed in the 1940s during the Manhattan Project, the Monte Carlo method signified a radical change in how scientists solved problems. Learn about the ways these methods are used in DOE's research endeavors today in "Monte Carlo Methods" by Dr. William Watson, Physicist, OSTI staff.

  7. Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem

    SciTech Connect (OSTI)

    Du, X.; Liu, T.; Ji, W.; Xu, X. G.; Brown, F. B.

    2013-07-01

Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', which means that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely affects the performance of GPUs. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - was used for this study. ARCHER facilitates the development and testing of a MC code based on the vectorized MC algorithm implemented on GPUs by using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER-GPU code was designed to solve a neutron eigenvalue problem and was tested on an NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances the warp execution efficiency, the overall simulation speed is ten times slower than the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large amount of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed. (authors)

  8. Monte Carlo simulations of channeling spectra recorded for samples containing complex defects

    SciTech Connect (OSTI)

    Jagielski, Jacek; Turos, Prof. Andrzej; Nowicki, Lech; Jozwik, P.; Shutthanandan, Vaithiyalingam; Zhang, Yanwen; Sathish, N.; Thome, Lionel; Stonert, A.; Jozwik-Biala, Iwona

    2012-01-01

The aim of the present paper is to describe the current status of the development of McChasy, a Monte Carlo simulation code, to make it suitable for the analysis of dislocations and dislocation loops in crystals. Factors such as the shape of the bent channel and geometrical distortions of the crystalline structure in the vicinity of dislocations have been discussed. The results obtained demonstrate that the new procedure applied to the spectra recorded on crystals containing dislocations yields damage profiles which are independent of the energy of the analyzing beam.

  9. Monte Carlo simulations of channeling spectra recorded for samples containing complex defects

    SciTech Connect (OSTI)

    Jagielski, Jacek K.; Turos, Andrzej W.; Nowicki, L.; Jozwik, Przemyslaw A.; Shutthanandan, V.; Zhang, Yanwen; Sathish, N.; Thome, Lionel; Stonert, A.; Jozwik Biala, Iwona

    2012-02-15

The main aim of the present paper is to describe the current status of the development of McChasy, a Monte Carlo simulation code, to make it suitable for the analysis of dislocations and dislocation loops in crystals. Factors such as the shape of the bent channel and geometrical distortions of the crystalline structure in the vicinity of dislocations have been discussed. Several examples of the analysis performed at different energies of analyzing ions are presented. The results obtained demonstrate that the new procedure applied to the spectra recorded on crystals containing dislocations yields damage profiles which are independent of the energy of the analyzing beam.

  10. Distributed Monte Carlo production for D0

    SciTech Connect (OSTI)

    Snow, Joel; /Langston U.

    2010-01-01

    The D0 collaboration uses a variety of resources on four continents to pursue a strategy of flexibility and automation in the generation of simulation data. This strategy provides a resilient and opportunistic system which ensures an adequate and timely supply of simulation data to support D0's physics analyses. A mixture of facilities, dedicated and opportunistic, specialized and generic, large and small, grid job enabled and not, are used to provide a production system that has adapted to newly developing technologies. This strategy has increased the event production rate by a factor of seven and the data production rate by a factor of ten in the last three years despite diminishing manpower. Common to all production facilities is the SAM (Sequential Access to Metadata) data-grid. Job submission to the grid uses SAMGrid middleware which may forward jobs to the OSG, the WLCG, or native SAMGrid sites. The distributed computing and data handling system used by D0 will be described and the results of MC production since the deployment of grid technologies will be presented.

  11. Monte Carlo calculations for r-process nucleosynthesis

    SciTech Connect (OSTI)

    Mumpower, Matthew Ryan

    2015-11-12

    A Monte Carlo framework is developed for exploring the impact of nuclear model uncertainties on the formation of the heavy elements. Mass measurements tightly constrain the macroscopic sector of FRDM2012. For r-process nucleosynthesis, it is necessary to understand the microscopic physics of the nuclear model employed. A combined approach of measurements and a deeper understanding of the microphysics is thus warranted to elucidate the site of the r-process.

  12. Quantum Monte Carlo Calculations in Nuclear Theory | Argonne Leadership

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Computing Facility Blue Gene/Q scaling This figure shows Blue Gene/Q scaling with respect to increasing number of nodes for calculations of the first isospin-1 state of ¹²C. The good multinode scaling is a result of the ADLB library. Quantum Monte Carlo Calculations in Nuclear Theory PI Name: Steven Pieper PI Email: spieper@anl.gov Institution: Argonne National Laboratory Allocation Program: ESP Year: 2015 Research Domain: Physics Tier 2 Code Development Project Numerical

  13. Calculations of pair production by Monte Carlo methods

    SciTech Connect (OSTI)

    Bottcher, C.; Strayer, M.R.

    1991-01-01

    We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.
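The core tool in the record above — Monte Carlo evaluation of a multi-dimensional integral — can be sketched with a toy integrand (a stand-in for illustration, not a Feynman diagram):

```python
import numpy as np

def mc_integrate(f, dim, n, rng):
    """Estimate the integral of f over the unit hypercube [0,1]^dim,
    returning the estimate and its one-sigma Monte Carlo error."""
    pts = rng.random((n, dim))             # n uniform sample points
    vals = f(pts)                          # integrand at each point
    return vals.mean(), vals.std() / np.sqrt(n)

rng = np.random.default_rng(3)
# Toy integrand: product of the 4 coordinates; exact value is (1/2)^4 = 0.0625.
est, err = mc_integrate(lambda p: p.prod(axis=1), dim=4, n=200000, rng=rng)
```

The 1/√n error scaling is independent of dimension, which is why the method remains practical for the high-dimensional integrals arising from multi-particle phase space.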

  14. Billion-atom synchronous parallel kinetic Monte Carlo simulations of critical 3D Ising systems

    SciTech Connect (OSTI)

    Martinez, E.; Monasterio, P.R.; Marian, J.

    2011-02-20

    An extension of the synchronous parallel kinetic Monte Carlo (spkMC) algorithm developed by Martinez et al. [J. Comp. Phys. 227 (2008) 3804] to discrete lattices is presented. The method solves the master equation synchronously by recourse to null events that keep all processors' time clocks current in a global sense. Boundary conflicts are resolved by adopting a chessboard decomposition into non-interacting sublattices. We find that the bias introduced by the spatial correlations attendant to the sublattice decomposition is within the standard deviation of serial calculations, which confirms the statistical validity of our algorithm. We have analyzed the parallel efficiency of spkMC and find that it scales consistently with problem size and sublattice partition. We apply the method to the calculation of scale-dependent critical exponents in billion-atom 3D Ising systems, with very good agreement with state-of-the-art multispin simulations.
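For orientation, here is a serial single-spin-flip Metropolis sketch on a small 3D Ising lattice. This shows only the elementary update that the synchronous parallel algorithm builds on, not the spkMC null-event or sublattice machinery itself, and all parameter choices are illustrative.

```python
import numpy as np

def metropolis_sweeps(L, beta, sweeps, rng):
    """Sweep an L x L x L periodic Ising lattice with Metropolis updates."""
    s = rng.choice(np.array([-1, 1]), size=(L, L, L))   # random initial spins
    for _ in range(sweeps):
        for _ in range(L ** 3):                          # one sweep = L^3 attempts
            i, j, k = rng.integers(0, L, size=3)
            nn = (s[(i + 1) % L, j, k] + s[i - 1, j, k]      # six nearest
                  + s[i, (j + 1) % L, k] + s[i, j - 1, k]    # neighbors with
                  + s[i, j, (k + 1) % L] + s[i, j, k - 1])   # periodic wrap
            dE = 2 * s[i, j, k] * nn                          # flip cost (J = 1)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):  # Metropolis rule
                s[i, j, k] = -s[i, j, k]
    return s

# Well above the critical point (beta << beta_c ~ 0.2217 for 3D Ising),
# the magnetization per spin should stay near zero.
rng = np.random.default_rng(11)
m_hot = abs(metropolis_sweeps(L=6, beta=0.05, sweeps=40, rng=rng).mean())
```

The parallel algorithm in the abstract distributes exactly this kind of update across processors, using null events and a chessboard sublattice decomposition to keep the per-processor clocks synchronized without biasing the dynamics.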

  15. Monte Carlo Simulation of Electron Transport in 4H- and 6H-SiC

    SciTech Connect (OSTI)

    Sun, C. C.; You, A. H.; Wong, E. K.

    2010-07-07

The Monte Carlo (MC) simulation of electron transport properties in the high electric field region in 4H- and 6H-SiC is presented. This MC model includes two non-parabolic conduction bands. Based on the material parameters, the electron scattering rates, including polar optical phonon scattering, optical phonon scattering and acoustic phonon scattering, are evaluated. The electron drift velocity, energy and free flight time are simulated as a function of applied electric field at an impurity concentration of 1×10¹⁸ cm⁻³ at room temperature. The simulated drift velocity with electric field dependence is in good agreement with experimental results found in the literature. The saturation velocities for both polytypes are close, but the scattering rates are much more pronounced for 6H-SiC. Our simulation model clearly shows complete electron transport properties in 4H- and 6H-SiC.

  16. Hybrid and Parallel Domain-Decomposition Methods Development to Enable Monte Carlo for Reactor Analyses

    SciTech Connect (OSTI)

    Wagner, John C; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Turner, John A

    2011-01-01

    This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method, which

  17. Hybrid and Parallel Domain-Decomposition Methods Development to Enable Monte Carlo for Reactor Analyses

    SciTech Connect (OSTI)

    Wagner, John C; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Turner, John A

    2010-01-01

    This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform "real" commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the "gold standard" for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. 
The hybrid method development is based on an extension of the FW-CADIS method

  18. MCViNE- An object oriented Monte Carlo neutron ray tracing simulation package

    SciTech Connect (OSTI)

    Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; Abernathy, Douglas L.; Lumsden, Mark D.; Winn, Barry L.; Aczel, Adam A.; Aivazis, Michael; Fultz, Brent

    2015-11-28

    MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example we used object oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features together in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages which facilitates porting instrument models from those codes. Furthermore it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.

  19. MCViNE- An object oriented Monte Carlo neutron ray tracing simulation package

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; Abernathy, Douglas L.; Lumsden, Mark D.; Winn, Barry L.; Aczel, Adam A.; Aivazis, Michael; Fultz, Brent

    2015-11-28

    MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example we used object oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features together in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages which facilitates porting instrument models from those codes. Furthermore it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.

  20. McPhD

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    McPhD is a research code designed to explore the applications of the Haskell programming language to Monte Carlo algorithms. June 29, 2013

  1. Monte Carlo semi-empirical model for Si(Li) x-ray detector: Differences between nominal and fitted parameters

    SciTech Connect (OSTI)

    Lopez-Pino, N.; Padilla-Cabal, F.; Garcia-Alvarez, J. A.; Vazquez, L.; D'Alessandro, K.; Correa-Alfonso, C. M.; Godoy, W.; Maidana, N. L.; Vanin, V. R.

    2013-05-06

    A detailed characterization of an X-ray Si(Li) detector was performed to obtain the energy dependence of its efficiency in the photon energy range of 6.4 - 59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters for the detector were used in the simulation. A complete Computerized Tomography (CT) scan of the detector made it possible to find the correct crystal dimensions and position inside the capsule. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.

  2. Properties of reactive oxygen species by quantum Monte Carlo

    SciTech Connect (OSTI)

    Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo

    2014-07-07

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N{sup 3} - N{sup 4}, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species by first principles.

  3. Coupled Monte Carlo neutronics and thermal hydraulics for power reactors

    SciTech Connect (OSTI)

    Bernnat, W.; Buck, M.; Mattes, M.; Zwermann, W.; Pasichnyk, I.; Velkov, K.

    2012-07-01

    The availability of high performance computing resources increasingly enables the use of detailed Monte Carlo models, even for full core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions and fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. The second problem arises with the preparation of corresponding temperature-dependent cross sections and thermal scattering laws. Only if these problems are solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g. water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependence of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications will be shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic Modular High Temperature reactor using THERMIX for thermal-hydraulics. (authors)
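    The interpolation strategy described in this abstract can be sketched in a few lines. The sketch below uses sqrt(T) weighting between tabulated library temperatures, a common heuristic for Doppler-broadened data; the paper's actual interpolation law is not specified in the abstract, and the cross-section values are made up for illustration.

    ```python
    import bisect

    def interpolate_xs(xs_by_temp, T):
        """Interpolate a cross section at temperature T (K) between the two
        nearest tabulated library temperatures, using sqrt(T) weighting
        (an assumption; the paper's exact scheme is not given here)."""
        temps = sorted(xs_by_temp)
        if T <= temps[0]:
            return xs_by_temp[temps[0]]
        if T >= temps[-1]:
            return xs_by_temp[temps[-1]]
        i = bisect.bisect_left(temps, T)
        t_lo, t_hi = temps[i - 1], temps[i]
        w = (T ** 0.5 - t_lo ** 0.5) / (t_hi ** 0.5 - t_lo ** 0.5)
        return (1.0 - w) * xs_by_temp[t_lo] + w * xs_by_temp[t_hi]

    # Hypothetical capture cross sections (barns) tabulated at 3 temperatures:
    xs = {300.0: 2.70, 600.0: 2.55, 900.0: 2.48}
    sigma = interpolate_xs(xs, 450.0)   # value between the 300 K and 600 K data
    ```

    Only the bracketing pair of library temperatures is touched, which is what keeps the number of pre-generated data sets small.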

  4. Optimization of Gutzwiller wave functions in quantum Monte Carlo

    SciTech Connect (OSTI)

    Koch, E.; Gunnarsson, O.; Martin, R.M.

    1999-06-01

    Gutzwiller functions are popular variational wave functions for correlated electrons in Hubbard models. Following the variational principle, we are interested in the Gutzwiller parameters that minimize, e.g., the expectation value of the energy. Rewriting the expectation value as a rational function in the Gutzwiller parameters, we find a very efficient way of performing that minimization. The method can be used to optimize general Gutzwiller-type wave functions both in variational and in fixed-node diffusion Monte Carlo. © 1999 The American Physical Society
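    The key observation, that the energy expectation is a rational function of the Gutzwiller parameters once a sample is fixed, can be illustrated with a reweighting sketch on synthetic data. Everything below (the double-occupancy counts d_i, the local energies E_i, and the scan over g) is invented for illustration; a real calculation would take d_i and E_i from a VMC run, and the paper minimizes the rational function directly rather than by a brute-force scan.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical Monte Carlo sample: for each configuration i we store its
    # double-occupancy count d_i and local energy E_i.  Reweighting with a
    # Gutzwiller factor g**(2*d_i) makes the energy expectation a ratio of
    # two polynomials in g, evaluable for any g without re-sampling.
    d = rng.integers(0, 4, size=2000)                      # made-up counts
    E = -1.0 + 0.5 * d + 0.1 * rng.standard_normal(2000)   # made-up energies

    def energy(g):
        w = g ** (2 * d)                    # Gutzwiller weight per sample
        return float(np.sum(w * E) / np.sum(w))

    # Crude scan over the variational parameter (illustration only):
    gs = np.linspace(0.05, 1.0, 200)
    g_opt = min(gs, key=energy)
    ```

    Since high double occupancy costs energy in this toy sample, the optimal g suppresses doubly occupied configurations.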

  5. Quantum Monte Carlo Simulation of Overpressurized Liquid {sup 4}He

    SciTech Connect (OSTI)

    Vranjes, L.; Boronat, J.; Casulleras, J.; Cazorla, C.

    2005-09-30

    A diffusion Monte Carlo simulation of superfluid {sup 4}He at zero temperature and pressures up to 275 bar is presented. Increasing the pressure beyond freezing ({approx}25 bar), the liquid enters the overpressurized phase in a metastable state. In this regime, we report results of the equation of state and the pressure dependence of the static structure factor, the condensate fraction, and the excited-state energy corresponding to the roton. Along this large pressure range, both the condensate fraction and the roton energy decrease but do not become zero. The roton energies obtained are compared with recent experimental data in the overpressurized regime.

  6. Communication: Water on hexagonal boron nitride from diffusion Monte Carlo

    SciTech Connect (OSTI)

    Al-Hamdani, Yasmine S.; Ma, Ming; Michaelides, Angelos; Alfè, Dario; Lilienfeld, O. Anatole von

    2015-05-14

    Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of -84 {+-} 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange-correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT.

  7. Cluster Monte Carlo simulations of the nematic-isotropic transition

    SciTech Connect (OSTI)

    Priezjev, N. V.; Pelcovits, Robert A.

    2001-06-01

    We report the results of simulations of the three-dimensional Lebwohl-Lasher model of the nematic-isotropic transition using a single cluster Monte Carlo algorithm. The algorithm, first introduced by Kunz and Zumbach to study two-dimensional nematics, is a modification of the Wolff algorithm for spin systems, and greatly reduces critical slowing down. We calculate the free energy in the neighborhood of the transition for systems up to linear size 70. We find a double well structure with a barrier that grows with increasing system size. We thus obtain an upper estimate of the value of the transition temperature in the thermodynamic limit.
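    For readers unfamiliar with the Wolff algorithm that Kunz and Zumbach modified, here is its plain Ising-model form on a periodic L^3 lattice. This is a minimal sketch of the underlying single-cluster idea only; the nematic (Lebwohl-Lasher) version embeds Ising variables via random reflections of the director field, which is not shown.

    ```python
    import math
    import random

    def wolff_update(spins, L, beta, rng):
        """One Wolff single-cluster update on an L^3 periodic Ising lattice.
        Bonds between aligned neighbors are activated with probability
        1 - exp(-2*beta); the grown cluster is flipped as a whole, which
        greatly reduces critical slowing down."""
        p_add = 1.0 - math.exp(-2.0 * beta)
        seed = rng.randrange(L ** 3)
        s0 = spins[seed]
        cluster = {seed}
        stack = [seed]
        while stack:
            i = stack.pop()
            x, y, z = i % L, (i // L) % L, i // (L * L)
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                j = ((x + dx) % L) + ((y + dy) % L) * L + ((z + dz) % L) * L * L
                if j not in cluster and spins[j] == s0 and rng.random() < p_add:
                    cluster.add(j)
                    stack.append(j)
        for i in cluster:          # flip the whole cluster at once
            spins[i] = -spins[i]
        return len(cluster)

    rng = random.Random(1)
    L = 6
    spins = [1] * L ** 3
    for _ in range(50):
        wolff_update(spins, L, beta=0.5, rng=rng)
    ```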

  8. A Post-Monte-Carlo Sensitivity Analysis Code

    Energy Science and Technology Software Center (OSTI)

    2000-04-04

    SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e. with lower variance. The code identifies a group of sensitive variables, ranks them in the order of importance, and also quantifies the relative importance among the sensitive variables.
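    A minimal stand-in for this kind of variance-based ranking uses squared correlation coefficients on a Monte Carlo sample. SATOOL's actual algorithm is not described in the abstract; the inputs, output model, and coefficients below are all invented for illustration.

    ```python
    import random

    random.seed(0)

    # Hypothetical Monte Carlo sample: three inputs, one output that depends
    # strongly on x1, weakly on x2, and not at all on x3.
    n = 5000
    xs = [[random.gauss(0, 1) for _ in range(3)] for _ in range(n)]
    y = [3.0 * a + 0.5 * b + random.gauss(0, 0.1) for a, b, c in xs]

    def corr2(u, v):
        """Squared Pearson correlation: a crude first-order indicator of how
        much of var(v) the input u accounts for (valid for linear models)."""
        m = len(u)
        mu, mv = sum(u) / m, sum(v) / m
        suv = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        suu = sum((a - mu) ** 2 for a in u)
        svv = sum((b - mv) ** 2 for b in v)
        return suv * suv / (suu * svv)

    scores = [corr2([row[i] for row in xs], y) for i in range(3)]
    ranking = sorted(range(3), key=lambda i: -scores[i])  # most sensitive first
    ```

    The ranking correctly places the high-variance-contribution input first, mirroring the "identify, rank, quantify" steps the abstract describes.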

  9. Application of Monte Carlo Methods in Molecular Targeted Radionuclide Therapy

    SciTech Connect (OSTI)

    Hartmann Siantar, C; Descalle, M-A; DeNardo, G L; Nigg, D W

    2002-02-19

    Targeted radionuclide therapy promises to expand the role of radiation beyond the treatment of localized tumors. This novel form of therapy targets metastatic cancers by combining radioactive isotopes with tumor-seeking molecules such as monoclonal antibodies and custom-designed synthetic agents. Ultimately, like conventional radiotherapy, the effectiveness of targeted radionuclide therapy is limited by the maximum dose that can be given to a critical, normal tissue, such as bone marrow, kidneys, and lungs. Because radionuclide therapy relies on biological delivery of radiation, its optimization and characterization are necessarily different than for conventional radiation therapy. We have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA treatment planning system. This system calculates patient-specific radiation dose estimates using a set of computed tomography scans to describe the 3D patient anatomy, combined with 2D (planar image) and 3D (SPECT, or single photon emission computed tomography) to describe the time-dependent radiation source. The accuracy of such a dose calculation is limited primarily by the accuracy of the initial radiation source distribution, overlaid on the patient's anatomy. This presentation provides an overview of MINERVA functionality for molecular targeted radiation therapy, and describes early validation and implementation results of Monte Carlo simulations.

  10. Monte Carlo Bayesian search for the plausible source of the Telescope...

    Office of Scientific and Technical Information (OSTI)

    Title: Monte Carlo Bayesian search for the plausible source of the Telescope Array hotspot Authors: He, Hao-Ning ; Kusenko, Alexander ; Nagataki, Shigehiro ; Zhang, Bin-Bin ; Yang, ...

  11. Monte Carlo Fundamentals F. B. BROWN and T. M. SUTTON

    Office of Scientific and Technical Information (OSTI)

    or reflect those of the United States Government or any ... Monte Carlo approach: Generate a sequence of states, (pol ... particle from birth to death. During the particle ...

  12. OSTIblog Articles in the Monte Carlo Topic | OSTI, US Dept of Energy Office

    Office of Scientific and Technical Information (OSTI)

    of Scientific and Technical Information Monte Carlo Topic The Unbelievable Accuracy of the Monte Carlo Method by Kathy Chambers 18 Jan, 2013 in Science Communications The year was 1945, the year I was born. That in itself is of great significance to me. However, it was a momentous year in history. World War II came to its merciful end and the development of the first electronic computer - the

  13. Hybrid Deterministic/Monte Carlo Solutions to the Neutron Transport k-Eigenvalue Problem with a Comparison to Pure Monte Carlo Solutions

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Hybrid Deterministic/Monte Carlo Solutions to the Neutron Transport k-Eigenvalue Problem with a Comparison to Pure Monte Carlo Solutions Jeffrey A. Willert Los Alamos National Laboratory September 16, 2013 Joint work with: Dana Knoll (LANL), Ryosuke Park (LANL), and C. T. Kelley (NCSU) CASL-U-2013-0309-000 1 Introduction 2 Nonlinear Diffusion Acceleration for k-Eigenvalue Problems 3 Hybrid Methods 4 Classic Monte Carlo

  14. ACCELERATING FUSION REACTOR NEUTRONICS MODELING BY AUTOMATIC COUPLING OF HYBRID MONTE CARLO/DETERMINISTIC TRANSPORT ON CAD GEOMETRY

    SciTech Connect (OSTI)

    Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E

    2015-01-01

    Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).

  15. TH-A-19A-10: Fast Four Dimensional Monte Carlo Dose Computations for Proton Therapy of Lung Cancer

    SciTech Connect (OSTI)

    Mirkovic, D; Titt, U; Mohan, R; Yepes, P

    2014-06-15

    Purpose: To develop and validate a fast and accurate four dimensional (4D) Monte Carlo (MC) dose computation system for proton therapy of lung cancer and other thoracic and abdominal malignancies, in which the delivered dose distributions can be affected by respiratory motion of the patient. Methods: A 4D computed tomography (CT) scan for a lung cancer patient treated with protons in our clinic was used to create a time dependent patient model using our in-house, MCNPX-based Monte Carlo system (“MC{sup 2}”). The beam line configurations for the two passively scattered proton beams used in the actual treatment were extracted from the clinical treatment plan, and a set of input files was created automatically using MC{sup 2}. A full MC simulation of the beam line was computed using MCNPX, and a set of phase space files for each beam was collected at the distal surface of the range compensator. The particles from these phase space files were transported through the 10 voxelized patient models corresponding to the 10 phases of the breathing cycle in the 4DCT, using MCNPX and an accelerated (fast) MC code called “FDC”, developed by us and based on the track-repeating algorithm. The accuracy of the fast algorithm was assessed by comparing the two time dependent dose distributions. Results: Errors of less than 1% in 100% of the voxels in all phases of the breathing cycle were achieved using this method, with a speedup of more than 1000 times. Conclusion: The proposed method, which uses full MC to simulate the beam line and the accelerated MC code FDC for the time-consuming particle transport inside the complex, time dependent geometry of the patient, shows excellent accuracy together with extraordinary speed.

  16. Centrality measures highlight proton traps and access points to proton highways in kinetic Monte Carlo trajectories

    SciTech Connect (OSTI)

    Krueger, Rachel A.; Haibach, Frederick G.; Fry, Dana L.; Gomez, Maria A.

    2015-04-21

    A centrality measure based on the time of first returns rather than the number of steps is developed and applied to finding proton traps and access points to proton highways in the doped perovskite oxides: AZr{sub 0.875}D{sub 0.125}O{sub 3}, where A is Ba or Sr and the dopant D is Y or Al. The high centrality region near the dopant is wider in the SrZrO{sub 3} systems than the BaZrO{sub 3} systems. In the aluminum-doped systems, a region of intermediate centrality (secondary region) is found in a plane away from the dopant. Kinetic Monte Carlo (kMC) trajectories show that this secondary region is an entry to fast conduction planes in the aluminum-doped systems in contrast to the highest centrality area near the dopant trap. The yttrium-doped systems do not show this secondary region because the fast conduction routes are in the same plane as the dopant and hence already in the high centrality trapped area. This centrality measure complements kMC by highlighting key areas in trajectories. The limiting activation barriers found via kMC are in very good agreement with experiments and related to the barriers to escape dopant traps.
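    The first-return-time idea can be demonstrated on a toy kinetic Monte Carlo network. The hop rates below are invented placeholders; in the paper the rates come from proton-transfer barriers in the doped perovskite lattice. Sites to which the walker returns quickly act as traps (high centrality).

    ```python
    import random

    random.seed(2)

    # Toy kMC network: 4 sites with hypothetical hop rates (1/s);
    # rates[i][j] is the rate of hopping i -> j.
    rates = [
        [0.0, 1.0, 0.1, 0.0],
        [1.0, 0.0, 1.0, 0.1],
        [0.1, 1.0, 0.0, 1.0],
        [0.0, 0.1, 1.0, 0.0],
    ]

    def mean_first_return_time(start, n_trials=2000):
        """Average physical time (not step count) for a kMC walker to
        return to `start`: short return times flag trapping sites."""
        total = 0.0
        for _ in range(n_trials):
            site, t = start, 0.0
            while True:
                out = rates[site]
                k_tot = sum(out)
                t += random.expovariate(k_tot)       # kMC residence time
                r, acc = random.uniform(0.0, k_tot), 0.0
                for j, k in enumerate(out):          # pick the next site
                    acc += k
                    if r <= acc:
                        site = j
                        break
                if site == start:
                    break
            total += t
        return total / n_trials

    times = [mean_first_return_time(s) for s in range(4)]
    ```

    The well-connected interior sites (1 and 2) show shorter mean first-return times than the weakly connected end sites, which is the signal the centrality measure exploits.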

  17. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    SciTech Connect (OSTI)

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-05-20

    The development of hybrid Monte Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10{sup 3} - 10{sup 5} times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  18. Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data

    SciTech Connect (OSTI)

    Zourari, K.; Peppa, V.; Papagiannis, P.; Ballester, Facundo; Siebert, Frank-André

    2014-04-15

    Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20-1090 keV, through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thicknesses of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20-1090 keV, in 10 keV intervals). The three-parameter empirical model introduced by Archer et al. [Diagnostic x-ray shielding design based on an empirical model of photon attenuation, Health Phys. 44, 507-517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. [Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities, Med. Phys. 34, 1398-1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. Good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed. Conclusions: The data of
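    The Archer model and the superposition step can be sketched as follows. The (alpha, beta, gamma) values and the two-line spectrum below are illustrative placeholders, not the paper's fitted parameters.

    ```python
    import math

    def archer_transmission(x, alpha, beta, gamma):
        """Archer et al. three-parameter broad-beam transmission model:
        T(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]**(-1/gamma),
        with x the shield thickness."""
        r = beta / alpha
        return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

    # Hypothetical fitted parameter sets for two monoenergetic lines in lead
    # (illustrative numbers only): energy (keV) -> (alpha [1/cm], beta, gamma)
    params = {60.0: (40.0, 120.0, 0.8), 80.0: (15.0, 40.0, 0.9)}
    spectrum = {60.0: 0.7, 80.0: 0.3}      # relative spectral weights

    def broad_beam_transmission(x):
        """Superpose monoenergetic transmission curves over the spectrum."""
        num = sum(w * archer_transmission(x, *params[e])
                  for e, w in spectrum.items())
        return num / sum(spectrum.values())

    T = broad_beam_transmission(0.05)      # transmission through 0.5 mm of lead
    ```

    By construction T(0) = 1, and the polyenergetic curve is just the spectrum-weighted mix of the monoenergetic curves, which is the superposition the abstract describes.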

  19. Monte Carlo prompt dose calculations for the National Ignition Facility

    SciTech Connect (OSTI)

    Latkowski, J.F.; Phillips, T.W.

    1997-01-01

    During peak operation, the National Ignition Facility (NIF) will conduct as many as 600 experiments per year and attain deuterium-tritium fusion yields as high as 1200 MJ/yr. The radiation effective dose equivalent (EDE) to workers is limited to an average of 0.3 mSv/yr (30 mrem/yr) in occupied areas of the facility. Laboratory personnel located outside the facility will receive EDEs <= 0.5 mSv/yr (<= 50 mrem/yr). The total annual occupational EDE for the facility will be maintained at <= 0.1 person-Sv/yr (<= 10 person-rem/yr). To ensure that prompt EDEs meet these limits, three-dimensional Monte Carlo calculations have been completed.

  20. Quantum Monte Carlo simulation of spin-polarized H

    SciTech Connect (OSTI)

    Markic, L. Vranjes; Boronat, J.; Casulleras, J.

    2007-02-01

    The ground-state properties of spin polarized hydrogen H{down_arrow} are obtained by means of diffusion Monte Carlo calculations. Using the most accurate to date ab initio H{down_arrow}-H{down_arrow} interatomic potential we have studied its gas phase, from the very dilute regime until densities above its freezing point. At very small densities, the equation of state of the gas is very well described in terms of the gas parameter {rho}a{sup 3}, with a the s-wave scattering length. The solid phase has also been studied up to high pressures. The gas-solid phase transition occurs at a pressure of 173 bar, a much higher value than suggested by previous approximate descriptions.

  1. Peelle's pertinent puzzle using the Monte Carlo technique

    SciTech Connect (OSTI)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas; Pan, Feng

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form, in order to assess the impact of the assumed distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least-squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 {+-} 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct, if the common error is additive and the error is proportional to the measured values. The least-squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
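    The "thought-provoking" 0.88 can be reproduced with the standard PPP numbers: two measurements of the same quantity, 1.5 and 1.0, each with a 10% independent error plus a fully correlated 20% normalization error taken proportional to the measured values. This is a sketch of the textbook generalized least-squares calculation, not the paper's Monte Carlo machinery.

    ```python
    # Standard PPP setup: the GLS mean falls below both data points.
    y = [1.5, 1.0]
    stat = [0.10 * v for v in y]     # independent 10% errors
    norm = [0.20 * v for v in y]     # fully correlated 20% error

    # 2x2 covariance matrix and its inverse, done by hand to stay
    # dependency-free:
    V = [[stat[0] ** 2 + norm[0] ** 2, norm[0] * norm[1]],
         [norm[0] * norm[1], stat[1] ** 2 + norm[1] ** 2]]
    det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
    Vinv = [[V[1][1] / det, -V[0][1] / det],
            [-V[1][0] / det, V[0][0] / det]]

    # GLS estimate of a constant: mu = (u^T V^-1 y) / (u^T V^-1 u), u = (1, 1)
    num = sum(Vinv[i][j] * y[j] for i in range(2) for j in range(2))
    den = sum(Vinv[i][j] for i in range(2) for j in range(2))
    mu = num / den   # ~0.882, below both measurements
    ```

    The counter-intuitive result (mu below both data points) is exactly the anomaly the paper traces back to how the common error is modeled.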

  2. Optimization of Monte Carlo transport simulations in stochastic media

    SciTech Connect (OSTI)

    Liang, C.; Ji, W.

    2012-07-01

    This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
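    A miniature version of grid-accelerated Random Sequential Addition for mono-sized spheres can illustrate the packing step. This is a sketch only, with invented dimensions; the paper's method also handles poly-sized spheres and couples the neighbor search to neutron tracking.

    ```python
    import random

    def rsa_pack(n_spheres, radius, box, max_tries=200000, seed=0):
        """Random Sequential Addition of mono-sized spheres in a cubic box.
        A uniform grid with cell width >= one sphere diameter limits each
        overlap test to the 27 neighboring cells (the fast-neighbor-search
        idea, in miniature)."""
        rng = random.Random(seed)
        ncell = max(1, int(box / (2.0 * radius)))
        grid = {}                       # (ix, iy, iz) -> list of centers
        centers = []
        tries = 0
        while len(centers) < n_spheres and tries < max_tries:
            tries += 1
            p = tuple(rng.uniform(radius, box - radius) for _ in range(3))
            ix = tuple(min(int(c / box * ncell), ncell - 1) for c in p)
            ok = True
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        key = (ix[0] + dx, ix[1] + dy, ix[2] + dz)
                        for q in grid.get(key, ()):
                            if sum((a - b) ** 2 for a, b in zip(p, q)) < (2 * radius) ** 2:
                                ok = False
            if ok:
                centers.append(p)
                grid.setdefault(ix, []).append(p)
        return centers

    centers = rsa_pack(200, radius=0.03, box=1.0)
    ```

    Because cell width is at least one diameter, any overlapping sphere must sit in one of the 27 surrounding cells, so the test cost stays constant as the packing grows.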

  3. A study of Monte Carlo radiative transfer through fractal clouds

    SciTech Connect (OSTI)

    Gautier, C.; Lavallec, D.; O'Hirok, W.; Ricchiazzi, P.

    1996-04-01

    An understanding of radiation transport (RT) through clouds is fundamental to studies of the earth's radiation budget and climate dynamics. The transmission through horizontally homogeneous clouds has been studied thoroughly using accurate, discrete ordinates radiative transfer models. However, the applicability of these results to general problems of the global radiation budget is limited by the plane-parallel assumption and the fact that real cloud fields show variability, both vertically and horizontally, on all size scales. To understand how radiation interacts with realistic clouds, we have used a Monte Carlo radiative transfer model to compute the details of the photon-cloud interaction on synthetic cloud fields. The synthetic cloud fields, generated by a cascade model, reproduce the scaling behavior as well as the cloud variability observed in and estimated from satellite cloud data.
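The photon-tracking core of such a Monte Carlo radiative transfer model can be sketched for the simplest case, a homogeneous plane-parallel slab with isotropic conservative scattering (a toy stand-in for the fractal cloud fields; the optical depth and photon count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def slab_transmittance(tau, albedo=1.0, n_photons=20000):
    """Monte Carlo photon transport through a plane-parallel cloud slab of
    optical depth tau with isotropic scattering (a minimal sketch of the
    method, not the cascade-cloud model of the abstract)."""
    transmitted = 0
    for _ in range(n_photons):
        tau_z, mu = 0.0, 1.0              # optical-depth position; direction cosine
        while True:
            step = -np.log(rng.random())  # exponential free path in optical depth
            tau_z += mu * step
            if tau_z >= tau:
                transmitted += 1          # exits the cloud base
                break
            if tau_z < 0:
                break                     # reflected out of the cloud top
            if rng.random() > albedo:
                break                     # absorbed
            mu = rng.uniform(-1, 1)       # isotropic scatter: new direction cosine
    return transmitted / n_photons

T = slab_transmittance(1.0)
```

Replacing the single slab with a 3-D field of cell-by-cell extinction coefficients is what turns this sketch into a cloud-field calculation like the one described above.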

  4. Monte-Carlo Continuous Energy Burnup Code System.

    Energy Science and Technology Software Center (OSTI)

    2007-08-31

    Version 00 MCB is a Monte Carlo Continuous Energy Burnup Code for general-purpose use to calculate nuclide density time evolution with burnup or decay. It includes eigenvalue calculations of critical and subcritical systems as well as neutron transport calculations in fixed-source mode or k-code mode to obtain reaction rates and energy deposition that are necessary for burnup calculations. The MCB-1C patch file and data packages as distributed by the NEADB are very well organized and are being made available through RSICC as received. The RSICC package includes the MCB-1C patch and MCB data libraries. Installation of MCB requires the MCNP4C source code and utility programs, which are not included in this MCB distribution. They were provided with the now obsolete CCC-700/MCNP-4C package.

  5. Optimized nested Markov chain Monte Carlo sampling: theory

    SciTech Connect (OSTI)

    Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D

    2009-01-01

    Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system, we maximize the average acceptance probability of composite moves, significantly lengthening the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number of samples required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
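The nested-chain construction can be sketched in one dimension with a cheap harmonic reference standing in for the "expensive" full potential. The potentials, step sizes, and inner-chain length below are illustrative assumptions, not the paper's systems; only the structure (inner Metropolis steps on the reference, then one composite accept/reject against the full energy) reflects the method:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0
E_full = lambda x: 0.5 * x**2 + 0.1 * x**4   # "expensive" potential (toy stand-in)
E_ref  = lambda x: 0.5 * x**2                # cheap reference potential

def nested_step(x, n_inner=10, dx=0.5):
    """One composite move: n_inner Metropolis steps on the cheap reference,
    then a single accept/reject of the whole excursion against the full
    potential via the modified Metropolis criterion."""
    y = x
    for _ in range(n_inner):
        t = y + rng.uniform(-dx, dx)
        if rng.random() < np.exp(-beta * (E_ref(t) - E_ref(y))):
            y = t
    # composite acceptance: only the difference (full - reference) matters
    dW = (E_full(y) - E_ref(y)) - (E_full(x) - E_ref(x))
    return y if rng.random() < np.exp(-beta * dW) else x

x, samples = 0.0, []
for _ in range(20000):
    x = nested_step(x)
    samples.append(x)
mean_x2 = np.mean(np.array(samples[2000:]) ** 2)  # sampled under the full potential
```

The full energy is evaluated only twice per composite move regardless of `n_inner`, which is the source of the efficiency gain the abstract describes.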

  6. Monte Carlo Simulation Tool Installation and Operation Guide

    SciTech Connect (OSTI)

    Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.

    2013-09-02

    This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document, and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.

  7. Improved version of the PHOBOS Glauber Monte Carlo

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Loizides, C.; Nagle, J.; Steinberg, P.

    2015-09-01

    “Glauber” models are used to calculate geometric quantities in the initial state of heavy-ion collisions, such as the impact parameter, the number of participating nucleons, and the initial eccentricity. Experimental heavy-ion collaborations, in particular at RHIC and the LHC, use Glauber model calculations of various geometric observables to determine the collision centrality. In this document, we describe the assumptions inherent to the approach and provide an updated implementation (v2) of the Monte Carlo based Glauber model calculation originally used by the PHOBOS collaboration. The main improvement with respect to the earlier version (v1) (Alver et al. 2008) is the inclusion of tritium, helium-3, and uranium, as well as the treatment of deformed nuclei and Glauber–Gribov fluctuations of the proton in p+A collisions. A users' guide (updated to reflect changes in v2) is provided for running various calculations.
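The core of any Monte Carlo Glauber calculation fits in a short script: sample nucleon positions from a Woods-Saxon density, shift the two nuclei by the impact parameter, and count nucleons that pass within the black-disk interaction distance. The sketch below uses commonly quoted Au-197 parameters and a 42 mb nucleon-nucleon cross section as illustrative assumptions; it is far simpler than the PHOBOS v2 code (no deformation, no Glauber-Gribov fluctuations):

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_nucleus(A=197, R=6.38, a=0.535):
    """Sample A nucleon positions (fm) from a Woods-Saxon density by
    rejection; R and a are illustrative Au-197 values."""
    pts = []
    while len(pts) < A:
        r = rng.uniform(0.0, 2 * R)
        # target density ~ r^2 / (1 + exp((r - R)/a)); (2R)^2 bounds r^2
        if rng.random() < (r**2 / (1 + np.exp((r - R) / a))) / (2 * R) ** 2:
            costh = rng.uniform(-1, 1)
            phi = rng.uniform(0, 2 * np.pi)
            sinth = np.sqrt(1 - costh**2)
            pts.append([r * sinth * np.cos(phi), r * sinth * np.sin(phi), r * costh])
    return np.array(pts)

def npart(b, sigma_nn=4.2):
    """Participants for impact parameter b (fm) with the black-disk overlap
    criterion; sigma_nn in fm^2 (4.2 fm^2 ~ 42 mb)."""
    d2 = sigma_nn / np.pi                          # max transverse distance squared
    A_pts = sample_nucleus() + [b / 2, 0.0, 0.0]
    B_pts = sample_nucleus() - [b / 2, 0.0, 0.0]
    dx = A_pts[:, None, :2] - B_pts[None, :, :2]   # transverse separations
    hit = (dx**2).sum(-1) < d2
    return hit.any(1).sum() + hit.any(0).sum()

n_central = npart(0.0)   # head-on Au+Au: most of the 2*197 nucleons participate
```

Averaging `npart` over many events and impact parameters is what produces the centrality tables the abstract refers to.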

  8. MONTE-CARLO BURNUP CALCULATION UNCERTAINTY QUANTIFICATION AND PROPAGATION DETERMINATION

    SciTech Connect (OSTI)

    Nichols, T.; Sternat, M.; Charlton, W.

    2011-05-08

    MONTEBURNS is a Monte-Carlo depletion routine utilizing MCNP and ORIGEN 2.2. Uncertainties exist in the MCNP transport calculation, but this information is neither passed to the depletion calculation in ORIGEN nor saved. To quantify this transport uncertainty and determine how it propagates between burnup steps, a statistical analysis of multiple repeated depletion runs was performed. The reactor model chosen is the Oak Ridge Research Reactor (ORR) in a single-assembly, infinite-lattice configuration. This model was burned for a 25.5-day cycle broken down into three steps. The output isotopics as well as the effective multiplication factor (k-effective) were tabulated, and histograms were created at each burnup step using the Scott method to determine the bin width. It was expected that the gram-quantity and k-effective histograms would produce normally distributed results since they were produced from a Monte-Carlo routine, but some of the results do not. The standard deviation at each burnup step was consistent between fission product isotopes as expected, while the uranium isotopes produced some unique results. The variation in the quantity of uranium was small enough that round-off error in the reaction rate MCNP tally produced a set of repeated results with slight variation. Statistical analyses were performed using the χ² test against a normal distribution for several isotopes and the k-effective results. While the isotopes failed to reject the null hypothesis of being normally distributed, the χ² statistic grew through the steps in the k-effective test, and the null hypothesis was rejected in the later steps. These results suggest that, for a high-accuracy solution, MCNP cell material quantities below 100 grams should be avoided and larger kcode parameters are needed to minimize uncertainty propagation and round-off effects.
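The Scott-method binning step mentioned above is a one-line rule: bin width h = 3.49 σ n^(-1/3). A minimal sketch with synthetic k-effective data (the values are invented purely to illustrate the binning, not taken from the ORR runs):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for k-effective from repeated depletion runs (synthetic normal data)
keff_runs = rng.normal(1.0012, 0.0004, size=250)

# Scott's rule: bin width h = 3.49 * sigma * n^(-1/3)
sigma, n = keff_runs.std(ddof=1), keff_runs.size
h = 3.49 * sigma * n ** (-1.0 / 3.0)
nbins = int(np.ceil((keff_runs.max() - keff_runs.min()) / h))
counts, edges = np.histogram(keff_runs, bins=nbins)
```

The resulting histogram is what one would then compare against a fitted normal via the χ² test described in the abstract.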

  9. Modeling granular phosphor screens by Monte Carlo methods

    SciTech Connect (OSTI)

    Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.

    2006-12-15

    The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previously published experimental data on the Gd₂O₂S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd₂O₂S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than conventional Gd₂O₂S:Tb screens under similar conditions (x-ray incident energy, screen thickness).

  10. SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans

    SciTech Connect (OSTI)

    Enright, S; Asprinio, A; Lu, L

    2014-06-01

    Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm will improve dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse radiotherapy treatment planning system. These measurements were obtained for 6 MeV, 9 MeV and 12 MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid-water template with holes for bone-like and lung-like plugs was used. Different configurations were used with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. The accuracy of the Electron Monte Carlo algorithm was also compared to that of Pencil Beam. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film. The pass rate for Monte Carlo was in the 80% to 99% range, whereas the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.

  11. SU-E-J-145: Validation of An Analytical Model for in Vivo Range Verification Using GATE Monte Carlo Simulation in Proton Therapy

    SciTech Connect (OSTI)

    Lee, C; Lin, H; Chao, T; Hsiao, I; Chuang, K

    2015-06-15

    Purpose: Predicted PET images based on an analytical filtering approach for proton range verification have been successfully developed and validated using FLUKA Monte Carlo (MC) codes and phantom measurements. The purpose of this study is to validate the effectiveness of the analytical filtering model for proton range verification with the GATE/GEANT4 Monte Carlo simulation codes. Methods: In this study, we performed two experiments to validate the β+-isotope yields predicted by the analytical model against GATE/GEANT4 simulations. The first experiment evaluated the accuracy of predicting β+-yields as a function of irradiated proton energy. In the second experiment, we simulated homogeneous phantoms of different materials irradiated by a mono-energetic pencil-like proton beam. The filtered β+-yield distributions from the analytical model are compared with the MC-simulated β+-yields in the proximal and distal fall-off ranges. Results: The results compare the filtered and MC-simulated β+-yield distributions under different conditions. First, we found that the analytical filtering can be applied over the whole range of therapeutic energies. Second, the range difference between the filtered and MC-simulated β+-yields in the distal fall-off region is within 1.5 mm for all materials used. These findings validate the usefulness of the analytical filtering model for range verification of proton therapy in GATE Monte Carlo simulations. In addition, there is a larger discrepancy between the filtered prediction and the MC-simulated β+-yields with the GATE code, especially in the proximal region. This discrepancy might result from the absence of well-established theoretical models for predicting the nuclear interactions. Conclusion: Despite the fact that large discrepancies between the MC-simulated and predicted β+-yield distributions were observed, the study proves the effectiveness of the analytical filtering model for proton range verification using

  12. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  13. SU-E-T-323: The FLUKA Monte Carlo Code in Ion Beam Therapy

    SciTech Connect (OSTI)

    Rinaldi, I

    2014-06-01

    Purpose: Monte Carlo (MC) codes are increasingly used in the ion beam therapy community due to their detailed description of radiation transport and interaction with matter. The suitability of an MC code demands accurate and reliable physical models for the transport and interaction of all components of the mixed radiation field. This contribution will give an overview of the recent developments in the FLUKA code oriented to its application in ion beam therapy. Methods: FLUKA is a general-purpose MC code which allows the calculation of particle transport and interactions with matter, covering an extended range of applications. The user can manage the code through a graphical interface (FLAIR) developed using the Python programming language. Results: This contribution will present recent refinements in the description of the ionization processes and comparisons between FLUKA results and experimental data from ion beam therapy facilities. Moreover, several validations of the largely improved FLUKA nuclear models for imaging applications to treatment monitoring will be shown. The complex calculation of prompt gamma ray emission compares favorably with experimental data and can be considered adequate for the intended applications. New features in the modeling of proton-induced nuclear interactions also provide reliable cross section predictions for the production of radionuclides. Of great interest for the community are the developments introduced in FLAIR. The most recent efforts concern the capability of importing computed tomography images in order to build patient geometries automatically, and the implementation of different types of existing positron-emission-tomography scanner devices for imaging applications. Conclusion: The FLUKA code has already been chosen as the reference MC code in many ion beam therapy centers, and is being continuously improved in order to match the needs of ion beam therapy applications.
Parts of this work have been supported by the European

  14. Relative efficiency calculations of biological and physical systems exposed to charged particle beams through Monte Carlo simulation

    SciTech Connect (OSTI)

    Avila, Olga; Brandan, Maria-Ester

    1998-08-28

    A theoretical investigation of the thermoluminescence (TL) response of lithium fluoride after heavy-ion irradiation has been performed through Monte Carlo simulation of the energy deposition process. Efficiencies for the total TL signal of LiF irradiated with 0.7, 1.5 and 3 MeV protons and 3, 5.3 and 7.5 MeV helium ions have been calculated using the radial dose distribution profiles obtained from the MC procedure and applying Track Structure Theory and Modified Track Structure Theory. Results were compared with recent experimental data. The models correctly describe the observed decrease in efficiency as a function of the ion LET.

  15. Complete Monte Carlo Simulation of Neutron Scattering Experiments

    SciTech Connect (OSTI)

    Drosg, M.

    2011-12-13

    In the far past, it was not possible to accurately correct for the finite geometry and the finite sample size of a neutron scattering set-up. The limited calculation power of early computers, the lack of powerful Monte Carlo codes, and the limitations of the databases available then prevented a complete simulation of the actual experiment. Using, e.g., the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can now be simulated almost completely with a high degree of precision on a modern PC, which has a computing power ten thousand times that of a supercomputer of the early 1970s. Thus, (better) corrections can also be obtained easily for previously published data, provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g., atomic mass, relativistic correction, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments from liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that the cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two thirds of all differential scattering cross section data of ³He(n,n)³He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interaction in the air, the interaction with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are smaller by a factor of five at least, making this work relevant.
Even more important are the corrections to the Karlsruhe data

  16. Quantum Monte Carlo Calculations Applied to Magnetic Molecules

    SciTech Connect (OSTI)

    Larry Engelhardt

    2006-08-09

    We have calculated the equilibrium thermodynamic properties of Heisenberg spin systems using a quantum Monte Carlo (QMC) method. We have used some of these systems as models to describe recently synthesized magnetic molecules and, upon comparing the results of these calculations with experimental data, have obtained accurate estimates for the basic parameters of these models. We have also performed calculations for other systems that are of more general interest, being relevant both for existing experimental data and for future experiments. Utilizing the concept of importance sampling, these calculations can be carried out in an arbitrarily large quantum Hilbert space, while still avoiding any approximations that would introduce systematic errors. The only errors are statistical in nature, and as such, their magnitudes are accurately estimated during the course of a simulation. Frustrated spin systems present a major challenge to the QMC method; nevertheless, in many instances progress can be made. In this chapter, the field of magnetic molecules is introduced, paying particular attention to the characteristics that distinguish magnetic molecules from other systems that are studied in condensed matter physics. We briefly outline the typical path by which we learn about magnetic molecules, which requires a close relationship between experiments and theoretical calculations. The typical experiments are introduced here, while the theoretical methods are discussed in the next chapter. Each of these theoretical methods has a considerable limitation, also described in Chapter 2, which together serve to motivate the present work. As is shown throughout the later chapters, the present QMC method is often able to provide useful information where other methods fail. In Chapter 3, the use of Monte Carlo methods in statistical physics is reviewed, building up the fundamental ideas that are necessary in order to understand the method that has been used in this work.
With these

  17. Monte Carlo Simulations of Cosmic Rays Hadronic Interactions

    SciTech Connect (OSTI)

    Aguayo Navarrete, Estanislao; Orrell, John L.; Kouzes, Richard T.

    2011-04-01

    This document describes the construction and results of the MaCoR software tool, developed to model the hadronic interactions of cosmic rays with different geometries of materials. The ubiquity of cosmic radiation in the environment results in the activation of stable isotopes, referred to as cosmogenic activities. The objective is to use this application in conjunction with a model of the MAJORANA DEMONSTRATOR components, from extraction to deployment, to evaluate cosmogenic activation of such components before and after deployment. The cosmic ray showers include several types of particles with a wide range of energy (MeV to GeV). It is infeasible to compute an exact result with a deterministic algorithm for this problem; Monte Carlo simulations are a more suitable approach to model cosmic ray hadronic interactions. In order to validate the results generated by the application, a test comparing experimental muon flux measurements and those predicted by the application is presented. The experimental and simulated results have a deviation of 3%.

  18. Pseudopotentials for quantum Monte Carlo studies of transition metal oxides

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Krogel, Jaron T.; Santana Palacio, Juan A.; Reboredo, Fernando A.

    2016-02-22

    Quantum Monte Carlo (QMC) calculations of transition metal oxides are partially limited by the availability of high-quality pseudopotentials that are both accurate in QMC and compatible with major plane-wave electronic structure codes. We have generated a set of neon-core pseudopotentials with small cutoff radii for the early transition metal elements Sc to Zn within the local density approximation of density functional theory. The pseudopotentials have been directly tested for accuracy within QMC by calculating the first through fourth ionization potentials of the isolated transition metal (M) atoms and the binding curve of each M-O dimer. We find the ionization potentials to be accurate to 0.16(1) eV, on average, relative to experiment. The equilibrium bond lengths of the dimers are within 0.5(1)% of experimental values, on average, and the binding energies are also typically accurate to 0.18(3) eV. The level of accuracy we find for atoms and dimers is comparable to what has recently been observed for bulk metals and oxides using the same pseudopotentials. Our QMC pseudopotential results compare well with the findings of previous QMC studies and benchmark quantum chemical calculations.

  19. High order Chin actions in path integral Monte Carlo

    SciTech Connect (OSTI)

    Sakkos, K.; Casulleras, J.; Boronat, J.

    2009-05-28

    High order actions proposed by Chin have been used for the first time in path integral Monte Carlo simulations. Contrary to the Takahashi-Imada action, which is accurate to the fourth order only for the trace, the Chin action is fully fourth order, with the additional advantage that the leading fourth-order error coefficients are finely tunable. By optimizing two free parameters entering in the new action, we show that the time step error dependence achieved is best fitted with a sixth order law. The computational effort per bead is increased but the total number of beads is greatly reduced and the efficiency improvement with respect to the primitive approximation is approximately a factor of 10. The Chin action is tested in a one-dimensional harmonic oscillator, a H₂ drop, and bulk liquid ⁴He. In all cases a sixth-order law is obtained with values of the number of beads that compare well with the pair action approximation in the stringent test of superfluid ⁴He.

  20. Reduced Variance for Material Sources in Implicit Monte Carlo

    SciTech Connect (OSTI)

    Urbatsch, Todd J.

    2012-06-25

    Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
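The proposed modification is, at its core, a branch on cell temperature: below the cutoff, the small material source is folded deterministically back into the material energy instead of being sampled as IMC particles. The sketch below is schematic; the cell fields and cutoff are illustrative assumptions, not the Jayenne data model:

```python
def update_material_source(cell, t_cutoff):
    """If the cell is cold, deposit the small material source directly into
    the material energy (deterministic update, zero sampling variance);
    otherwise return it to be sampled as IMC particles. Schematic sketch of
    the idea in the abstract; field names are illustrative."""
    if cell["T"] < t_cutoff:
        cell["material_energy"] += cell["source_energy"]   # deterministic update
        emitted = 0.0
    else:
        emitted = cell["source_energy"]                    # handed to the MC sampler
    cell["source_energy"] = 0.0
    return emitted

cold = {"T": 0.1, "material_energy": 5.0, "source_energy": 0.02}
hot  = {"T": 2.0, "material_energy": 5.0, "source_energy": 0.8}
e_cold = update_material_source(cold, t_cutoff=0.5)
e_hot  = update_material_source(hot,  t_cutoff=0.5)
```

Energy is conserved in both branches; the cold branch simply trades a noisy Monte Carlo estimate of a tiny source for an exact deterministic one.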

  1. Random Number Generation for Petascale Quantum Monte Carlo

    SciTech Connect (OSTI)

    Ashok Srinivasan

    2010-03-16

    The quality of random number generators can affect the results of Monte Carlo computations, especially when a large number of random numbers are consumed. Furthermore, correlations present between different random number streams in a parallel computation can further affect the results. The SPRNG software, which the author had developed earlier, has pseudo-random number generators (PRNGs) capable of producing large numbers of streams with large periods. However, they had previously been empirically tested on only a thousand streams. In the work summarized here, we tested the SPRNG generators with over a hundred thousand streams, involving over 10^14 random numbers per test in some tests. We also tested the popular Mersenne Twister. We believe that these are the largest tests of PRNGs, both in terms of the number of streams tested and the number of random numbers tested. We observed defects in some of these generators, including the Mersenne Twister, while a few generators appeared to perform well. We also corrected an error in the implementation of one of the SPRNG generators.
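A miniature version of an inter-stream test can be run with NumPy's parallel stream facilities: spawn many independent streams, pool their output, and check the pooled sample for uniformity with a chi-square statistic. This is only a toy illustration of the idea (SPRNG's real test batteries are far larger and more varied); the stream counts and bin count are arbitrary:

```python
import numpy as np

# Spawn many statistically independent child streams from one seed sequence,
# pool their draws, and test the pooled output for equidistribution.
n_streams, n_draws, n_bins = 100, 1000, 20
children = np.random.SeedSequence(2024).spawn(n_streams)
streams = [np.random.default_rng(s) for s in children]
pooled = np.concatenate([rng.random(n_draws) for rng in streams])

counts, _ = np.histogram(pooled, bins=n_bins, range=(0.0, 1.0))
expected = pooled.size / n_bins
chi2 = ((counts - expected) ** 2 / expected).sum()
# For a healthy generator, chi2 should be an unexceptional draw from a
# chi-square distribution with n_bins - 1 = 19 degrees of freedom.
```

Inter-stream correlations of the kind the abstract describes would show up as a chi2 value far out in the tail when the test is repeated over many pooled batches.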

  2. Improving computational efficiency of Monte Carlo simulations with variance reduction

    SciTech Connect (OSTI)

    Turner, A.

    2013-07-01

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
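The weight-window mechanics behind the 'long history' problem can be sketched directly: a particle arriving far above the window would normally be split into w/w_high pieces, so capping the split count (effectively de-optimising the window locally) bounds the history length. This is an illustrative sketch of the general technique, not the CCFE MCNP patch; the cap and window values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def apply_weight_window(w, w_low, w_high, max_split=10):
    """Weight-window check with a cap on splitting. Splitting preserves total
    weight exactly; Russian roulette preserves it in expectation."""
    if w > w_high:
        n = min(int(w / w_high) + 1, max_split)   # capped splitting
        return [w / n] * n                        # n daughters of equal weight
    if w < w_low:
        w_surv = (w_low + w_high) / 2             # survivor weight inside window
        # roulette: survive with probability w / w_surv, else terminate
        return [w_surv] if rng.random() < w / w_surv else []
    return [w]                                    # inside the window: unchanged

parts = apply_weight_window(500.0, 0.5, 2.0)      # huge arriving weight, capped
```

Without the cap, the 500x over-weight particle above would split 251 ways and each daughter would be tracked to completion, which is exactly the long-history pathology the abstract describes.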

  3. Cohesion Energetics of Carbon Allotropes: Quantum Monte Carlo Study

    SciTech Connect (OSTI)

    Shin, Hyeondeok; Kang, Sinabro; Koo, Jahyun; Lee, Hoonkyung; Kim, Jeongnim; Kwon, Yongkyung

    2014-01-01

    We have performed quantum Monte Carlo calculations to study the cohesion energetics of carbon allotropes, including sp3-bonded diamond, sp2-bonded graphene, sp-sp2 hybridized graphynes, and sp-bonded carbyne. The computed cohesive energies of diamond and graphene are found to be in excellent agreement with the corresponding values determined experimentally for diamond and graphite, respectively, when the zero-point energies, along with the interlayer binding in the case of graphite, are included. We have also found that the cohesive energy of graphyne decreases systematically as the ratio of sp-bonded carbon atoms increases. The cohesive energy of γ-graphyne, the most energetically stable graphyne, turns out to be 6.766(6) eV/atom, which is smaller than that of graphene by 0.698(12) eV/atom. Experimental difficulty in synthesizing graphynes could be explained by their significantly smaller cohesive energies. Finally, we conclude that the cohesive energy of a newly proposed two-dimensional carbon network can be accurately estimated with the carbon-carbon bond energies determined from the cohesive energies of graphene and three different graphynes.

  4. Status of the MORSE multigroup Monte Carlo radiation transport code

    SciTech Connect (OSTI)

    Emmett, M.B.

    1993-06-01

    There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the better known of the two and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular, the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.

  5. Monte Carlo analysis of localization errors in magnetoencephalography

    SciTech Connect (OSTI)

    Medvick, P.A.; Lewis, P.S.; Aine, C.; Flynn, E.R.

    1989-01-01

    In magnetoencephalography (MEG), the magnetic fields created by electrical activity in the brain are measured on the surface of the skull. To determine the location of the activity, the measured field is fit to an assumed source generator model, such as a current dipole, by minimizing chi-square. For current dipoles and other nonlinear source models, the fit is performed by an iterative least squares procedure such as the Levenberg-Marquardt algorithm. Once the fit has been computed, analysis of the resulting value of chi-square can determine whether the assumed source model is adequate to account for the measurements. If the source model is adequate, then the effect of measurement error on the fitted model parameters must be analyzed. This effect is commonly estimated through simulation studies based on synthetic data and assumed error levels. Although these kinds of simulation studies can provide a rough idea of the effect that measurement error can be expected to have on source localization, they cannot provide detailed enough information to determine the effects that the errors in a particular measurement situation will produce. In this work, we introduce and describe the use of Monte Carlo-based techniques to analyze model fitting errors for real data. Given the details of the measurement setup and a statistical description of the measurement errors, these techniques determine the effects the errors have on the fitted model parameters. The effects can then be summarized in various ways such as parameter variances/covariances or multidimensional confidence regions. 8 refs., 3 figs.
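
The idea of propagating measurement error through a fit can be sketched with a parametric bootstrap on a toy model. This is a hypothetical illustration, not the MEG dipole fit: it uses a straight-line model in place of a current dipole, and the function names are invented. Synthetic noisy data sets are generated around the best-fit model and refit, and the scatter of the refitted parameters summarizes the error propagation:

```python
import random
import statistics

def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept (stand-in for the
    # iterative nonlinear fit used for real source models).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def monte_carlo_fit_errors(xs, ys, sigma, n_trials=2000, seed=1):
    """Propagate measurement noise of level sigma through the fit.

    Refits n_trials synthetic data sets, each generated by adding
    Gaussian noise to the best-fit model, and returns the standard
    deviations of the fitted slope and intercept.
    """
    rng = random.Random(seed)
    a0, b0 = fit_line(xs, ys)
    slopes, intercepts = [], []
    for _ in range(n_trials):
        noisy = [a0 * x + b0 + rng.gauss(0.0, sigma) for x in xs]
        a, b = fit_line(xs, noisy)
        slopes.append(a)
        intercepts.append(b)
    return statistics.stdev(slopes), statistics.stdev(intercepts)
```

The same loop applies unchanged to any fitting routine: only `fit_line` and the noise model (here, independent Gaussian errors) need to reflect the actual measurement setup.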

  6. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    SciTech Connect (OSTI)

    Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.

  7. Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; Mozyrsky, Dmitry

    2015-07-07

    Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet as efficient as is needed to be used for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm by up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms will provide practical numerical solutions for simulating non-adiabatic dynamics in realistic molecular systems.

  8. Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo

    SciTech Connect (OSTI)

    White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; Mozyrsky, Dmitry

    2015-07-07

    Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet as efficient as is needed to be used for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm by up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms will provide practical numerical solutions for simulating non-adiabatic dynamics in realistic molecular systems.

  9. MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD

    SciTech Connect (OSTI)

    K. HANSON

    2001-02-01

    The Markov Chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy φ, where φ is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of φ and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of φ, is proposed to measure the convergence of the MCMC sequence.
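
The alternation described above (draw a momentum, follow an approximately constant-H trajectory, accept or reject) can be sketched for a one-dimensional standard Gaussian target, where φ(q) = q²/2 and H = p²/2 + φ(q). This is a generic leapfrog Hamiltonian MCMC sketch, not the paper's implementation; the step size and trajectory length are arbitrary choices:

```python
import math
import random

def hmc_sample(n_samples, eps=0.2, n_steps=20, seed=0):
    """Hamiltonian MCMC for a 1-D standard Gaussian target.

    phi(q) = q^2/2 is minus the log of the target pdf. Each iteration
    draws a fresh momentum, follows a leapfrog trajectory of roughly
    constant H, and accepts the endpoint with prob min(1, exp(-dH)).
    """
    rng = random.Random(seed)
    phi = lambda q: 0.5 * q * q
    grad = lambda q: q                          # d(phi)/dq
    q, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                 # pick a new momentum
        q_new, p_new = q, p
        h0 = 0.5 * p * p + phi(q)
        p_new -= 0.5 * eps * grad(q_new)        # leapfrog: half kick
        for _ in range(n_steps):
            q_new += eps * p_new                # drift
            p_new -= eps * grad(q_new)          # kick
        p_new += 0.5 * eps * grad(q_new)        # undo the extra half kick
        h1 = 0.5 * p_new * p_new + phi(q_new)
        if rng.random() < math.exp(min(0.0, h0 - h1)):
            q = q_new                           # accept trajectory endpoint
        samples.append(q)
    return samples
```

Because the leapfrog integrator nearly conserves H, the energy change dH is small and most trajectories are accepted even though they move far across parameter space, which is the efficiency advantage over a random-walk Metropolis proposal.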

  10. SU-E-T-239: Monte Carlo Modelling of SMC Proton Nozzles Using TOPAS

    SciTech Connect (OSTI)

    Chung, K; Kim, J; Shin, J; Han, Y; Ju, S; Hong, C; Kim, D; Kim, H; Shin, E; Ahn, S; Chung, S; Choi, D

    2014-06-01

    Purpose: To expedite and cross-check the commissioning of the proton therapy nozzles at Samsung Medical Center using TOPAS. Methods: We have two different types of nozzles at Samsung Medical Center (SMC), a multi-purpose nozzle and a pencil beam scanning dedicated nozzle. Both nozzles have been modelled in Monte Carlo simulation by using TOPAS based on the vendor-provided geometry. The multi-purpose nozzle is mainly composed of wobbling magnets, scatterers, ridge filters and multi-leaf collimators (MLC). Including patient specific apertures and compensators, all the parts of the nozzle have been implemented in TOPAS following the geometry information from the vendor. The dedicated scanning nozzle has a simpler structure than the multi-purpose nozzle, with a vacuum pipe at the downstream end of the nozzle. A simple water tank volume has been implemented to measure the dosimetric characteristics of proton beams from the nozzles. Results: We have simulated the two proton beam nozzles at SMC. Two different ridge filters have been tested for the spread-out Bragg peak (SOBP) generation of wobbling mode in the multi-purpose nozzle. The spot sizes and lateral penumbra in the two nozzles have been simulated and analyzed using a double Gaussian model. Using parallel geometry, both the depth dose curve and dose profile have been measured simultaneously. Conclusion: The proton therapy nozzles at SMC have been successfully modelled in Monte Carlo simulation using TOPAS. We will perform a validation with measured base data and then use the MC simulation to interpolate/extrapolate the measured data. We believe it will expedite the commissioning process of the proton therapy nozzles at SMC.

  11. Tests of Monte Carlo Independent Column Approximation in the...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    In this work, McICA and a stochastic cloud generator have been implemented in the Max Planck Institute for Meteorology's ECHAM5 atmospheric GCM. The impact of McICA noise on ...

  12. MO-E-18C-02: Hands-On Monte Carlo Project Assignment as a Method to Teach Radiation Physics

    SciTech Connect (OSTI)

    Pater, P; Vallieres, M; Seuntjens, J

    2014-06-15

    Purpose: To present a hands-on project on Monte Carlo (MC) methods recently added to the curriculum and to discuss the students' appreciation. Methods: Since 2012, a 1.5 hour lecture dedicated to MC fundamentals follows the detailed presentation of photon and electron interactions. Students also program all sampling steps (interaction length and type, scattering angle, energy deposit) of a MC photon transport code. A handout structured in a step-by-step fashion guides students in conducting consistency checks. For extra points, students can code a fully working MC simulation that simulates a dose distribution for 50 keV photons. A kerma approximation to dose deposition is assumed. A survey was conducted to which 10 out of the 14 attending students responded. It compared MC knowledge prior to and after the project, questioned the usefulness of radiation physics teaching through MC and surveyed possible project improvements. Results: According to the survey, 76% of students had no or a basic knowledge of MC methods before the class and 65% estimate to have a good to very good understanding of MC methods after attending the class. 80% of students feel that the MC project helped them significantly to understand simulations of dose distributions. On average, students dedicated 12.5 hours to the project and appreciated the balance between hand-holding and questions/implications. Conclusion: A lecture on MC methods with a hands-on MC programming project requiring about 14 hours has been part of the graduate study curriculum since 2012. MC methods produce “gold standard” dose distributions and are slowly entering routine clinical work, so a fundamental understanding of MC methods should be a requirement for future students. Overall, the lecture and project helped students relate cross-sections to dose depositions and presented the numerical sampling methods behind the simulation of these dose distributions. Research funding from governments of Canada and Quebec. PP acknowledges
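
The first two sampling steps students implement (interaction length and interaction type) can be sketched in a few lines. This is a toy illustration under assumed, made-up attenuation coefficients, not tabulated cross-section data: the free path is drawn from the exponential distribution s = -ln(u)/μ_total, and the interaction type is chosen in proportion to the partial coefficients:

```python
import math
import random

def sample_photon_step(mu_photo, mu_compton, rng=random.random):
    """One sampling step of a toy MC photon transport code.

    mu_photo and mu_compton are partial linear attenuation
    coefficients (1/cm); their sum gives the total interaction
    probability per unit path length.
    """
    mu_total = mu_photo + mu_compton
    # Distance to the next interaction: exponential free-path sampling.
    distance = -math.log(rng()) / mu_total
    # Interaction type in proportion to the partial coefficients.
    kind = "photoelectric" if rng() < mu_photo / mu_total else "compton"
    return distance, kind
```

A consistency check of the kind the handout asks for: over many samples, the mean sampled distance should approach 1/μ_total and the photoelectric fraction should approach μ_photo/μ_total.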

  13. On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems

    SciTech Connect (OSTI)

    Walsh, Jon

    2015-08-31

    The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.

  14. APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula

    SciTech Connect (OSTI)

    Hwang, M.; Bae, S.; Chung, B. D.

    2012-07-01

    An analysis of the uncertainty quantification for the PWR LBLOCA by Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined by the PIRT results from the BEMUSE project. The Monte Carlo method shows that the 95th percentile PCT value can be obtained reliably with a 95% confidence level using Wilks' formula. The extra margin by Wilks' formula over the true 95th percentile PCT by the Monte Carlo method was rather large. Even using the 3rd-order formula, the calculated value using Wilks' formula is nearly 100 K over the true value. It is shown that, with the ever increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame. (authors)
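
The comparison above can be sketched numerically. This is a hypothetical stand-in, not the APR1400 analysis: the "code run" is replaced by a draw from an assumed Gaussian peak cladding temperature (PCT) distribution, and all names are invented. First-order Wilks 95/95 takes the maximum of 59 runs as a conservative bound on the 95th percentile, while the brute-force Monte Carlo approach estimates the percentile directly from many runs:

```python
import random

def run_model(rng):
    # Hypothetical stand-in for one best-estimate LBLOCA code run:
    # PCT (K) drawn from an assumed distribution.
    return rng.gauss(1000.0, 50.0)

def wilks_95_95(rng, n=59):
    """First-order Wilks estimate: with 59 runs, the sample maximum
    bounds the true 95th percentile with at least 95% confidence."""
    return max(run_model(rng) for _ in range(n))

def mc_percentile(rng, n=20000, q=0.95):
    """Brute-force Monte Carlo estimate of the q-th percentile."""
    vals = sorted(run_model(rng) for _ in range(n))
    return vals[int(q * n) - 1]
```

Repeating the 59-run Wilks estimate many times shows it exceeds the true 95th percentile in about 95% of repetitions (coverage 1 - 0.95^59 ≈ 0.95), typically by a sizeable margin, which is the conservatism the abstract quantifies against the direct Monte Carlo percentile.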

  15. On-the-Fly Doppler Broadening for Monte Carlo Codes (Journal...

    Office of Scientific and Technical Information (OSTI)

    Title: On-the-Fly Doppler Broadening for Monte Carlo Codes. Authors: Yesilyurt, G.; Martin, W. (Univ. of Michigan); Brown, F. (Los Alamos National Laboratory) ...

  16. Multiscale Monte Carlo equilibration: Pure Yang-Mills theory

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.

    2015-12-29

    In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.

  17. Quantum Monte Carlo methods and lithium cluster properties. [Atomic clusters]

    SciTech Connect (OSTI)

    Owen, R.K.

    1990-12-01

    Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) (0.1981), 0.1895(9) (0.1874(4)), 0.1530(34) (0.1599(73)), 0.1664(37) (0.1724(110)), 0.1613(43) (0.1675(110)) Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) (0.0203(12)), 0.0188(10) (0.0220(21)), 0.0247(8) (0.0310(12)), 0.0253(8) (0.0351(8)) Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

  18. Quantum Monte Carlo methods and lithium cluster properties

    SciTech Connect (OSTI)

    Owen, R.K.

    1990-12-01

    Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

  19. Utility of Monte Carlo Modelling for Holdup Measurements.

    SciTech Connect (OSTI)

    Belian, Anthony P.; Russo, P. A.; Weier, Dennis R.

    2005-01-01

    Non-destructive assay (NDA) measurements performed to locate and quantify holdup in the Oak Ridge K25 enrichment cascade used neutron totals counting and low-resolution gamma-ray spectroscopy. This facility housed the gaseous diffusion process for enrichment of uranium, in the form of UF{sub 6} gas, from {approx} 20% to 93%. The entire {sup 235}U inventory in K-25 is holdup. These buildings have been slated for decontamination and decommissioning. The NDA measurements establish the inventory quantities and will be used to assure criticality safety and meet criteria for waste analysis and transportation. The tendency to err on the side of conservatism for the sake of criticality safety in specifying total NDA uncertainty argues, in the interests of safety and costs, for obtaining the best possible value of uncertainty at the conservative confidence level for each item of process equipment. Variable deposit distribution is a complex systematic effect (i.e., determined by multiple independent variables) on the portable NDA results for very large and bulk converters that contributes greatly to total uncertainty for holdup in converters measured by gamma or neutron NDA methods. Because the magnitudes of complex systematic effects are difficult to estimate, computational tools are important for evaluating those that are large. Motivated by very large discrepancies between gamma and neutron measurements of high-mass converters, with gamma results tending to dominate, the Monte Carlo code MCNP has been used to determine the systematic effects of deposit distribution on gamma and neutron results for {sup 235}U holdup mass in converters. This paper details the numerical methodology used to evaluate large systematic effects unique to each measurement type, validates the methodology by comparison with measurements, and discusses how modeling tools can supplement the calibration of instruments used for holdup measurements by providing realistic values at well

  20. MONTE CARLO SIMULATION OF METASTABLE OXYGEN PHOTOCHEMISTRY IN COMETARY ATMOSPHERES

    SciTech Connect (OSTI)

    Bisikalo, D. V.; Shematovich, V. I. [Institute of Astronomy of the Russian Academy of Sciences, Moscow (Russian Federation)]; Gérard, J.-C.; Hubert, B. [Laboratory for Planetary and Atmospheric Physics (LPAP), University of Liège, Liège (Belgium)]; Jehin, E.; Decock, A. [Origines Cosmologiques et Astrophysiques (ORCA), University of Liège (Belgium)]; Hutsemékers, D. [Extragalactic Astrophysics and Space Observations (EASO), University of Liège (Belgium)]; Manfroid, J., E-mail: B.Hubert@ulg.ac.be [High Energy Astrophysics Group (GAPHE), University of Liège (Belgium)]

    2015-01-01

    Cometary atmospheres are produced by the outgassing of material, mainly H{sub 2}O, CO, and CO{sub 2}, from the nucleus of the comet under the energy input from the Sun. Subsequent photochemical processes lead to the production of other species generally absent from the nucleus, such as OH. Although all comets are different, they all have a highly rarefied atmosphere, which is an ideal environment for nonthermal photochemical processes to take place and influence the detailed state of the atmosphere. We develop a Monte Carlo model of the coma photochemistry. We compute the energy distribution functions (EDF) of the metastable O({sup 1}D) and O({sup 1}S) species and obtain the red (630 nm) and green (557.7 nm) spectral line shapes of the full coma, consistent with the computed EDFs and the expansion velocity. We show that both species have a severely non-Maxwellian EDF, which results in broad spectral lines, and that the suprathermal broadening dominates over that due to the expansion motion. We apply our model to the atmospheres of comets C/1996 B2 (Hyakutake) and 103P/Hartley 2. The computed width of the green line, expressed in terms of speed, is lower than that of the red line. This result is comparable to previous theoretical analyses, but in disagreement with observations. We explain that the spectral line shape depends not only on the exothermicity of the photochemical production mechanisms, but also on thermalization due to elastic collisions, which reduces the width of the emission line coming from the O({sup 1}D) level, which has a longer lifetime.

  1. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

    SciTech Connect (OSTI)

    Iandola, F N; O'Brien, M J; Procassini, R J

    2010-11-29

    Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

  2. Multiparticle Monte Carlo Code System for Shielding and Criticality Use.

    Energy Science and Technology Software Center (OSTI)

    2015-06-01

    Version 00 COG is a modern, full-featured Monte Carlo radiation transport code that provides accurate answers to complex shielding, criticality, and activation problems. COG was written to be state-of-the-art and free of physics approximations and compromises found in earlier codes. COG is fully 3-D, uses point-wise cross sections and exact angular scattering, and allows a full range of biasing options to speed up solutions for deep penetration problems. Additionally, a criticality option is available for computing Keff for assemblies of fissile materials. ENDL or ENDFB cross section libraries may be used. COG home page: http://cog.llnl.gov. Cross section libraries are included in the package. COG can use either the LLNL ENDL-90 cross section set or the ENDFB/VI set. Analytic surfaces are used to describe geometric boundaries. Parts (volumes) are described by a method of Constructive Solid Geometry. Surface types include surfaces of up to fourth order, and pseudo-surfaces such as boxes, finite cylinders, and figures of revolution. Repeated assemblies need be defined only once. Parts are visualized in cross-section and perspective picture views. A lattice feature simplifies the specification of regular arrays of parts. Parallel processing under MPI is supported for multi-CPU systems. Source and random-walk biasing techniques may be selected to improve solution statistics. These include source angular biasing, importance weighting, particle splitting and Russian roulette, pathlength stretching, point detectors, scattered direction biasing, and forced collisions. Criticality – For a fissioning system, COG will compute Keff by transporting batches of neutrons through the system. Activation – COG can compute gamma-ray doses due to neutron-activated materials, starting with just a neutron source. Coupled Problems – COG can solve coupled problems involving neutrons, photons, and electrons. COG 11.1 is an updated version of COG11.1 BETA 2 (RSICC C00777MNYCP02

  3. Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy

    SciTech Connect (OSTI)

    Henzen, D. Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.

    2014-02-15

    Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm{sup 2} fields, differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies. For the two

  4. SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations

    SciTech Connect (OSTI)

    Ono, T; Araki, F

    2014-06-01

    Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using the GMctdospp (IMPS, Germany) based on the EGSnrc user code. The X-ray spectra and a bowtie filter for MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profile in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), the mean doses for kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses were considerably different from CTDIvol values. Conclusion: MC dose distributions calculated by using patient CT images are useful to evaluate organ doses absorbed by individual patients.

  5. Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams

    SciTech Connect (OSTI)

    Vandervoort, Eric J. Cygler, Joanna E.; The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5; Department of Physics, Carleton University, Ottawa, Ontario K1S 5B6 ; Tchistiakova, Ekaterina; Department of Medical Biophysics, University of Toronto, Ontario M5G 2M9; Heart and Stroke Foundation Centre for Stroke Recovery, Sunnybrook Research Institute, University of Toronto, Ontario M4N 3M5 ; La Russa, Daniel J.; The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5

    2014-02-15

    Purpose: In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Methods: Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measured and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Results: Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 × 5 cm². Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (the minimum percentage of measurements passing a 2%/2 mm 2D gamma index criterion for any applicator or energy was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a 3D 3%/2 mm γ-criterion) provided that the steep dose gradient in the depth direction is considered. Conclusions: Clinically acceptable agreement (at the 2%/2 mm level) between measured and calculated data in water is obtained for this dose calculation algorithm. Radiochromic film is a useful tool for evaluating the accuracy of electron MC treatment planning systems in heterogeneous media.

  6. Monte Carlo Solution for Uncertainty Propagation in Particle...

    Office of Scientific and Technical Information (OSTI)

    the International Conference on Math. and Comp. Methods Applied to Nucl. Sci. and Engg. (M&C 2013) held May 5-9, 2013 in Sun Valley, ID. Research Org: Sandia National Laboratories

  7. Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry

    SciTech Connect (OSTI)

    Bostani, Maryam McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; Mueller, Jonathon W.; Cody, Dianna D.; DeMarco, John J.

    2015-02-15

    Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.

  8. Fission matrix-based Monte Carlo criticality analysis of fuel storage pools

    SciTech Connect (OSTI)

    Farlotti, M.; Larsen, E. W.

    2013-07-01

    Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
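
The core of the fission matrix method described above can be sketched as a power iteration on a small matrix, where entry F[i, j] is the expected number of next-generation fission neutrons born in region i per fission neutron born in region j (in practice these entries are tallied by Monte Carlo transport). The 4-region matrix below is illustrative, not data from the paper:

```python
import numpy as np

def fission_matrix_eigen(F, tol=1e-10, max_iter=1000):
    """Power iteration on a fission matrix F. Returns (k_eff, source),
    where k_eff is the dominant eigenvalue and source is the fundamental
    mode fission source, normalized to sum to 1."""
    n = F.shape[0]
    s = np.full(n, 1.0 / n)          # flat initial fission source
    k = 1.0
    for _ in range(max_iter):
        s_new = F @ s
        k_new = s_new.sum()          # eigenvalue estimate (s is L1-normalized)
        s_new /= k_new
        if abs(k_new - k) < tol and np.allclose(s_new, s, atol=tol):
            return k_new, s_new
        k, s = k_new, s_new
    return k, s

# Illustrative 4-region pool: strong self-multiplication, weak inter-assembly
# coupling (the regime where plain source iteration converges slowly).
F = np.array([[0.90, 0.05, 0.00, 0.00],
              [0.05, 0.90, 0.05, 0.00],
              [0.00, 0.05, 0.90, 0.05],
              [0.00, 0.00, 0.05, 0.90]])
k_eff, source = fission_matrix_eigen(F)
print(round(k_eff, 4))  # → 0.9809
```

Because the eigenproblem is solved deterministically once F is tallied, the slow source convergence of standard Monte Carlo iteration is sidestepped.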

  9. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOE Patents [OSTI]

    Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.

    1999-02-09

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.

  10. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOE Patents [OSTI]

    Chandler, William P.; Hartmann-Siantar, Christine L.; Rathkopf, James A.

    1999-01-01

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.

  11. Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo.

    SciTech Connect (OSTI)

    Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan

    2009-10-01

    The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.

  12. A Proposal for a Standard Interface Between Monte Carlo Tools And One-Loop Programs

    SciTech Connect (OSTI)

    Binoth, T.; Boudjema, F.; Dissertori, G.; Lazopoulos, A.; Denner, A.; Dittmaier, S.; Frederix, R.; Greiner, N.; Hoeche, Stefan; Giele, W.; Skands, P.; Winter, J.; Gleisberg, T.; Archibald, J.; Heinrich, G.; Krauss, F.; Maitre, D.; Huber, M.; Huston, J.; Kauer, N.; Maltoni, F.; /Louvain U., CP3 /Milan Bicocca U. /INFN, Turin /Turin U. /Granada U., Theor. Phys. Astrophys. /CERN /NIKHEF, Amsterdam /Heidelberg U. /Oxford U., Theor. Phys.

    2011-11-11

    Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently been undergoing enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarises the discussion of the next-to-leading order multi-leg (NLM) working group on this issue which has been taking place during the workshop on Physics at TeV Colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs.

  13. Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis

    SciTech Connect (OSTI)

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well

  14. An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis

    SciTech Connect (OSTI)

    William R. Martin; John C. Lee

    2009-12-30

    Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.

  15. The GENIE Neutrino Monte Carlo Generator: Physics and User Manual

    SciTech Connect (OSTI)

    Andreopoulos, Costas; Barry, Christopher; Dytman, Steve; Gallagher, Hugh; Golan, Tomasz; Hatcher, Robert; Perdue, Gabriel; Yarba, Julia

    2015-10-19

    GENIE is a suite of products for the experimental neutrino physics community. This suite includes i) a modern software framework for implementing neutrino event generators, a state-of-the-art comprehensive physics model and tools to support neutrino interaction simulation for realistic experimental setups (the Generator product), ii) extensive archives of neutrino, charged-lepton and hadron scattering data and software to produce a comprehensive set of data/MC comparisons (the Comparisons product), and iii) a generator tuning framework and fitting applications (the Tuning product). This book provides the definitive guide for the GENIE Generator: it presents the software architecture and a detailed description of its physics model and official tunes. In addition, it provides a rich set of data/MC comparisons that characterise the physics performance of GENIE. Detailed step-by-step instructions on how to install and configure the Generator, run its applications and analyze its outputs are also included.

  16. SU-E-T-314: The Application of Cloud Computing in Pencil Beam Scanning Proton Therapy Monte Carlo Simulation

    SciTech Connect (OSTI)

    Wang, Z; Gao, M

    2014-06-01

    Purpose: Monte Carlo simulation plays an important role for the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to the few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4 based MC simulation software was installed on a commercial cloud computing virtual machine (64-bit Linux, Amazon EC2). Single spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of the StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10 × 10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created with an on-demand instance and worker nodes were created with spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.

  17. Green's function Monte Carlo calculation for the ground state of helium trimers

    SciTech Connect (OSTI)

    Cabral, F.; Kalos, M.H.

    1981-02-01

    The ground state energy of weakly bound boson trimers interacting via Lennard-Jones (12,6) pair potentials is calculated using a Monte Carlo Green's Function Method. Threshold coupling constants for self binding are obtained by extrapolation to zero binding.

  18. MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation

    SciTech Connect (OSTI)

    Meyer, Arnd

    2010-02-10

    A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.

  19. K-effective of the world: and other concerns for Monte Carlo Eigenvalue calculations

    SciTech Connect (OSTI)

    Brown, Forrest B

    2010-01-01

    Monte Carlo methods have been used to compute k-effective and the fundamental mode eigenfunction of critical systems since the 1950s. Despite the sophistication of today's Monte Carlo codes for representing realistic geometry and physics interactions, correct results can be obtained in criticality problems only if users pay attention to source convergence in the Monte Carlo iterations and to running a sufficient number of neutron histories to adequately sample all significant regions of the problem. Recommended best practices for criticality calculations are reviewed and applied to several practical problems for nuclear reactors and criticality safety, including the 'K-effective of the World' problem. Numerical results illustrate the concerns about convergence and bias. The general conclusion is that with today's high-performance computers, improved understanding of the theory, new tools for diagnosing convergence (e.g., Shannon entropy of the fission distribution), and clear practical guidance for performing calculations, practitioners will have a greater degree of confidence than ever of obtaining correct results for Monte Carlo criticality calculations.
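
The Shannon entropy convergence diagnostic mentioned above can be sketched as follows (bin counts are illustrative): in practice the fission source is binned on a spatial mesh each cycle, and active cycles begin only once the entropy trace becomes stationary.

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (bits) of a binned fission-source distribution;
    a stationary H across cycles indicates source convergence."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0.0]                    # 0 * log 0 -> 0 by convention
    return float(-np.sum(p * np.log2(p)))

# Uniform source over 8 mesh bins gives the maximum entropy, log2(8) bits.
print(shannon_entropy([100] * 8))     # 3.0
# A source concentrated in one bin gives much lower entropy.
print(shannon_entropy([793, 1, 1, 1, 1, 1, 1, 1]) < 1.0)  # True
```

A drifting entropy trace signals an unconverged source, the failure mode the abstract warns about for loosely coupled or high-dominance-ratio problems.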

  20. First collision source method for coupling Monte Carlo and discrete ordinates for localized source problems

    SciTech Connect (OSTI)

    Alcouffe, R.E.

    1985-01-01

    A difficult class of problems for the discrete-ordinates neutral particle transport method is to accurately compute the flux due to a spatially localized source. Because the transport equation is solved for discrete directions, the so-called ray effect causes the flux at space points far from the source to be inaccurate. Thus, in general, discrete ordinates would not be the method of choice to solve such problems. It is better suited for calculating problems with significant scattering. The Monte Carlo method is suited to localized source problems, particularly if the amount of collisional interaction is minimal. However, if there are many scattering collisions and the flux at all space points is desired, then the Monte Carlo method becomes expensive. To take advantage of the attributes of both approaches, we have devised a first collision source method to combine the Monte Carlo and discrete-ordinates solutions. That is, particles are tracked from the source to their first scattering collision and tallied to produce a source for the discrete-ordinates calculation. A scattered flux is then computed by discrete ordinates, and the total flux is the sum of the Monte Carlo and discrete-ordinates calculated fluxes. In this paper, we present calculational results using the MCNP and TWODANT codes for selected two-dimensional problems that show the effectiveness of this method.
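
A minimal 1D sketch of the first-collision tally step, assuming a point isotropic source on the left face of a homogeneous slab (geometry, cross sections, and the cell tally are illustrative; the paper used MCNP and TWODANT for this step and the subsequent sweep):

```python
import math
import random

def first_collision_source(n_particles, sigma_t, sigma_s, slab_length, n_cells, rng):
    """Track particles from a point isotropic source on the left face of a 1D
    slab to their first collision, and tally a per-cell scattering source for
    a downstream discrete-ordinates sweep."""
    dx = slab_length / n_cells
    source = [0.0] * n_cells
    for _ in range(n_particles):
        mu = rng.uniform(-1.0, 1.0)                      # isotropic direction cosine
        if mu <= 0.0:
            continue                                     # leaves through the left face
        path = -math.log(1.0 - rng.random()) / sigma_t   # flight distance (mfp sampling)
        x = path * mu                                    # depth of the first collision
        if x < slab_length:
            # Deposit the expected scattered weight in the collision cell.
            source[int(x / dx)] += (sigma_s / sigma_t) / n_particles
    return source

rng = random.Random(42)
q = first_collision_source(100_000, sigma_t=1.0, sigma_s=0.5, slab_length=5.0,
                           n_cells=10, rng=rng)
print([round(v, 4) for v in q[:3]])   # the tallied source falls off steeply with depth
```

The discrete-ordinates code then solves for the scattered flux driven by this distributed source, which is smooth enough to avoid ray effects, and the uncollided flux is added back analytically or from the Monte Carlo tally.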

  1. Use of single scatter electron monte carlo transport for medical radiation sciences

    DOE Patents [OSTI]

    Svatos, Michelle M.

    2001-01-01

    The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.

  2. 3D Direct Simulation Monte Carlo Code Which Solves for Geometries

    Energy Science and Technology Software Center (OSTI)

    1998-01-13

    Pegasus is a 3D Direct Simulation Monte Carlo Code which solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.

  3. Self-evolving atomistic kinetic Monte Carlo simulations of defects in materials

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Xu, Haixuan; Beland, Laurent K.; Stoller, Roger E.; Osetskiy, Yury N.

    2015-01-29

    The recent development of on-the-fly atomistic kinetic Monte Carlo methods has led to an increased amount of attention on the methods and their corresponding capabilities and applications. In this review, the framework and current status of Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC) are discussed. SEAKMC particularly focuses on defect interaction and evolution with atomistic details without assuming potential defect migration/interaction mechanisms and energies. The strengths and limitations of using an active volume, the key concept introduced in SEAKMC, are discussed. Potential criteria for characterizing an active volume are discussed and the influence of active volume size on saddle point energies is illustrated. A procedure starting with a small active volume followed by larger active volumes was found to possess higher efficiency. Applications of SEAKMC, ranging from point defect diffusion, to complex interstitial cluster evolution, to helium interaction with tungsten surfaces, are summarized. A comparison of SEAKMC with molecular dynamics and conventional object kinetic Monte Carlo is demonstrated. Overall, SEAKMC is found to be complementary to conventional molecular dynamics, especially when the harmonic approximation of transition state theory is accurate. However, it is capable of reaching longer time scales than molecular dynamics, and it can be used to systematically increase the accuracy of other methods such as object kinetic Monte Carlo. Furthermore, the challenges and potential development directions are also outlined.

  4. The effects of mapping CT images to Monte Carlo materials on GEANT4 proton simulation accuracy

    SciTech Connect (OSTI)

    Barnes, Samuel; McAuley, Grant; Slater, James; Wroe, Andrew

    2013-04-15

    Purpose: Monte Carlo simulations of radiation therapy require conversion from Hounsfield units (HU) in CT images to an exact tissue composition and density. The number of discrete densities (or density bins) used in this mapping affects the simulation accuracy, execution time, and memory usage in GEANT4 and other Monte Carlo codes. The relationship between the number of density bins and CT noise was examined in general for all simulations that use HU conversion to density. Additionally, the effect of this on simulation accuracy was examined for proton radiation. Methods: Relative uncertainty from CT noise was compared with uncertainty from density binning to determine an upper limit on the number of density bins required in the presence of CT noise. Error propagation analysis was also performed on continuously slowing down approximation range calculations to determine the proton range uncertainty caused by density binning. These results were verified with Monte Carlo simulations. Results: In the presence of even modest CT noise (5 HU or 0.5%), 450 density bins were found to cause only a 5% increase in the density uncertainty (i.e., 95% of density uncertainty from CT noise, 5% from binning). Larger numbers of density bins are not required, as CT noise will prevent increased density accuracy; this applies across all types of Monte Carlo simulations. Examining uncertainty in proton range, only 127 density bins are required for a proton range error of <0.1 mm in most tissue and <0.5 mm in low density tissue (e.g., lung). Conclusions: By considering CT noise and actual range uncertainty, the number of required density bins can be restricted to a very modest 127, depending on the application. Reducing the number of density bins provides large memory and execution time savings in GEANT4 and other Monte Carlo packages.
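
The HU-to-binned-density mapping under discussion can be sketched as below; the linear HU-to-density ramp is a deliberate simplification for illustration, not a clinical calibration curve, and the bin layout is an assumption rather than the paper's exact scheme:

```python
import numpy as np

def hu_to_binned_density(hu, n_bins=127, hu_min=-1000.0, hu_max=2000.0):
    """Map Hounsfield units to mass density (g/cm^3) with a simple linear
    ramp, then quantize into n_bins discrete densities the way a Monte
    Carlo material table would."""
    hu = np.clip(np.asarray(hu, dtype=float), hu_min, hu_max)
    density = 1.0 + hu / 1000.0                  # crude ramp: air ~ 0.0, water ~ 1.0
    lo, hi = 1.0 + hu_min / 1000.0, 1.0 + hu_max / 1000.0
    edges = np.linspace(lo, hi, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])     # one representative density per bin
    idx = np.clip(np.digitize(density, edges) - 1, 0, n_bins - 1)
    return centers[idx]

# With 127 bins over a 3.0 g/cm^3 range, the quantization step is ~0.024,
# so the worst-case binning error is about 0.012 g/cm^3 (half a bin).
binned = hu_to_binned_density([-1000, 0, 40, 1000])
exact = 1.0 + np.array([-1000, 0, 40, 1000]) / 1000.0
print(np.max(np.abs(binned - exact)) <= 3.0 / 127 / 2 + 1e-12)  # True
```

The abstract's point is that once this half-bin error is small relative to the density uncertainty already implied by CT noise, adding more bins buys nothing while costing memory and lookup time.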

  5. SU-E-T-224: Is Monte Carlo Dose Calculation Method Necessary for Cyberknife Brain Treatment Planning?

    SciTech Connect (OSTI)

    Wang, L; Fourkal, E; Hayes, S; Jin, L; Ma, C

    2014-06-01

    Purpose: To study the dosimetric difference resulting from using the pencil beam algorithm instead of Monte Carlo (MC) methods for tumors adjacent to the skull. Methods: We retrospectively calculated the dosimetric differences between the ray-tracing (RT) and MC algorithms for brain tumors treated with CyberKnife located adjacent to the skull for 18 patients (27 tumors in total). The median tumor size was 0.53 cc (range 0.018 cc to 26.2 cc). The absolute mean distance from the tumor to the skull was 2.11 mm (range −17.0 mm to 9.2 mm). The dosimetric variables examined include the mean, maximum, and minimum doses to the target, the target coverage (TC), and the conformality index. The MC calculation used the same MUs as the RT dose calculation without further normalization, with 1% statistical uncertainty. The differences were analyzed by tumor size and distance from the skull. Results: The TC was generally reduced with the MC calculation (24 out of 27 cases). The average difference in TC between RT and MC was 3.3% (range 0.0% to 23.5%). When the TC was deemed unacceptable, the plans were re-normalized in order to increase the TC to 99%. This resulted in a 6.9% maximum change in the prescription isodose line. The maximum changes in the mean, maximum, and minimum doses were 5.4%, 7.7%, and 8.4%, respectively, before re-normalization. When the TC was analyzed with regard to target size, it was found that the worst coverage occurred with the smallest targets (0.018 cc). When the TC was analyzed with regard to the distance to the skull, there was no correlation between proximity to the skull and the TC difference between the RT and MC plans. Conclusions: For smaller targets (< 4.0 cc), MC should be used to re-evaluate the dose coverage after RT is used for the initial dose calculation, in order to ensure target coverage.

  6. TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma

    SciTech Connect (OSTI)

    Sisniega, A; Zbijewski, W; Stayman, J; Yorkston, J; Aygun, N; Koliatsos, V; Siewerdsen, J

    2014-06-15

    Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate uniformity improvement of 18 HU and contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to "oracle" constant fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT dedicated to brain

  7. SU-E-T-585: Commissioning of Electron Monte Carlo in Eclipse Treatment Planning System for TrueBeam

    SciTech Connect (OSTI)

    Yang, X; Lasio, G; Zhou, J; Lin, M; Yi, B; Guerrero, M

    2014-06-01

    Purpose: To commission the electron Monte Carlo (eMC) algorithm in the Eclipse Treatment Planning System (TPS) for TrueBeam Linacs, including the evaluation of dose calculation accuracy for small fields and oblique beams and comparison with the existing eMC model for Clinacs. Methods: Electron beam percent-depth-doses (PDDs) and profiles with and without applicators, as well as output factors, were measured from two Varian TrueBeam machines. Measured data were compared against the Varian TrueBeam Representative Beam Data (VTBRBD). The selected data set was transferred into Eclipse for beam configuration. Dose calculation accuracy from eMC was evaluated for open fields, small cut-out fields, and oblique beams at different incident angles. The TrueBeam data was compared to the existing Clinac data and eMC model to evaluate the differences among Linac types. Results: Our measured data indicated that electron beam PDDs from our TrueBeam machines are well matched to those from our Varian Clinac machines, but in-air profiles, cone factors, and open-field output factors are significantly different. The data from our two TrueBeam machines were well represented by the VTBRBD. Variations of TrueBeam PDDs and profiles were within the 2%/2 mm criteria for all energies, and the output factors for fields with and without applicators all agree within 2%. Obliquity factors for two clinically relevant applicator sizes (10 × 10 and 15 × 15 cm²) and three oblique angles (15, 30, and 45 degrees) were measured for the nominal R100, R90, and R80 of each electron beam energy. Comparisons of calculations using eMC of obliquity factors and cut-out factors versus measurements will be presented. Conclusion: The eMC algorithm in the Eclipse TPS can be configured using the VTBRBD. Significant differences between TrueBeam and Clinacs were found in in-air profiles and open field output factors. The accuracy of the eMC algorithm was evaluated for a wide range of cut-out factors and oblique incidence.

  8. A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

    SciTech Connect (OSTI)

    Keady, K P; Brantley, P

    2010-03-04

    Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model
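
The implicit capture (survival biasing) technique described above can be sketched for a 1D slab as follows; geometry, cross sections, and the roulette threshold are illustrative choices, not values from the paper:

```python
import math
import random

def transmission_implicit_capture(n_hist, sigma_t, sigma_s, thickness, rng,
                                  w_cutoff=0.1):
    """Estimate transmission through a 1D slab using implicit capture: at each
    collision the history always survives and its weight is multiplied by the
    scattering probability sigma_s / sigma_t. Russian roulette terminates
    low-weight histories without biasing the mean."""
    transmitted = 0.0
    for _ in range(n_hist):
        x, mu, w = 0.0, 1.0, 1.0                  # normally incident, unit weight
        while True:
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if x >= thickness:
                transmitted += w                  # scored with current weight
                break
            if x < 0.0:
                break                             # escaped backwards
            w *= sigma_s / sigma_t                # implicit capture, no absorption kill
            if w < w_cutoff:                      # Russian roulette
                if rng.random() < 0.5:
                    break
                w *= 2.0
            mu = rng.uniform(-1.0, 1.0)           # isotropic scatter
    return transmitted / n_hist

rng = random.Random(7)
t = transmission_implicit_capture(50_000, sigma_t=1.0, sigma_s=0.3,
                                  thickness=2.0, rng=rng)
print(round(t, 3))   # slightly above the uncollided exp(-2) ~ 0.135
```

Because histories are never lost to analog absorption, more of them penetrate deep regions, which is the efficiency gain the abstract refers to; the hybrid weight-window approach extends the same idea using a deterministic flux estimate.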

  9. Coupled molecular dynamics-Monte Carlo model to study the role of chemical processes during laser ablation of polymeric materials

    SciTech Connect (OSTI)

    Prasad, Manish; Conforti, Patrick F.; Garrison, Barbara J.

    2007-08-28

    The coarse-grained chemical reaction model is enhanced to build a molecular dynamics (MD) simulation framework with an embedded Monte Carlo (MC) based reaction scheme. The MC scheme utilizes predetermined reaction chemistry, energetics, and rate kinetics of materials to incorporate chemical reactions occurring in a substrate into the MD simulation. The kinetics information is utilized to set the probabilities for the types of reactions to perform based on radical survival times and reaction rates. Implementing a reaction involves changing the reactant species types, which alters their interaction potentials and thus produces the required energy change. We discuss the application of this method to study the initiation of ultraviolet laser ablation in poly(methyl methacrylate). The use of this scheme enables the modeling of all possible photoexcitation pathways in the polymer. It also permits a direct study of the role of thermal, mechanical, and chemical processes that can set off ablation. We demonstrate that the roles of laser-induced heating, thermomechanical stresses, pressure wave formation and relaxation, and thermochemical decomposition of the polymer substrate can be investigated directly by suitably choosing the potential energy and chemical reaction energy landscape. The results highlight the usefulness of such a modeling approach by showing that the various processes in polymer ablation are intricately linked, leading to the transformation of the substrate and its ejection. The method, in principle, can be utilized to study systems where chemical reactions are expected to play a dominant role or interact strongly with other physical processes.
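
    The rate-based selection of which reaction to perform can be sketched with a standard Gillespie-style Monte Carlo step. The reaction names and rate constants below are hypothetical placeholders, not the paper's chemistry tables:

    ```python
    import random
    import math

    random.seed(2)

    # Hypothetical reaction channels and rate constants (1/s); the real model
    # takes these from predetermined reaction chemistry and kinetics data.
    rates = {"chain_scission": 5.0, "radical_recombination": 3.0, "H_abstraction": 2.0}

    def kmc_step(rates):
        """Pick one reaction with probability proportional to its rate and
        return (reaction, time increment), as in the Gillespie algorithm."""
        k_tot = sum(rates.values())
        r = random.random() * k_tot
        acc = 0.0
        for name, k in rates.items():
            acc += k
            if r < acc:
                chosen = name
                break
        dt = -math.log(random.random()) / k_tot   # exponential waiting time
        return chosen, dt

    counts = {name: 0 for name in rates}
    for _ in range(10000):
        name, _ = kmc_step(rates)
        counts[name] += 1
    print(counts)
    ```

    Over many steps the channels are visited in proportion to their rates, which is how the embedded MC scheme biases which chemistry fires inside the MD run.
    
    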

  10. Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods

    SciTech Connect (OSTI)

    Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.

    2014-03-15

    Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multi leaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and treatment with a bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single segment electron treatment plan in terms of organs at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V{sub 95%} increased from 90% to 96% and V{sub 107%} decreased from 8% to nearly 0%. For a clinical breast boost irradiation, the MERT plan

  11. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    SciTech Connect (OSTI)

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-09-15

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO{sub 2})]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten's kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO{sub 2}), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO{sub 2} were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO{sub 2} distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. 
For larger tumors, the oxygen distributions became truncated in the
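
    The trilinear interpolation used to sample oxygen tension from the precomputed dataset can be sketched generically (this is not the authors' code; the toy grid of values is illustrative):

    ```python
    def trilinear(grid, x, y, z):
        """Trilinearly interpolate grid[i][j][k] values at a fractional
        position (x, y, z); here 0 <= x, y, z < 1 so the sample point lies
        inside the single cell of a 2x2x2 grid."""
        i, j, k = int(x), int(y), int(z)
        dx, dy, dz = x - i, y - j, z - k
        value = 0.0
        for di in (0, 1):
            for dj in (0, 1):
                for dk in (0, 1):
                    w = ((dx if di else 1 - dx) *
                         (dy if dj else 1 - dy) *
                         (dz if dk else 1 - dz))
                    value += w * grid[i + di][j + dj][k + dk]
        return value

    # Toy dataset: value = x + y + z at the cell corners (linear, so the
    # interpolation is exact everywhere in the cell)
    grid = [[[i + j + k for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
    center = trilinear(grid, 0.5, 0.5, 0.5)
    print(center)   # 1.5
    ```

    Randomly sampling (x, y, z) and interpolating in the simulated dataset is what lets the variable models draw oxygen tensions between tabulated conditions.
    
    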

  12. SU-E-T-557: Monte Carlo Modeling of Philips RT-250 Orthovoltage Unit for Beam Spectrum Modulation

    SciTech Connect (OSTI)

    Reynoso, F; Cho, S

    2015-06-15

    Purpose: To develop and validate a Monte Carlo (MC) model of a Philips RT-250 orthovoltage unit to test various beam spectrum modulation strategies for in vitro/vivo studies. A model of this type would enable the production of unconventional beams from a typical orthovoltage unit for novel therapeutic applications such as gold nanoparticle-aided radiotherapy. Methods: The MCNP5 code system was used to create a MC model of the head of RT-250 and a 30 × 30 × 30 cm{sup 3} water phantom. For the x-ray machine head, the current model includes the vacuum region, beryllium window, collimators, inherent filters and exterior steel housing. For increased computational efficiency, the primary x-ray spectrum from the target was calculated from a well-validated analytical software package. Calculated percentage-depth-dose (PDD) values and photon spectra were validated against experimental data from film and Compton-scatter spectrum measurements. Results: The model was validated for three common settings of the machine, namely, 250 kVp (0.25 mm Cu), 125 kVp (2 mm Al), and 75 kVp (2 mm Al). The MC results for the PDD curves were compared with film measurements and showed good agreement for all depths, with a maximum difference of 4% around d{sub max} and under 2.5% for all other depths. The primary photon spectra were also measured and compared with the MC results, showing reasonable agreement between the two and validating the input spectra and the final spectra as predicted by the current MC model. Conclusion: The current MC model accurately predicted the dosimetric and spectral characteristics of each beam from the RT-250 orthovoltage unit, demonstrating its applicability and reliability for beam spectrum modulation tasks. It accomplished this without the need to model the bremsstrahlung x-ray production from the target, while improving computational efficiency by at least two orders of magnitude. Supported by DOD/PCRP grant W81XWH-12-1-0198.

  13. “Full Model” Nuclear Data and Covariance Evaluation Process Using TALYS, Total Monte Carlo and Backward-forward Monte Carlo

    SciTech Connect (OSTI)

    Bauge, E.

    2015-01-15

    The “Full model” evaluation process, which is used in CEA DAM DIF to evaluate nuclear data in the continuum region, makes extended use of the nuclear models implemented in the TALYS code to account for experimental data (both differential and integral) by varying the parameters of these models until a satisfactory description of the experimental data is reached. For the evaluation of the covariance data associated with this evaluated data, the Backward-forward Monte Carlo (BFMC) method was devised in such a way that it mirrors the process of the “Full model” evaluation method. When coupled with the Total Monte Carlo (TMC) method via the T6 system developed by NRG Petten, the BFMC method makes it possible to use integral experiments to constrain the distribution of model parameters, and hence the distribution of derived observables and their covariance matrix. Together, TALYS, TMC, BFMC, and T6 constitute a powerful integrated tool for nuclear data evaluation that yields the nuclear data and the associated covariance matrix all at once, making good use of all the available experimental information to drive the distribution of the model parameters and the derived observables.
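
    A minimal sketch of the backward-forward idea: sample model parameters, weight each sample by a generalized chi-square against the data (weights of the form exp(-χ²/χ²_min) are assumed here), then propagate the weighted distribution to means and a covariance matrix. The toy linear model and "experimental" points are invented for illustration:

    ```python
    import random
    import math

    random.seed(9)

    def chi2(params, data):
        """Hypothetical chi-square of toy model predictions y = a*x + b
        against the observations."""
        a, b = params
        return sum((a * x + b - y) ** 2 for x, y in data)

    data = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2)]   # invented 'experimental' points

    # Backward step: sample parameters and weight by the generalized chi-square.
    samples = [(random.gauss(2.0, 0.5), random.gauss(1.0, 0.5)) for _ in range(5000)]
    chis = [chi2(p, data) for p in samples]
    chi_min = min(chis)
    weights = [math.exp(-c / chi_min) for c in chis]

    # Forward step: weighted mean and covariance of the parameter distribution.
    wsum = sum(weights)
    mean = [sum(w * p[i] for w, p in zip(weights, samples)) / wsum for i in range(2)]
    cov = [[sum(w * (p[i] - mean[i]) * (p[j] - mean[j])
                for w, p in zip(weights, samples)) / wsum
            for j in range(2)] for i in range(2)]
    print(mean, cov)
    ```

    The same weighted moments, taken over derived observables instead of raw parameters, are what give the evaluated data and its covariance matrix "all at once".
    
    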

  14. Kinetic Activation-Relaxation Technique and Self-Evolving Atomistic Kinetic Monte Carlo: Comparison of on-the-fly kinetic Monte Carlo algorithms

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Beland, Laurent Karim; Osetskiy, Yury N.; Stoller, Roger E.; Xu, Haixuan

    2015-02-07

    Here, we present a comparison of the Kinetic Activation–Relaxation Technique (k-ART) and the Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC), two off-lattice, on-the-fly Kinetic Monte Carlo (KMC) techniques that were recently used to solve several materials science problems. We show that if the initial displacements are localized, the dimer method and the Activation–Relaxation Technique nouveau provide similar performance. We also show that k-ART and SEAKMC, although based on different approximations, are in agreement with each other, as demonstrated by the examples of 50 vacancies in a 1950-atom Fe box and of interstitial loops in 16,000-atom boxes. Generally speaking, k-ART's treatment of geometry and flickers is more flexible (it can handle amorphous systems, for example) and more rigorous than SEAKMC's, while the latter's concept of active volumes permits a significant speedup of simulations for the systems under consideration and therefore allows investigations of processes requiring large systems that are not accessible without localizing the calculations.

  15. Molecular dynamics and Monte Carlo simulations resolve apparent diffusion rate differences for proteins confined in nanochannels

    SciTech Connect (OSTI)

    Tringe, J. W.; Ileri, N.; Levie, H. W.; Stroeve, P.; Ustach, V.; Faller, R.; Renaud, P.

    2015-08-01

    We use Molecular Dynamics and Monte Carlo simulations to examine molecular transport phenomena in nanochannels, explaining four orders of magnitude difference in wheat germ agglutinin (WGA) protein diffusion rates observed by fluorescence correlation spectroscopy (FCS) and by direct imaging of fluorescently-labeled proteins. We first use the ESPResSo Molecular Dynamics code to estimate the surface transport distance for neutral and charged proteins. We then employ a Monte Carlo model to calculate the paths of protein molecules on surfaces and in the bulk liquid transport medium. Our results show that the transport characteristics depend strongly on the degree of molecular surface coverage. Atomic force microscope characterization of surfaces exposed to WGA proteins for 1000 s show large protein aggregates consistent with the predicted coverage. These calculations and experiments provide useful insight into the details of molecular motion in confined geometries.
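
    The strong dependence of apparent transport rates on surface interactions can be caricatured with a one-dimensional random walk whose walkers pause while adsorbed. The sticking probability below is an arbitrary illustrative value, not a fitted parameter from this work:

    ```python
    import random

    random.seed(10)

    def walk_msd(p_stick, n_walkers=2000, n_steps=500):
        """Mean-squared displacement of 1D random walkers that pause with
        probability p_stick per step (a crude surface-adsorption model)."""
        msd = 0.0
        for _ in range(n_walkers):
            x = 0
            for _ in range(n_steps):
                if random.random() < p_stick:
                    continue                     # adsorbed this step: no motion
                x += 1 if random.random() < 0.5 else -1
            msd += x * x
        return msd / n_walkers

    free = walk_msd(0.0)     # no surface interaction
    bound = walk_msd(0.9)    # strongly interacting walkers
    ratio = free / bound
    print(ratio)             # apparent diffusion-rate ratio
    ```

    Even this toy model shows how intermittent surface residence depresses the apparent diffusion coefficient, the mechanism the abstract invokes to reconcile the FCS and direct-imaging measurements.
    
    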

  16. Molecular dynamics and Monte Carlo simulations resolve apparent diffusion rate differences for proteins confined in nanochannels

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Tringe, J. W.; Ileri, N.; Levie, H. W.; Stroeve, P.; Ustach, V.; Faller, R.; Renaud, P.

    2015-08-01

    We use Molecular Dynamics and Monte Carlo simulations to examine molecular transport phenomena in nanochannels, explaining four orders of magnitude difference in wheat germ agglutinin (WGA) protein diffusion rates observed by fluorescence correlation spectroscopy (FCS) and by direct imaging of fluorescently-labeled proteins. We first use the ESPResSo Molecular Dynamics code to estimate the surface transport distance for neutral and charged proteins. We then employ a Monte Carlo model to calculate the paths of protein molecules on surfaces and in the bulk liquid transport medium. Our results show that the transport characteristics depend strongly on the degree of molecular surface coverage. Atomic force microscope characterization of surfaces exposed to WGA proteins for 1000 s show large protein aggregates consistent with the predicted coverage. These calculations and experiments provide useful insight into the details of molecular motion in confined geometries.

  17. Residual Monte Carlo high-order solver for Moment-Based Accelerated Thermal Radiative Transfer equations

    SciTech Connect (OSTI)

    Willert, Jeffrey; Park, H.

    2014-11-01

    In this article we explore the possibility of replacing Standard Monte Carlo (SMC) transport sweeps within a Moment-Based Accelerated Thermal Radiative Transfer (TRT) algorithm with a Residual Monte Carlo (RMC) formulation. Previous Moment-Based Accelerated TRT implementations have encountered trouble when stochastic noise from SMC transport sweeps accumulates over several iterations and pollutes the low-order system. With RMC we aim to significantly reduce the build-up of statistical error at much lower cost. First, we display encouraging results for a zero-dimensional test problem. Then, we demonstrate that we can achieve a lower degree of error in two one-dimensional test problems by employing an RMC transport sweep with multiple orders of magnitude fewer particles per sweep. We find that by reformulating the high-order problem, we can compute more accurate solutions at a fraction of the cost.
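
    The residual idea can be sketched on a plain linear system x = Hx + q: a Monte Carlo Neumann-series solver is applied once to q, and then only to the (small) residual of the first estimate, so the statistical noise of the correction scales with the residual rather than with the full solution. The 2×2 system below is invented for illustration:

    ```python
    import random

    random.seed(4)

    H = [[0.3, 0.2], [0.1, 0.4]]   # toy operator with spectral radius < 1
    q = [1.0, 1.0]                  # source; exact solution of x = Hx + q is [2, 2]

    def mc_solve(rhs, n_walks=3000, p_stop=0.3):
        """Unbiased Monte Carlo Neumann-series estimate of x = H x + rhs,
        using uniform transitions with importance-weight corrections."""
        n = len(rhs)
        x = []
        for i in range(n):
            total = 0.0
            for _ in range(n_walks):
                state, weight, contrib = i, 1.0, rhs[i]
                while random.random() > p_stop:        # continue the walk
                    nxt = random.randrange(n)
                    weight *= H[state][nxt] * n / (1.0 - p_stop)
                    state = nxt
                    contrib += weight * rhs[state]
                total += contrib
            x.append(total / n_walks)
        return x

    x0 = mc_solve(q)                                   # standard (noisy) MC estimate
    r = [q[i] + sum(H[i][j] * x0[j] for j in range(2)) - x0[i] for i in range(2)]
    dx = mc_solve(r)                                   # noise now scales with |r|
    x1 = [x0[i] + dx[i] for i in range(2)]             # residual-corrected solution
    print(x1)
    ```

    Because the second solve only has to resolve the residual, the same number of walks buys a much smaller absolute error, which is the advantage the abstract claims for RMC sweeps.
    
    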

  18. Numerical thermalization in particle-in-cell simulations with Monte-Carlo collisions

    SciTech Connect (OSTI)

    Lai, P. Y.; Lin, T. Y.; Lin-Liu, Y. R.; Chen, S. H.

    2014-12-15

    Numerical thermalization in collisional one-dimensional (1D) electrostatic (ES) particle-in-cell (PIC) simulations was investigated. Two collision models, the pitch-angle scattering of electrons by the stationary ion background and large-angle collisions between the electrons and the neutral background, were included in the PIC simulation using Monte-Carlo methods. The numerical results show that the thermalization times in both models were considerably reduced by the additional Monte-Carlo collisions, as demonstrated by comparisons with Turner's previous simulation results based on a head-on collision model [M. M. Turner, Phys. Plasmas 13, 033506 (2006)]. However, the breakdown of Dawson's scaling law in the collisional 1D ES PIC simulation is more complicated than that observed by Turner, and a revised scaling law for the numerical thermalization time with respect to the numerical parameters is derived on the basis of the simulation results obtained in this study.

  19. Self-consistent inclusion of classical large-angle Coulomb collisions in plasma Monte Carlo simulations

    SciTech Connect (OSTI)

    Turrell, A.E.; Sherlock, M.; Rose, S.J.

    2015-10-15

    Large-angle Coulomb collisions allow for the exchange of a significant proportion of the energy of a particle in a single collision, but are not included in models of plasmas based on fluids, the Vlasov–Fokker–Planck equation, or currently available plasma Monte Carlo techniques. Their unique effects include the creation of fast ‘knock-on’ ions, which may be more likely to undergo certain reactions, and distortions to ion distribution functions relative to what is predicted by small-angle collision only theories. We present a computational method which uses Monte Carlo techniques to include the effects of large-angle Coulomb collisions in plasmas and which self-consistently evolves distribution functions according to the creation of knock-on ions of any generation. The method is used to demonstrate ion distribution function distortions in an inertial confinement fusion (ICF) relevant scenario of the slowing of fusion products.
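
    Sampling large-angle events from a Rutherford-like angular distribution can be sketched as follows. The cutoff separating large-angle from small-angle treatment and the 10% knock-on threshold are illustrative choices, not values from the paper:

    ```python
    import random

    random.seed(5)

    T_MIN = 1e-4   # lower cutoff on t = sin^2(theta_cm/2); below it, collisions
                   # would be handled by a small-angle (Fokker-Planck) treatment

    def energy_transfer_fraction():
        """Sample T/T_max = sin^2(theta_cm/2) from the Rutherford form
        dsigma proportional to dt / t^2, restricted to T_MIN < t <= 1
        (large-angle collisions only), by inverting the CDF."""
        u = random.random()
        return T_MIN / (1.0 - u * (1.0 - T_MIN))

    samples = [energy_transfer_fraction() for _ in range(100000)]
    knock_on_rate = sum(1 for f in samples if f > 0.1) / len(samples)
    print(knock_on_rate)
    ```

    Most sampled collisions transfer little energy, but a small tail transfers a large fraction in a single event; those tail events are the source of the fast "knock-on" ions described above.
    
    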

  20. Rotational fluctuation of molecules in quantum clusters. I. Path integral hybrid Monte Carlo algorithm

    SciTech Connect (OSTI)

    Miura, Shinichi [Institute for Molecular Science, 38 Myodaiji, Okazaki 444-8585 (Japan)

    2007-03-21

    In this paper, we present a path integral hybrid Monte Carlo (PIHMC) method for rotating molecules in quantum fluids. This is an extension of our PIHMC for correlated Bose fluids [S. Miura and J. Tanaka, J. Chem. Phys. 120, 2160 (2004)] to handle the molecular rotation quantum mechanically. A novel technique, referred to as an effective potential of quantum rotation, is introduced to incorporate the rotational degree of freedom into the path integral molecular dynamics or hybrid Monte Carlo algorithm. For a permutation move to satisfy Bose statistics, we devise a multilevel Metropolis method combined with a configurational-bias technique for efficiently sampling the permutation and the associated atomic coordinates. We then applied the PIHMC to a helium-4 cluster doped with a carbonyl sulfide molecule. The effects of the quantum rotation on the solvation structure and energetics were examined. Translational and rotational fluctuations of the dopant in the superfluid cluster were also analyzed.
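
    A minimal path-integral Monte Carlo sketch, using plain Metropolis updates on a discretized imaginary-time ring polymer for a 1D harmonic oscillator. This omits the hybrid MD moves, molecular rotation, and permutation sampling of the paper; all parameters are illustrative (hbar = m = omega = 1):

    ```python
    import random
    import math

    random.seed(6)

    N, BETA = 32, 8.0        # imaginary-time slices and inverse temperature
    DTAU = BETA / N
    path = [0.0] * N         # periodic ring polymer: path[N] wraps to path[0]

    def delta_action(i, x_new):
        """Change in the discretized Euclidean action from updating slice i."""
        xp, xm, x = path[(i + 1) % N], path[(i - 1) % N], path[i]
        kin = lambda a: ((a - xp) ** 2 + (a - xm) ** 2) / (2.0 * DTAU)
        pot = lambda a: 0.5 * DTAU * a * a
        return kin(x_new) + pot(x_new) - kin(x) - pot(x)

    x2_sum, n_meas = 0.0, 0
    for sweep in range(6000):
        for i in range(N):
            x_new = path[i] + random.uniform(-0.8, 0.8)
            dS = delta_action(i, x_new)
            if dS < 0.0 or random.random() < math.exp(-dS):
                path[i] = x_new                      # Metropolis accept
        if sweep >= 2000:                            # measure after equilibration
            x2_sum += sum(x * x for x in path) / N
            n_meas += 1

    x2 = x2_sum / n_meas   # should approach <x^2> = 0.5*coth(beta/2), about 0.5
    print(x2)
    ```

    The hybrid Monte Carlo of the paper replaces the single-slice Metropolis moves with collective molecular-dynamics trajectories, but the underlying discretized action being sampled is of this form.
    
    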

  1. Posters Monte Carlo Simulation of Longwave Fluxes Through Broken Scattering Cloud Fields

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Posters Monte Carlo Simulation of Longwave Fluxes Through Broken Scattering Cloud Fields E. E. Takara and R. G. Ellingson, University of Maryland, College Park, Maryland. To simplify the analysis, we made several assumptions: the clouds were cuboidal; they were all identically sized and shaped; and they had constant optical properties. Results and Discussion: The model was run for a set of cloud fields with clouds of varying optical thickness and scattering albedo. The predicted effective cloud

  2. Particle-In-Cell/Monte Carlo Simulation of Ion Back Bombardment in Photoinjectors

    SciTech Connect (OSTI)

    Qiang, Ji; Corlett, John; Staples, John

    2009-03-02

    In this paper, we report on studies of ion back bombardment in high-average-current dc and rf photoinjectors using a particle-in-cell/Monte Carlo method. Using the H{sub 2} ion as an example, we observed that the ion density and energy deposition on the photocathode in rf guns are an order of magnitude lower than those in a dc gun. A higher rf frequency helps mitigate the ion back bombardment of the cathode in rf guns.

  3. Fullrmc, A Rigid Body Reverse Monte Carlo Modeling Package Enabled With

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Machine Learning And Artificial Intelligence - Joint Center for Energy Storage Research, January 22, 2016, Research Highlights. Fullrmc, A Rigid Body Reverse Monte Carlo Modeling Package Enabled With Machine Learning And Artificial Intelligence. Liquid sulfur: Sx≤8 molecules recognized and built upon modelling. Scientific Achievement: A novel approach to reverse modelling of atomic and molecular systems from a set of experimental data and constraints. New fitting concepts such as 'Group',

  4. Application of Diffusion Monte Carlo to Materials Dominated by van der Waals Interactions

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Benali, Anouar; Shulenburger, Luke; Romero, Nichols A.; Kim, Jeongnim; von Lilienfeld, O. Anatole

    2014-06-12

    Van der Waals forces are notoriously difficult to account for from first principles. We perform extensive calculations to assess the usefulness and validity of diffusion quantum Monte Carlo when applied to van der Waals forces. We present results for noble gas solids and clusters, archetypal van der Waals-dominated assemblies, as well as a relevant pi-pi stacking supramolecular complex: DNA with the intercalating anti-cancer drug Ellipticine.

  5. The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units

    SciTech Connect (OSTI)

    Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela (School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030)

    2014-02-01

    We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as a container for simulation data stored on the graphics card and as a floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To take full advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. Highlights:
    • We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet.
    • The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation.
    • Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles.
    • The testbed involves a polymeric system of oligopyrroles in the condensed phase.
    • The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.

  6. Non-Covalent Bonding in Complex Molecular Systems with Quantum Monte Carlo

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    | Argonne Leadership Computing Facility A snapshot of a liquid water simulation performed with DFT, and a box including 64 molecules. By performing benchmark QMC calculations on snapshots of this type, researchers are able to ascertain DFT errors. Dario Alfè, University College London Non-Covalent Bonding in Complex Molecular Systems with Quantum Monte Carlo PI Name: Dario Alfe PI Email: d.alfe@ucl.ac.uk Institution: University College London Allocation Program: INCITE Allocation Hours at ALCF: 56 Million Year: 2014 Research Domain: Materials

  7. Non-covalent Bonding in Complex Molecular Systems with Quantum Monte Carlo

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    | Argonne Leadership Computing Facility DFT A snapshot of a liquid water simulation performed with DFT, and a box including 64 molecules. By performing benchmark QMC calculations on snapshots of this type, researchers are able to ascertain DFT errors. Credit: Dario Alfè, University College London Non-covalent Bonding in Complex Molecular Systems with Quantum Monte Carlo PI Name: Dario Alfè PI Email: d.alfe@ucl.ac.uk Institution: University College London Allocation Program: INCITE

  8. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    SciTech Connect (OSTI)

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
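
    The weight-window mechanics that CADIS/FW-CADIS automate can be sketched in isolation: particles above the window are split, particles below it play Russian roulette, and in-window particles pass through unchanged. The window bounds and particle weights below are illustrative:

    ```python
    import random

    random.seed(7)

    def apply_weight_window(weight, w_low, w_high):
        """Return the surviving particle weights after applying a weight
        window: split heavy particles, roulette light ones. Both operations
        preserve total weight in expectation."""
        if weight > w_high:                       # split into n lighter copies
            n = int(weight / w_high) + 1
            return [weight / n] * n
        if weight < w_low:                        # roulette: kill or promote
            w_mid = 0.5 * (w_low + w_high)
            return [w_mid] if random.random() < weight / w_mid else []
        return [weight]                           # already inside the window

    bank = []
    for w in (5.0, 0.01, 0.5):
        bank.extend(apply_weight_window(w, 0.1, 1.0))
    print(bank)
    ```

    The deterministic adjoint calculation in CADIS chooses the (w_low, w_high) pair per mesh cell and energy bin; the coarsening algorithm above reduces how many such pairs must be stored.
    
    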

  9. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.

  10. Fully Differential Monte-Carlo Generator Dedicated to TMDs and Bessel-Weighted Asymmetries

    SciTech Connect (OSTI)

    Aghasyan, Mher M.; Avakian, Harut A.

    2013-10-01

    We present studies of double longitudinal spin asymmetries in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator, which includes quark intrinsic transverse momentum within the generalized parton model based on the fully differential cross section for the process. Additionally, we apply Bessel-weighting to the simulated events to extract transverse momentum dependent parton distribution functions and also discuss possible uncertainties due to kinematic correlation effects.

  11. Monte Carlo and analytical model predictions of leakage neutron exposures from passively scattered proton therapy

    SciTech Connect (OSTI)

    Pérez-Andújar, Angélica (Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030, United States); Zhang, Rui; Newhauser, Wayne (Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and The University of Texas Graduate School of Biomedical Sciences at Houston, 6767 Bertner Avenue, Houston, Texas 77030, United States)

    2013-12-15

    Purpose: Stray neutron radiation is of concern after radiation therapy, especially in children, because of the high risk it might carry for secondary cancers. Several previous studies predicted the stray neutron exposure from proton therapy, mostly using Monte Carlo simulations. Promising attempts to develop analytical models have also been reported, but these were limited to only a few proton beam energies. The purpose of this study was to develop an analytical model to predict leakage neutron equivalent dose from passively scattered proton beams in the 100-250-MeV interval. Methods: To develop and validate the analytical model, the authors used values of equivalent dose per therapeutic absorbed dose (H/D) predicted with Monte Carlo simulations. The authors also characterized the behavior of the mean neutron radiation-weighting factor, w{sub R}, as a function of depth in a water phantom and distance from the beam central axis. Results: The simulated and analytical predictions agreed well. On average, the percentage difference between the analytical model and the Monte Carlo simulations was 10% for the energies and positions studied. The authors found that w{sub R} was highest at the shallowest depth and decreased with depth until around 10 cm, where it started to increase slowly with depth. This was consistent among all energies. Conclusion: Simple analytical methods are promising alternatives to complex and slow Monte Carlo simulations to predict H/D values. The authors' results also provide improved understanding of the behavior of w{sub R}, which strongly depends on depth, but is nearly independent of lateral distance from the beam central axis.

  12. SU-E-J-144: Low Activity Studies of Carbon 11 Activation Via GATE Monte Carlo

    SciTech Connect (OSTI)

    Elmekawy, A; Ewell, L; Butuceanu, C; Qu, L

    2015-06-15

    Purpose: To investigate the behavior of a Monte Carlo simulation code with low levels of activity (∼1,000 Bq). Such activity levels are expected from phantoms and patients activated via a proton therapy beam. Methods: Three different ranges for a therapeutic proton radiation beam were examined in a Monte Carlo simulation code: 13.5, 17.0 and 21.0 cm. For each range, the decay of an equivalent-length {sup 11}C source, and of additional sources one cm longer and shorter, was studied in a benchmark PET simulation for activities of 1000, 2000 and 3000 Bq. The ranges were chosen to coincide with a previous activation study, and the activities were chosen to coincide with the approximate level of isotope creation expected in a phantom or patient irradiated by a therapeutic proton beam. The GATE 7.0 simulation was completed on a cluster node running Scientific Linux 6 "Carbon" (Red Hat). The resulting Monte Carlo data were investigated with the ROOT (CERN) analysis tool. The half-life of {sup 11}C was extracted via a histogram fit to the number of simulated PET events vs. time. Results: The slope of the deviation of the extracted {sup 11}C half-life from the expected/nominal value vs. activity was, on average, positive. This was unexpected, as the deviation should, in principle, decrease with increased activity and lower statistical uncertainty. Conclusion: For activity levels on the order of 1,000 Bq, the behavior of a benchmark PET test was somewhat unexpected. It is important to be aware of the limitations of low-activity PET images and low-activity Monte Carlo simulations. This work was funded in part by the Philips Corporation.
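The half-life extraction step described in this abstract — fitting the decay of binned event counts — reduces to a straight-line fit of ln(counts) vs. time. A minimal sketch, assuming noise-free synthetic counts (the function names are ours, not GATE's or ROOT's):

```python
import math

T_HALF_C11 = 20.36  # accepted half-life of carbon-11, minutes

def fit_slope(xs, ys):
    """Least-squares slope of y vs. x (stand-in for ROOT's histogram fit)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

def extract_half_life(t, counts):
    """Decay counts follow N(t) = N0 * exp(-ln2 * t / T_half), so the slope
    of ln(counts) vs. t yields the half-life directly."""
    slope = fit_slope(t, [math.log(c) for c in counts])
    return math.log(2) / -slope

# Noise-free synthetic decay curve: ~1 kBq-scale source, 1-minute bins.
t = [float(i) for i in range(60)]
counts = [1000.0 * math.exp(-math.log(2) / T_HALF_C11 * ti) for ti in t]
half_life = extract_half_life(t, counts)   # recovers 20.36
```

With real low-activity data the counts carry Poisson noise, which is exactly where the paper reports the extracted half-life drifting from the nominal value.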

  13. A Monte Carlo neutron transport code for eigenvalue calculations on a dual-GPU system and CUDA environment

    SciTech Connect (OSTI)

    Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)

    2012-07-01

    The Monte Carlo (MC) method can accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to the GPU is usually straightforward due to the 'embarrassingly parallel' nature of MC codes. However, the situation is different for eigenvalue calculations, which proceed on a generation-by-generation basis, so thread coordination must be handled explicitly. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of ~2 on a dual-GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)
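The generation-by-generation structure the abstract refers to can be illustrated with a toy k-eigenvalue iteration (a hedged sketch in plain Python, not the authors' CUDA code; the cross-section probabilities are invented so the analytic answer is k = 1.0):

```python
import random

def run_eigenvalue(n_per_gen=20000, inactive=5, active=15, seed=1):
    """Toy generation-by-generation MC k-eigenvalue iteration for an
    infinite homogeneous medium: every neutron is absorbed, and with
    probability p_fission the absorption is a fission releasing nu
    neutrons.  Analytic answer: k = nu * p_fission = 2 * 0.5 = 1.0."""
    rng = random.Random(seed)
    p_fission, nu = 0.5, 2
    k_sum = 0.0
    for gen in range(inactive + active):
        births = 0
        for _ in range(n_per_gen):          # one thread per particle on a GPU
            if rng.random() < p_fission:
                births += nu                # bank the fission neutrons
        k_gen = births / n_per_gen          # this generation's k estimate
        if gen >= inactive:                 # tally only active generations
            k_sum += k_gen
        # Population control: the next generation restarts from n_per_gen
        # sites resampled from the bank (trivial here by construction).
    return k_sum / active

k_eff = run_eigenvalue()
```

The barrier between generations — all particles of one generation must finish before the next starts — is exactly the thread-coordination issue the paper addresses on the GPU.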

  14. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields.

    SciTech Connect (OSTI)

    Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando; Hernandez-Ortiz, Juan P.; de Pablo, Juan J.

    2015-07-27

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
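The core idea — discretize the order-parameter field, evaluate the free energy by finite differences, and minimize it by stochastic sampling from a random start — can be caricatured in one dimension. This is a sketch with an invented scalar Landau functional, not the full Landau-de Gennes tensor model:

```python
import math, random

def free_energy(q, a=-1.0, c=1.0, kappa=0.5, dx=0.1):
    """Discretized 1D Landau free energy: bulk terms a*q^2 + c*q^4 plus a
    finite-difference gradient (elastic) penalty, standing in for the
    Landau-de Gennes functional of the alignment tensor."""
    f = dx * sum(a * qi ** 2 + c * qi ** 4 for qi in q)
    f += kappa * sum((q[i + 1] - q[i]) ** 2 / dx for i in range(len(q) - 1))
    return f

def anneal(n=20, steps=4000, seed=3):
    """Minimize the functional by stochastic sampling: single-site Metropolis
    moves under a slowly decreasing temperature, starting from a random
    initial configuration (no educated initial guess needed)."""
    rng = random.Random(seed)
    q = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    f_init = f = free_energy(q)
    for step in range(steps):
        temp = 0.5 * (1.0 - step / steps) + 1e-4   # linear cooling schedule
        i = rng.randrange(n)
        old = q[i]
        q[i] += rng.uniform(-0.2, 0.2)
        f_new = free_energy(q)
        if f_new < f or rng.random() < math.exp((f - f_new) / temp):
            f = f_new                               # accept the move
        else:
            q[i] = old                              # reject: restore the site
    return f_init, f

f_init, f_final = anneal()
```

Because the walk is thermal rather than strictly downhill, it can hop out of shallow basins — the mechanism behind the paper's observation that stochastic sampling finds lower-free-energy morphologies that deterministic relaxation misses.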

  15. Nonequilibrium candidate Monte Carlo: A new tool for efficient equilibrium simulation

    SciTech Connect (OSTI)

    Nilmeier, Jerome P.; Crooks, Gavin E.; Minh, David D. L.; Chodera, John D.

    2011-11-08

    Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
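The work-based acceptance rule described above can be sketched for a toy one-dimensional system. This is a minimal illustration under invented assumptions (a harmonic potential whose center is the driven parameter), not the authors' implementation:

```python
import math, random

def u(x, lam):
    # Toy potential energy: a harmonic well whose center lam is the
    # driven thermodynamic parameter.
    return 0.5 * (x - lam) ** 2

def ncmc_propose(x, lam_schedule, beta, rng, n_relax=10, step=0.5):
    """One nonequilibrium candidate move: alternate instantaneous parameter
    switches (whose energy changes accumulate as protocol work) with short
    Metropolis relaxation runs at the current parameter value.  The final
    configuration is accepted with min(1, exp(-beta * work)) -- the
    work-based generalization of the Metropolis criterion."""
    work = 0.0
    lam_old = lam_schedule[0]
    for lam in lam_schedule[1:]:
        work += u(x, lam) - u(x, lam_old)    # work done by the switch
        lam_old = lam
        for _ in range(n_relax):             # propagate at fixed lambda
            xp = x + rng.uniform(-step, step)
            du = u(xp, lam) - u(x, lam)
            if rng.random() < math.exp(min(0.0, -beta * du)):
                x = xp
    accepted = rng.random() < math.exp(min(0.0, -beta * work))
    return x, work, accepted

rng = random.Random(0)
x_new, work, accepted = ncmc_propose(0.0, [0.0, 0.5, 1.0], 1.0, rng)
```

When the schedule does nothing (constant lambda), the protocol work is exactly zero and the move is always accepted, recovering ordinary Metropolis dynamics.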

  16. PROBLEM DEPENDENT DOPPLER BROADENING OF CONTINUOUS ENERGY CROSS SECTIONS IN THE KENO MONTE CARLO COMPUTER CODE

    SciTech Connect (OSTI)

    Hart, S. W. D.; Maldonado, G. Ivan; Celik, Cihangir; Leal, Luiz C

    2014-01-01

    For many Monte Carlo codes, cross sections are generated only at a set of predetermined temperatures, which causes increasing error as a model moves further away from those temperatures. This paper discusses recent progress in the SCALE Monte Carlo module KENO to create problem-dependent, Doppler-broadened cross sections. Currently only broadening of the 1D cross sections and probability tables is addressed. The approach uses a finite difference method to calculate the temperature-dependent cross sections for the 1D data, and a simple linear-logarithmic interpolation in the square root of temperature for the probability tables. Work is also ongoing to address broadening the S(α,β) tables. With the current approach the temperature-dependent cross sections are Doppler broadened before transport starts, and, for all but a few isotopes, the impact on cross section loading is negligible. Results can be compared with those obtained by using multigroup libraries, as KENO currently interpolates on the multigroup cross sections to determine temperature-dependent cross sections. Current results compare favorably with these expected results.
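The linear-logarithmic interpolation in the square root of temperature mentioned above amounts to interpolating ln(σ) linearly in √T between the two bracketing library temperatures. A minimal sketch (variable names are ours, not KENO's):

```python
import math

def interp_sqrt_t(sigma1, sigma2, t1, t2, t):
    """Linear interpolation of ln(sigma) in sqrt(T) between two library
    temperatures t1 <= t <= t2 (all temperatures in kelvin)."""
    frac = (math.sqrt(t) - math.sqrt(t1)) / (math.sqrt(t2) - math.sqrt(t1))
    ln_sigma = math.log(sigma1) + frac * (math.log(sigma2) - math.log(sigma1))
    return math.exp(ln_sigma)

# Hypothetical probability-table values at 300 K and 1200 K:
sigma_mid = interp_sqrt_t(10.0, 20.0, 300.0, 1200.0, 600.0)
```

By construction the scheme reproduces the library values exactly at the tabulated temperatures and varies smoothly in between.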

  17. TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation

    SciTech Connect (OSTI)

    Xu, Y; Bai, T; Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X; Zhou, L

    2014-06-15

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We studied the impact of photon histories and volume down-sampling factors on the accuracy of scatter estimation. Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds.
This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research

  18. SU-E-J-205: Monte Carlo Modeling of Ultrasound Probes for Real-Time Ultrasound Image-Guided Radiotherapy

    SciTech Connect (OSTI)

    Hristov, D; Schlosser, J; Bazalova, M; Chen, J

    2014-06-01

    Purpose: To quantify the effect of ultrasound (US) probe beam attenuation for radiation therapy delivered under real-time US image guidance by means of Monte Carlo (MC) simulations. Methods: MC models of two Philips US probes, an X6-1 matrix-array transducer and a C5-2 curved-array transducer, were built based on their CT images in the EGSnrc BEAMnrc and DOSXYZnrc codes. Due to their metal parts, the probes were scanned in a Tomotherapy machine with a 3.5 MV beam. Mass densities in the probes were assigned based on an electron density calibration phantom consisting of cylinders with mass densities between 0.2–8.0 g/cm{sup 3}. Beam attenuation due to the probes was measured in a solid water phantom for 6 MV and 15 MV 15×15 cm{sup 2} beams delivered on a Varian Trilogy linear accelerator. The dose was measured with the PTW-729 ionization chamber array at two depths and compared to MC simulations. The extreme-case beam attenuation expected in robotic US image-guided radiotherapy for probes in the upright position was quantified by means of MC simulations. Results: The 3.5 MV CT number to mass density calibration curve was found to be linear with R{sup 2} > 0.99. The maximum mass densities were 4.6 and 4.2 g/cm{sup 3} in the C5-2 and X6-1 probe, respectively. Gamma analysis of the simulated and measured doses revealed that over 98% of measurement points passed the 3%/3mm criteria for both probes and measurement depths. The extreme attenuation for probes in the upright position was found to be 25% and 31% for the C5-2 and X6-1 probe, respectively, for both 6 and 15 MV beams at 10 cm depth. Conclusion: MC models of two US probes used for real-time image guidance during radiotherapy have been built. As a result, radiotherapy treatment planning with the imaging probes in place can now be performed. J Schlosser is an employee of SoniTrack Systems, Inc. D Hristov has financial interest in SoniTrack Systems, Inc.

  19. Radiation doses in cone-beam breast computed tomography: A Monte Carlo simulation study

    SciTech Connect (OSTI)

    Yi Ying; Lai, Chao-Jen; Han Tao; Zhong Yuncheng; Shen Youtao; Liu Xinming; Ge Shuaiping; You Zhicheng; Wang Tianpeng; Shaw, Chris C.

    2011-02-15

    Purpose: In this article, we describe a method to estimate the spatial dose variation, average dose and mean glandular dose (MGD) for a real breast using Monte Carlo simulation based on cone beam breast computed tomography (CBBCT) images. We present and discuss the dose estimation results for 19 mastectomy breast specimens, 4 homogeneous breast models, 6 ellipsoidal phantoms, and 6 cylindrical phantoms. Methods: To validate the Monte Carlo method for dose estimation in CBBCT, we compared the Monte Carlo dose estimates with the thermoluminescent dosimeter measurements at various radial positions in two polycarbonate cylinders (11- and 15-cm in diameter). Cone-beam computed tomography (CBCT) images of 19 mastectomy breast specimens, obtained with a bench-top experimental scanner, were segmented and used to construct 19 structured breast models. Monte Carlo simulation of CBBCT with these models was performed and used to estimate the point doses, average doses, and mean glandular doses for unit open air exposure at the iso-center. Mass based glandularity values were computed and used to investigate their effects on the average doses as well as the mean glandular doses. Average doses for 4 homogeneous breast models were estimated and compared to those of the corresponding structured breast models to investigate the effect of tissue structures. Average doses for ellipsoidal and cylindrical digital phantoms of identical diameter and height were also estimated for various glandularity values and compared with those for the structured breast models. Results: The absorbed dose maps for structured breast models show that doses in the glandular tissue were higher than those in the nearby adipose tissue. Estimated average doses for the homogeneous breast models were almost identical to those for the structured breast models (p=1). Normalized average doses estimated for the ellipsoidal phantoms were similar to those for the structured breast models (root mean square (rms

  20. Radiation doses in volume-of-interest breast computed tomography—A Monte Carlo simulation study

    SciTech Connect (OSTI)

    Lai, Chao-Jen Zhong, Yuncheng; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.

    2015-06-15

    Purpose: Cone beam breast computed tomography (breast CT) with true three-dimensional, nearly isotropic spatial resolution has been developed and investigated over the past decade to overcome the problem of lesions overlapping with breast anatomical structures on two-dimensional mammographic images. However, the ability of breast CT to detect small objects, such as tissue structure edges and small calcifications, is limited. To resolve this problem, the authors proposed and developed a volume-of-interest (VOI) breast CT technique to image a small VOI using a higher radiation dose to improve that region’s visibility. In this study, the authors performed Monte Carlo simulations to estimate average breast dose and average glandular dose (AGD) for the VOI breast CT technique. Methods: Electron-Gamma-Shower (EGS) system code-based Monte Carlo codes were used to simulate breast CT. The Monte Carlo codes were validated using physical measurements of air kerma ratios and point doses in phantoms with an ion chamber and optically stimulated luminescence dosimeters. The validated full cone x-ray source was then collimated to simulate half cone beam x-rays to image digital pendant-geometry, hemi-ellipsoidal, homogeneous breast phantoms and to estimate breast doses with full field scans. Hemi-ellipsoidal homogeneous phantoms 13 cm in diameter and 10 cm long were used to simulate median breasts. Breast compositions of 25% and 50% volumetric glandular fractions (VGFs) were used to investigate the influence on breast dose. The simulated half cone beam x-rays were then collimated to a narrow x-ray beam with a 2.5 × 2.5 cm{sup 2} field of view at the isocenter plane to perform VOI field scans. The Monte Carlo results for the full field scans and the VOI field scans were then used to estimate the AGD for the VOI breast CT technique. Results: The ratios of air kerma ratios and dose measurement results from the Monte Carlo simulation to those from the physical

  1. A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System

    SciTech Connect (OSTI)

    C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler

    1998-10-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.
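The coupling step — folding the coupling-surface fluence with the adjoint "dose importance" — is, schematically, a sum over surface elements and energy groups. A toy sketch of the folding operation, with invented numbers, not MASH's actual file formats:

```python
def fold(fluence, importance, areas):
    """Dose response R = sum_s A_s * sum_g phi(s,g) * I(s,g), where phi is
    the forward fluence on the coupling surface and I is the adjoint dose
    importance; both are (surface elements x energy groups) tables."""
    return sum(a * sum(p * q for p, q in zip(frow, irow))
               for a, frow, irow in zip(areas, fluence, importance))

phi = [[1.0, 2.0], [0.5, 1.5]]   # fluence per surface element and group
imp = [[0.1, 0.2], [0.3, 0.4]]   # adjoint dose importance, same layout
area = [2.0, 1.0]                # surface-element areas
dose = fold(phi, imp, area)      # 2*(0.1 + 0.4) + 1*(0.15 + 0.6) = 1.75
```

Because the forward and adjoint calculations are independent, either factor can be reused — e.g., rotating the vehicle only changes the importance table, not the air-over-ground fluence.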

  2. Quantized vortices in {sup 4}He droplets: A quantum Monte Carlo study

    SciTech Connect (OSTI)

    Sola, E.; Casulleras, J.; Boronat, J.

    2007-08-01

    We present a diffusion Monte Carlo study of a vortex line excitation attached to the center of a {sup 4}He droplet at zero temperature. The vortex energy is estimated for droplets of increasing number of atoms, from N=70 up to 300, showing a monotonic increase with N. The evolution of the core radius and its associated energy, the core energy, is also studied as a function of N. The core radius is ~1 Å in the center and increases when approaching the droplet surface; the core energy per unit volume stabilizes at a value of 2.8 K σ{sup -3} (σ = 2.556 Å) for N ≥ 200.

  3. Quantum Monte Carlo simulation of a two-dimensional Bose gas

    SciTech Connect (OSTI)

    Pilati, S.; Boronat, J.; Casulleras, J.; Giorgini, S.

    2005-02-01

    The equation of state of a homogeneous two-dimensional Bose gas is calculated using quantum Monte Carlo methods. The low-density universal behavior is investigated using different interatomic model potentials, both finite-ranged and strictly repulsive, and zero-ranged supporting a bound state. The condensate fraction and the pair distribution function are calculated as a function of the gas parameter, ranging from the dilute to the strongly correlated regime. In the case of the zero-range pseudopotential we discuss the stability of the gaslike state for large values of the two-dimensional scattering length, and we calculate the critical density where the system becomes unstable against cluster formation.

  4. W/Z + b bbar/Jets at NLO Using the Monte Carlo MCFM

    SciTech Connect (OSTI)

    John M. Campbell

    2001-05-29

    We summarize recent progress in next-to-leading order QCD calculations made using the Monte Carlo MCFM. In particular, we focus on the calculations of p{bar p} → Wb{bar b}, Zb{bar b} and highlight the significant corrections to background estimates for Higgs searches in the channels WH and ZH at the Tevatron. We also report on the current progress of, and strategies for, the calculation of the process p{bar p} → W/Z + 2 jets.

  5. Simulation of atomic diffusion in the Fcc NiAl system: A kinetic Monte Carlo study

    SciTech Connect (OSTI)

    Alfonso, Dominic R.; Tafen, De Nyago

    2015-04-28

    The atomic diffusion in fcc NiAl binary alloys was studied by kinetic Monte Carlo simulation. The environment dependent hopping barriers were computed using a pair interaction model whose parameters were fitted to relevant data derived from electronic structure calculations. Long time diffusivities were calculated and the effect of composition change on the tracer diffusion coefficients was analyzed. These results indicate that this variation has noticeable impact on the atomic diffusivities. A reduction in the mobility of both Ni and Al is demonstrated with increasing Al content. As a result, examination of the pair interaction between atoms was carried out for the purpose of understanding the predicted trends.
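The kinetic Monte Carlo loop underlying such a study selects one hop per step with probability proportional to its Arrhenius rate and advances the clock by an exponentially distributed residence time. A generic rejection-free KMC sketch; the barrier values below are invented, not the paper's fitted pair-interaction model:

```python
import math, random

K_B = 8.617e-5  # Boltzmann constant, eV/K

def kmc_step(barriers, temp, rng, prefactor=1e13):
    """One KMC event: hop rates follow an Arrhenius law
    r_i = nu0 * exp(-E_i / kT); an event is chosen with probability r_i / R
    (R = sum of rates) and the clock advances by -ln(u) / R."""
    rates = [prefactor * math.exp(-e / (K_B * temp)) for e in barriers]
    total = sum(rates)
    target = rng.random() * total
    acc = 0.0
    for event, rate in enumerate(rates):    # linear search over partial sums
        acc += rate
        if target < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total
    return event, dt

rng = random.Random(7)
event, dt = kmc_step([0.6, 0.8, 1.0], temp=800.0, rng=rng)
```

Environment-dependent barriers, as in the paper, simply mean the `barriers` list is rebuilt from the local neighborhood after every executed hop.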

  6. Monte Carlo and renormalization group local effective potentials in scalar field theories at finite temperature

    SciTech Connect (OSTI)

    Shafer, J.D.; Shepard, J.R.

    1997-04-01

    We derive an approximate renormalization group (RG) flow equation for the local effective potential of single-component {phi}{sup 4} field theory at finite temperature. Previous zero-temperature RG equations are recovered in the low- and high-temperature limits, in the latter case via the phenomenon of dimensional reduction. We numerically solve our RG equations to obtain local effective potentials at finite temperature. These are found to be in excellent agreement with Monte Carlo results, especially when lattice artifacts are accounted for in the RG treatment. © 1997 The American Physical Society

  7. Neutron matter with Quantum Monte Carlo: chiral 3N forces and static response

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Buraczynski, M.; Gandolfi, S.; Gezerlis, A.; Schwenk, A.; Tews, I.

    2016-03-01

    Neutron matter is related to the physics of neutron stars and that of neutron-rich nuclei. Moreover, Quantum Monte Carlo (QMC) methods offer a unique way of solving the many-body problem non-perturbatively, providing feedback on features of nuclear interactions and addressing scenarios that are inaccessible to other approaches. Our contribution goes over two recent accomplishments in the theory of neutron matter: a) the fusing of QMC with chiral effective field theory interactions, focusing on local chiral 3N forces, and b) the first attempt to find an ab initio solution to the problem of static response.

  8. Photons, Electrons and Positrons Transport in 3D by Monte Carlo Techniques

    Energy Science and Technology Software Center (OSTI)

    2014-12-01

    Version 04 FOTELP-2014 is a new compact general-purpose version of the previous FOTELP-2K6 code, designed to simulate the transport of photons, electrons and positrons through three-dimensional material and source geometries by Monte Carlo techniques, using the subroutine package PENGEOM from the PENELOPE code under Linux-based and Windows OS. This new version includes the routine ELMAG for electron and positron transport simulation in electric and magnetic fields, a RESUME option, and the routine TIMER for obtaining the starting random number and for measuring the time of simulation.

  9. Refinement of overlapping local/global iteration method based on Monte Carlo/p-CMFD calculations

    SciTech Connect (OSTI)

    Jo, Y.; Yun, S.; Cho, N. Z.

    2013-07-01

    In this paper, the overlapping local/global (OLG) iteration method based on Monte Carlo/p-CMFD calculations is refined in two aspects. One is the consistent use of estimators to generate homogenized scattering cross sections. Another is that the incident or exiting angular interval is divided into multi-angular bins to modulate albedo boundary conditions for local problems. Numerical tests show that, compared to the one angle bin case in a previous study, the four angle bin case shows significantly improved results. (authors)

  10. Study of DCX reaction on medium nuclei with Monte-Carlo Shell Model

    SciTech Connect (OSTI)

    Wu, H. C.; Gibbs, W. R.

    2010-08-04

    In this work a method is introduced to calculate the DCX reaction in the framework of the Monte-Carlo Shell Model (MCSM). To facilitate the use of the zero-temperature formalism of the MCSM, the Double-Isobaric-Analog State (DIAS) is derived from the ground state by using the isospin-shifting operator. The validity of this method is tested by comparing the MCSM results to those of the SU(3) symmetry case. Application of this method to DCX on {sup 56}Fe and {sup 93}Nb is discussed.

  11. Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo

    SciTech Connect (OSTI)

    Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.

    2014-10-01

    We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and that the high-pressure melt curve is well described by Lindemann behavior up to at least 80 GPa, a finding in stark contrast to the DAC data.

  12. Replica-exchange Wang Landau sampling: pushing the limits of Monte Carlo simulations in materials sciences

    SciTech Connect (OSTI)

    Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus; Vogel, Thomas; Landau, David P

    2015-01-01

    We describe the study of the thermodynamics of materials using replica-exchange Wang-Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang-Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first-principles calculations. We demonstrate that our framework leads to a significant speedup without compromising accuracy and precision, and facilitates the study of much larger systems than is possible with its serial counterpart.
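The serial Wang-Landau kernel that REWL parallelizes can be demonstrated on a toy model with a known density of states: eight states, two at energy 0 and six at energy 1, so the exact ratio g(1)/g(0) is 3. A pedagogical sketch, not the authors' Heisenberg-model code:

```python
import math, random

def wang_landau(ln_f_final=1e-5, seed=11):
    """Wang-Landau sampling of a toy spectrum.  A random walker is accepted
    with min(1, g(E_old)/g(E_new)); ln g at the visited energy is bumped by
    ln f after every step, and ln f is halved whenever the energy histogram
    is sufficiently flat."""
    rng = random.Random(seed)
    energy = [0, 0, 1, 1, 1, 1, 1, 1]       # g(0) = 2, g(1) = 6
    ln_g = {0: 0.0, 1: 0.0}
    hist = {0: 0, 1: 0}
    ln_f, state = 1.0, 0
    while ln_f > ln_f_final:
        for _ in range(10000):
            proposal = rng.randrange(len(energy))
            delta = ln_g[energy[state]] - ln_g[energy[proposal]]
            if rng.random() < math.exp(min(0.0, delta)) or delta >= 0.0:
                state = proposal
            e = energy[state]
            ln_g[e] += ln_f                  # modification-factor update
            hist[e] += 1
        if min(hist.values()) > 0.8 * sum(hist.values()) / 2:  # flatness
            ln_f /= 2.0                      # refine f, reset the histogram
            hist = {0: 0, 1: 0}
    return math.exp(ln_g[1] - ln_g[0])       # estimate of g(1)/g(0)

ratio = wang_landau()
```

REWL splits the energy range into overlapping windows, runs one such walker per window, and exchanges configurations between neighboring windows — which is where the massive parallelism comes from.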

  13. Exponentially-convergent Monte Carlo for the 1-D transport equation

    SciTech Connect (OSTI)

    Peterson, J. R.; Morel, J. E.; Ragusa, J. C.

    2013-07-01

    We define a new exponentially convergent Monte Carlo method for solving the one-speed 1-D slab-geometry transport equation. This method is based upon the use of a linear discontinuous finite-element trial space in space and direction to represent the transport solution. A space-direction h-adaptive algorithm is employed to restore exponential convergence after stagnation occurs due to inadequate trial-space resolution. This method uses jumps in the solution at cell interfaces as an error indicator. Computational results are presented demonstrating the efficacy of the new approach. (authors)

  14. Monte Carlo generators for studies of the 3D structure of the nucleon

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Avakian, Harut; D'Alesio, U.; Murgia, F.

    2015-01-23

    In this study, extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self-consistent analysis framework, accounting for evolution effects and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations, will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.

  15. TH-A-19A-08: Intel Xeon Phi Implementation of a Fast Multi-Purpose Monte Carlo Simulation for Proton Therapy

    SciTech Connect (OSTI)

    Souris, K; Lee, J; Sterpin, E

    2014-06-15

    Purpose: Recent studies have demonstrated the capability of graphics processing units (GPUs) to compute dose distributions using Monte Carlo (MC) methods within clinical time constraints. However, GPUs have a rigid vectorial architecture that favors the implementation of simplified particle transport algorithms, adapted to specific tasks. Our new, fast, and multipurpose MC code, named MCsquare, runs on Intel Xeon Phi coprocessors. This technology offers 60 independent cores, and therefore more flexibility to implement fast and yet generic MC functionalities, such as prompt gamma simulations. Methods: MCsquare implements several models and hence allows users to make their own tradeoff between speed and accuracy. A 200 MeV proton beam is simulated in a heterogeneous phantom using Geant4 and two configurations of MCsquare. The first one is the most conservative and accurate. The method of fictitious interactions handles the interfaces, and secondary charged particles emitted in nuclear interactions are fully simulated. The second, faster configuration simplifies interface crossings and simulates only secondary protons after nuclear interaction events. Integral depth-dose and transversal profiles are compared to those of Geant4. Moreover, the production profile of prompt gammas is compared to PENH results. Results: Integral depth-dose and transversal profiles computed by MCsquare and Geant4 are within 3%. The production of secondaries from nuclear interactions is slightly inaccurate at interfaces for the fastest configuration of MCsquare, but this is unlikely to have any clinical impact. The computation time ranges from 90 seconds with the most conservative settings down to 59 seconds in the fastest configuration. Finally, prompt gamma profiles are also in very good agreement with PENH results. 
Conclusion: Our new, fast, and multi-purpose Monte Carlo code simulates prompt gammas and calculates dose distributions in less than a minute, which complies with clinical time constraints.

  16. SU-E-T-349: Verification of a Monte Carlo Simulation Tool as a Second Check for TomoTherapy Treatment Planning

    SciTech Connect (OSTI)

    Yuan, J; Zheng, Y; Sohn, J; Wessels, B; Chen, Q

    2014-06-01

    Purpose: To verify a virtual source model used in Monte Carlo (MC) TomoTherapy simulations and to provide a tool for second-checking dose distributions in TomoTherapy treatment planning. Methods: Published studies adopted a full MC model to simulate helical TomoTherapy, using phase space files (PSFs) at different scoring planes to facilitate patient dose calculations. We have developed a simple virtual source model (VSM) based on the commissioning data of a helical TomoTherapy unit, which can be exported from the treatment planning station (TPS). To calculate the patient dose distribution, TPS-generated sinograms are extracted from archived patient XML files, and fluence maps are created by incorporating the open/close leaf time with the leaf filter, jaw penumbra, and leaf latency. The VSM has been validated for various geometrical set-ups including open fields, heterogeneous media and clinical DQA cases. Results: Good agreement was obtained between our VSM and a published full MC model for the test case of heterogeneous media. For the complex clinical head and neck (HN) case, the MC simulation of the DQA plan agreed with the film measurement, with 98% of planar dose pixels passing the 2%/2 mm gamma test. For the patient treatment plan itself, the MC simulation showed comparable DVHs for the planning target volumes (PTVs) and organs-at-risk (OARs). Deviations observed were consistent with the literature. Conclusion: The proposed VSM can be used in the MC method to accurately simulate patient dose distributions for helical TomoTherapy.

  17. SU-E-T-587: Monte Carlo Versus Ray-Tracing for Treatment Planning Involving CNS Tumors On the MultiPlan System for CyberKnife Radiosurgery

    SciTech Connect (OSTI)

    Forbang, R Teboh

    2014-06-01

    Purpose: MultiPlan, the treatment planning system for the CyberKnife robotic radiosurgery system, offers two approaches to dose computation: Ray-Tracing (RT), the default technique, and Monte Carlo (MC), an option. RT is deterministic but accounts for primary heterogeneity only. MC, on the other hand, has an uncertainty associated with the calculation results; its advantage is that it additionally accounts for heterogeneity effects on the scattered dose. Not all sites will benefit from MC. The goal of this work was to focus on central nervous system (CNS) tumors and dosimetrically compare treatment plans computed with RT versus MC. Methods: Treatment plans were computed using both RT and MC for sites covering (a) the brain, (b) the C-spine, (c) the upper T-spine, (d) the lower T-spine, (e) the L-spine, and (f) the sacrum. RT was first used to compute clinically valid treatment plans. Then the same treatment parameters (monitor units, beam weights, etc.) were used in the MC algorithm to compute the dose distribution. The plans were then compared for tumor coverage to illustrate the difference, if any. All MC calculations were performed at a 1% uncertainty. Results: Using the RT technique, the tumor coverage for the brain, C-spine (C3–C7), upper T-spine (T4–T6), lower T-spine (T10), L-spine (L2), and sacrum was 96.8%, 93.1%, 97.2%, 87.3%, 91.1%, and 95.3%, respectively. The corresponding tumor coverage based on the MC approach was 98.2%, 95.3%, 87.55%, 88.2%, 92.5%, and 95.3%. It should be noted that the acceptable planning target coverage for our clinical practice is >95%; the coverage can be compromised for spine tumors to spare normal tissues such as the spinal cord. Conclusion: For treatment planning involving the CNS, RT and MC appear to be similar for most sites except the T-spine area, where most of the beams traverse lung tissue. In this case, MC is highly recommended.
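The tumor-coverage figures compared above are, in the usual definition, the fraction of target voxels receiving at least the prescription dose. A minimal sketch of that metric, with a synthetic dose grid and target mask as stand-in assumptions:

```python
import numpy as np

def target_coverage(dose, target_mask, rx_dose):
    """Fraction of target voxels receiving at least the prescription dose."""
    target = dose[target_mask]
    return float((target >= rx_dose).sum()) / target.size

# Toy 3-D dose grid (Gy) and a cubic stand-in PTV; values are illustrative.
rng = np.random.default_rng(0)
dose = rng.normal(21.0, 0.5, size=(40, 40, 40))
mask = np.zeros(dose.shape, dtype=bool)
mask[15:25, 15:25, 15:25] = True
cov = target_coverage(dose, mask, rx_dose=20.0)
```

In a comparison like the one above, the same mask and prescription would be evaluated against the RT and MC dose grids and the two coverage values compared against the >95% clinical criterion.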

  18. McICA_update_v2.cdr

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Monte Carlo Independent Column Approximation (McICA): Up and Running in North America and Europe H. W. Barker J. N. S. Cole J.-J. Morcrette R. Pincus P. Räisänen MSC UBC ECMWF NOAA-UC FMI Funding Announcement (spring 2007): "The goal of the ARM program is ... to improve the representation of clouds and

  19. Quantitative assessment of the accuracy of dose calculation using pencil beam and Monte Carlo algorithms and requirements for clinical quality assurance

    SciTech Connect (OSTI)

    Ali, Imad; Ahmad, Salahuddin

    2013-10-01

    To compare the doses calculated using the BrainLAB pencil beam (PB) and Monte Carlo (MC) algorithms for tumors located in various sites including the lung, and to evaluate quality assurance procedures required for the verification of the accuracy of dose calculation. The dose-calculation accuracy of PB and MC was also assessed quantitatively with measurement using an ionization chamber and Gafchromic films placed in solid water and heterogeneous phantoms. The dose was calculated using the PB convolution and MC algorithms in the iPlan treatment planning system from BrainLAB. The dose calculation was performed on the patients' computed tomography images with lesions in various treatment sites including 5 lung, 5 prostate, 4 brain, 2 head and neck, and 2 paraspinal cases. A combination of conventional, conformal, and intensity-modulated radiation therapy plans was used in dose calculation. The leaf sequences from intensity-modulated radiation therapy plans or beam shapes from conformal plans, monitor units, and other planning parameters calculated by the PB were identical for calculating dose with MC. Heterogeneity correction was considered in both PB and MC dose calculations. Dose-volume parameters such as V95 (volume covered by 95% of prescription dose), dose distributions, and gamma analysis were used to evaluate the dose calculated by PB and MC. The doses measured by ionization chamber and EBT Gafchromic film in solid water and heterogeneous phantoms were used to quantitatively assess the accuracy of the dose calculated by PB and MC. The dose-volume histograms and dose distributions calculated by PB and MC in the brain, prostate, paraspinal, and head and neck were in good agreement with one another (within 5%) and provided acceptable planning target volume coverage. However, dose distributions of the patients with lung cancer had large discrepancies. For a plan optimized with PB, the dose coverage was shown as clinically acceptable, whereas in reality, the MC showed a

  20. Automatic Mesh Adaptivity for Hybrid Monte Carlo/Deterministic Neutronics Modeling of Fusion Energy Systems

    SciTech Connect (OSTI)

    Ibrahim, Ahmad M; Wilson, P.; Sawan, M.; Mosher, Scott W; Peplow, Douglas E.; Grove, Robert E

    2013-01-01

    Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class supercomputer.

  1. Massively parallel Monte Carlo for many-particle simulations on GPUs

    SciTech Connect (OSTI)

    Anderson, Joshua A.; Jankowski, Eric; Grubb, Thomas L.; Engel, Michael; Glotzer, Sharon C.

    2013-12-01

    Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
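The serial building block that any parallel scheme for this problem must preserve is the detailed-balance trial move: displace one disk by a symmetric random step and reject the move if it creates an overlap. A minimal sketch of that move is below; the checkerboard-style GPU decomposition itself is not shown, and all parameters (box size, disk diameter, step size) are illustrative:

```python
import numpy as np

def trial_move(pos, i, delta, sigma, box, rng):
    """One Metropolis trial move for hard disks of diameter sigma in a
    periodic square box. With a symmetric proposal and hard-core rejection,
    accepting iff no overlap is created satisfies detailed balance."""
    new = (pos[i] + rng.uniform(-delta, delta, size=2)) % box
    disp = pos - new
    disp -= box * np.round(disp / box)   # minimum-image convention
    disp[i] = box                        # exclude self from the overlap test
    if (np.hypot(disp[:, 0], disp[:, 1]) < sigma).any():
        return False                     # overlap: reject, state unchanged
    pos[i] = new
    return True

rng = np.random.default_rng(1)
box, sigma = 10.0, 1.0
# Dilute starting configuration on a grid, guaranteed overlap-free.
xs, ys = np.meshgrid(np.arange(5) * 2.0 + 0.5, np.arange(5) * 2.0 + 0.5)
pos = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
accepted = sum(trial_move(pos, i, 0.3, sigma, box, rng)
               for i in range(len(pos)) for _ in range(20))
```

The paper's contribution is showing how to run many such moves concurrently on a GPU while still obeying detailed balance, which this serial sketch takes for granted.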

  2. Speciation model selection by Monte Carlo analysis of optical absorption spectra: Plutonium(IV) nitrate complexes

    SciTech Connect (OSTI)

    Berg, John M.; Veirs, D. Kirk; Vaughn, Randolph B.; Cisneros, Michael R.; Smith, Coleman A.

    2000-06-01

    Standard modeling approaches can produce the most likely values of the formation constants of metal-ligand complexes if a particular set of species containing the metal ion is known or assumed to exist in solution equilibrium with complexing ligands. Identifying the most likely set of species when more than one set is plausible is a more difficult problem to address quantitatively. A Monte Carlo method of data analysis is described that measures the relative abilities of different speciation models to fit optical spectra of open-shell actinide ions. The best model(s) can be identified from among a larger group of models initially judged to be plausible. The method is demonstrated by analyzing the absorption spectra of aqueous Pu(IV) titrated with nitrate ion at constant 2 molal ionic strength in aqueous perchloric acid. The best speciation model supported by the data is shown to include three Pu(IV) species with nitrate coordination numbers 0, 1, and 2. Formation constants are β₁ = 3.2 ± 0.5 and β₂ = 11.2 ± 1.2, where the uncertainties are 95% confidence limits estimated by propagating raw data uncertainties using Monte Carlo methods. Principal component analysis independently indicates three Pu(IV) complexes in equilibrium. (c) 2000 Society for Applied Spectroscopy.
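The Monte Carlo propagation of raw-data uncertainties described above can be sketched generically: perturb each datum by its measurement error, refit, and read the 95% confidence limits off the distribution of fitted parameters. The linear fit below is a toy stand-in for the spectral speciation model, and all values are illustrative:

```python
import numpy as np

def mc_confidence_interval(fit, x, y, sigma_y, n_draws=2000, seed=0):
    """Propagate measurement uncertainty through a fit by Monte Carlo:
    perturb the data, refit, and take the 2.5th/97.5th percentiles of the
    fitted parameter as 95% confidence limits."""
    rng = np.random.default_rng(seed)
    draws = [fit(x, y + rng.normal(0.0, sigma_y, size=len(y)))
             for _ in range(n_draws)]
    return np.percentile(draws, [2.5, 97.5])

# Toy stand-in for a formation-constant fit: recover a slope from noisy data.
slope_fit = lambda x, y: np.polyfit(x, y, 1)[0]
x = np.linspace(0.0, 1.0, 50)
y = 3.2 * x + 0.1                       # "measured" data with true slope 3.2
lo, hi = mc_confidence_interval(slope_fit, x, y, sigma_y=0.05)
```

The same machinery works for any fit function, which is why it suits the nonlinear speciation models in the paper where analytic error propagation would be awkward.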

  3. Surface Structures of Cubo-octahedral Pt-Mo Catalyst Nanoparticles from Monte Carlo Simulations

    SciTech Connect (OSTI)

    Wang, Guofeng; Van Hove, M.A.; Ross, P.N.; Baskes, M.I.

    2005-03-31

    The surface structures of cubo-octahedral Pt-Mo nanoparticles have been investigated using the Monte Carlo method and modified embedded atom method potentials that we developed for Pt-Mo alloys. The cubo-octahedral Pt-Mo nanoparticles are constructed with disordered fcc configurations, with sizes from 2.5 to 5.0 nm, and with Pt concentrations from 60 to 90 at. percent. The equilibrium Pt-Mo nanoparticle configurations were generated through Monte Carlo simulations allowing both atomic displacements and element exchanges at 600 K. We predict that the Pt atoms weakly segregate to the surfaces of such nanoparticles. The Pt concentrations in the surface are calculated to be 5 to 14 at. percent higher than the Pt concentrations of the nanoparticles. Moreover, the Pt atoms preferentially segregate to the facet sites of the surface, while the Pt and Mo atoms tend to alternate along the edges and vertices of these nanoparticles. We found that decreasing the size or increasing the Pt concentration leads to higher Pt concentrations but fewer Pt-Mo pairs in the Pt-Mo nanoparticle surfaces.

  4. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    SciTech Connect (OSTI)

    O'Brien, M. J.; Brantley, P. S.

    2015-01-20

    In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2²¹ = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e., the calculation is already load balanced. We also examine load imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.

  5. An Evaluation of Monte Carlo Simulations of Neutron Multiplicity Measurements of Plutonium Metal

    SciTech Connect (OSTI)

    Mattingly, John; Miller, Eric; Solomon, Clell J. Jr.; Dennis, Ben; Meldrum, Amy; Clarke, Shaun; Pozzi, Sara

    2012-06-21

    In January 2009, Sandia National Laboratories conducted neutron multiplicity measurements of a polyethylene-reflected plutonium metal sphere. Over the past 3 years, those experiments have been collaboratively analyzed using Monte Carlo simulations conducted by University of Michigan (UM), Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and North Carolina State University (NCSU). Monte Carlo simulations of the experiments consistently overpredict the mean and variance of the measured neutron multiplicity distribution. This paper presents a sensitivity study conducted to evaluate the potential sources of the observed errors. MCNPX-PoliMi simulations of plutonium neutron multiplicity measurements exhibited systematic over-prediction of the neutron multiplicity distribution. The over-prediction tended to increase with increasing multiplication. MCNPX-PoliMi had previously been validated against only very low multiplication benchmarks. We conducted sensitivity studies to try to identify the cause(s) of the simulation errors; we eliminated the potential causes we identified, except for Pu-239 ν̄. A very small change (-1.1%) in the Pu-239 ν̄ dramatically improved the accuracy of the MCNPX-PoliMi simulation for all 6 measurements. This observation is consistent with the trend observed in the bias exhibited by the MCNPX-PoliMi simulations: a very small error in ν̄ is 'magnified' by increasing multiplication. We applied a scalar adjustment to Pu-239 ν̄ (independent of neutron energy); an adjustment that depends on energy is probably more appropriate.
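Why a tiny ν̄ error is "magnified" by multiplication can be seen in a point-model toy calculation: the mean chain multiplication M = 1 / (1 - p·ν̄) becomes increasingly sensitive to ν̄ as p·ν̄ approaches 1. The fission probabilities and the ν̄ value below are illustrative assumptions, not the MCNPX-PoliMi setup:

```python
def mean_chain_multiplication(p_fission, nu_bar):
    """Point-model mean neutron multiplication of a fission chain started by
    one neutron: M = 1 / (1 - p * nu_bar), valid while p * nu_bar < 1."""
    return 1.0 / (1.0 - p_fission * nu_bar)

nu_bar = 3.16          # illustrative Pu-239 nu-bar, not an evaluated value
shift = 1.0 - 0.011    # the -1.1% adjustment discussed in the abstract
rel_change = {}
for p in (0.05, 0.30):                      # low vs. high multiplication
    m0 = mean_chain_multiplication(p, nu_bar)
    m1 = mean_chain_multiplication(p, nu_bar * shift)
    rel_change[p] = (m0 - m1) / m0          # relative effect of the nu-bar shift
```

At low multiplication the 1.1% ν̄ shift moves M by a fraction of a percent, while near p·ν̄ ≈ 0.95 it moves M by tens of percent, matching the trend the authors observed.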

  6. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    SciTech Connect (OSTI)

    Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.

    2015-11-19

    Kernel density estimators (KDEs) provide a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. KDEs are an alternative to histogram tallies for obtaining global solutions in Monte Carlo simulations. With KDEs, a single event, either a collision or a particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications; however, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the mean free path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
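The core KDE idea, a single event contributing to many tally points through a kernel rather than to one histogram bin, can be sketched in 1-D. The Gaussian kernel and all parameters here are illustrative; the paper's mean-free-path-based kernel is more elaborate:

```python
import numpy as np

def kde_tally(events, eval_points, bandwidth):
    """Score every event at every evaluation point through a Gaussian kernel,
    instead of binning each event into a single histogram cell."""
    u = (eval_points[:, None] - events[None, :]) / bandwidth
    kern = np.exp(-0.5 * u**2) / (np.sqrt(2.0 * np.pi) * bandwidth)
    return kern.sum(axis=1) / len(events)   # density estimate per point

# Toy 1-D "collision sites" drawn around x = 5; values are illustrative.
rng = np.random.default_rng(2)
events = rng.normal(5.0, 1.0, size=10_000)
xs = np.linspace(0.0, 10.0, 101)
density = kde_tally(events, xs, bandwidth=0.3)
```

Because every event scores at every evaluation point, the statistical uncertainty at a point does not degrade as the evaluation grid is refined, which is the variance advantage over histogram tallies the abstract describes.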

  7. A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code

    Energy Science and Technology Software Center (OSTI)

    1998-06-12

    TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  8. Coupled Hybrid Monte Carlo: Deterministic Analysis of VHTR Configurations with Advanced Actinide Fuels

    SciTech Connect (OSTI)

    Tsvetkov, Pavel V.; Ames II, David E.; Alajo, Ayodeji B.; Pritchard, Megan L.

    2006-07-01

    Partitioning and transmutation of minor actinides are expected to have a positive impact on the future of nuclear technology. Their deployment would lead to incineration of hazardous nuclides and could potentially provide an additional fuel supply. The U.S. DOE NERI project assesses the possibility, advantages, and limitations of involving minor actinides as a fuel component. The analysis considers and compares the capabilities of actinide-fueled VHTRs with pebble-bed and prismatic cores to approach reactor-lifetime-long operation without intermediate refueling. A hybrid Monte Carlo-deterministic methodology has been adopted for coupled neutronics-thermal hydraulics design studies of VHTRs. Within the computational scheme, the key technical issues are being addressed and resolved by implementing efficient automated modeling procedures and sequences, combining Monte Carlo and deterministic approaches, developing and applying realistic 3D coupled neutronics-thermal-hydraulics models with multi-heterogeneity treatments, developing and performing experimental/computational benchmarks for model verification and validation, and analyzing uncertainty effects and error propagation. This paper introduces the suggested modeling approach, discusses benchmark results and the preliminary analysis of actinide-fueled VHTRs. The presented up-to-date results are in agreement with the available experimental data. Studies of VHTRs with minor actinides suggest promising performance. (authors)

  9. Monte Carlo analysis of neutron slowing-down-time spectrometer for fast reactor spent fuel assay

    SciTech Connect (OSTI)

    Chen, Jianwei; Lineberry, Michael

    2007-07-01

    Using the neutron slowing-down-time method as a nondestructive assay tool to improve input material accountancy for fast reactor spent fuel reprocessing is under investigation at Idaho State University. Monte Carlo analyses were performed to simulate the neutron slowing-down process in different slowing-down spectrometers, namely lead and graphite, and to determine their main parameters. The ²³⁸U threshold fission chamber response was simulated in the Monte Carlo model to represent the spent fuel assay signals; the signature (fission/time) signals of ²³⁵U, ²³⁹Pu, and ²⁴¹Pu were simulated as a convolution of fission cross sections and neutron flux inside the spent fuel. The ²³⁸U detector signals were analyzed using a linear regression model based on the signatures of fissile materials in the spent fuel to determine the weight fractions of fissile materials in Advanced Burner Test Reactor spent fuel. The preliminary results show that even though the lead spectrometer exhibited better assay performance than graphite, the graphite spectrometer could accurately determine the weight fractions of ²³⁹Pu and ²⁴¹Pu, provided a proper assay energy range was chosen. (authors)
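The regression step above, unmixing a ²³⁸U fission chamber signal into fissile-nuclide weight fractions from their fission/time signatures, can be sketched with synthetic signatures. The exponential shapes, noise level, and weight fractions below are illustrative assumptions, not simulated spectrometer data:

```python
import numpy as np

# Toy fission/time signatures of the three fissile nuclides (one column each).
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
A = np.column_stack([np.exp(-3.0 * t),   # stand-in U-235 signature
                     np.exp(-5.0 * t),   # stand-in Pu-239 signature
                     np.exp(-8.0 * t)])  # stand-in Pu-241 signature

# Synthetic detector signal: mixture of signatures plus measurement noise.
true_w = np.array([0.2, 0.7, 0.1])       # assumed weight fractions
detector = A @ true_w + rng.normal(0.0, 1e-3, size=len(t))

# Least-squares unmixing of the detector signal against the signatures.
w_hat, *_ = np.linalg.lstsq(A, detector, rcond=None)
```

The quality of such an unmixing depends on how distinguishable the signatures are over the chosen assay time/energy window, which is exactly the "proper assay energy range" caveat in the abstract.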

  10. Energy density matrix formalism for interacting quantum systems: a quantum Monte Carlo study

    SciTech Connect (OSTI)

    Krogel, Jaron T; Kim, Jeongnim; Reboredo, Fernando A

    2014-01-01

    We develop an energy density matrix that parallels the one-body reduced density matrix (1RDM) for many-body quantum systems. Just as the density matrix gives access to the number density and occupation numbers, the energy density matrix yields the energy density and orbital occupation energies. The eigenvectors of the matrix provide a natural orbital partitioning of the energy density while the eigenvalues comprise a single particle energy spectrum obeying a total energy sum rule. For mean-field systems the energy density matrix recovers the exact spectrum. When correlation becomes important, the occupation energies resemble quasiparticle energies in some respects. We explore the occupation energy spectrum for the finite 3D homogeneous electron gas in the metallic regime and an isolated oxygen atom with ground state quantum Monte Carlo techniques implemented in the QMCPACK simulation code. The occupation energy spectrum for the homogeneous electron gas can be described by an effective mass below the Fermi level. Above the Fermi level evanescent behavior in the occupation energies is observed in similar fashion to the occupation numbers of the 1RDM. A direct comparison with total energy differences demonstrates a quantitative connection between the occupation energies and electron addition and removal energies for the electron gas. For the oxygen atom, the association between the ground state occupation energies and particle addition and removal energies becomes only qualitative. The energy density matrix provides a new avenue for describing energetics with quantum Monte Carlo methods which have traditionally been limited to total energies.

  11. SU-E-T-386: A Monte Carlo Dose Calculation Framework for Electron Beams On Varian TrueBeam

    SciTech Connect (OSTI)

    Rodrigues, A; Yin, F; Wu, Q; Sawkey, D

    2014-06-01

    Purpose: The design of the linac head is different for TrueBeam than for Clinac, and there are differences in measured dose distributions in water phantoms between TrueBeam and Clinac for electron beams. Therefore, MC models for Clinac may not be applied directly to the TrueBeam linac. The purpose of this study is to validate a Monte Carlo (MC) dose calculation framework for electron beams on Varian TrueBeam with phase space files provided by Varian. Methods: The particle histories from the phase space file were used as input for the downstream simulation including jaws, applicators, and a water phantom. The MC packages BEAMnrc/DOSXYZnrc were used. The downstream beam components were modeled according to manufacturer specifications, and the dose distributions were compared with the measured data of standard cones. The measurements were performed in a water phantom with a p-type electron field diode (diameter 0.2 cm) and an ion chamber (CC13). Depth dose and orthogonal profiles at depths defined by R₁₀₀, R₅₀, and Rₚ were compared. Results: Preliminary results for a 16 MeV phase space and the 10×10, 15×15, and 20×20 cm² applicators are presented. Simulations were run to a statistical uncertainty of <2% at the depth of maximum dose for a voxel resolution of 0.5×0.5×0.2 cm³. Dose and range differences for the PDD profiles were within 2% and 1 mm, respectively. Dose differences within the central 80% of the beam width for the orthogonal profiles at the depth of maximum dose were less than 2% for the 10×10, 15×15, and 20×20 cm² applicators. Conclusion: Simulations with the Varian electron phase space files are in agreement with measured commissioning data. These phase space files can be used in the simulation of TrueBeam linacs and will provide reproducibility across publications. Analyses for all electron energies and standard applicators are under way, and results will be included in the presentation.

  12. Study on in situ calibration for neutron flux monitor in the Large Helical Device based on Monte Carlo calculations

    SciTech Connect (OSTI)

    Nakano, Y.; Yamazaki, A.; Watanabe, K.; Uritani, A.; Ogawa, K.; Isobe, M.

    2014-11-15

    Neutron monitoring is important for managing the safety of fusion experiment facilities because neutrons are generated in fusion reactions. Monte Carlo simulations play an important role in evaluating the influence of neutron scattering from various structures and in correcting differences between deuterium plasma experiments and in situ calibration experiments. We evaluated these influences based on differences between the two experiments at the Large Helical Device using the Monte Carlo simulation code MCNP5. The difference between the two experiments in absolute detection efficiency is estimated to be largest for the fission chamber between O-ports. We additionally evaluated correction coefficients for some neutron monitors.

  13. Equation of state of an interacting Bose gas at finite temperature: A path-integral Monte Carlo study

    SciTech Connect (OSTI)

    Pilati, S.; Giorgini, S.; Sakkos, K.; Boronat, J.; Casulleras, J.

    2006-10-15

    By using exact path-integral Monte Carlo methods we calculate the equation of state of an interacting Bose gas as a function of temperature both below and above the superfluid transition. The universal character of the equation of state for dilute systems and low temperatures is investigated by modeling the interatomic interactions using different repulsive potentials corresponding to the same s-wave scattering length. The results obtained for the energy and the pressure are compared to the virial expansion for temperatures larger than the critical temperature. At very low temperatures we find agreement with the ground-state energy calculated using the diffusion Monte Carlo method.

  14. Binding energies and spatial structures of small carrier complexes in monolayer transition-metal dichalcogenides via diffusion Monte Carlo

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mayers, Matthew Z.; Berkelbach, Timothy C.; Hybertsen, Mark S.; Reichman, David R.

    2015-10-09

    Ground-state diffusion Monte Carlo is used to investigate the binding energies and intercarrier radial probability distributions of excitons, trions, and biexcitons in a variety of two-dimensional transition-metal dichalcogenide materials. We compare these results to approximate variational calculations, as well as to analogous Monte Carlo calculations performed with simplified carrier interaction potentials. Our results highlight the successes and failures of approximate approaches as well as the physical features that determine the stability of small carrier complexes in monolayer transition-metal dichalcogenide materials. In conclusion, we discuss points of agreement and disagreement with recent experiments.

  15. CASL-U-2015-0170-000-a SHIFT: A New Monte Carlo Package Seth R. Johnson

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    SHIFT: A New Monte Carlo Package. Seth R. Johnson, Tara M. Pandya, Gregory G. Davidson, Thomas M. Evans, Steven P. Hamilton, Cihangir Celik, Aarno Isotalo, and Chris Peretti. Oak Ridge National Laboratory, April 19, 2015 (CASL-U-2015-0170-000-a). ORNL is managed by UT-Battelle for the U.S. Department of Energy. Seth R. Johnson: R&D Staff, Monte Carlo Methods, Radiation Transport Group. Exnihilo team: Greg Davidson, Tom Evans, Stephen Hamilton, Seth Johnson, Tara Pandya. Associate developers: Cihangir

  16. SU-E-T-591: Measurement and Monte Carlo Simulation of Stray Neutrons in Passive Scattering Proton Therapy: Needs and Challenges

    SciTech Connect (OSTI)

    Farah, J; Bonfrate, A; Donadille, L; Dubourg, N; Lacoste, V; Martinetti, F; Sayah, R; Trompier, F; Clairand, I [IRSN - Institute for Radiological Protection and Nuclear Safety, Fontenay-aux-roses (France); Caresana, M [Politecnico di Milano, Milano (Italy); Delacroix, S; Nauraye, C [Institut Curie - Centre de Protontherapie d Orsay, Orsay (France); Herault, J [Centre Antoine Lacassagne, Nice (France); Piau, S; Vabre, I [Institut de Physique Nucleaire d Orsay, Orsay (France)

    2014-06-01

    Purpose: Measure stray radiation inside a passive scattering proton therapy facility, compare values to Monte Carlo (MC) simulations, and identify the actual needs and challenges. Methods: Measurements and MC simulations were considered to acknowledge the neutron exposure associated with 75 MeV ocular or 180 MeV intracranial passively scattered proton treatments. First, using a specifically designed high-sensitivity Bonner sphere system, neutron spectra were measured at different positions inside the treatment rooms. Next, measurement-based mapping of the neutron ambient dose equivalent was fulfilled using several TEPCs and rem-meters. Finally, photon and neutron organ doses were measured using TLDs, RPLs, and PADCs set inside anthropomorphic phantoms (Rando, 1- and 5-year-old CIRS). All measurements were also simulated with MCNPX to investigate the efficiency of MC models in predicting stray neutrons considering different nuclear cross sections and models. Results: Knowledge of the neutron fluence and energy distribution inside a proton therapy room is critical for stray radiation dosimetry. However, as spectrometry unfolding is initiated using an MC guess spectrum and suffers from algorithmic limits, a 20% spectrometry uncertainty is expected. H*(10) mapping with TEPCs and rem-meters showed good agreement between the detectors. Differences within measurement uncertainty (10–15%) were observed and are inherent to the energy, fluence, and directional response of each detector. For a typical ocular and intracranial treatment, respectively, neutron doses outside the clinical target volume of 0.4 and 11 mGy were measured inside the Rando phantom. Photon doses were 2–10 times lower depending on organ position. High uncertainties (40%) are inherent to TLD and PADC measurements due to the need for neutron spectra at the detector position. Finally, stray neutron prediction with MC simulations proved to be extremely dependent on the proton beam energy and the used nuclear models and cross

  17. CASL-U-2015-0247-000 The OpenMC Monte Carlo Particle Transport...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Ellis, Nich Horelik, Benoit Forget, Kord Smith, and Paul Romano; Massachusetts Institute of Technology ...

  18. Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Velizhanin, Kirill A.; Saxena, Avadh

    2015-11-11

    The most striking feature of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is a strong Coulomb interaction between charge carriers resulting in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which can remain stable at near-room temperatures and contribute significantly to the optical properties of such materials. In our work we have used the path integral Monte Carlo methodology to numerically study the properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and the benchmarking of theoretical and computational models.

  19. Monte Carlo simulation of elongating metallic nanowires in the presence of surfactants

    SciTech Connect (OSTI)

    Gimenez, M. Cecilia; Reinaudi, Luis; Leiva, Ezequiel P. M.

    2015-12-28

    Nanowires of different metals undergoing elongation were studied by means of canonical Monte Carlo simulations and the embedded atom method representing the interatomic potentials. The presence of a surfactant medium was emulated by the introduction of an additional stabilization energy, represented by a parameter Q. Several values of the parameter Q and temperatures were analyzed. In general, it was observed for all studied metals that, as Q increases, there is a greater elongation before the nanowire breaks. In the case of silver, linear monatomic chains several atoms long formed at intermediate values of Q and low temperatures. Similar observations were made for the case of silver-gold alloys when the medium interacted selectively with Ag.
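
    The canonical Monte Carlo scheme described above can be sketched with a minimal toy model. The harmonic-chain energy, the bond-break criterion, and the way the stabilization parameter Q enters are illustrative assumptions, not the paper's embedded-atom setup:

```python
import math
import random

def metropolis_step(positions, energy_fn, beta, max_disp=0.1):
    """One canonical (NVT) Metropolis move: displace one randomly chosen
    atom and accept with probability min(1, exp(-beta * dE))."""
    i = random.randrange(len(positions))
    old = positions[i]
    e_old = energy_fn(positions)
    positions[i] = old + random.uniform(-max_disp, max_disp)
    d_e = energy_fn(positions) - e_old
    if d_e > 0.0 and random.random() >= math.exp(-beta * d_e):
        positions[i] = old  # reject: restore the old coordinate
        return False
    return True  # accept

def chain_energy(positions, k=1.0, r0=1.0, q=0.0, r_break=1.5):
    """Toy 1D harmonic chain; a bond stretched past r_break counts as
    broken, and each broken bond is stabilized by -q (a crude stand-in
    for the surfactant parameter Q in the abstract)."""
    e, broken = 0.0, 0
    for j in range(len(positions) - 1):
        d = positions[j + 1] - positions[j]
        if d > r_break:
            broken += 1  # new "surface" created by the broken bond
        else:
            e += 0.5 * k * (d - r0) ** 2
    return e - q * broken
```

    With larger q, configurations that create new surface cost less energy, which mimics the reported trend of greater elongation before breaking as Q increases.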

  20. penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE

    SciTech Connect (OSTI)

    Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.

    2015-01-01

    The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.

  1. Kinetic Monte Carlo simulations of scintillation processes in NaI(Tl)

    SciTech Connect (OSTI)

    Kerisit, Sebastien N.; Wang, Zhiguo; Williams, Richard; Grim, Joel; Gao, Fei

    2014-04-26

    Developing a comprehensive understanding of the processes that govern the scintillation behavior of inorganic scintillators provides a pathway to optimize current scintillators and allows for the science-driven search for new scintillator materials. Recent experimental data on the excitation density dependence of the light yield of inorganic scintillators presents an opportunity to incorporate parameterized interactions between excitations in scintillation models and thus enable more realistic simulations of the nonproportionality of inorganic scintillators. Therefore, a kinetic Monte Carlo (KMC) model of elementary scintillation processes in NaI(Tl) is developed in this work to simulate the kinetics of scintillation for a range of temperatures and Tl concentrations as well as the scintillation efficiency as a function of excitation density. The ability of the KMC model to reproduce available experimental data allows for elucidating the elementary processes that give rise to the kinetics and efficiency of scintillation observed experimentally for a range of conditions.

  2. Monte Carlo Code System for High-Energy Radiation Transport Calculations.

    Energy Science and Technology Software Center (OSTI)

    2000-02-16

    Version 00 HERMES-KFA consists of a set of Monte Carlo codes used to simulate particle radiation and its interaction with matter. The main codes are HETC, MORSE, and EGS. They are supported by a common geometry package, common random-number routines, a command interpreter, and auxiliary codes such as NDEM, which is used to generate a gamma-ray source from nuclear de-excitation after spallation processes. The codes have been modified so that any particle history falling outside the domain of the physical theory of one program can be submitted to another program in the suite to complete the work. Response data can also be submitted by each program, to be collected and combined by a statistics package included within the command interpreter.

  3. Size and habit evolution of PETN crystals - a lattice Monte Carlo study

    SciTech Connect (OSTI)

    Zepeda-Ruiz, L A; Maiti, A; Gee, R; Gilmer, G H; Weeks, B

    2006-02-28

    Starting from an accurate interatomic potential, we develop a simple scheme for generating an "on-lattice" molecular potential of short range, which is then incorporated into a lattice Monte Carlo code for simulating the size and shape evolution of nanocrystallites. As a specific example, we test this procedure on the morphological evolution of a molecular crystal of interest to us, Pentaerythritol Tetranitrate (PETN), and obtain realistic faceted structures in excellent agreement with experimental morphologies. We investigate several interesting effects, including the evolution of the initial shape of a "seed" to an equilibrium configuration and the variation of growth morphology as a function of the rate of particle addition relative to diffusion.

  4. Simulation of atomic diffusion in the Fcc NiAl system: A kinetic Monte Carlo study

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Alfonso, Dominic R.; Tafen, De Nyago

    2015-04-28

    The atomic diffusion in fcc NiAl binary alloys was studied by kinetic Monte Carlo simulation. The environment-dependent hopping barriers were computed using a pair interaction model whose parameters were fitted to relevant data derived from electronic structure calculations. Long-time diffusivities were calculated and the effect of composition change on the tracer diffusion coefficients was analyzed. The results indicate that this variation has a noticeable impact on the atomic diffusivities. A reduction in the mobility of both Ni and Al is demonstrated with increasing Al content. As a result, the pair interactions between atoms were examined in order to understand the predicted trends.
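
    The core of a rejection-free kinetic Monte Carlo step of the kind used in such diffusion studies can be sketched as follows; the attempt frequency nu0 and the barrier values are generic placeholders, not the paper's fitted parameters:

```python
import math
import random

def kmc_step(barriers_eV, T, nu0=1e13, kB=8.617e-5):
    """One rejection-free KMC step (BKL/Gillespie flavor): convert each
    hop barrier Ea (eV) to an Arrhenius rate r = nu0*exp(-Ea/(kB*T)),
    pick one event with probability r_i / sum(r), and draw the time
    increment from the total rate. kB is in eV/K."""
    rates = [nu0 * math.exp(-ea / (kB * T)) for ea in barriers_eV]
    total = sum(rates)
    u = random.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if u <= acc:
            break  # event i is selected
    dt = -math.log(random.random()) / total  # residence time before the hop
    return i, dt
```

    In a full simulation the selected hop updates the lattice configuration, the affected barriers are recomputed from the local environment, and the loop repeats.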

  5. Direct simulation Monte Carlo investigation of the Richtmyer-Meshkov instability.

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gallis, Michail A.; Koehler, Timothy P.; Torczynski, John R.; Plimpton, Steven J.

    2015-08-14

    The Richtmyer-Meshkov instability (RMI) is investigated using the Direct Simulation Monte Carlo (DSMC) method of molecular gas dynamics. Due to its inherent statistical noise and significant computational requirements, DSMC is hardly ever applied to hydrodynamic flows. Here, DSMC RMI simulations are performed to quantify the shock-driven growth of a single-mode perturbation on the interface between two atmospheric-pressure monatomic gases prior to reshocking, as a function of the Atwood and Mach numbers. The DSMC results qualitatively reproduce all features of the RMI and are in reasonable quantitative agreement with existing theoretical and empirical models. The DSMC simulations indicate that there is a universal behavior, consistent with previous work in this field, that the RMI growth follows.

  6. A bottom collider vertex detector design, Monte-Carlo simulation and analysis package

    SciTech Connect (OSTI)

    Lebrun, P.

    1990-10-01

    A detailed simulation of the BCD vertex detector is underway. Specifications and global design issues are briefly reviewed. The BCD design based on double-sided strip detectors is described in more detail. The GEANT3-based Monte-Carlo program and the analysis package used to estimate detector performance are discussed in detail. The current status of the expected resolution and signal-to-noise ratio for the "golden" CP-violating mode Bd → π+π− is presented. These calculations have been done at FNAL energy (√s = 2.0 TeV). Emphasis is placed on design issues, analysis techniques, and related software rather than physics potential. 20 refs., 46 figs.

  7. Use of decision tree analysis and Monte Carlo simulation for downhole material selection

    SciTech Connect (OSTI)

    Cheldi, T.; Cavassi, P.; Lazzari, L.; Pezzotta, L.

    1997-08-01

    The paper describes how corrosion engineers can use decision tree analysis to evaluate and select the best materials for the completion of a new oil field characterized by high CO₂ and H₂S content. The method combines decision tree analysis with Monte Carlo simulation to obtain the probability distributions of events of interest (for instance, the number of workovers, the corrosion inhibitor efficiency, and the coating damage rate). The corrosion study leads to four different technical solutions with different risk and reliability: carbon steel with a corrosion allowance and inhibitor injection, coated tubing, and two corrosion-resistant alloys, a superduplex stainless steel and a superaustenitic stainless steel. The cost comparison was carried out using the Expected Monetary Value criterion applied to the life-cycle cost evaluation. The paper presents and discusses the decision tree and the results of the simulations.
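
    The Monte Carlo side of such an analysis can be sketched as below; every cost figure, distribution, and failure probability here is a hypothetical placeholder, not a number from the paper:

```python
import random
import statistics

def expected_monetary_value(n_trials=20000, seed=1):
    """Monte Carlo sketch of a life-cycle-cost comparison between two
    tubing options (arbitrary cost units, illustrative distributions)."""
    random.seed(seed)
    lcc = {"carbon steel + inhibitor": [], "superduplex": []}
    for _ in range(n_trials):
        eff = random.uniform(0.70, 0.95)  # sampled inhibitor efficiency
        # inhibitor failures over 5 inspection intervals trigger workovers
        workovers = sum(random.random() > eff for _ in range(5))
        lcc["carbon steel + inhibitor"].append(1.0 + 0.8 * workovers)
        # corrosion-resistant alloy: high capex, rare intervention
        lcc["superduplex"].append(3.0 + (2.0 if random.random() < 0.02 else 0.0))
    return {k: statistics.mean(v) for k, v in lcc.items()}
```

    The Expected Monetary Value criterion then selects the option with the lower mean cost; in a full decision tree, each branch's simulated cost distribution is weighted by the branch probability.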

  8. Markov Chain Monte Carlo Sampling Methods for 1D Seismic and EM Data Inversion

    Energy Science and Technology Software Center (OSTI)

    2008-09-22

    This software provides several Markov chain Monte Carlo sampling methods for the Bayesian model developed for inverting 1D marine seismic and controlled-source electromagnetic (CSEM) data. The current software can be used for individual inversion of seismic AVO and CSEM data and for joint inversion of both seismic and EM data sets. The structure of the software is very general and flexible, and it allows users to easily incorporate their own forward simulation codes and rock physics model codes. Although the software was developed in the C and C++ programming languages, the user-supplied codes can be written in C, C++, or various versions of Fortran. The software provides clear interfaces for users to plug in their own codes. The output of this software is in a format that the free R package CODA can directly read to build MCMC objects.
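
    The simplest of the MCMC samplers such packages offer is random-walk Metropolis; a minimal generic sketch (not this package's implementation) drawing from an arbitrary log-posterior:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples, step=1.0):
    """Random-walk Metropolis: propose x' = x + N(0, step) and accept
    with probability min(1, exp(log_post(x') - log_post(x)))."""
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        xp = x + random.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(random.random()) < lpp - lp:
            x, lp = xp, lpp  # accept the proposal
        chain.append(x)  # on rejection, the current state is repeated
    return chain
```

    In a seismic/CSEM inversion the log-posterior would call the user-supplied forward model to compare predicted and observed data; here any callable works, e.g. a standard normal `lambda x: -0.5 * x * x`.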

  9. Direct Monte Carlo simulation of the chemical equilibrium composition of detonation products

    SciTech Connect (OSTI)

    Shaw, M.S.

    1993-06-01

    A new Monte Carlo simulation method has been developed by the author which gives the equilibrium chemical composition of a molecular fluid directly. The usual NPT (isothermal-isobaric) ensemble is implemented with N being the number of atoms instead of molecules. Changes in chemical composition are treated as correlated spatial moves of atoms. Given the interaction potentials between molecular products, "exact" EOS points, including the equilibrium chemical composition, can be determined from the simulations. This method is applied to detonation products at conditions in the region near the Chapman-Jouguet state. For the example of NO, it is shown that the CJ detonation velocity can be determined to within a few meters per second. A rather small change in the cross potentials is shown to shift the chemical equilibrium and the CJ conditions significantly.
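
    For reference, the textbook NPT-ensemble acceptance rule for a volume move looks like this; the paper's correlated composition moves carry an analogous acceptance weight, which is not reproduced here:

```python
import math
import random

def accept_npt_volume_move(d_u, pressure, v_old, v_new, n_particles, beta):
    """Standard isothermal-isobaric Metropolis acceptance for a volume
    change: accept with probability
    min(1, exp(-beta*(dU + P*dV) + N*ln(V_new/V_old)))."""
    log_acc = (-beta * (d_u + pressure * (v_new - v_old))
               + n_particles * math.log(v_new / v_old))
    return math.log(random.random()) < log_acc
```

    A strongly downhill move (large negative dU at fixed volume) is always accepted, while a strongly uphill one is essentially never accepted.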

  10. Iterative Monte Carlo analysis of spin-dependent parton distributions

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; Ethier, Jacob J.; Accardi, Alberto

    2016-04-05

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.
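
    The idea of Monte Carlo fitting with statistically rigorous uncertainties can be illustrated on a toy problem; a real PDF analysis resamples full data sets with correlated errors and refits a parametrized PDF, whereas this sketch just refits a straight line to noise-perturbed pseudo-data:

```python
import random
import statistics

def fit_line(xs, ys):
    """Closed-form least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def mc_fit(xs, ys, sigma, n_replicas=500):
    """Monte Carlo error propagation: perturb the data within its error
    bar sigma, refit each replica, and read the parameter uncertainty
    off the spread of the replica fits."""
    slopes = [fit_line(xs, [y + random.gauss(0.0, sigma) for y in ys])[0]
              for _ in range(n_replicas)]
    return statistics.mean(slopes), statistics.stdev(slopes)
```

    The replica spread directly estimates the fitted-parameter uncertainty without assuming a quadratic chi-squared surface, which is the main appeal of Monte Carlo over Hessian-based error estimates.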