Monte Carlo simulation for the transport beamline
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning system in order to optimize the number of shots and the dose delivery.
A Fast Monte Carlo Simulation for the International Linear Collider Detector
Office of Scientific and Technical Information (OSTI)
The following paper contains details concerning the motivation for, implementation, and performance of a Java-based fast Monte Carlo simulation for a detector designed to be used in the International Linear Collider. This simulation, presently included
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.
2014-10-01
We present a multilevel Monte Carlo numerical method for simulating Coulomb collisions that is new to plasma physics and highly efficient. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε{sup -2}) or O(ε{sup -2}(ln ε){sup 2}), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε{sup -3}) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε=10{sup -5}. We discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
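The telescoping construction described in this abstract can be sketched in a few lines. The example below is a generic multilevel Euler–Maruyama estimator for the terminal mean of geometric Brownian motion, not the authors' Landau–Fokker–Planck solver; the drift, volatility, and per-level sample counts are illustrative assumptions. The key MLMC ingredient is visible in the level loop: coarse and fine paths share the same Brownian increments, so the level corrections have small variance.

```python
import random
import math

def euler_maruyama_path(n_steps, T, x0, mu, sigma, normals):
    # Advance dX = mu*X dt + sigma*X dW with n_steps Euler-Maruyama steps,
    # consuming one standard normal per step (len(normals) == n_steps).
    dt = T / n_steps
    x = x0
    for z in normals:
        x += mu * x * dt + sigma * x * math.sqrt(dt) * z
    return x

def mlmc_estimate(levels, samples_per_level, T=1.0, x0=1.0, mu=0.05, sigma=0.2, seed=1):
    # Telescoping MLMC estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    # where level l uses 2**l timesteps. The fine and coarse paths on each
    # level share the same Brownian increments, so their difference has low
    # variance and needs few samples.
    rng = random.Random(seed)
    total = 0.0
    for l in range(levels + 1):
        nf = 2 ** l
        acc = 0.0
        for _ in range(samples_per_level[l]):
            zs = [rng.gauss(0.0, 1.0) for _ in range(nf)]
            fine = euler_maruyama_path(nf, T, x0, mu, sigma, zs)
            if l == 0:
                acc += fine
            else:
                # Coarse increments: sum pairs of fine normals, rescaled to unit variance.
                zc = [(zs[2 * i] + zs[2 * i + 1]) / math.sqrt(2.0) for i in range(nf // 2)]
                coarse = euler_maruyama_path(nf // 2, T, x0, mu, sigma, zc)
                acc += fine - coarse
        total += acc / samples_per_level[l]
    return total
```

For this test problem the exact answer is E[X_T] = x0·exp(μT), so the estimator can be checked directly; the geometric decay of sample counts across levels is what produces the cost savings the abstract quantifies.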
Cluster expansion modeling and Monte Carlo simulation of alnico 5-7 permanent magnets
Office of Scientific and Technical Information (OSTI)
Accepted Manuscript. This content will become publicly available on March 5, 2016.
Efficient Monte Carlo Simulations of Gas Molecules Inside Porous Materials
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
J. Kim and B. Smit, J. Chem. Theory Comput. 8 (7), 2336 (2012). DOI: 10.1021/ct3003699. Abstract: Monte Carlo (MC) simulations are commonly used to obtain adsorption properties of gas molecules inside porous materials. In this work, we discuss various optimization strategies that lead to faster MC simulations with CO2
Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Jihan Kim, Jocelyn M. Rodgers, Manuel Athènes, and Berend Smit, J. Chem. Theory Comput., 2011, 7 (10), pp 3208-3222. DOI: 10.1021/ct200474j. Abstract: In the waste recycling Monte Carlo (WRMC) algorithm, multiple trial states may be simultaneously generated and utilized during Monte Carlo
Linac Coherent Light Source Monte Carlo Simulation
Energy Science and Technology Software Center (OSTI)
2006-03-15
This suite consists of codes to generate an initial x-ray photon distribution and to propagate the photons through various objects. The suite is designed specifically for simulating the Linac Coherent Light Source, an x-ray free electron laser (XFEL) being built at the Stanford Linear Accelerator Center. The purpose is to provide sufficiently detailed characteristics of the laser to engineers who are designing the laser diagnostics.
Quantum Monte Carlo Simulation of Overpressurized Liquid {sup 4}He
Vranjes, L.; Boronat, J.; Casulleras, J.; Cazorla, C.
2005-09-30
A diffusion Monte Carlo simulation of superfluid {sup 4}He at zero temperature and pressures up to 275 bar is presented. Increasing the pressure beyond freezing ({approx}25 bar), the liquid enters the overpressurized phase in a metastable state. In this regime, we report results of the equation of state and the pressure dependence of the static structure factor, the condensate fraction, and the excited-state energy corresponding to the roton. Along this large pressure range, both the condensate fraction and the roton energy decrease but do not become zero. The roton energies obtained are compared with recent experimental data in the overpressurized regime.
Monte Carlo Simulation Tool Installation and Operation Guide
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software, and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation, and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
Complete Monte Carlo Simulation of Neutron Scattering Experiments
Drosg, M.
2011-12-13
In the past, it was not possible to accurately correct for the finite geometry and the finite sample size of a neutron scattering set-up. The limited computing power of early computers, the lack of powerful Monte Carlo codes, and the limitations of the databases available at the time prevented a complete simulation of the actual experiment. Using, e.g., the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can now be simulated almost completely, with a high degree of precision, on a modern PC, which has a computing power ten thousand times that of a supercomputer of the early 1970s. Thus, (better) corrections can also be obtained easily for previously published data, provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g., atomic mass, relativistic correction, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments from liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that the cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two thirds of all differential scattering cross section data of {sup 3}He(n,n){sup 3}He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interaction in the air, the interaction with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are smaller by a factor of five at least, making this work relevant.
Even more important are the corrections to the Karlsruhe data due to the inclusion of the missing outgoing self-attenuation that amounts to up to 15%.
Monte Carlo Benchmark (MCB)
Energy Science and Technology Software Center (OSTI)
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
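The particle lifecycle the benchmark exercises (creation, tracking, tallying, destruction) can be illustrated with a deliberately tiny example: a pure absorber in one dimension, where the transmitted fraction has the known answer exp(-Σt·L). This is a sketch of the generic pattern, not the MCB code itself (which is parallel and trades particles over MPI); the cross section and slab thickness are arbitrary.

```python
import random
import math

def transmit_fraction(n_particles, sigma_t, slab_thickness, seed=42):
    # Minimal Monte Carlo particle lifecycle: create a particle at x=0 moving
    # in +x, sample a free-flight distance from an exponential with mean
    # 1/sigma_t, tally whether it crosses the slab, then destroy it. With pure
    # absorption the expected transmitted fraction is exp(-sigma_t * L).
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        # 1 - random() lies in (0, 1], avoiding log(0).
        distance = -math.log(1.0 - rng.random()) / sigma_t
        if distance > slab_thickness:
            transmitted += 1  # tally: particle escaped the slab
        # particle history ends here (absorbed or escaped)
    return transmitted / n_particles
```

The statistical error of the tally shrinks as 1/sqrt(N), which is why benchmarks like MCB focus on how cheaply large particle counts can be processed in parallel.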
On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems
Walsh, Jon
2015-08-31
The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
Monte Carlo simulation of PET and SPECT imaging of {sup 90}Y (Journal Article)
Office of Scientific and Technical Information (OSTI)
Purpose: Yttrium-90 ({sup 90}Y) is traditionally thought of as a pure beta emitter, and is used in targeted radionuclide therapy, with imaging performed using bremsstrahlung single-photon emission computed tomography (SPECT). However, because {sup 90}Y also emits positrons through internal pair
Monte-Carlo simulation of noise in hard X-ray Transmission Crystal Spectrometers: Identification of contributors to the background noise and shielding optimization (Journal Article)
Office of Scientific and Technical Information (OSTI)
Quantum Monte Carlo simulation of spin-polarized H
Markic, L. Vranjes; Boronat, J.; Casulleras, J.
2007-02-01
The ground-state properties of spin-polarized hydrogen H{down_arrow} are obtained by means of diffusion Monte Carlo calculations. Using the most accurate ab initio H{down_arrow}-H{down_arrow} interatomic potential to date, we have studied its gas phase, from the very dilute regime up to densities above its freezing point. At very small densities, the equation of state of the gas is very well described in terms of the gas parameter {rho}a{sup 3}, with a the s-wave scattering length. The solid phase has also been studied up to high pressures. The gas-solid phase transition occurs at a pressure of 173 bar, a much higher value than suggested by previous approximate descriptions.
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry
Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; Mueller, Jonathon W.; Cody, Dianna D.; DeMarco, John J.
2015-02-15
Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with a standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
Energy Science and Technology Software Center (OSTI)
2006-05-09
The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo Mathematical technique for calculating the ground state energy of the hydrogen atom.
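In the spirit of such example programs, a variational Monte Carlo estimate of the hydrogen ground-state energy fits in a short function. This is a hedged sketch, not VARHATOM or DMCATOM themselves (their FORTRAN sources are not shown here); it uses the textbook trial wavefunction ψ(r) = exp(-αr) in atomic units, for which the local energy is -α²/2 + (α-1)/r and the exact ground state is recovered at α = 1 with energy -1/2 hartree.

```python
import random
import math

def vmc_hydrogen_energy(alpha, n_steps=20000, step=0.5, seed=7):
    # Variational Monte Carlo for hydrogen (atomic units) with trial
    # wavefunction psi(r) = exp(-alpha * r). The local energy is
    #   E_L(r) = -alpha**2 / 2 + (alpha - 1) / r,
    # a constant -1/2 at alpha = 1, the exact ground state.
    rng = random.Random(seed)
    pos = [1.0, 0.0, 0.0]
    r = 1.0
    energies = []
    for i in range(n_steps):
        trial = [c + step * (rng.random() - 0.5) for c in pos]
        r_trial = math.sqrt(sum(c * c for c in trial))
        # Metropolis acceptance on |psi|^2 = exp(-2 alpha r)
        if rng.random() < math.exp(-2.0 * alpha * (r_trial - r)):
            pos, r = trial, r_trial
        if i > n_steps // 10:  # discard burn-in before tallying
            energies.append(-0.5 * alpha**2 + (alpha - 1.0) / r)
    return sum(energies) / len(energies)
```

Scanning alpha and minimizing the returned energy recovers the variational curve E(α) = α²/2 − α, whose minimum at α = 1 gives −0.5 hartree.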
MONTE CARLO SIMULATION OF METASTABLE OXYGEN PHOTOCHEMISTRY IN COMETARY ATMOSPHERES
Bisikalo, D. V.; Shematovich, V. I. [Institute of Astronomy of the Russian Academy of Sciences, Moscow (Russian Federation); Gérard, J.-C.; Hubert, B. [Laboratory for Planetary and Atmospheric Physics (LPAP), University of Liège, Liège (Belgium); Jehin, E.; Decock, A. [Origines Cosmologiques et Astrophysiques (ORCA), University of Liège (Belgium); Hutsemékers, D. [Extragalactic Astrophysics and Space Observations (EASO), University of Liège (Belgium); Manfroid, J., E-mail: B.Hubert@ulg.ac.be [High Energy Astrophysics Group (GAPHE), University of Liège (Belgium)
2015-01-01
Cometary atmospheres are produced by the outgassing of material, mainly H{sub 2}O, CO, and CO{sub 2}, from the nucleus of the comet under the energy input from the Sun. Subsequent photochemical processes lead to the production of other species generally absent from the nucleus, such as OH. Although all comets are different, they all have a highly rarefied atmosphere, which is an ideal environment for nonthermal photochemical processes to take place and influence the detailed state of the atmosphere. We develop a Monte Carlo model of the coma photochemistry. We compute the energy distribution functions (EDF) of the metastable O({sup 1}D) and O({sup 1}S) species and obtain the red (630 nm) and green (557.7 nm) spectral line shapes of the full coma, consistent with the computed EDFs and the expansion velocity. We show that both species have a severely non-Maxwellian EDF, which results in broad spectral lines, with suprathermal broadening dominating over that due to the expansion motion. We apply our model to the atmospheres of comets C/1996 B2 (Hyakutake) and 103P/Hartley 2. The computed width of the green line, expressed in terms of speed, is lower than that of the red line. This result is comparable to previous theoretical analyses, but in disagreement with observations. We explain that the spectral line shape does not only depend on the exothermicity of the photochemical production mechanisms, but also on thermalization due to elastic collisions, which reduces the width of the emission line coming from the O({sup 1}D) level, which has a longer lifetime.
3D Direct Simulation Monte Carlo Code Which Solves for Geometries
Energy Science and Technology Software Center (OSTI)
1998-01-13
Pegasus is a 3D Direct Simulation Monte Carlo Code which solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.
Isotropic Monte Carlo Grain Growth
Energy Science and Technology Software Center (OSTI)
2013-04-25
IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
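The underlying algorithm is essentially a zero-temperature Potts model. The sketch below illustrates the idea on a square grid with periodic boundaries rather than IMCGG's hexagonal grid, and uses only the isotropic boundary energy; grid size, sweep count, and grain-ID range are illustrative assumptions.

```python
import random

def grain_growth_step(grid, rng):
    # One Monte Carlo sweep of zero-temperature Potts-model grain growth on a
    # square grid with periodic boundaries. Each site tries to adopt a random
    # neighbor's grain ID; the change is kept only if it does not increase the
    # number of unlike-neighbor bonds (the isotropic boundary energy).
    n = len(grid)

    def energy(i, j, spin):
        # count unlike neighbors under periodic boundary conditions
        nbrs = [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                grid[i][(j - 1) % n], grid[i][(j + 1) % n]]
        return sum(1 for s in nbrs if s != spin)

    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        if rng.random() < 0.5:
            new = grid[(i + rng.choice([-1, 1])) % n][j]
        else:
            new = grid[i][(j + rng.choice([-1, 1])) % n]
        if energy(i, j, new) <= energy(i, j, grid[i][j]):
            grid[i][j] = new  # boundary moves; grains coarsen over time

def count_grains(grid):
    # Number of distinct grain IDs still present on the lattice.
    return len({s for row in grid for s in row})
```

Starting from a random ID assignment and sweeping repeatedly, small grains are absorbed and the distinct-ID count falls, which is the normal grain growth the abstract describes.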
The effects of mapping CT images to Monte Carlo materials on GEANT4 proton simulation accuracy
Barnes, Samuel; McAuley, Grant; Slater, James; Wroe, Andrew
2013-04-15
Purpose: Monte Carlo simulations of radiation therapy require conversion from Hounsfield units (HU) in CT images to an exact tissue composition and density. The number of discrete densities (or density bins) used in this mapping affects the simulation accuracy, execution time, and memory usage in GEANT4 and other Monte Carlo code. The relationship between the number of density bins and CT noise was examined in general for all simulations that use HU conversion to density. Additionally, the effect of this on simulation accuracy was examined for proton radiation. Methods: Relative uncertainty from CT noise was compared with uncertainty from density binning to determine an upper limit on the number of density bins required in the presence of CT noise. Error propagation analysis was also performed on continuously slowing down approximation range calculations to determine the proton range uncertainty caused by density binning. These results were verified with Monte Carlo simulations. Results: In the presence of even modest CT noise (5 HU or 0.5%) 450 density bins were found to only cause a 5% increase in the density uncertainty (i.e., 95% of density uncertainty from CT noise, 5% from binning). Larger numbers of density bins are not required as CT noise will prevent increased density accuracy; this applies across all types of Monte Carlo simulations. Examining uncertainty in proton range, only 127 density bins are required for a proton range error of <0.1 mm in most tissue and <0.5 mm in low density tissue (e.g., lung). Conclusions: By considering CT noise and actual range uncertainty, the number of required density bins can be restricted to a very modest 127 depending on the application. Reducing the number of density bins provides large memory and execution time savings in GEANT4 and other Monte Carlo packages.
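The HU-to-density mapping and binning discussed in this abstract can be sketched as follows. The calibration breakpoints here are illustrative assumptions, not the paper's scanner-specific curve; the point is that quantizing into n bins bounds the density error by half a bin width, which is what limits how many bins are worth using in the presence of CT noise.

```python
def hu_to_density(hu):
    # Illustrative piecewise-linear HU-to-density calibration (the exact curve
    # is scanner-specific; these breakpoints are assumptions, not the paper's).
    # Air (~-1000 HU) maps to ~0.001 g/cm^3, water (0 HU) to 1.0.
    if hu <= 0:
        return max(0.001, 1.0 + hu / 1000.0)  # air/lung/soft-tissue ramp
    return 1.0 + hu * 0.0005                   # soft tissue to bone ramp

def bin_density(density, n_bins, d_min=0.001, d_max=3.0):
    # Quantize a continuous density into one of n_bins representative values,
    # as Monte Carlo codes do when building material tables from CT data.
    # The quantization error is at most half a bin width.
    width = (d_max - d_min) / n_bins
    idx = min(n_bins - 1, int((density - d_min) / width))
    return d_min + (idx + 0.5) * width         # bin-center density
```

With 127 bins over this assumed 0.001-3.0 g/cm³ range, the worst-case binning error is about 0.012 g/cm³, comfortably below the density uncertainty that 5 HU of CT noise already implies.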
Numerical thermalization in particle-in-cell simulations with Monte-Carlo collisions
Lai, P. Y.; Lin, T. Y.; Lin-Liu, Y. R.; Chen, S. H.
2014-12-15
Numerical thermalization in collisional one-dimensional (1D) electrostatic (ES) particle-in-cell (PIC) simulations was investigated. Two collision models, the pitch-angle scattering of electrons by the stationary ion background and large-angle collisions between the electrons and the neutral background, were included in the PIC simulation using Monte-Carlo methods. The numerical results show that the thermalization times in both models were considerably reduced by the additional Monte-Carlo collisions, as demonstrated by comparisons with Turner's previous simulation results based on a head-on collision model [M. M. Turner, Phys. Plasmas 13, 033506 (2006)]. However, the breakdown of Dawson's scaling law in the collisional 1D ES PIC simulation is more complicated than that observed by Turner, and a revised scaling law for the numerical thermalization time with numerical parameters is derived on the basis of the simulation results obtained in this study.
A Fast Monte Carlo Simulation for the International Linear Collider Detector
Office of Scientific and Technical Information (OSTI)
with the full simulation by sacrificing what is in many cases inappropriate attention to detail for valuable gains in the time required for results. Authors: Furse, D. ...
Monte Carlo Simulation of Longwave Fluxes Through Broken Scattering Cloud Fields
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
E. E. Takara and R. G. Ellingson, University of Maryland, College Park, Maryland. To simplify the analysis, we made several assumptions: the clouds were cuboidal; they were all identically sized and shaped; and they had constant optical properties. Results and Discussion: The model was run for a set of cloud fields with clouds of varying optical thickness and scattering albedo. The predicted effective cloud
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Tringe, J. W.; Ileri, N.; Levie, H. W.; Stroeve, P.; Ustach, V.; Faller, R.; Renaud, P.
2015-08-01
We use Molecular Dynamics and Monte Carlo simulations to examine molecular transport phenomena in nanochannels, explaining four orders of magnitude difference in wheat germ agglutinin (WGA) protein diffusion rates observed by fluorescence correlation spectroscopy (FCS) and by direct imaging of fluorescently-labeled proteins. We first use the ESPResSo Molecular Dynamics code to estimate the surface transport distance for neutral and charged proteins. We then employ a Monte Carlo model to calculate the paths of protein molecules on surfaces and in the bulk liquid transport medium. Our results show that the transport characteristics depend strongly on the degree of molecular surface coverage. Atomic force microscope characterization of surfaces exposed to WGA proteins for 1000 s shows large protein aggregates consistent with the predicted coverage. These calculations and experiments provide useful insight into the details of molecular motion in confined geometries.
Nonequilibrium candidate Monte Carlo: A new tool for efficient equilibrium simulation
Nilmeier, Jerome P.; Crooks, Gavin E.; Minh, David D. L.; Chodera, John D.
2011-11-08
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
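For contrast with the nonequilibrium moves described in this abstract, the baseline Metropolis rule they generalize looks like this. The sketch samples a standard Gaussian (energy x²/2 in units of kT); nonequilibrium candidate Monte Carlo itself is not implemented here, but as the abstract notes, it keeps the same accept/reject structure with the protocol work taking the place of the instantaneous energy difference.

```python
import random
import math

def metropolis(n_samples, energy, beta=1.0, step=1.0, seed=3):
    # Standard Metropolis sampling: accept a proposed move with probability
    # min(1, exp(-beta * dE)). Nonequilibrium candidate Monte Carlo keeps this
    # structure but replaces the instantaneous energy difference dE with the
    # work performed along a finite-time driven trajectory.
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        x_new = x + step * (rng.random() - 0.5)  # symmetric uniform proposal
        d_e = energy(x_new) - energy(x)
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            x = x_new
        samples.append(x)  # rejected moves re-count the current state
    return samples
```

Sampling the target E(x) = x²/2 should reproduce a unit-variance, zero-mean Gaussian; the same chain with work-based acceptance would admit far bolder proposals at comparable acceptance rates.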
Radiation doses in cone-beam breast computed tomography: A Monte Carlo simulation study
Yi Ying; Lai, Chao-Jen; Han Tao; Zhong Yuncheng; Shen Youtao; Liu Xinming; Ge Shuaiping; You Zhicheng; Wang Tianpeng; Shaw, Chris C.
2011-02-15
Purpose: In this article, we describe a method to estimate the spatial dose variation, average dose and mean glandular dose (MGD) for a real breast using Monte Carlo simulation based on cone beam breast computed tomography (CBBCT) images. We present and discuss the dose estimation results for 19 mastectomy breast specimens, 4 homogeneous breast models, 6 ellipsoidal phantoms, and 6 cylindrical phantoms. Methods: To validate the Monte Carlo method for dose estimation in CBBCT, we compared the Monte Carlo dose estimates with the thermoluminescent dosimeter measurements at various radial positions in two polycarbonate cylinders (11- and 15-cm in diameter). Cone-beam computed tomography (CBCT) images of 19 mastectomy breast specimens, obtained with a bench-top experimental scanner, were segmented and used to construct 19 structured breast models. Monte Carlo simulation of CBBCT with these models was performed and used to estimate the point doses, average doses, and mean glandular doses for unit open air exposure at the iso-center. Mass based glandularity values were computed and used to investigate their effects on the average doses as well as the mean glandular doses. Average doses for 4 homogeneous breast models were estimated and compared to those of the corresponding structured breast models to investigate the effect of tissue structures. Average doses for ellipsoidal and cylindrical digital phantoms of identical diameter and height were also estimated for various glandularity values and compared with those for the structured breast models. Results: The absorbed dose maps for structured breast models show that doses in the glandular tissue were higher than those in the nearby adipose tissue. Estimated average doses for the homogeneous breast models were almost identical to those for the structured breast models (p=1). 
Normalized average doses estimated for the ellipsoidal phantoms were similar to those for the structured breast models (root mean square (rms) percentage difference=1.7%; p=0.01), whereas those for the cylindrical phantoms were significantly lower (rms percentage difference=7.7%; p<0.01). Normalized MGDs were found to decrease with increasing glandularity. Conclusions: Our results indicate that it is sufficient to use homogeneous breast models derived from CBCT generated structured breast models to estimate the average dose. This investigation also shows that ellipsoidal digital phantoms of similar dimensions (diameter and height) and glandularity to actual breasts may be used to represent a real breast to estimate the average breast dose with Monte Carlo simulation. We have also successfully demonstrated the use of structured breast models to estimate the true MGDs and shown that the normalized MGDs decreased with the glandularity as previously reported by other researchers for CBBCT or mammography.
MCViNE- An object oriented Monte Carlo neutron ray tracing simulation package
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; Abernathy, Douglas L.; Lumsden, Mark D.; Winn, Barry L.; Aczel, Adam A.; Aivazis, Michael; Fultz, Brent
2015-11-28
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software package for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example, we used object-oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features together in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages, which facilitates porting instrument models from those codes. Furthermore, it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.
Simulation of atomic diffusion in the Fcc NiAl system: A kinetic Monte Carlo study
Alfonso, Dominic R.; Tafen, De Nyago
2015-04-28
The atomic diffusion in fcc NiAl binary alloys was studied by kinetic Monte Carlo simulation. The environment-dependent hopping barriers were computed using a pair interaction model whose parameters were fitted to relevant data derived from electronic structure calculations. Long-time diffusivities were calculated and the effect of composition change on the tracer diffusion coefficients was analyzed. The results indicate that this composition change has a noticeable impact on the atomic diffusivities. A reduction in the mobility of both Ni and Al is demonstrated with increasing Al content. To understand the predicted trends, the pair interactions between atoms were examined.
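A rejection-free (residence-time) kinetic Monte Carlo loop of the kind used in such studies can be sketched on a toy two-state system, where the stationary occupancy is known analytically. The rates below are arbitrary placeholders, not the fitted NiAl hopping barriers.

```python
import random
import math

def kmc_two_state(k12, k21, n_events=200000, seed=11):
    # Rejection-free (residence-time) kinetic Monte Carlo for a two-state
    # system. At each step the one available event fires, and simulated time
    # advances by an exponential waiting time drawn with the current state's
    # total rate. The long-run fraction of time in state 1 is k21/(k12 + k21).
    rng = random.Random(seed)
    state, t, t_in_1 = 1, 0.0, 0.0
    for _ in range(n_events):
        rate = k12 if state == 1 else k21
        dt = -math.log(1.0 - rng.random()) / rate  # exponential waiting time
        if state == 1:
            t_in_1 += dt
        t += dt
        state = 2 if state == 1 else 1             # the chosen event fires
    return t_in_1 / t
```

In an alloy simulation the single rate per state becomes a catalog of environment-dependent hop rates, with the event chosen in proportion to its rate, but the time-advance rule is identical.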
Monte Carlo simulations of channeling spectra recorded for samples containing complex defects
Jagielski, Jacek; Turos, Prof. Andrzej; Nowicki, Lech; Jozwik, P.; Shutthanandan, Vaithiyalingam; Zhang, Yanwen; Sathish, N.; Thome, Lionel; Stonert, A.; Jozwik-Biala, Iwona
2012-01-01
The aim of the present paper is to describe the current status of the development of McChasy, a Monte Carlo simulation code, to make it suitable for the analysis of dislocations and dislocation loops in crystals. Factors such as the shape of the bent channel and geometrical distortions of the crystalline structure in the vicinity of a dislocation are discussed. The results obtained demonstrate that the new procedure, applied to spectra recorded on crystals containing dislocations, yields damage profiles that are independent of the energy of the analyzing beam.
Monte Carlo simulations of channeling spectra recorded for samples containing complex defects
Jagielski, Jacek K.; Turos, Andrzej W.; Nowicki, L.; Jozwik, Przemyslaw A.; Shutthanandan, V.; Zhang, Yanwen; Sathish, N.; Thome, Lionel; Stonert, A.; Jozwik Biala, Iwona
2012-02-15
The main aim of the present paper is to describe the current status of the development of McChasy, a Monte Carlo simulation code, to make it suitable for the analysis of dislocations and dislocation loops in crystals. Factors such as the shape of the bent channel and geometrical distortions of the crystalline structure in the vicinity of a dislocation are discussed. Several examples of the analysis performed at different energies of analyzing ions are presented. The results obtained demonstrate that the new procedure, applied to spectra recorded on crystals containing dislocations, yields damage profiles that are independent of the energy of the analyzing beam.
Direct simulation Monte Carlo investigation of the Richtmyer-Meshkov instability.
Gallis, Michail A.; Koehler, Timothy P.; Torczynski, John R.; Plimpton, Steven J.
2015-08-14
The Richtmyer-Meshkov instability (RMI) is investigated using the Direct Simulation Monte Carlo (DSMC) method of molecular gas dynamics. Due to the inherent statistical noise and the significant computational requirements, DSMC is hardly ever applied to hydrodynamic flows. Here, DSMC RMI simulations are performed to quantify the shock-driven growth of a single-mode perturbation on the interface between two atmospheric-pressure monatomic gases prior to re-shocking as a function of the Atwood and Mach numbers. The DSMC results qualitatively reproduce all features of the RMI and are in reasonable quantitative agreement with existing theoretical and empirical models. The DSMC simulations indicate that RMI growth follows a universal behavior, consistent with previous work in this field.
Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.
2015-11-19
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work has been done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies into the solution. An ad hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
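The contrast between a KDE tally and a histogram tally can be sketched with a toy 1-D example (not from the paper; the Gaussian kernel, bandwidth, and synthetic collision sites are all illustrative):

```python
import numpy as np

def kde_tally(samples, points, h):
    """Gaussian-kernel estimate of an event density at arbitrary tally
    points; every sample contributes to every nearby tally point."""
    u = (points[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

def histogram_tally(samples, edges):
    """Conventional binned tally: each sample scores in exactly one bin."""
    counts, _ = np.histogram(samples, bins=edges)
    return counts / (len(samples) * np.diff(edges))

rng = np.random.default_rng(0)
sites = rng.normal(5.0, 1.0, size=20000)                   # synthetic collision sites
pts = np.linspace(0.0, 10.0, 101)                          # fine tally grid
kde = kde_tally(sites, pts, h=0.3)                         # smooth estimate everywhere
hist = histogram_tally(sites, np.linspace(0.0, 10.0, 21))  # 20 coarse bins
```

Because each sampled event scores at many tally points at once, the KDE estimate can be refined to any resolution without re-binning, which is the variance-reduction property the abstract describes.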
Massively parallel Monte Carlo for many-particle simulations on GPUs
Anderson, Joshua A.; Jankowski, Eric [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Grubb, Thomas L. [Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Engel, Michael [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Glotzer, Sharon C., E-mail: sglotzer@umich.edu [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]
2013-12-01
Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
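The hard-disk Metropolis kernel at the heart of such a simulation is simple; below is a minimal serial sketch (not the authors' GPU code; box size, density, and step size are illustrative). The paper's parallel scheme partitions the box into cells and updates only disks in non-interacting cells simultaneously so as to obey detailed balance; the serial sweep shows just the move/reject logic:

```python
import random

def overlaps(pos, i, trial, sigma, box):
    """Check a trial position for disk i against all other disks
    with the minimum-image convention (periodic box)."""
    for j, (xj, yj) in enumerate(pos):
        if j == i:
            continue
        dx = (trial[0] - xj + box / 2) % box - box / 2
        dy = (trial[1] - yj + box / 2) % box - box / 2
        if dx * dx + dy * dy < sigma * sigma:
            return True
    return False

def sweep(pos, sigma, box, delta, rng):
    """One Metropolis sweep: hard-disk moves are accepted iff they
    create no overlap, which keeps the sampling exact."""
    accepted = 0
    for i in range(len(pos)):
        x, y = pos[i]
        trial = ((x + rng.uniform(-delta, delta)) % box,
                 (y + rng.uniform(-delta, delta)) % box)
        if not overlaps(pos, i, trial, sigma, box):
            pos[i] = trial
            accepted += 1
    return accepted

rng = random.Random(1)
box, sigma, n = 10.0, 1.0, 16
pos = [(1.25 + 2.5 * (k % 4), 1.25 + 2.5 * (k // 4)) for k in range(n)]  # dilute lattice start
acc = sum(sweep(pos, sigma, box, 0.3, rng) for _ in range(50))
```

On a GPU, many such moves are attempted concurrently; restricting simultaneous updates to disks that cannot interact is what makes the massive parallelism compatible with the balance conditions the paper tests.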
Surface Structures of Cubo-octahedral Pt-Mo Catalyst Nanoparticles from Monte Carlo Simulations
Wang, Guofeng; Van Hove, M.A.; Ross, P.N.; Baskes, M.I.
2005-03-31
The surface structures of cubo-octahedral Pt-Mo nanoparticles have been investigated using the Monte Carlo method and modified embedded atom method potentials that we developed for Pt-Mo alloys. The cubo-octahedral Pt-Mo nanoparticles are constructed with disordered fcc configurations, with sizes from 2.5 to 5.0 nm, and with Pt concentrations from 60 to 90 at. %. The equilibrium Pt-Mo nanoparticle configurations were generated through Monte Carlo simulations allowing both atomic displacements and element exchanges at 600 K. We predict that the Pt atoms weakly segregate to the surfaces of such nanoparticles. The Pt concentrations in the surface are calculated to be 5 to 14 at. % higher than the Pt concentrations of the nanoparticles. Moreover, the Pt atoms preferentially segregate to the facet sites of the surface, while the Pt and Mo atoms tend to alternate along the edges and vertices of these nanoparticles. We found that decreasing the size or increasing the Pt concentration leads to higher Pt concentrations but fewer Pt-Mo pairs in the Pt-Mo nanoparticle surfaces.
Density-functional Monte-Carlo simulation of CuZn order-disorder transition
Khan, Suffian N.; Eisenbach, Markus
2016-01-25
We perform a Wang-Landau Monte Carlo simulation of a Cu0.5Zn0.5 order-disorder transition using 250 atoms and pairwise atom swaps inside a 5 x 5 x 5 BCC supercell. Each time step uses energies calculated from density functional theory (DFT) via the all-electron Korringa-Kohn-Rostoker method and self-consistent potentials. Here we find CuZn undergoes a transition from a disordered A2 to an ordered B2 structure, as observed in experiment. Our calculated transition temperature is near 870 K, comparing favorably to the known experimental peak at 750 K. We also plot the entropy, temperature, specific heat, and short-range order as a function of internal energy.
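The Wang-Landau procedure used here can be illustrated on a toy model. The sketch below estimates ln g(E) for a small 2D Ising lattice rather than the DFT-driven CuZn system; the lattice size, flatness criterion, and modification-factor schedule are illustrative choices:

```python
import math, random

def wang_landau(Lside=4, f_final=1e-4, flat=0.8, seed=2):
    """Wang-Landau estimate of ln g(E) for a small 2D Ising model:
    a random walk in energy accepted with min(1, g(E)/g(E')), with
    ln g updated by ln f after every step and ln f halved whenever
    the visit histogram is sufficiently flat."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(Lside)] for _ in range(Lside)]

    def site_energy(i, j):
        s = spins[i][j]
        return -s * (spins[(i + 1) % Lside][j] + spins[(i - 1) % Lside][j]
                     + spins[i][(j + 1) % Lside] + spins[i][(j - 1) % Lside])

    E = sum(site_energy(i, j) for i in range(Lside) for j in range(Lside)) // 2
    lng, hist, lnf = {}, {}, 1.0
    while lnf > f_final:
        for _ in range(10000):
            i, j = rng.randrange(Lside), rng.randrange(Lside)
            Enew = E - 2 * site_energy(i, j)     # energy change of one spin flip
            if math.log(rng.random() + 1e-300) < lng.get(E, 0.0) - lng.get(Enew, 0.0):
                spins[i][j] = -spins[i][j]
                E = Enew
            lng[E] = lng.get(E, 0.0) + lnf
            hist[E] = hist.get(E, 0) + 1
        if min(hist.values()) > flat * (sum(hist.values()) / len(hist)):
            lnf /= 2.0                           # refine the modification factor
            hist = {}
    return lng
```

Because the estimated density of states biases the walk toward rarely visited energies, a single run yields thermodynamic quantities (entropy, specific heat) at all temperatures, which is why the abstract can report properties as a function of internal energy.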
Billion-atom synchronous parallel kinetic Monte Carlo simulations of critical 3D Ising systems
Martinez, E.; Monasterio, P.R.; Marian, J.
2011-02-20
An extension of the synchronous parallel kinetic Monte Carlo (spkMC) algorithm developed by Martinez et al. [J. Comp. Phys. 227 (2008) 3804] to discrete lattices is presented. The method solves the master equation synchronously by recourse to null events that keep all processors' time clocks current in a global sense. Boundary conflicts are resolved by adopting a chessboard decomposition into non-interacting sublattices. We find that the bias introduced by the spatial correlations attendant to the sublattice decomposition is within the standard deviation of serial calculations, which confirms the statistical validity of our algorithm. We have analyzed the parallel efficiency of spkMC and find that it scales consistently with problem size and sublattice partition. We apply the method to the calculation of scale-dependent critical exponents in billion-atom 3D Ising systems, with very good agreement with state-of-the-art multispin simulations.
A bottom collider vertex detector design, Monte-Carlo simulation and analysis package
Lebrun, P.
1990-10-01
A detailed simulation of the BCD vertex detector is underway. Specifications and global design issues are briefly reviewed. The BCD design based on double-sided strip detectors is described in more detail. The GEANT3-based Monte-Carlo program and the analysis package used to estimate detector performance are discussed in detail. The current status of the expected resolution and signal-to-noise ratio for the "golden" CP-violating mode B_d → π⁺π⁻ is presented. These calculations have been done at FNAL energy (√s = 2.0 TeV). Emphasis is placed on design issues, analysis techniques, and related software rather than physics potential. 20 refs., 46 figs.
Cluster expansion modeling and Monte Carlo simulation of alnico 5–7 permanent magnets
Nguyen, Manh Cuong; Zhao, Xin; Wang, Cai -Zhuang; Ho, Kai -Ming
2015-03-05
Concerns about the supply of rare earth (RE) metals have generated a great deal of interest in the search for high-performance RE-free permanent magnets. Alnico alloys are traditional non-RE permanent magnets and have received much attention recently due to their good performance at high temperature. In this paper, we develop an accurate and efficient cluster expansion energy model for alnico 5–7. Monte Carlo simulations using the cluster expansion method are performed to investigate the structure of alnico 5–7 at the atomistic and nano scales. The alnico 5–7 master alloy is found to decompose into FeCo-rich and NiAl-rich phases at low temperature. The boundary between these two phases is quite sharp (~2 nm) over a wide range of temperature. The compositions of the main constituents in these two phases become higher as the temperature is lowered. Both FeCo-rich and NiAl-rich phases are in B2 ordering, with Fe and Al on the α-site and Ni and Co on the β-site. The degree of order of the NiAl-rich phase is much higher than that of the FeCo-rich phase. In addition, a small magnetic moment is also observed in the NiAl-rich phase, but the moment is reduced as the temperature is lowered, implying that the magnetic properties of alnico 5–7 could be improved by lowering the annealing temperature to diminish the magnetism in the NiAl-rich phase. Furthermore, the results from our Monte Carlo simulations are consistent with available experimental results.
Uribe, R. M.; Salvat, F.; Cleland, M. R.; Berejka, A.
2009-03-10
The Monte Carlo code PENELOPE was used to simulate the irradiation of alanine-coated film dosimeters with electron beams of energies from 1 to 5 MeV produced by a high-current industrial electron accelerator. This code includes a geometry package that defines complex quadric geometries, such as those of the irradiation of products in an irradiation processing facility. In the present case the energy deposited on a water film at the surface of a wood parallelepiped was calculated using the program PENMAIN, which is a generic main program included in the PENELOPE distribution package. The results from the simulation were then compared with measurements performed by irradiating alanine film dosimeters with electrons using a 150 kW Dynamitron electron accelerator. The alanine films were placed on top of a set of wooden planks using the same geometrical arrangement as the one used for the simulation. The way the results from the simulation can be correlated with the actual measurements, taking into account the irradiation parameters, is described. An estimate of the percentage difference between measurements and calculations is also presented.
Monte Carlo simulations of periodic pulsed reactor with moving geometry parts
Cao, Yan; Gohar, Yousry
2015-11-01
In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical to produce periodic power pulses. Such a periodic state change is accomplished by a periodic movement of specific reactor parts, such as control rods or reflector sections. The analysis of such a reactor is difficult to perform with current reactor physics computer programs. Based on past experience, the use of the point kinetics approximation gives considerable errors in predicting the magnitude and the shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the TRCL/TR transformation feature of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the reactor dynamics with movable geometry parts.
von Wittenau, A; Aufderheide, M B; Henderson, G L
2010-05-07
Given the cost and lead-times involved in high-energy proton radiography, it is prudent to model proposed radiographic experiments to see if the images predicted would return useful information. We recently modified our raytracing transmission radiography modeling code HADES to perform simplified Monte Carlo simulations of the transport of protons in a proton radiography beamline. Beamline objects include the initial diffuser, vacuum magnetic fields, windows, angle-selecting collimators, and objects described as distorted 2D (planar or cylindrical) meshes or as distorted 3D hexahedral meshes. We present an overview of the algorithms used for the modeling and code timings for simulations through typical 2D and 3D meshes. We next calculate expected changes in image blur as scattering materials are placed upstream and downstream of a resolution test object (a 3 mm thick sheet of tantalum, into which 0.4 mm wide slits have been cut), and as the current supplied to the focusing magnets is varied. We compare and contrast the resulting simulations with the results of measurements obtained at the 800 MeV Los Alamos LANSCE Line-C proton radiography facility.
MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation
Meyer, Arnd
2010-02-10
A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.
Byun, H. S.; Pirbadian, S.; Nakano, Aiichiro; Shi, Liang; El-Naggar, Mohamed Y.
2014-09-05
Microorganisms overcome the considerable hurdle of respiring extracellular solid substrates by deploying large multiheme cytochrome complexes that form 20 nanometer conduits to traffic electrons through the periplasm and across the cellular outer membrane. Here we report the first kinetic Monte Carlo simulations and single-molecule scanning tunneling microscopy (STM) measurements of the Shewanella oneidensis MR-1 outer membrane decaheme cytochrome MtrF, which can perform the final electron transfer step from cells to minerals and microbial fuel cell anodes. We find that the calculated electron transport rate through MtrF is consistent with previously reported in vitro measurements of the Shewanella Mtr complex, as well as in vivo respiration rates on electrode surfaces assuming a reasonable (experimentally verified) coverage of cytochromes on the cell surface. The simulations also reveal a rich phase diagram in the overall electron occupation density of the hemes as a function of electron injection and ejection rates. Single molecule tunneling spectroscopy confirms MtrF's ability to mediate electron transport between an STM tip and an underlying Au(111) surface, but at rates higher than expected from previously calculated heme-heme electron transfer rates for solvated molecules.
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis, or DREAM, runs multiple chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
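The core of DREAM's proposal mechanism is the differential-evolution move inherited from its predecessor DE-MC: a chain jumps along the difference of two other randomly chosen chains, so the proposal automatically adapts its scale and orientation to the posterior. Below is a minimal sketch on a Gaussian target (not the DREAM algorithm itself, which adds randomized subspace sampling and outlier handling; all parameters are illustrative):

```python
import numpy as np

def de_mc(logpost, n_chains, d, n_iter, rng, eps=1e-6):
    """Differential-evolution MCMC: chain i proposes
    x' = x_i + gamma * (x_a - x_b) + small noise, where a and b are
    two other chains, then accepts by the usual Metropolis rule."""
    gamma = 2.38 / np.sqrt(2.0 * d)        # standard DE-MC jump scale
    X = rng.normal(size=(n_chains, d))     # dispersed starting points
    logp = np.array([logpost(x) for x in X])
    kept = []
    for it in range(n_iter):
        for i in range(n_chains):
            a, b = rng.choice([j for j in range(n_chains) if j != i],
                              size=2, replace=False)
            prop = X[i] + gamma * (X[a] - X[b]) + eps * rng.normal(size=d)
            lp = logpost(prop)
            if np.log(rng.random()) < lp - logp[i]:
                X[i], logp[i] = prop, lp
        if it >= n_iter // 2:              # discard burn-in half
            kept.append(X.copy())
    return np.concatenate(kept)

rng = np.random.default_rng(3)
logpost = lambda x: -0.5 * np.sum((x - 2.0) ** 2)  # N(2, I) target, up to a constant
draws = de_mc(logpost, n_chains=10, d=2, n_iter=2000, rng=rng)
```

Because the difference vectors are drawn from the current population, no hand-tuned proposal covariance is needed, which is the failure mode of standard MCMC that the abstract highlights.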
Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics.
Seker, V.; Thomas, J. W.; Downar, T. J.; Purdue Univ.
2007-01-01
A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in the FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density, and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code DeCART. Good agreement in the k_eff and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high-fidelity modeling of nuclear reactor cores during the last several years. High fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport methods to CFD and their application to practical reactor analysis problems.
One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic neutron transport and CFD solutions. Previous researchers have successfully performed Monte Carlo calculations with limited thermal feedback. In fact, much of the validation of the deterministic neutron transport code DeCART was performed using the Monte Carlo code McCARD, which employs a limited thermal feedback model. However, for a broader range of temperature/fluid applications it was desirable to couple Monte Carlo to a more sophisticated temperature-fluid solution such as CFD. This paper focuses on the methods used to couple Monte Carlo to CFD and their application to a series of simple test problems.
Monte Carlo simulation based study of a proposed multileaf collimator for a telecobalt machine
Sahani, G.; Dash Sharma, P. K.; Hussain, S. A.; Dutt Sharma, Sunil; Sharma, D. N.
2013-02-15
Purpose: The objective of the present work was to propose a design of a secondary multileaf collimator (MLC) for a telecobalt machine and optimize its design features through Monte Carlo simulation. Methods: The proposed MLC design consists of 72 leaves (36 leaf pairs) with additional jaws perpendicular to leaf motion, having the capability of shaping a maximum square field size of 35 × 35 cm². The projected widths at isocenter of each of the central 34 leaf pairs and 2 peripheral leaf pairs are 10 and 5 mm, respectively. The ends of the leaves and the x-jaws were optimized to obtain acceptable values of dosimetric and leakage parameters. The Monte Carlo N-Particle code was used for generating beam profiles and depth dose curves and estimating the leakage radiation through the MLC. A water phantom of dimension 50 × 50 × 40 cm³ with an array of voxels (4 × 0.3 × 0.6 cm³ = 0.72 cm³) was used for the study of the dosimetric and leakage characteristics of the MLC. Output files generated for beam profiles were exported to the PTW radiation field analyzer software through locally developed software for analysis of beam profiles in order to evaluate radiation field width, beam flatness, symmetry, and beam penumbra. Results: The optimized version of the MLC can define radiation fields of up to 35 × 35 cm² within the prescribed tolerance value of 2 mm. The flatness and symmetry were found to be well within the acceptable tolerance value of 3%. The penumbra for a 10 × 10 cm² field size is 10.7 mm, which is less than the generally acceptable value of 12 mm for a telecobalt machine. The maximum and average radiation leakage through the MLC were found to be 0.74% and 0.41%, which are well below the International Electrotechnical Commission recommended tolerance values of 2% and 0.75%, respectively.
The maximum leakage through the leaf ends in the closed condition was observed to be 8.6%, which is less than the values reported for other MLCs designed for medical linear accelerators. Conclusions: It is concluded that the dosimetric parameters and the leakage radiation of the optimized secondary MLC design are well below their recommended tolerance values. The optimized design of the proposed MLC can be integrated into a telecobalt machine by replacing the existing adjustable secondary collimator for conformal radiotherapy treatment of cancer patients.
Boscoboinik, A. M.; Manzi, S. J.; Tysoe, W. T.; Pereyra, V. D.; Boscoboinik, J. A.
2015-09-10
The influence of directing agents in the self-assembly of molecular wires to produce two-dimensional electronic nanoarchitectures is studied here using a Monte Carlo approach to simulate the effect of arbitrarily locating nodal points on a surface, from which the growth of self-assembled molecular wires can be nucleated. This is compared to experimental results reported for the self-assembly of molecular wires when 1,4-phenylenediisocyanide (PDI) is adsorbed on Au(111). The latter results in the formation of (Au-PDI)_{n} organometallic chains, which were shown to be conductive when linked between gold nanoparticles on an insulating substrate. The present study analyzes, by means of stochastic methods, the influence of variables that affect the growth and design of self-assembled conductive nanoarchitectures, such as the distance between nodes, coverage of the monomeric units that leads to the formation of the desired architectures, and the interaction between the monomeric units. As a result, this study proposes an approach and sets the stage for the production of complex 2D nanoarchitectures using a bottom-up strategy but including the use of current state-of-the-art top-down technology as an integral part of the self-assembly strategy.
Cascade annealing simulations of bcc iron using object kinetic Monte Carlo
Xu, Haixuan; Osetskiy, Yury N; Stoller, Roger E
2012-01-01
Simulations of displacement cascade annealing were carried out using object kinetic Monte Carlo (OKMC) based on an extensive molecular dynamics (MD) database including various primary knock-on atom energies and directions. The sensitivity of the results to a broad range of material and model parameters was examined. The diffusion mechanism of interstitial clusters was identified as having the most significant impact on the fraction of stable interstitials that escape the cascade region. The maximum level of recombination was observed for the limiting case in which all interstitial clusters exhibit 3D random-walk diffusion. The OKMC model was parameterized using two alternative sets of defect migration and binding energies, one from ab initio calculations and the second from an empirical potential. The two sets of data predict essentially the same fraction of surviving defects but different times associated with the defect escape processes. This study provides a comprehensive picture of the first phase of long-term defect evolution in bcc iron and generates information that can be used as input for mean field rate theory (MFRT) to predict the microstructure evolution of materials under irradiation. In addition, the limitations of the current OKMC model are discussed and a potential way to overcome these limitations is outlined.
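The event loop underlying object kinetic Monte Carlo is the residence-time (BKL) algorithm: choose an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. A minimal sketch with a single hopping defect follows (illustrative rates; not the authors' parameterization of migration and binding energies):

```python
import math, random

def kmc_run(rates, execute, t_end, rng):
    """Residence-time (BKL) kinetic Monte Carlo: select an event with
    probability proportional to its rate, execute it, and advance the
    clock by an exponential waiting time with mean 1/R_total."""
    t = 0.0
    while t < t_end:
        rlist = rates()                    # current event rates
        R = sum(rlist)
        if R == 0.0:
            break                          # no events possible
        r = rng.random() * R
        acc = 0.0
        for k, rk in enumerate(rlist):
            acc += rk
            if r < acc:
                break
        execute(k)
        t += -math.log(1.0 - rng.random()) / R
    return t

# toy system: one defect hopping left/right on a 1-D lattice
pos = [0]
hop = 1.0                                  # illustrative hop rate per direction
rng = random.Random(4)
t_final = kmc_run(lambda: [hop, hop],
                  lambda k: pos.__setitem__(0, pos[0] + (1 if k == 0 else -1)),
                  t_end=1000.0, rng=rng)
```

In an OKMC cascade-annealing model the event list would instead contain migration, emission, and recombination events for every defect object, with rates derived from migration and binding energies such as the ab initio and empirical-potential sets the abstract compares.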
Particle-In-Cell/Monte Carlo Simulation of Ion Back Bombardment in Photoinjectors
Qiang, Ji; Corlett, John; Staples, John
2009-03-02
In this paper, we report on studies of ion back bombardment in high-average-current dc and rf photoinjectors using a particle-in-cell/Monte Carlo method. Using the H₂ ion as an example, we observed that the ion density and energy deposition on the photocathode in rf guns are an order of magnitude lower than those in a dc gun. A higher rf frequency helps mitigate ion back bombardment of the cathode in rf guns.
Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando; Hernandez-Ortiz, Juan P.; de Pablo, Juan J.
2015-07-27
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
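The stochastic free-energy minimization described above can be sketched in one dimension: a scalar order parameter on a grid, a double-well local term plus a gradient penalty (a crude stand-in for the Landau-de Gennes functional), and Metropolis moves with a slowly lowered temperature. All coefficients and schedules below are illustrative, not the authors' model:

```python
import math, random

A, B, KAPPA = -1.0, 1.0, 0.5               # double-well and gradient coefficients

def site_F(q, i):
    """All free-energy terms that involve site i (local well + gradients)."""
    s = q[i]
    F = A * s * s + B * s ** 4
    if i > 0:
        F += KAPPA * (q[i] - q[i - 1]) ** 2
    if i < len(q) - 1:
        F += KAPPA * (q[i + 1] - q[i]) ** 2
    return F

def total_F(q):
    """Total discretized free energy of the field."""
    return (sum(A * s * s + B * s ** 4 for s in q)
            + KAPPA * sum((q[i] - q[i - 1]) ** 2 for i in range(1, len(q))))

def mc_minimize(n=50, sweeps=3000, step=0.1, T0=0.4, seed=5):
    """Metropolis sampling of field values with a linearly lowered
    temperature; the random walk ends near a minimum of the functional."""
    rng = random.Random(seed)
    q = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    for s in range(sweeps):
        T = T0 * (1.0 - s / sweeps) + 1e-6
        for _ in range(n):
            i = rng.randrange(n)
            old, Fold = q[i], site_F(q, i)
            q[i] = old + rng.uniform(-step, step)
            dF = site_F(q, i) - Fold
            if dF > 0.0 and rng.random() >= math.exp(-dF / T):
                q[i] = old                 # reject uphill move
    return q

q = mc_minimize()                          # wells of -q^2 + q^4 sit at |q| = 1/sqrt(2)
```

The thermal acceptance of uphill moves is what lets this kind of sampler escape shallow local minima, which is the advantage over deterministic relaxation that the abstract reports for multi-morphology systems.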
Quantum Monte Carlo simulation of a two-dimensional Bose gas
Pilati, S.; Boronat, J.; Casulleras, J.; Giorgini, S.
2005-02-01
The equation of state of a homogeneous two-dimensional Bose gas is calculated using quantum Monte Carlo methods. The low-density universal behavior is investigated using different interatomic model potentials, both finite ranged and strictly repulsive and zero ranged, supporting a bound state. The condensate fraction and the pair distribution function are calculated as a function of the gas parameter, ranging from the dilute to the strongly correlated regime. In the case of the zero-range pseudopotential we discuss the stability of the gaslike state for large values of the two-dimensional scattering length, and we calculate the critical density where the system becomes unstable against cluster formation.
Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus; Vogel, Thomas; Landau, David P
2015-01-01
We describe the study of thermodynamics of materials using replica-exchange Wang-Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang-Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first-principles calculations. We demonstrate that our framework leads to a significant speedup without compromising the accuracy and precision and facilitates the study of much larger systems than is possible with its serial counterpart.
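The serial Wang-Landau sampling that REWL parallelizes can be sketched on a minimal system; here a density-of-states estimate for an 8-spin Ising ring, a deliberately tiny stand-in for the Heisenberg model actually studied:

```python
import numpy as np

rng = np.random.default_rng(1)

# Wang-Landau estimate of the density of states g(E) of a 1D Ising ring.
N = 8
spins = rng.choice([-1, 1], N)

def energy(s):
    return -int(np.sum(s * np.roll(s, 1)))

levels = list(range(-N, N + 1, 4))        # allowed ring energies: -8,-4,0,4,8
idx = {E: i for i, E in enumerate(levels)}
log_g = np.zeros(len(levels))             # running estimate of ln g(E)
f = 1.0                                   # ln-space modification factor
E = energy(spins)
while f > 1e-3:
    for _ in range(10000):
        i = rng.integers(N)
        spins[i] *= -1                    # propose a single spin flip
        E_new = energy(spins)
        # accept with probability g(E)/g(E_new) to flatten the visit histogram
        if rng.random() < np.exp(log_g[idx[E]] - log_g[idx[E_new]]):
            E = E_new
        else:
            spins[i] *= -1                # reject: undo the flip
        log_g[idx[E]] += f
    f /= 2.0                              # refine the modification factor

# normalize so the total number of states is 2^N
g = np.exp(log_g - log_g.max())
g *= 2**N / g.sum()
```

For this ring the exact values are g(-8)=2, g(-4)=56, g(0)=140, g(4)=56, g(8)=2; REWL splits such an energy range into overlapping windows sampled by communicating replicas.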
Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419
Hulett, David T.; Nosbisch, Michael R.
2012-07-01
This discussion of the recommended practice (RP) 57R-09 of AACE International defines the integrated analysis of schedule and cost risk to estimate the appropriate level of cost and schedule contingency reserve on projects. The main contribution of this RP is to include the impact of schedule risk on cost risk and hence on the need for cost contingency reserves. Additional benefits include the prioritizing of the risks to cost, some of which are risks to schedule, so that risk mitigation may be conducted in a cost-effective way, scatter diagrams of time-cost pairs for developing joint targets of time and cost, and probabilistic cash flow which shows cash flow at different levels of certainty. Integrating cost and schedule risk into one analysis based on the project schedule loaded with costed resources from the cost estimate provides two benefits: (1) more accurate cost estimates than if schedule risk were ignored or only partially incorporated, and (2) a clear illustration of the importance of schedule risk to cost risk when the durations of activities using labor-type (time-dependent) resources are risky. Many activities such as detailed engineering, construction or software development are mainly conducted by people who need to be paid even if their work takes longer than scheduled. Level-of-effort resources, such as the project management team, are extreme examples of time-dependent resources, since if the project duration exceeds its planned duration the cost of these resources will increase over their budgeted amount. The integrated cost-schedule risk analysis is based on: - A high quality CPM schedule with logic tight enough so that it will provide the correct dates and critical paths during simulation automatically without manual intervention. - A contingency-free estimate of project costs that is loaded on the activities of the schedule. - Resolution of inconsistencies between the cost estimate and schedule that often creep into those documents as project execution proceeds. 
- Good-quality risk data that are usually collected in risk interviews of the project team, management and others knowledgeable in the risk of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method. The risk driver method is based on the fundamental principle that identifiable risks drive overall cost and schedule risk. - A Monte Carlo simulation software program that can simulate schedule risk, burn-rate risk and time-independent resource risk. The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates including contingency reserves are not acceptable to the project stakeholders the project team should conduct risk mitigation workshops and studies, deciding which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and of time, calculated at a level that represents an acceptable degree of certainty and uncertainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserve of time and cost that are the main results of this analysis apply if that plan is to be followed. 
Of course project managers have the option of re-planning and re-scheduling in the face of new facts, in part by m
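A drastically reduced sketch of such an integrated simulation: three serial activities with time-dependent labor costs and a single multiplicative risk driver, collecting the time-cost pairs that would populate the "football chart". All rates, probabilities, and distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy integrated cost-schedule risk model (illustrative only, not the RP).
base_days = np.array([20.0, 40.0, 30.0])   # planned durations (days)
day_rate = np.array([5.0, 8.0, 6.0])       # labor cost per day (k$)

n = 10000
# One "risk driver": when it occurs, it stretches every activity duration.
occurs = rng.random(n) < 0.3                        # 30% probability
impact = rng.triangular(1.1, 1.25, 1.5, size=n)     # multiplicative impact
factor = np.where(occurs, impact, 1.0)

# Baseline duration uncertainty per activity, plus the common risk driver.
durations = base_days * factor[:, None] * rng.triangular(0.9, 1.0, 1.3, (n, 3))
total_time = durations.sum(axis=1)                  # serial critical path
total_cost = (durations * day_rate).sum(axis=1)     # time-dependent resources

p80_time = np.percentile(total_time, 80)            # schedule contingency level
p80_cost = np.percentile(total_cost, 80)            # cost contingency level
corr = np.corrcoef(total_time, total_cost)[0, 1]    # time-cost coupling
```

Because costs are driven by the same simulated durations, the (total_time, total_cost) pairs are strongly correlated, which is exactly the schedule-to-cost risk linkage the RP emphasizes.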
HILO: Quasi Diffusion Accelerated Monte Carlo on Hybrid Architectures
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
fidelity simulation of a diverse range of kinetic systems.
Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations
Office of Scientific and Technical Information (OSTI)
Statistical Exploration of Electronic Structure of Molecules from Quantum Monte-Carlo Simulations
Prabhat, Mr; Zubarev, Dmitry; Lester, Jr., William A.
2010-12-22
In this report, we present results from analysis of Quantum Monte Carlo (QMC) simulation data with the goal of determining internal structure of a 3N-dimensional phase space of an N-electron molecule. We are interested in mining the simulation data for patterns that might be indicative of the bond rearrangement as molecules change electronic states. We examined simulation output that tracks the positions of two coupled electrons in the singlet and triplet states of an H2 molecule. The electrons trace out a trajectory, which was analyzed with a number of statistical techniques. This project was intended to address the following scientific questions: (1) Do high-dimensional phase spaces characterizing electronic structure of molecules tend to cluster in any natural way? Do we see a change in clustering patterns as we explore different electronic states of the same molecule? (2) Since it is hard to understand the high-dimensional space of trajectories, can we project these trajectories to a lower dimensional subspace to gain a better understanding of patterns? (3) Do trajectories inherently lie in a lower-dimensional manifold? Can we recover that manifold? After extensive statistical analysis, we are now in a better position to respond to these questions. (1) We definitely see clustering patterns, and differences between the H2 and H2tri datasets. These are revealed by the pamk method in a fairly reliable manner and can potentially be used to distinguish bonded and non-bonded systems and get insight into the nature of bonding. (2) Projecting to a lower dimensional subspace ({approx}4-5) using PCA or Kernel PCA reveals interesting patterns in the distribution of scalar values, which can be related to the existing descriptors of electronic structure of molecules. 
Also, these results can be immediately used to develop robust tools for analysis of noisy data obtained during QMC simulations (3) All dimensionality reduction and estimation techniques that we tried seem to indicate that one needs 4 or 5 components to account for most of the variance in the data, hence this 5D dataset does not necessarily lie on a well-defined, low dimensional manifold. In terms of specific clustering techniques, K-means was generally useful in exploring the dataset. The partition around medoids (pam) technique produced the most definitive results for our data showing distinctive patterns for both a sample of the complete data and time-series. The gap statistic with the Tibshirani criterion did not provide any distinction across the two datasets. The gap statistic with DandF criteria, model-based clustering, and hierarchical modeling simply failed to run on our datasets. Thankfully, the vanilla PCA technique was successful in handling our entire dataset. PCA revealed some interesting patterns for the scalar value distribution. Kernel PCA techniques (vanilladot, RBF, Polynomial) and MDS failed to run on the entire dataset, or even a significant fraction of the dataset, and we resorted to creating an explicit feature map followed by conventional PCA. Clustering using K-means and PAM in the new basis set seems to produce promising results. Understanding the new basis set in the scientific context of the problem is challenging, and we are currently working to further examine and interpret the results.
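The PCA-plus-clustering workflow described above can be sketched on synthetic data; here two Gaussian blobs in 6D stand in for walker configurations from two electronic states (purely illustrative, not actual QMC output):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "trajectory" data: two blobs in 6D standing in for two states.
X = np.vstack([rng.normal(-2.0, 1.0, (200, 6)),
               rng.normal(+2.0, 1.0, (200, 6))])

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)        # variance fraction per component
Z = Xc @ Vt[:2].T                      # scores on the first two components

# Plain 2-means clustering in the reduced space (Lloyd iterations).
centers = Z[rng.choice(len(Z), 2, replace=False)]
for _ in range(50):
    labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([Z[labels == k].mean(axis=0) for k in range(2)])
```

With well-separated states the first component dominates the variance and the clusters recover the state labels; the paper's pam/pamk analysis plays the role of the simple k-means step here.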
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
Arampatzis, Georgios; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 ; Katsoulakis, Markos A.
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated (“coupled”) stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. 
We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
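The Common Random Number coupling that the goal-oriented method improves upon can be sketched on a toy observable with a known sensitivity; the "simulation" and parameter values below are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Finite-difference sensitivity d/dtheta E[X(theta)] for a toy observable:
# the mean of Exp(theta) samples, with exact sensitivity -1/theta**2.
def observable(theta, u):
    return np.mean(-np.log(u) / theta)    # exponential samples from uniforms u

theta, h, n = 2.0, 1e-2, 5000

# Independent randomness in the two runs: a very noisy estimator.
d_indep = (observable(theta + h, rng.random(n))
           - observable(theta, rng.random(n))) / h

# Coupled runs: the same uniforms drive both the perturbed and
# unperturbed processes, so most of the noise cancels in the difference.
u = rng.random(n)
d_crn = (observable(theta + h, u) - observable(theta, u)) / h
```

The coupled estimator is close to the exact value -1/theta**2 = -0.25, while the independent-sample estimator has a variance several orders of magnitude larger; the paper's goal-oriented couplings are constructed to push this variance down further for spatial KMC observables.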
Hardiansyah, D.; Haryanto, F.; Male, S.
2014-09-30
Prism is a non-commercial Radiotherapy Treatment Planning System (RTPS) developed by Ira J. Kalet from Washington University. An inhomogeneity factor is included in the Prism TPS dose calculation. The aim of this study is to investigate the sensitivity of the Prism dose calculation using Monte Carlo simulation. A phase-space source from the head of the linear accelerator (LINAC) is implemented for the Monte Carlo simulation. To achieve this aim, the Prism dose calculation is compared with EGSnrc Monte Carlo simulation. Percentage depth dose (PDD) and R50 from both calculations are observed. BEAMnrc simulated electron transport in the LINAC head and produced a phase-space file. This file is used as DOSXYZnrc input to simulate electron transport in the phantom. The study started with a commissioning process in a water phantom, in which the Monte Carlo simulation was adjusted to match the Prism RTPS. The commissioning result was then used for the inhomogeneity study. The physical parameters of the inhomogeneity phantom varied in this study are the density, location, and thickness of the tissue. The commissioning showed that the optimum energy of the Monte Carlo simulation for the 6 MeV electron beam is 6.8 MeV, using R50 and PDD with the practical range (R{sub p}) as references. In the inhomogeneity study, the average deviation over the region of interest is below 5% for all cases. Based on ICRU recommendations, Prism shows good ability to calculate the radiation dose in inhomogeneous tissue.
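The R50 used as a commissioning reference above is the depth at which the percentage depth dose falls to half of its maximum; a minimal sketch of extracting it by interpolation, with a toy bell-shaped PDD standing in for EGSnrc output:

```python
import numpy as np

# Hypothetical PDD curve (percent dose vs depth in cm); R50 is found on the
# falling side of the curve by linear interpolation.
depth = np.linspace(0.0, 5.0, 51)                    # depths, 1 mm steps
pdd = 100.0 * np.exp(-((depth - 1.3) ** 2) / 2.0)    # toy bell-shaped PDD

i_max = np.argmax(pdd)
tail_d, tail_p = depth[i_max:], pdd[i_max:]          # falling part only
# np.interp needs ascending x, so reverse the descending dose values
r50 = np.interp(50.0, tail_p[::-1], tail_d[::-1])
```

For this analytic toy curve the exact answer is 1.3 + sqrt(2 ln 2) ≈ 2.48 cm, so the interpolated value can be checked directly; with measured or simulated PDDs the same lookup is applied to the tabulated curve.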
Wirawan, Rahadi; Waris, Abdul; Djamal, Mitra; Handayani, Gunawan
2015-04-16
The spectrum of gamma-energy absorption in a NaI crystal (scintillation detector) results from the interaction of gamma photons with the NaI crystal, and it is associated with the energy of the gamma photons incident on the detector. Through a simulation approach, we can make an early observation of the gamma-energy absorption spectrum in a scintillator crystal detector (NaI) before the experiment is conducted. In this paper, we present simulated gamma-energy absorption spectra for energies of 100-700 keV (i.e., 297 keV, 400 keV and 662 keV). The simulation was developed based on the concept of a photon-beam point-source distribution and photon interaction cross sections, using the Monte Carlo method. Our computational code successfully predicts the multiple-peak absorption spectrum derived from multiple photon energy sources.
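The spectrum-building idea can be sketched with a toy Monte Carlo: each photon either deposits its full energy (photoelectric, giving the photopeak) or the electron's share of a Compton scattering event, with a crude Gaussian resolution applied. The interaction probability and resolution are illustrative assumptions, not NaI cross-section data:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(e0_kev, n=50000, p_photo=0.4, resolution=0.025):
    """Toy deposited-energy spectrum for monoenergetic photons of e0_kev."""
    deposited = np.empty(n)
    photo = rng.random(n) < p_photo
    deposited[photo] = e0_kev                      # full absorption: photopeak
    # Compton events: isotropic-in-cos(theta) scattering angle (toy choice)
    cos_t = rng.uniform(-1.0, 1.0, (~photo).sum())
    alpha = e0_kev / 511.0                         # E0 / (m_e c^2)
    e_scat = e0_kev / (1.0 + alpha * (1.0 - cos_t))
    deposited[~photo] = e0_kev - e_scat            # Compton electron energy
    # crude constant-width Gaussian detector resolution
    return deposited + rng.normal(0.0, resolution * e0_kev, n)

spectrum, edges = np.histogram(simulate(662.0), bins=200, range=(0.0, 800.0))
peak_kev = 0.5 * (edges[:-1] + edges[1:])[np.argmax(spectrum)]
```

Even this crude model produces a photopeak at the source energy on top of a Compton continuum; summing the spectra of several source energies gives the multiple-peak structure the paper simulates.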
Sarrut, David; Université Lyon 1; Centre Léon Bérard ; Bardiès, Manuel; Marcatili, Sara; Mauxion, Thibault; Boussion, Nicolas; Freud, Nicolas; Létang, Jean-Michel; Jan, Sébastien; Maigne, Lydia; Perrot, Yann; Pietrzyk, Uwe; Robert, Charlotte; and others
2014-06-15
In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform based on the GEANT4 toolkit for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. The very practical feature of GATE making it easy to model both a treatment and an imaging acquisition within the same framework is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities.
Electrolyte pore/solution partitioning by expanded grand canonical ensemble Monte Carlo simulation
Moucka, Filip; Bratko, Dusan; Luzar, Alenka
2015-03-28
Using a newly developed grand canonical Monte Carlo approach based on fractional exchanges of dissolved ions and water molecules, we studied equilibrium partitioning of both components between laterally extended apolar confinements and surrounding electrolyte solution. Accurate calculations of the Hamiltonian and tensorial pressure components at anisotropic conditions in the pore required the development of a novel algorithm for a self-consistent correction of nonelectrostatic cut-off effects. At pore widths above the kinetic threshold to capillary evaporation, the molality of the salt inside the confinement grows in parallel with that of the bulk phase, but presents a nonuniform width-dependence, being depleted at some and elevated at other separations. The presence of the salt enhances the layered structure in the slit and lengthens the range of inter-wall pressure exerted by the metastable liquid. Solvation pressure becomes increasingly repulsive with growing salt molality in the surrounding bath. Depending on the sign of the excess molality in the pore, the wetting free energy of pore walls is either increased or decreased by the presence of the salt. Because of simultaneous rise in the solution surface tension, which increases the free-energy cost of vapor nucleation, the rise in the apparent hydrophobicity of the walls has not been shown to enhance the volatility of the metastable liquid in the pores.
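The conventional grand canonical insertion/deletion moves that fractional-exchange approaches generalize can be sketched for an ideal gas, where the exact mean occupancy is ⟨N⟩ = zV (z the activity, V the volume); all numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Standard GCMC insertion/deletion, shown for a non-interacting system so the
# stationary distribution is exactly Poisson with mean z*V.
z, V = 0.5, 20.0          # activity and volume (toy values)
N = 0
samples = []
for step in range(200000):
    if rng.random() < 0.5:                      # attempt an insertion
        # Metropolis acceptance: min(1, z*V / (N + 1)) for an ideal gas
        if rng.random() < z * V / (N + 1):
            N += 1
    else:                                       # attempt a deletion
        # Metropolis acceptance: min(1, N / (z*V))
        if N > 0 and rng.random() < N / (z * V):
            N -= 1
    if step > 10000:                            # discard equilibration
        samples.append(N)

mean_n = np.mean(samples)                       # should approach z*V = 10
```

With interactions, the acceptance rules acquire Boltzmann factors of the insertion/deletion energies; whole-molecule exchanges then accept poorly in dense phases, which is the bottleneck the paper's fractional-exchange moves address.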
Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal
2014-05-12
Conventional semi-infinite solutions for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD{sub B}) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD{sub B}. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD{sub B} (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noise to the DCS data resulted in αD{sub B} variations, the mean errors in extracting αD{sub B} were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD{sub B} using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C.; Lim, T.-S.; Fann, Wunshain
2009-01-05
The number of negatively charged nitrogen-vacancy centers (N-V){sup -} in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single particle level. By taking account of the random dipole orientation of the multiple (N-V){sup -} fluorophores and simulating the probability distribution of their effective numbers (N{sub e}), we found that the actual number (N{sub a}) of the fluorophores is in linear correlation with N{sub e}, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized light, respectively. We determined N{sub a}=8{+-}1 for 28 nm FND particles prepared by 3 MeV proton irradiation.
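The relation behind this counting is g2(0) = 1 - 1/N for N equally bright independent emitters; random dipole orientations make the brightnesses unequal, which biases the effective number downward. A toy model (excitation rate proportional to cos^2 of the dipole angle, an assumption standing in for the paper's full simulation) shows a correction factor of the same order as the reported value for linearly polarized light:

```python
import numpy as np

rng = np.random.default_rng(7)

# For independent emitters with unequal rates r_i, the correlation dip is
# g2(0) = 1 - sum(r^2) / sum(r)^2, so the apparent ("effective") number
# N_e = 1 / (1 - g2(0)) is at most the actual number N_a.
n_actual, trials = 8, 20000
cos_theta = rng.uniform(-1.0, 1.0, (trials, n_actual))  # random dipole axes
rates = cos_theta**2                                    # toy excitation rates

s1 = rates.sum(axis=1)
s2 = (rates**2).sum(axis=1)
g2_zero = 1.0 - s2 / s1**2
n_eff = 1.0 / (1.0 - g2_zero)          # equals s1**2 / s2
correction = n_actual / n_eff.mean()   # analogous to the paper's factor
```

For this crude model the factor lands broadly in the range of the paper's measured 1.8 for linear polarization, though the paper's simulation also accounts for collection and polarization details omitted here.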
Leon, Stephanie M.; Wagner, Louis K.; Brateman, Libby F.
2014-11-01
Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28–0.51, with absolute differences averaging ?0.01. 
The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the physical measurements published previously and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of a scatter-correction software program can be simplified by ignoring the variations in density among breast tissues.
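Fitting the MRE under the single-exponential point-spread model used above can be sketched with synthetic ring-tally data (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(8)

# Single-exponential PSF model from the scatter characterization:
#   S(r) proportional to exp(-r / MRE)
# Ring tallies at radii r are fit by log-linear least squares.
r = np.linspace(0.5, 30.0, 60)            # ring radii (mm)
mre_true = 8.0                            # hypothetical mean radial extent
counts = 1e4 * np.exp(-r / mre_true) * (1.0 + rng.normal(0.0, 0.02, r.size))

# ln S = ln A - r / MRE, so the slope of ln(counts) vs r gives -1/MRE
slope, intercept = np.polyfit(r, np.log(counts), 1)
mre_fit = -1.0 / slope
```

A poor fit of this single exponential to nongrid data, such as the lower goodness-of-fit values reported above, is what motivates the biexponential alternative mentioned in the abstract.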
Quantum Monte Carlo by message passing
Bonca, J.; Gubernatis, J.E.
1993-01-01
We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.
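The trivial parallelization described above amounts to running independent seeded streams and averaging the partial results at the master; a sketch with a pi-estimation stand-in for the QMC workload (shown sequentially here; in the paper each stream ran on its own workstation under PVM):

```python
import numpy as np

# Each "worker" runs an independent Monte Carlo stream from its own seed;
# the master combines the partial means. The pi estimate is merely a
# placeholder for the per-node quantum Monte Carlo computation.
def worker(seed, n):
    rng = np.random.default_rng(seed)
    x, y = rng.random(n), rng.random(n)
    return np.mean(x * x + y * y < 1.0)    # quarter-circle hit fraction

partial_means = [worker(seed, 250_000) for seed in range(16)]
pi_est = 4.0 * np.mean(partial_means)      # master-side combination
```

Because the streams never communicate until the final average, the speedup is near-linear in the number of workers, which is why the impurity algorithm parallelized so easily on the workstation cluster.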
Lin, J. Y. Y. [California Institute of Technology, Pasadena]; Aczel, Adam A. [ORNL]; Abernathy, Douglas L. [ORNL]; Nagler, Stephen E. [ORNL]; Buyers, W. J. L. [National Research Council of Canada]; Granroth, Garrett E. [ORNL]
2014-01-01
Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A.A. Aczel et al., Nature Communications 3, 1124 (2012)]. These modes are well described by 3D isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for the nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states (PDOS), and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximately Q-independent background to the spectrum at the oscillator mode positions. Temperature dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T-dependence of the scattering from these modes is strongly influenced by the uranium lattice.
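The equally spaced modes follow textbook harmonic-oscillator scattering: energy transfers E_n = n*hbar_omega, with n-phonon intensities S_n(Q) proportional to y^n * exp(-y) / n!, where y = Q^2 * u^2. The numbers below are merely of the order relevant for UN, not fitted values:

```python
import numpy as np
from math import factorial

# Oscillator scattering sketch: equally spaced transfer energies and a
# Poisson-like Q dependence of the n-phonon weights (zero-temperature form).
hbar_omega = 50.0      # meV, oscillator quantum (illustrative magnitude)
u2 = 0.005             # mean-square displacement parameter (illustrative)

def s_n(q, n):
    """Relative weight of the n-th oscillator mode at momentum transfer q."""
    y = q * q * u2
    return y**n * np.exp(-y) / factorial(n)

energies = [n * hbar_omega for n in range(1, 6)]   # E_1 .. E_5, equally spaced
```

The Poisson weights sum to one over all n at fixed Q, so mode intensities trade off against the elastic line as Q grows, which is part of what the ray-tracing simulations disentangle from multiple scattering.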
Energy Monte Carlo (EMCEE) | Open Energy Information
with a specific set of distributions. Both programs run as spreadsheet workbooks in Microsoft Excel. EMCEE and Emc2 require Crystal Ball, a commercially available Monte Carlo...
Wang, Z; Gao, M
2014-06-01
Purpose: Monte Carlo simulation plays an important role for the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to a few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4 based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bits, Amazon EC2). Single spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm{sup 2}, 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created with an on-demand instance and worker nodes were created with spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.
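The two quoted cluster prices are consistent with one on-demand master node plus spot-priced workers; the decomposition below is inferred from the abstract's two data points, not stated in it:

```python
# Inferred pricing model (an assumption): cost(n) = master + (n - 1) * spot,
# with an on-demand master and spot-instance workers, fit to the two quotes.
c40, c100 = 0.63, 1.41                 # quoted $/hr for 40 and 100 nodes
spot = (c100 - c40) / (100 - 40)       # implied $/hr per spot worker
master = c40 - 39 * spot               # implied $/hr for the master node

def cluster_cost(n_nodes):
    return master + (n_nodes - 1) * spot
```

The implied rates (a master roughly an order of magnitude pricier per hour than each spot worker) explain why the cost does not scale linearly with node count between the two quotes.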
Dong, Han; Sharma, Diksha; Badano, Aldo
2014-12-15
Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. 
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
Wang, J.; Biasca, R.; Liewer, P.C.
1996-01-01
Although the existence of the critical ionization velocity (CIV) is known from laboratory experiments, no agreement has been reached as to whether CIV exists in the natural space environment. In this paper the authors move towards more realistic models of CIV and present the first fully three-dimensional, electromagnetic particle-in-cell Monte-Carlo collision (PIC-MCC) simulations of typical space-based CIV experiments. In their model, the released neutral gas is taken to be a spherical cloud traveling across a magnetized ambient plasma. Simulations are performed for neutral clouds with various sizes and densities. The effects of the cloud parameters on ionization yield, wave energy growth, electron heating, momentum coupling, and the three-dimensional structure of the newly ionized plasma are discussed. The simulations suggest that the quantitative characteristics of momentum transfers among the ion beam, neutral cloud, and plasma waves are the key indicator of whether CIV can occur in space. The missing factors in space-based CIV experiments may be the conditions necessary for a continuous enhancement of the beam ion momentum. For a typical shaped charge release experiment, favorable CIV conditions may exist only in a very narrow, intermediate spatial region some distance from the release point due to the effects of the cloud density and size. When CIV does occur, the newly ionized plasma from the cloud forms a very complex structure due to the combined forces from the geomagnetic field, the motion-induced emf, and the polarization. Hence the detection of CIV also critically depends on the sensor location. 32 refs., 8 figs., 2 tabs.
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
Xu, Y; Bai, T; Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X; Zhou, L
2014-06-15
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to automatically finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impact of photon histories and volume down-sampling factors on the accuracy of scatter estimation. Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds.
This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003)
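Steps 4 and 5 of the pipeline above (interpolating the sparse-angle MC scatter estimates to every projection angle, then subtracting them from the raw projections) can be sketched as follows. This is an illustrative numpy sketch, not the authors' code; the array shapes and the positivity clamp are assumptions:

```python
import numpy as np

def interpolate_scatter(sparse_angles, sparse_scatter, all_angles):
    """Linearly interpolate scatter projections estimated at sparse gantry
    angles (step 3) to every projection angle (step 4), pixel-wise.

    sparse_angles : (n_sparse,) sorted gantry angles where MC scatter was run
    sparse_scatter: (n_sparse, H, W) scatter estimates at those angles
    all_angles    : (n_views,) angles of the raw projections
    """
    sparse_angles = np.asarray(sparse_angles, dtype=float)
    all_angles = np.asarray(all_angles, dtype=float)
    # locate the bracketing sparse angles for each requested view
    idx = np.searchsorted(sparse_angles, all_angles).clip(1, len(sparse_angles) - 1)
    a0, a1 = sparse_angles[idx - 1], sparse_angles[idx]
    w = ((all_angles - a0) / (a1 - a0))[:, None, None]
    return (1.0 - w) * sparse_scatter[idx - 1] + w * sparse_scatter[idx]

def correct_projections(raw, scatter_estimate, eps=1e-6):
    """Step 5: subtract the interpolated scatter, clamping to keep the
    projections positive before the final FDK reconstruction (step 6)."""
    return np.maximum(raw - scatter_estimate, eps)
```

Because scatter varies slowly with gantry angle, the linear interpolation between a few dozen MC-simulated views is what makes the 31-angle result quoted in the abstract plausible.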
Generalizing the self-healing diffusion Monte Carlo approach...
Office of Scientific and Technical Information (OSTI)
Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: A path ... Title: Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: ...
TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma
Sisniega, A; Zbijewski, W; Stayman, J; Yorkston, J; Aygun, N; Koliatsos, V; Siewerdsen, J
2014-06-15
Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate uniformity improvement of 18 HU and contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to "oracle" constant fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution.
The method is undergoing implementation in a novel CBCT dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
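The kernel-smoothing denoising step above exploits the fact that scatter is a low-frequency signal across the detector, so a low-photon (noisy) MC estimate can be heavily blurred without losing the underlying shape. A minimal numpy-only sketch of separable Gaussian smoothing in the projection plane (the kernel width is an assumed free parameter, not a value from the abstract):

```python
import numpy as np

def gaussian_smooth(projection, sigma):
    """Separable Gaussian smoothing of a 2D scatter projection.
    Because scatter varies slowly across the detector, a wide kernel
    suppresses MC noise while preserving the scatter distribution."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    # convolve rows, then columns (zero-padded at the borders)
    blur_rows = np.apply_along_axis(np.convolve, 1, projection, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, blur_rows, kernel, mode="same")
```

In practice one would also smooth across gantry angles, as the abstract describes; the same separable idea extends to the angular axis.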
Reverse Monte Carlo simulation of Se{sub 80}Te{sub 20} and Se{sub 80}Te{sub 15}Sb{sub 5} glasses
Abdel-Baset, A. M.; Rashad, M.; Moharram, A. H.
2013-12-16
The total pair distribution functions g(r) are determined for Se{sub 80}Te{sub 20} and Se{sub 80}Te{sub 15}Sb{sub 5} alloys and then used to assemble three-dimensional atomic configurations using reverse Monte Carlo simulation. The partial pair distribution functions g{sub ij}(r) indicate that the basic structural unit in the Se{sub 80}Te{sub 15}Sb{sub 5} glass is di-antimony tri-selenide units connected through Se-Se and Se-Te chains. The structure of the Se{sub 80}Te{sub 20} alloy is chains of Se-Te and Se-Se in addition to some rings of Se atoms.
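A reverse Monte Carlo refinement step, as used above, displaces a random atom, recomputes the model g(r), and accepts the move if the χ² misfit to the experimental pair distribution decreases (or with a Metropolis probability otherwise). A generic sketch; the g(r) calculator, the σ weighting, and the step size are placeholders, not the authors' implementation:

```python
import numpy as np

def rmc_step(positions, g_exp, sigma, box, chi2_old, calc_gr, rng, max_step=0.2):
    """Attempt one reverse Monte Carlo move on a periodic box of atoms.

    calc_gr(positions) -> model pair distribution function on the same
    r-grid as g_exp (placeholder for a real histogram-based calculator).
    """
    i = rng.integers(len(positions))
    trial = positions.copy()
    # random displacement with periodic wrap-around
    trial[i] = (trial[i] + rng.uniform(-max_step, max_step, positions.shape[1])) % box
    chi2_new = np.sum(((calc_gr(trial) - g_exp) / sigma) ** 2)
    # accept improvements always, worsenings with Metropolis probability
    if chi2_new <= chi2_old or rng.random() < np.exp(-0.5 * (chi2_new - chi2_old)):
        return trial, chi2_new
    return positions, chi2_old
```

Iterating this step drives the configuration toward structures whose pair correlations match experiment, from which partial g_ij(r) can then be read off.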
Status of Monte-Carlo Event Generators
Hoeche, Stefan; /SLAC
2011-08-11
Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off the initial- and final-state partons and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.
Exploring theory space with Monte Carlo reweighting
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun
2014-10-13
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
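The core of event reweighting is importance sampling: a Monte Carlo sample generated under a benchmark model can be reused at any other theory point by weighting each event with the ratio of the two probability densities. A toy illustration with one-dimensional Gaussian "models" (the distributions and parameter values are invented for the example; in practice the ratio would involve squared matrix elements):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_density(x, mu, sigma):
    # log of the (normalized) Gaussian density, up to an x-independent constant
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

# "fully simulated" sample generated once, under a benchmark model N(0, 1)
events = rng.normal(0.0, 1.0, 200_000)

# re-use the same events at a new theory point N(0.5, 1.0)
weights = np.exp(log_density(events, 0.5, 1.0) - log_density(events, 0.0, 1.0))

# the weighted average reproduces the expectation under the new model
mean_new = np.average(events, weights=weights)   # close to 0.5
```

The re-use is statistically efficient as long as the new theory point is not too far from the benchmark; otherwise a few events acquire very large weights and the effective sample size collapses.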
Monte Carlo event generators for hadron-hadron collisions
Knowles, I.G.; Protopopescu, S.D.
1993-06-01
A brief review of Monte Carlo event generators for simulating hadron-hadron collisions is presented. Particular emphasis is placed on comparisons of the approaches used to describe physics elements and identifying their relative merits and weaknesses. This review summarizes a more detailed report.
Souris, K; Lee, J; Sterpin, E
2014-06-15
Purpose: Recent studies have demonstrated the capability of graphics processing units (GPUs) to compute dose distributions using Monte Carlo (MC) methods within clinical time constraints. However, GPUs have a rigid vectorial architecture that favors the implementation of simplified particle transport algorithms adapted to specific tasks. Our new, fast, and multipurpose MC code, named MCsquare, runs on Intel Xeon Phi coprocessors. This technology offers 60 independent cores, and therefore more flexibility to implement fast yet generic MC functionalities, such as prompt gamma simulations. Methods: MCsquare implements several models and hence allows users to make their own tradeoff between speed and accuracy. A 200 MeV proton beam is simulated in a heterogeneous phantom using Geant4 and two configurations of MCsquare. The first is the most conservative and accurate: the method of fictitious interactions handles the interfaces, and secondary charged particles emitted in nuclear interactions are fully simulated. The second, faster configuration simplifies interface crossings and simulates only secondary protons after nuclear interaction events. Integral depth-dose and transversal profiles are compared to those of Geant4. Moreover, the production profile of prompt gammas is compared to PENH results. Results: Integral depth-dose and transversal profiles computed by MCsquare and Geant4 agree within 3%. The production of secondaries from nuclear interactions is slightly inaccurate at interfaces for the fastest configuration of MCsquare, but this is unlikely to have any clinical impact. The computation time varies from 90 seconds with the most conservative settings to merely 59 seconds in the fastest configuration. Finally, prompt gamma profiles are also in very good agreement with PENH results.
Conclusion: Our new, fast, and multipurpose Monte Carlo code simulates prompt gammas and calculates dose distributions in less than a minute, which complies with clinical time constraints. It has been successfully validated against Geant4. This work has been financially supported by InVivoIGT, a public/private partnership between UCL and IBA.
Qiang, J.
2009-10-17
In this paper, we report on a study of ion back bombardment in a high-average-current radio-frequency (RF) photo-gun using a particle-in-cell/Monte Carlo simulation method. Using this method, we systematically studied the effects of gas pressure, RF frequency, RF initial phase, electric field profile, magnetic field, laser repetition rate, and different ion species on the ion line density distribution, kinetic energy spectrum, and power line density distribution of ions back-bombarding the photocathode. The simulation results suggest that the effects of ion back bombardment increase linearly with the background gas pressure and laser repetition rate. The RF frequency significantly affects the ion motion inside the gun, so that the ion power deposition on the photocathode in an RF gun can be several orders of magnitude lower than that in a DC gun. Ion back bombardment can be minimized by appropriately choosing the electric field profile and the initial phase.
Choi, Myunghee; Chan, Vincent S.
2014-02-28
This final report describes the work performed under U.S. Department of Energy Cooperative Agreement DE-FC02-08ER54954 for the period April 1, 2011 through March 31, 2013. The goal of this project was to perform iterated finite-orbit Monte Carlo simulations with full-wave fields for modeling tokamak ICRF wave heating experiments. In year 1, the finite-orbit Monte-Carlo code ORBIT-RF and its iteration algorithms with the full-wave code AORSA were improved to enable systematic study of the factors responsible for the discrepancy between the simulated and measured fast-ion FIDA signals in the DIII-D and NSTX ICRF fast-wave (FW) experiments. In year 2, ORBIT-RF was coupled to the TORIC full-wave code for a comparative study of ORBIT-RF/TORIC and ORBIT-RF/AORSA results in FW experiments.
Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad
2014-08-01
Accurate determination of the thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU-expensive compared to equation-of-state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation-of-state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated for its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second-derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochores, isotherms and paths of changing temperature and density were extrapolated from the original simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide.
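The reweighting idea can be illustrated for a canonical (NVT) ensemble: configurations sampled at inverse temperature β are reused at a neighboring β′ by weighting each stored energy with exp(−(β′−β)E), so no new simulation is needed. A toy check with a 1D harmonic "particle", whose canonical distribution is Gaussian and whose exact average energy is known (the system and the temperature values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
beta, beta_new = 1.0, 1.0 / 0.9          # reweight from T = 1.0 to T = 0.9

# canonical sample for E(x) = x^2 / 2: p(x) ∝ exp(-beta x^2 / 2) is Gaussian
x = rng.normal(0.0, 1.0 / np.sqrt(beta), 500_000)
energies = 0.5 * x ** 2

# reweight the stored energies to the new temperature without re-simulating
w = np.exp(-(beta_new - beta) * energies)
E_new = np.average(energies, weights=w)   # exact value: 0.5 / beta_new = 0.45
```

As with any reweighting scheme, accuracy degrades as β′ moves away from β, because the weight distribution broadens and the effective sample size shrinks; this is why the paper restricts the extrapolation to neighboring thermodynamic conditions.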
Mohammadyari, P; Faghihi, R; Shirazi, M Mosleh; Lotfi, M; Meigooni, A
2014-06-01
Purpose: AccuBoost is the most modern method of breast brachytherapy, a boost technique delivered in tissue compressed by a mammography unit. The dose distribution in uncompressed tissue, as well as in compressed tissue, is important and should be characterized. Methods: In this study, the mechanical behavior of the breast under mammography loading, the displacement of breast tissue, and the dose distributions in compressed and uncompressed tissue are investigated. Dosimetry was performed with two methods: Monte Carlo simulation using the MCNP5 code, and thermoluminescence dosimeters. For the Monte Carlo simulations, dose values on a cubical lattice were calculated using tally F6. The displacement of the breast elements was simulated by a finite element model and calculated using ABAQUS software, from which the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from MR images of 6 volunteers. Experimental dosimetry was performed by placing the thermoluminescence dosimeters into a polyvinyl alcohol breast-equivalent phantom and on the proximal edge of the compression plates toward the chest. Results: The results indicate that the cone applicators would deliver more than 95% of the dose to a depth of 5 to 17 mm, while the round applicator will increase the skin dose. Nodal displacement in the presence of gravity and a 60 N force, i.e., under mammography compression, was determined, with 43% contraction in the loading direction and 37% expansion in the orthogonal orientation. Finally, the thermoluminescence dosimeter measurements are consistent with MCNP5 in the breast phantom and on the chest skin, with average percentage differences of 13.7±5.7 and 7.7±2.3, respectively. Conclusion: The major advantage of this kind of dosimetry is the ability to calculate 3D dose via FE modeling. Polyvinyl alcohol is a reliable breast-tissue-equivalent material for a dosimetric phantom, enabling TLD dosimetry for validation.
Thfoin, I.; Reverdin, C.; Duval, A.; Leboeuf, X.; Lecherbourg, L.; Rossé, B.; Hulin, S.; Batani, D.; Santos, J. J.; Vaisseau, X.; Fourment, C.; Giuffrida, L.; Szabo, C. I.; Bastiani-Ceccotti, S.; Brambrink, E.; Koenig, M.; Nakatsutsumi, M.; Morace, A.
2014-11-15
Transmission crystal spectrometers (TCS) are used on many laser facilities to record hard X-ray spectra. During experiments, the signal recorded on imaging plates is often degraded by background noise. Monte Carlo simulations made with the code GEANT4 show that this background noise is mainly generated by diffusion of MeV electrons and very hard X-rays. An experiment carried out at LULI2000 confirmed that the use of magnets in front of the diagnostic, which bend the electron trajectories, significantly reduces this background. The new spectrometer SPECTIX (Spectromètre PETAL à Cristal en TransmIssion X), built for the LMJ/PETAL facility, will include this optimized shielding.
Li, Wenfang; Du, Jinjin; Wen, Ruijuan; Yang, Pengfei; Li, Gang; Zhang, Tiancai; Liang, Junjun
2014-03-17
We investigate the transmission of single-atom transits based on a strongly coupled cavity quantum electrodynamics system. By superposing the transit transmissions of a considerable number of atoms, we obtain the absorption spectra of the cavity induced by single atoms and determine the temperature of the cold atoms. The number of atoms passing through the microcavity for each release is also counted, and this number changes exponentially with the atom temperature. Monte Carlo simulations agree closely with the experimental results, and the initial temperature of the cold atoms is determined. Compared with the conventional time-of-flight (TOF) method, this approach avoids some uncertainties of the standard TOF and sheds new light on determining the temperature of cold atoms by counting atoms individually in a confined space.
Ryabtsev, I. I.; Tretyakov, D. B.; Beterov, I. I.; Entin, V. M.; Yakshina, E. A.
2010-11-15
Results of numerical Monte Carlo simulations for the Stark-tuned Förster resonance and dipole blockade between two to five cold rubidium Rydberg atoms in various spatial configurations are presented. The effects of the atoms' spatial uncertainties on the resonance amplitude and spectra are investigated. The feasibility of observing coherent Rabi-like population oscillations at a Förster resonance between two cold Rydberg atoms is analyzed. Spectra and the fidelity of the Rydberg dipole blockade are calculated for various experimental conditions, including nonzero detuning from the Förster resonance and finite laser linewidth. The results are discussed in the context of quantum-information processing with Rydberg atoms.
Liu, T; Du, X; Su, L; Gao, Y; Ji, W; Xu, X; Zhang, D; Shi, J; Liu, B; Kalra, M
2014-06-15
Purpose: To compare the CT doses derived from experiments and GPU-based Monte Carlo (MC) simulations, using a human cadaver and the ATOM phantom. Methods: The cadaver of an 88-year-old male and the ATOM phantom were scanned by a GE LightSpeed Pro 16 MDCT. For the cadaver study, thimble chambers (Model 10×5-0.6CT and 10×6-0.6CT) were used to measure the absorbed dose in different deep and superficial organs. Whole-body scans were first performed to construct a complete image database for MC simulations. Abdomen/pelvis helical scans were then conducted using 120/100 kVp, 300 mAs and a pitch factor of 1.375:1. For the ATOM phantom study, OSL dosimeters were used and helical scans were performed using 120 kVp and x, y, z tube current modulation (TCM). For the MC simulations, sufficient particles were run in both cases such that the statistical errors of the ARCHER-CT results were limited to 1%. Results: For the human cadaver scan, the doses to the stomach, liver, colon, left kidney, pancreas and urinary bladder were compared. The difference between experiments and simulations was within 19% for 120 kVp and 25% for 100 kVp. For the ATOM phantom scan, the doses to the lung, thyroid, esophagus, heart, stomach, liver, spleen, kidneys and thymus were compared. The difference was 39.2% for the esophagus, and within 16% for all other organs. Conclusion: In this study the experimental and simulated CT doses were compared. Their difference is primarily attributed to the systematic errors of the MC simulations, including the accuracy of the bowtie filter modeling and the algorithm used to generate the voxelized phantom from DICOM images. The experimental error is considered small and may arise from the dosimeters. R01 grant (R01EB015478) from National Institute of Biomedical Imaging and Bioengineering.
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E.; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
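Two ingredients named above can be sketched simply: initialising particle energies from a Fermi–Dirac distribution (here by rejection sampling against the density-of-states-weighted distribution √E·f(E)) and Pauli blocking, where a post-collision state is accepted with probability 1 − f(E′). This is an illustrative sketch with invented parameter values, not the paper's algorithm:

```python
import numpy as np

def sample_fd_energies(mu, T, n, rng, e_max_factor=10.0):
    """Rejection-sample kinetic energies from g(E) ∝ sqrt(E) / (exp((E-mu)/T) + 1),
    i.e. a Fermi-Dirac occupation weighted by the 3D density of states."""
    e_max = mu + e_max_factor * T          # truncate the exponential tail
    grid = np.linspace(1e-9, e_max, 2000)
    g_max = (np.sqrt(grid) / (np.exp((grid - mu) / T) + 1.0)).max()
    samples = []
    while len(samples) < n:
        e = rng.uniform(0.0, e_max, n)
        u = rng.uniform(0.0, g_max, n)
        keep = u < np.sqrt(e) / (np.exp((e - mu) / T) + 1.0)
        samples.extend(e[keep].tolist())
    return np.array(samples[:n])

def pauli_blocked_accept(e_final, mu, T, rng):
    """Accept a post-collision state with probability 1 - f(E), so scattering
    into already-occupied (low-energy) states is suppressed."""
    occupation = 1.0 / (np.exp((e_final - mu) / T) + 1.0)
    return rng.random(np.size(e_final)) > occupation
```

In the strongly degenerate limit (T ≪ μ) the sampled mean energy approaches (3/5)μ, and collisions scattering particles below the Fermi energy are almost always blocked, which is the physics the algorithm must reproduce.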
McGrath, Matthew; Kuo, I-F W.; Ngouana, Brice F.; Ghogomu, Julius N.; Mundy, Christopher J.; Marenich, Aleksandr; Cramer, Christopher J.; Truhlar, Donald G.; Siepmann, Joern I.
2013-08-28
The free energy of solvation and dissociation of hydrogen chloride in water is calculated through a combined molecular simulation quantum chemical approach at four temperatures between T = 300 and 450 K. The free energy is first decomposed into the sum of two components: the Gibbs free energy of transfer of molecular HCl from the vapor to the aqueous liquid phase and the standard-state free energy of acid dissociation of HCl in aqueous solution. The former quantity is calculated using Gibbs ensemble Monte Carlo simulations using either Kohn-Sham density functional theory or a molecular mechanics force field to determine the system's potential energy. The latter free energy contribution is computed using a continuum solvation model utilizing either experimental reference data or micro-solvated clusters. The predicted combined solvation and dissociation free energies agree very well with available experimental data. CJM was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.
Correlated electron dynamics with time-dependent quantum Monte Carlo:
Office of Scientific and Technical Information (OSTI)
Three-dimensional helium (Journal Article) | SciTech Connect Correlated electron dynamics with time-dependent quantum Monte Carlo: Three-dimensional helium Citation Details In-Document Search Title: Correlated electron dynamics with time-dependent quantum Monte Carlo: Three-dimensional helium Here the recently proposed time-dependent quantum Monte Carlo method is applied to three dimensional para- and ortho-helium atoms subjected to an external electromagnetic field with amplitude sufficient
Vazquez Quino, L; Calvo, O; Huerta, C; DeWeese, M
2014-06-01
Purpose: To study the perturbation due to the use of a novel reference ion chamber designed for small-field dosimetry (KermaX Plus C by IBA). Methods: Phase-space files for TrueBeam photon beams, made available by Varian in IAEA-compliant format for 6 and 15 MV, were used. Monte Carlo simulations were performed using BEAMnrc and DOSXYZnrc to investigate the perturbation introduced by the reference chamber into the PDDs and profiles measured in a water tank. Field sizes of 1×1, 2×2, 3×3, and 5×5 cm2 were simulated for both energies, with and without a 0.5 mm aluminum foil equivalent in attenuation to the reference chamber specifications, in a water phantom of 30×30×30 cm3 with a pixel resolution of 2 mm. PDD, profile, and gamma analyses of the simulations were performed, as well as an energy spectrum analysis of the phase-space files generated during the simulation. Results: The energy spectrum analysis showed a very small increase in the build-up region, but no difference is appreciable after dmax. The PDD, profile, and gamma analyses showed very good agreement between the simulations with and without the Al foil; a gamma analysis with a criterion of 2% and 2 mm resulted in 99.9% of the points passing. Conclusion: This work indicates the potential benefit of using the KermaX Plus C as a reference chamber in the measurement of PDDs and profiles for small fields, since the perturbation introduced by the presence of the chamber is minimal and the chamber can be considered transparent to the photon beam.
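The 2%/2 mm gamma criterion used above compares each reference point against all evaluated points, passing if any point lies within the combined dose-difference/distance tolerance ellipse. A minimal 1D global-gamma sketch (illustrative only; the study's analysis was presumably done with 2D/3D dose arrays and dedicated tools):

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions_mm, dose_tol=0.02, dist_tol_mm=2.0):
    """1D global gamma analysis: a reference point passes if some evaluated
    point satisfies (dd/dose_tol)^2 + (dx/dist_tol)^2 <= 1, with the dose
    difference normalized to the reference maximum (global normalization)."""
    d_max = ref_dose.max()
    passed = 0
    for x_ref, d_ref in zip(positions_mm, ref_dose):
        gamma_sq = ((positions_mm - x_ref) / dist_tol_mm) ** 2 \
                 + ((eval_dose - d_ref) / (dose_tol * d_max)) ** 2
        passed += gamma_sq.min() <= 1.0
    return passed / len(ref_dose)
```

For identical profiles every point trivially passes, and a 1 mm shift of a smooth profile still passes a 2%/2 mm test, which is why gamma analysis is the standard way to compare measured and simulated PDDs and profiles.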
Farah, J; Bonfrate, A; Donadille, L; Dubourg, N; Lacoste, V; Martinetti, F; Sayah, R; Trompier, F; Clairand, I [IRSN - Institute for Radiological Protection and Nuclear Safety, Fontenay-aux-roses (France); Caresana, M [Politecnico di Milano, Milano (Italy); Delacroix, S; Nauraye, C [Institut Curie - Centre de Protontherapie d Orsay, Orsay (France); Herault, J [Centre Antoine Lacassagne, Nice (France); Piau, S; Vabre, I [Institut de Physique Nucleaire d Orsay, Orsay (France)
2014-06-01
Purpose: Measure stray radiation inside a passive scattering proton therapy facility, compare values to Monte Carlo (MC) simulations, and identify the actual needs and challenges. Methods: Measurements and MC simulations were considered to characterize the neutron exposure associated with 75 MeV ocular or 180 MeV intracranial passively scattered proton treatments. First, using a specifically designed high-sensitivity Bonner sphere system, neutron spectra were measured at different positions inside the treatment rooms. Next, measurement-based mapping of neutron ambient dose equivalent was carried out using several TEPCs and rem-meters. Finally, photon and neutron organ doses were measured using TLDs, RPLs and PADCs set inside anthropomorphic phantoms (Rando, 1- and 5-year-old CIRS). All measurements were also simulated with MCNPX to investigate the efficiency of MC models in predicting stray neutrons considering different nuclear cross sections and models. Results: Knowledge of the neutron fluence and energy distribution inside a proton therapy room is critical for stray radiation dosimetry. However, as spectrometry unfolding is initiated with an MC guess spectrum and suffers from algorithmic limits, a 20% spectrometry uncertainty is expected. H*(10) mapping with TEPCs and rem-meters showed good agreement between the detectors. Differences within measurement uncertainty (10–15%) were observed and are inherent to the energy, fluence and directional response of each detector. For a typical ocular and intracranial treatment, respectively, neutron doses outside the clinical target volume of 0.4 and 11 mGy were measured inside the Rando phantom. Photon doses were 2–10 times lower depending on organ position. High uncertainties (40%) are inherent to TLD and PADC measurements due to the need for neutron spectra at the detector position.
Finally, stray neutron prediction with MC simulations proved to be extremely dependent on the proton beam energy and on the nuclear models and cross sections used. Conclusion: This work highlights measurement and simulation limits for ion therapy radiation protection applications.
Tests of Monte Carlo Independent Column Approximation in the...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Meteorological Institute Jarvinen, Heikki Finnish Meteorological Institute Category: Modeling The Monte Carlo Independent Column Approximation (McICA) was recently introduced...
DOE Science Showcase - Monte Carlo Methods | OSTI, US Dept of...
Office of Scientific and Technical Information (OSTI)
Learn about the ways these methods are used in DOE's research endeavors today in "Monte Carlo Methods" by Dr. William Watson, Physicist, OSTI staff. Image credit: Sandia National ...
Quantum Monte Carlo methods for nuclear physics
Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.
2015-09-09
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E.; Wiringa, Robert B.
2014-10-19
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Chen Zhaoquan [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian, Liaoning 116024 (China); State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Ye Qiubo [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); Communications Research Centre, 3701 Carling Ave., Ottawa K2H 8S2 (Canada); Xia Guangqing [State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian, Liaoning 116024 (China); Hong Lingli; Hu Yelin; Zheng Xiaoliang; Li Ping [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); Zhou Qiyan [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Hu Xiwei; Liu Minghai [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China)
2013-03-15
Although surface-wave plasma (SWP) sources have many industrial applications, the ionization process for SWP discharges is not yet well understood. The resonant excitation of surface plasmon polaritons (SPPs) has recently been proposed to produce SWP efficiently, and this work presents a numerical study of the mechanism to produce SWP sources. Specifically, SWP resonantly excited by SPPs at low pressure (0.25 Torr) are modeled using a particle-in-cell code, two-dimensional in configuration space and three-dimensional in velocity space, combined with the Monte Carlo collision method. Simulation results are sampled at different time steps, yielding detailed information about the distribution of electrons and electromagnetic fields. Results show that mode conversion between the surface waves of SPPs and electron plasma waves (EPWs) occurs efficiently at locations where the plasma density exceeds 3.57 × 10¹⁷ m⁻³. Due to the effect of the locally enhanced electric field of SPPs, the mode conversion between the surface waves of SPPs and EPWs is very strong, which plays a significant role in efficiently heating SWP to the overdense state.
Fan, Yu; Zou, Ying; Sun, Jizhong; Wang, Dezhen [Key Laboratory of Materials Modification by Laser, Ion and Electron Beams (Ministry of Education), School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China)]; Stirner, Thomas [Department of Electronic Engineering, University of Applied Sciences Deggendorf, Edlmairstr. 6-8, D-94469 Deggendorf (Germany)]
2013-10-15
The influence of an applied magnetic field on plasma-related devices has a wide range of applications. Its effects on a plasma have been studied for years; however, there are still many issues that are not understood well. This paper reports a detailed kinetic study with the two-dimension-in-space and three-dimension-in-velocity particle-in-cell plus Monte Carlo collision method on the role of E×B drift in a capacitive argon discharge, similar to the experiment of You et al. [Thin Solid Films 519, 6981 (2011)]. The parameters chosen in the present study for the external magnetic field are in a range common to many applications. Two basic configurations of the magnetic field are analyzed in detail: the magnetic field direction parallel to the electrode with or without a gradient. With an extensive parametric study, we give detailed influences of the drift on the collective behaviors of the plasma along a two-dimensional domain, which cannot be represented by a model with one spatial and three velocity dimensions. Analysis of the simulation results explains the collisionless heating mechanism at work.
Glaser, R E; Johannesson, G; Sengupta, S; Kosovic, B; Carle, S; Franz, G A; Aines, R D; Nitao, J J; Hanley, W G; Ramirez, A L; Newmark, R L; Johnson, V M; Dyer, K M; Henderson, K A; Sugiyama, G A; Hickling, T L; Pasyanos, M E; Jones, D A; Grimm, R J; Levine, R A
2004-03-11
Accurate prediction of complex phenomena can be greatly enhanced through the use of data and observations to update simulations. The ability to create these data-driven simulations is limited by error and uncertainty in both the data and the simulation. The stochastic engine project addressed this problem through the development and application of a family of Markov Chain Monte Carlo methods utilizing importance sampling driven by forward simulators to minimize time spent searching very large state spaces. The stochastic engine rapidly chooses among a very large number of hypothesized states and selects those that are consistent (within error) with all the information at hand. Predicted measurements from the simulator are used to estimate the likelihood of actual measurements, which in turn reduces the uncertainty in the original sample space via a conditional probability method called Bayesian inferencing. This highly efficient, staged Metropolis-type search algorithm allows us to address extremely complex problems and opens the door to solving many data-driven, nonlinear, multidimensional problems. A key challenge has been developing representation methods that integrate the local details of real data with the global physics of the simulations, enabling supercomputers to efficiently solve the problem. Development focused on large-scale problems, and on examining the mathematical robustness of the approach in diverse applications. Multiple data types were combined with large-scale simulations to evaluate systems with approximately 10^20,000 possible states (detecting underground leaks at the Hanford waste tanks). The probable uses of chemical process facilities were assessed using an evidence-tree representation and in-process updating.
Other applications included contaminant flow paths at the Savannah River Site, locating structural flaws in buildings, improving models for seismic travel-time systems used to monitor nuclear proliferation, characterizing the source of indistinct atmospheric plumes, and improving flash radiography. In the course of developing these applications, we also developed new methods to cluster and analyze the results of the state-space searches, as well as a number of algorithms to improve the search speed and efficiency. Our generalized solution contributes both a means to make more informed predictions of the behavior of very complex systems and a means to improve those predictions as events unfold, using new data in real time.
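The staged Metropolis-type search described above, in which predicted measurements from a forward simulator supply the likelihood of actual measurements, can be sketched in a few lines. Everything below is a toy illustration: `forward_model`, the Gaussian error model, and all numbers are hypothetical stand-ins for the stochastic engine's real simulators, not its actual interface.

```python
import math
import random

random.seed(0)

def forward_model(state):
    # Hypothetical forward simulator: maps a hypothesized state to a
    # predicted measurement (a stand-in for an expensive physics code).
    return state ** 2

def log_likelihood(state, observed, sigma=0.5):
    # Gaussian measurement-error model around the simulator's prediction
    return -0.5 * ((observed - forward_model(state)) / sigma) ** 2

def metropolis(observed, n_steps=5000, step=0.3):
    state = 0.0
    log_l = log_likelihood(state, observed)
    chain = []
    for _ in range(n_steps):
        proposal = state + random.gauss(0.0, step)
        log_l_new = log_likelihood(proposal, observed)
        # Metropolis rule: accept with probability min(1, L_new / L_old)
        if math.log(random.random()) < log_l_new - log_l:
            state, log_l = proposal, log_l_new
        chain.append(state)
    return chain

chain = metropolis(observed=4.0)
# States consistent (within error) with the data satisfy state**2 ≈ 4,
# so the retained samples should concentrate near |state| = 2
tail = chain[1000:]
mean_abs = sum(abs(s) for s in tail) / len(tail)
```

The chain spends its time in states whose predicted measurement matches the observation, which is the "selects those that are consistent (within error)" behavior the abstract describes.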
Mei, Donghai; Neurock, Matthew; Smith, C Michael
2009-10-22
The kinetics for the selective hydrogenation of acetylene-ethylene mixtures over model Pd(111) and bimetallic Pd-Ag alloy surfaces were examined using first principles based kinetic Monte Carlo (KMC) simulations to elucidate the effects of alloying as well as process conditions (temperature and hydrogen partial pressure). The mechanisms that control the selective and unselective routes, which include hydrogenation, dehydrogenation, and C≡C bond-breaking pathways, were analyzed using first-principles density functional theory (DFT) calculations. The results were used to construct an intrinsic kinetic database that was used in a variable time step kinetic Monte Carlo simulation to follow the kinetics and the molecular transformations in the selective hydrogenation of acetylene-ethylene feeds over Pd and Pd-Ag surfaces. The lateral interactions between coadsorbates that occur through-surface and through-space were estimated using DFT-parameterized bond order conservation and van der Waals interaction models, respectively. The simulation results show that the rate of acetylene hydrogenation as well as the ethylene selectivity increase with temperature over both the Pd(111) and the Pd-Ag/Pd(111) alloy surfaces. The selective hydrogenation of acetylene to ethylene proceeds via the formation of a vinyl intermediate. The unselective formation of ethane is the result of the over-hydrogenation of ethylene as well as over-hydrogenation of vinyl to form ethylidene. Ethylidene further hydrogenates to form ethane and dehydrogenates to form ethylidyne. While ethylidyne is not reactive, it can block adsorption sites, which limits the availability of hydrogen on the surface and thus acts to enhance the selectivity.
Alloying Ag into the Pd surface decreases the overall rate but increases the ethylene selectivity significantly by promoting the selective hydrogenation of vinyl to ethylene and concomitantly suppressing the unselective path involving the hydrogenation of vinyl to ethylidene and the dehydrogenation of ethylidene to ethylidyne. This is consistent with experimental results which suggest only the predominant hydrogenation path involving the sequential addition of hydrogen to form vinyl and ethylene exists over the Pd-Ag alloys. Ag enhances the desorption of ethylene and hydrogen from the surface, thus limiting their ability to undergo subsequent reactions. The simulated apparent activation barriers were calculated to be 32-44 kJ/mol on Pd(111) and 26-31 kJ/mol on Pd-Ag/Pd(111), respectively. The reaction was found to be essentially first order in hydrogen over Pd(111) and Pd-Ag/Pd(111) surfaces. The results reveal that increases in the hydrogen partial pressure increase the activity but decrease ethylene selectivity over both Pd and Pd-Ag/Pd(111) surfaces. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.
Communication: Water on hexagonal boron nitride from diffusion Monte Carlo
Al-Hamdani, Yasmine S.; Ma, Ming; Michaelides, Angelos; Alfè, Dario; Lilienfeld, O. Anatole von
2015-05-14
Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of −84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT.
A Post-Monte-Carlo Sensitivity Analysis Code
Energy Science and Technology Software Center (OSTI)
2000-04-04
SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e., with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
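A post-Monte-Carlo sensitivity analysis of the kind SATOOL performs can be illustrated by ranking inputs according to how strongly each correlates with the output of the preceding uncertainty study. The toy model and the squared-correlation ranking below are illustrative assumptions, not SATOOL's actual algorithm.

```python
import random

random.seed(1)

def model(x1, x2, x3):
    # Toy response: output variance is dominated by x1, barely by x3
    return 10.0 * x1 + 1.0 * x2 + 0.1 * x3

# Monte Carlo uncertainty study: sample independent inputs, run the model
n = 2000
samples = [(random.random(), random.random(), random.random()) for _ in range(n)]
outputs = [model(*s) for s in samples]

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Squared input-output correlation approximates each input's share of the
# output variance (exact here because the toy model is linear in its inputs)
scores = [pearson([s[i] for s in samples], outputs) ** 2 for i in range(3)]
ranking = sorted(range(3), key=lambda i: -scores[i])
```

Refining the top-ranked variable (here `x1`) with lower variance would shrink the output variance the most, which is exactly the use case the abstract describes.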
Element Agglomeration Algebraic Multilevel Monte-Carlo Library
Energy Science and Technology Software Center (OSTI)
2015-02-19
ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables Multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski and currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.
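The multilevel Monte Carlo estimator that libraries such as ElagMC implement rests on a telescoping sum: many cheap samples on coarse levels, a few expensive ones on fine levels, with the same random draw driving both levels of each correction term. The sketch below substitutes a truncated Taylor series for a discretized PDE solve; it illustrates the estimator only and is not the library's API.

```python
import math
import random

random.seed(2)

def P(omega, level):
    # Level-l approximation of exp(omega): a Taylor series truncated after
    # 2**level + 1 terms, standing in for a solver under mesh refinement
    return sum(omega ** k / math.factorial(k) for k in range(2 ** level + 1))

def mlmc_estimate(samples_per_level):
    # Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    # Coupling fine and coarse evaluations on the same sample makes the
    # correction variances (and hence the required sample counts) small.
    total = 0.0
    for level, n in enumerate(samples_per_level):
        acc = 0.0
        for _ in range(n):
            omega = random.random()
            fine = P(omega, level)
            coarse = P(omega, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total

# Many cheap coarse samples, few expensive fine ones
est = mlmc_estimate([20000, 2000, 200])
exact = math.e - 1.0            # E[exp(U)] for U ~ Uniform(0, 1)
```

Most of the computational work lands on the cheap coarse level, which is the source of the cost savings quoted for multilevel methods.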
Monte Carlo Hybrid Applied to Binary Stochastic Mixtures
Energy Science and Technology Software Center (OSTI)
2008-08-11
The purpose of this set of codes is to use an inexpensive, approximate deterministic flux distribution to generate weight windows, which will then be used to bound particle weights for the Monte Carlo code run. The process is not automated; the user must run the deterministic code and use the output file as a command-line argument for the Monte Carlo code. Two sets of text input files are included as test problems/templates.
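The weight-window coupling described above can be sketched as a per-particle check against bounds derived from the deterministic importance map: overweight particles are split, underweight ones play Russian roulette, and both games conserve total weight in expectation. The bounds, factors, and function signature below are illustrative assumptions, not the actual code's interface.

```python
import random

random.seed(3)

def apply_weight_window(weight, lower, upper_factor=4.0):
    # One weight-window check (illustrative bounds; in practice `lower`
    # would come from the deterministic flux solution). Returns the list
    # of particle weights that replace the incoming particle.
    upper = lower * upper_factor
    survival = 2.0 * lower
    if weight > upper:
        # Split into n copies so each copy lands inside the window
        n = min(int(weight / upper) + 1, 10)
        return [weight / n] * n
    if weight < lower:
        # Russian roulette: survive with probability weight / survival
        if random.random() < weight / survival:
            return [survival]
        return []
    return [weight]

# Roulette is fair: the expected total weight equals the input weight
trials = 20000
total = 0.0
for _ in range(trials):
    total += sum(apply_weight_window(0.01, lower=0.1))
mean_out = total / trials       # should approach the input weight 0.01
```

Splitting conserves weight exactly, while roulette conserves it only on average; both keep particle weights near the window so that no single history dominates the tally variance.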
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Quantum Monte Carlo Calculations of Light Nuclei Using Chiral Potentials
Office of Scientific and Technical Information (OSTI)
(Journal Article) Authors: Lynn, J. E.; Carlson, J.; Epelbaum, E.; Gandolfi, S.; Gezerlis, A.; Schwenk, A. Publication Date: 2014-11-04. OSTI Identifier: 1181024. Grant/Contract Number: AC02-05CH11231. Journal Name: Physical Review Letters
Fast Monte Carlo for radiation therapy: the PEREGRINE Project (Conference)
Teymurazyan, A.; Rowlands, J. A. (Thunder Bay Regional Research Institute, Thunder Bay P7A 7T1; Department of Radiation Oncology, University of Toronto, Toronto M5S 3E2); Pang, G.
2014-04-15
Purpose: Electronic Portal Imaging Devices (EPIDs) have been widely used in radiation therapy and are still needed on linear accelerators (Linacs) equipped with kilovoltage cone beam CT (kV-CBCT) or MRI systems. Our aim is to develop a new high quantum efficiency (QE) Čerenkov Portal Imaging Device (CPID) that is quantum noise limited at dose levels corresponding to a single Linac pulse. Methods: Recently a new concept of CPID for MV x-ray imaging in radiation therapy was introduced. It relies on the Čerenkov effect for x-ray detection. The proposed design consisted of a matrix of optical fibers aligned with the incident x-rays and coupled to an active matrix flat panel imager (AMFPI) for image readout. A weakness of such a design is that too few Čerenkov light photons reach the AMFPI for each incident x-ray, and an AMFPI with an avalanche gain is required in order to overcome the readout noise for portal imaging applications. In this work the authors propose to replace the optical fibers in the CPID with light guides without a cladding layer that are suspended in air. The air between the light guides takes on the role of the cladding layer found in a regular optical fiber. Since air has a significantly lower refractive index (≈1 versus 1.38 in a typical cladding layer), a much superior light collection efficiency is achieved. Results: A Monte Carlo simulation of the new design has been conducted to investigate its feasibility. Detector quantities such as quantum efficiency (QE), spatial resolution (MTF), and frequency dependent detective quantum efficiency (DQE) have been evaluated. The detector signal and the quantum noise have been compared to the readout noise. Conclusions: Our studies show that the modified new CPID has a QE and DQE more than an order of magnitude greater than those of current clinical systems and yet a spatial resolution similar to that of current low-QE flat-panel based EPIDs.
Furthermore it was demonstrated that the new CPID does not require an avalanche gain in the AMFPI and is quantum noise limited at dose levels corresponding to a single Linac pulse.
Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data
Zourari, K.; Peppa, V.; Papagiannis, P.; Ballester, Facundo; Siebert, Frank-André
2014-04-15
Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20–1090 keV, through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thickness of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20–1090 keV, in 10 keV intervals). The three-parameter empirical model introduced by Archer et al. [“Diagnostic x-ray shielding design based on an empirical model of photon attenuation,” Health Phys. 44, 507–517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. [“Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities,” Med. Phys. 34, 1398–1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. A good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed.
Conclusions: The data of this work allow for the accurate calculation of structural shielding thickness, taking into account the spectral variation with shield thickness, and broad beam conditions, in a realistic geometry. The simplicity of calculations also obviates the need for the use of crude transmission data estimates such as the half and tenth value layer indices. Although this study was primarily designed for brachytherapy, results might also be useful for radiology and nuclear medicine facility design, provided broad beam conditions apply.
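The three-parameter Archer model and the superposition step lend themselves to a short sketch. The model form B(x) = [(1 + β/α)·exp(αγx) − β/α]^(−1/γ) is as published by Archer et al.; the fitted parameter values and spectrum weights below are hypothetical placeholders for the MCNP5-derived data described above.

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    # Archer et al. three-parameter broad-beam transmission model:
    # B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]**(-1/gamma)
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

def broad_beam_transmission(x, spectrum, mono_params):
    # Superposition: weight each monoenergetic transmission curve by the
    # energy bin's relative contribution to the emitted spectrum
    total = sum(spectrum.values())
    return sum((w / total) * archer_transmission(x, *mono_params[e])
               for e, w in spectrum.items())

# Hypothetical (alpha, beta, gamma) fits per energy bin in keV; the real
# values would come from the MCNP5 monoenergetic fits described above
mono_params = {30: (3.0, 8.0, 0.8), 60: (0.9, 1.5, 0.6), 100: (0.4, 0.5, 0.5)}
spectrum = {30: 0.2, 60: 0.5, 100: 0.3}     # relative weights per bin

t0 = broad_beam_transmission(0.0, spectrum, mono_params)   # no shield
t1 = broad_beam_transmission(1.0, spectrum, mono_params)   # 1 unit thickness
```

Inverting `broad_beam_transmission` numerically for x at a target transmission would give the required shield thickness, the quantity the abstract's program reads off its curves.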
Use of single scatter electron monte carlo transport for medical radiation sciences
Svatos, Michelle M. (Oakland, CA)
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
Cai, Zhongli; Chattopadhyay, Niladri; Kwon, Yongkyu Luke; Pignol, Jean-Philippe; Lechtman, Eli; Reilly, Raymond M.; Department of Medical Imaging, University of Toronto, Toronto, Ontario M5S 3E2; Toronto General Research Institute, University Health Network, Toronto, Ontario M5G 2C4
2013-11-15
Purpose: The authors’ aims were to model how various factors influence radiation dose enhancement by gold nanoparticles (AuNPs) and to propose a new modeling approach to the dose enhancement factor (DEF). Methods: The authors used the Monte Carlo N-particle (MCNP 5) computer code to simulate photon and electron transport in cells. The authors modeled human breast cancer cells as a single cell, a monolayer, or a cluster of cells. Different numbers of 5, 30, or 50 nm AuNPs were placed in the extracellular space, on the cell surface, in the cytoplasm, or in the nucleus. Photon sources examined in the simulation included nine monoenergetic x-rays (10–100 keV), an x-ray beam (100 kVp), and ¹²⁵I and ¹⁰³Pd brachytherapy seeds. Both nuclear and cellular dose enhancement factors (NDEFs, CDEFs) were calculated. The ability of these metrics to predict the experimental DEF based on the clonogenic survival of MDA-MB-361 human breast cancer cells exposed to AuNPs and x-rays was compared. Results: NDEFs show a strong dependence on photon energies with peaks at 15, 30/40, and 90 keV. Cell model and subcellular location of AuNPs influence the peak position and value of NDEF. NDEFs decrease in the order of AuNPs in the nucleus, cytoplasm, cell membrane, and extracellular space. NDEFs also decrease in the order of AuNPs in a cell cluster, monolayer, and single cell if the photon energy is larger than 20 keV. NDEFs depend linearly on the number of AuNPs per cell. Similar trends were observed for CDEFs. NDEFs using the monolayer cell model were more predictive than either single cell or cluster cell models of the DEFs experimentally derived from the clonogenic survival of cells cultured as a monolayer. The amount of AuNPs required to double the prescribed dose in terms of mg Au/g tissue decreases as the size of AuNPs increases, especially when AuNPs are in the nucleus and the cytoplasm.
For 40 keV x-rays and a cluster of cells, to double the prescribed x-ray dose (NDEF = 2) using 30 nm AuNPs would require 5.1 ± 0.2, 9 ± 1, 10 ± 1, 10 ± 1 mg Au/g tissue in the nucleus, in the cytoplasm, on the cell surface, or in the extracellular space, respectively. Using 50 nm AuNPs, the required amount decreases to 3.1 ± 0.3, 8 ± 1, 9 ± 1, 9 ± 1 mg Au/g tissue, respectively. Conclusions: NDEF is a new metric that can predict the radiation enhancement of AuNPs for various experimental conditions. Cell model, the subcellular location and size of AuNPs, and the number of AuNPs per cell, as well as the x-ray photon energy, all have effects on NDEFs. Larger AuNPs in the nucleus of cluster cells exposed to x-rays of 15 or 40 keV maximize NDEFs.
Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo.
Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
2009-10-01
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
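The core of a kinetic Monte Carlo code such as SPPARKS is the residence-time (BKL/Gillespie) loop: total the rates of all possible events, advance time by an exponentially distributed increment, and execute one event chosen in proportion to its rate. The first-order decay toy below illustrates only that loop, under assumed rates, not SPPARKS itself.

```python
import math
import random

random.seed(4)

def kmc_decay(rate, n_initial, t_end):
    # Residence-time KMC loop: sum the event rates, draw an exponential
    # waiting time dt = -ln(u)/R_total, execute one event. With identical
    # decay events there is nothing to select; real codes pick an event
    # with probability proportional to its individual rate.
    n, t = n_initial, 0.0
    while n > 0:
        total_rate = n * rate
        t += -math.log(random.random()) / total_rate
        if t > t_end:
            break
        n -= 1                  # execute the chosen decay event
    return n

# First-order decay: survivor count should follow n0 * exp(-rate * t)
n0, rate, t_end = 5000, 1.0, 1.0
survivors = kmc_decay(rate, n0, t_end)
expected = n0 * math.exp(-rate * t_end)
```

Because time advances by the waiting time to the next event rather than by a fixed step, the method reaches mesoscale times that direct atomistic dynamics cannot.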
Simulation of atomic diffusion in the Fcc NiAl system: A kinetic Monte Carlo study (Journal Article)
Office of Scientific and Technical Information (OSTI)
The atomic diffusion in fcc NiAl binary alloys was studied by kinetic Monte Carlo simulation. The environment dependent hopping barriers were computed using a pair interaction model whose parameters were fitted to relevant data derived...
Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo
White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; Mozyrsky, Dmitry
2015-07-07
Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet efficient enough to be used for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms will provide practical numerical solutions for simulating non-adiabatic dynamics in realistic molecular systems.
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
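The EM training loop compared against DREAM above can be illustrated for the common special case of Gaussian BMA kernels sharing a single variance. This is a minimal sketch of the general EM-for-BMA scheme under that simplifying assumption, not the authors' implementation; the function name and synthetic data are illustrative:

```python
import numpy as np

def bma_em(forecasts, obs, n_iter=200):
    """EM for BMA with Gaussian kernels sharing one variance:
    p(y) = sum_k w_k * N(y; f_k, var)."""
    F = np.asarray(forecasts, dtype=float)      # shape (n_times, n_models)
    y = np.asarray(obs, dtype=float)[:, None]
    n, k = F.shape
    w = np.full(k, 1.0 / k)                     # start from equal weights
    var = np.var(y - F)
    for _ in range(n_iter):
        # E-step: responsibility of each model for each observation
        dens = np.exp(-0.5 * (y - F) ** 2 / var) / np.sqrt(2 * np.pi * var)
        z = w * dens
        z /= z.sum(axis=1, keepdims=True)
        # M-step: weights are mean responsibilities; variance is the
        # responsibility-weighted mean squared forecast error
        w = z.mean(axis=0)
        var = (z * (y - F) ** 2).sum() / n
    return w, var

# Synthetic check: model 0 tracks the observations, model 1 is noisy
rng = np.random.default_rng(3)
y = rng.standard_normal(400)
forecasts = np.column_stack([y + 0.1 * rng.standard_normal(400),
                             y + 2.0 * rng.standard_normal(400)])
w, var = bma_em(forecasts, y)
```

Because EM only guarantees convergence to a local optimum, the starting point can matter, which motivates the MCMC alternative discussed in the abstract.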
Quantum Monte Carlo for electronic structure: Recent developments and applications
Rodriquez, M. M.S.
1995-04-01
Quantum Monte Carlo (QMC) methods have been found to give excellent results when applied to chemical systems. The main goal of the present work is to use QMC to perform electronic structure calculations. In QMC, a Monte Carlo simulation is used to solve the Schroedinger equation, taking advantage of its analogy to a classical diffusion process with branching. In the present work the author focuses on how to extend the usefulness of QMC to more meaningful molecular systems. This study is aimed at questions concerning polyatomic and large atomic number systems. The accuracy of the solution obtained is determined by the accuracy of the trial wave function's nodal structure. Efforts in the group have given great emphasis to finding optimized wave functions for the QMC calculations. Little work had been done by systematically looking at a family of systems to see how the best wave functions evolve with system size. In this work the author presents a study of trial wave functions for C, CH, C{sub 2}H and C{sub 2}H{sub 2}. The goal is to study how to build wave functions for larger systems by accumulating knowledge from the wave functions of its fragments as well as gaining some knowledge on the usefulness of multi-reference wave functions. In an MC calculation of a heavy atom, for reasonable time steps most moves for core electrons are rejected. For this reason true equilibration is rarely achieved. A method proposed by Batrouni and Reynolds modifies the way the simulation is performed without altering the final steady-state solution. It introduces an acceleration matrix chosen so that all coordinates (i.e., of core and valence electrons) propagate at comparable speeds. A study of the results obtained using their proposed matrix suggests that it may not be the optimum choice. In this work the author has found that the desired mixing of coordinates between core and valence electrons is not achieved when using this matrix. A bibliography of 175 references is included.
High order Chin actions in path integral Monte Carlo
Sakkos, K.; Casulleras, J.; Boronat, J.
2009-05-28
High order actions proposed by Chin have been used for the first time in path integral Monte Carlo simulations. Contrary to the Takahashi-Imada action, which is accurate to the fourth order only for the trace, the Chin action is fully fourth order, with the additional advantage that the leading fourth-order error coefficients are finely tunable. By optimizing two free parameters entering in the new action, we show that the time step error dependence achieved is best fitted with a sixth order law. The computational effort per bead is increased but the total number of beads is greatly reduced and the efficiency improvement with respect to the primitive approximation is approximately a factor of 10. The Chin action is tested in a one-dimensional harmonic oscillator, a H{sub 2} drop, and bulk liquid {sup 4}He. In all cases a sixth-order law is obtained with values of the number of beads that compare well with the pair action approximation in the stringent test of superfluid {sup 4}He.
Calculations of pair production by Monte Carlo methods
Bottcher, C.; Strayer, M.R.
1991-01-01
We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.
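The abstract's central point, that Feynman-diagram evaluation reduces to multi-dimensional integrals estimated by Monte Carlo, can be illustrated with a generic sketch. The integrand and unit-hypercube domain are illustrative choices, not the paper's physics:

```python
import numpy as np

def mc_integrate(f, dim, n=100_000, seed=1):
    """Plain Monte Carlo estimate of the integral of f over the unit
    hypercube, with a one-standard-error estimate from the sample variance."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, dim))            # uniform points in [0, 1]^dim
    y = f(x)
    est = y.mean()
    err = y.std(ddof=1) / np.sqrt(n)    # shrinks as 1/sqrt(n) in any dimension
    return est, err

# Example with a known answer: integral of prod_i cos(x_i) over [0,1]^4
# equals sin(1)^4, about 0.5013
est, err = mc_integrate(lambda x: np.cos(x).prod(axis=1), dim=4)
```

The error estimate shrinking as 1/sqrt(n) regardless of dimension is what makes Monte Carlo competitive for high-dimensional integrals where quadrature grids become intractable.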
Distributed Monte Carlo production for D0
Snow, Joel; /Langston U.
2010-01-01
The D0 collaboration uses a variety of resources on four continents to pursue a strategy of flexibility and automation in the generation of simulation data. This strategy provides a resilient and opportunistic system which ensures an adequate and timely supply of simulation data to support D0's physics analyses. A mixture of facilities, dedicated and opportunistic, specialized and generic, large and small, grid job enabled and not, are used to provide a production system that has adapted to newly developing technologies. This strategy has increased the event production rate by a factor of seven and the data production rate by a factor of ten in the last three years despite diminishing manpower. Common to all production facilities is the SAM (Sequential Access to Metadata) data-grid. Job submission to the grid uses SAMGrid middleware which may forward jobs to the OSG, the WLCG, or native SAMGrid sites. The distributed computing and data handling system used by D0 will be described and the results of MC production since the deployment of grid technologies will be presented.
Pérez-Andújar, Angélica [Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 (United States)]; Zhang, Rui; Newhauser, Wayne [Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and The University of Texas Graduate School of Biomedical Sciences at Houston, 6767 Bertner Avenue, Houston, Texas 77030 (United States)]
2013-12-15
Purpose: Stray neutron radiation is of concern after radiation therapy, especially in children, because of the high risk it might carry for secondary cancers. Several previous studies predicted the stray neutron exposure from proton therapy, mostly using Monte Carlo simulations. Promising attempts to develop analytical models have also been reported, but these were limited to only a few proton beam energies. The purpose of this study was to develop an analytical model to predict leakage neutron equivalent dose from passively scattered proton beams in the 100-250 MeV interval. Methods: To develop and validate the analytical model, the authors used values of equivalent dose per therapeutic absorbed dose (H/D) predicted with Monte Carlo simulations. The authors also characterized the behavior of the mean neutron radiation-weighting factor, w{sub R}, as a function of depth in a water phantom and distance from the beam central axis. Results: The simulated and analytical predictions agreed well. On average, the percentage difference between the analytical model and the Monte Carlo simulations was 10% for the energies and positions studied. The authors found that w{sub R} was highest at the shallowest depth and decreased with depth until around 10 cm, where it started to increase slowly with depth. This was consistent among all energies. Conclusion: Simple analytical methods are promising alternatives to complex and slow Monte Carlo simulations to predict H/D values. The authors' results also provide improved understanding of the behavior of w{sub R}, which strongly depends on depth, but is nearly independent of lateral distance from the beam central axis.
Fully Differential Monte-Carlo Generator Dedicated to TMDs and Bessel-Weighted Asymmetries
Aghasyan, Mher M.; Avakian, Harut A.
2013-10-01
We present studies of double longitudinal spin asymmetries in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator, which includes quark intrinsic transverse momentum within the generalized parton model based on the fully differential cross section for the process. Additionally, we apply Bessel-weighting to the simulated events to extract transverse momentum dependent parton distribution functions and also discuss possible uncertainties due to kinematic correlation effects.
Coupled Monte Carlo neutronics and thermal hydraulics for power reactors
Bernnat, W.; Buck, M.; Mattes, M.; Zwermann, W.; Pasichnyk, I.; Velkov, K.
2012-07-01
The availability of high performance computing resources increasingly enables the use of detailed Monte Carlo models even for full core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures, e.g. in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions, fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. The second problem arises with the preparation of corresponding temperature dependent cross sections and thermal scattering laws. Only if these problems are solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g. water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications will be shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic Modular High Temperature reactor using THERMIX for thermal-hydraulics. (authors)
Properties of reactive oxygen species by quantum Monte Carlo
Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo
2014-07-07
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N{sup 3}-N{sup 4}, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species by first principles.
Nakano, Y. Yamazaki, A.; Watanabe, K.; Uritani, A.; Ogawa, K.; Isobe, M.
2014-11-15
Neutron monitoring is important to manage the safety of fusion experiment facilities because neutrons are generated in fusion reactions. Monte Carlo simulations play an important role in evaluating the influence of neutron scattering from various structures and correcting differences between deuterium plasma experiments and in situ calibration experiments. We evaluated these influences based on differences between the two experiments at the Large Helical Device using the Monte Carlo simulation code MCNP5. The difference between the two experiments in the absolute detection efficiency of the fission chamber between O-ports was estimated to be the largest among all the monitors. We additionally evaluated correction coefficients for some neutron monitors.
Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator Physics
Office of Scientific and Technical Information (OSTI)
FLUKA is a general purpose Monte Carlo code capable of handling all radiation components from thermal energies (for neutrons) or 1 keV (for all other particles) to cosmic ray energies and can be applied in many different fields. Presently the code is maintained on
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Hybrid Deterministic/Monte Carlo Solutions to the Neutron Transport k-Eigenvalue Problem with a Comparison to Pure Monte Carlo Solutions
Willert, Jeffrey A.
2013-09-16
Presentation, Los Alamos National Laboratory, joint work with Dana Knoll (LANL), Ryosuke Park (LANL), and C. T. Kelley (NCSU); report CASL-U-2013-0309-000. Outline: Introduction; Nonlinear Diffusion Acceleration for k-Eigenvalue Problems; Hybrid Methods; Classic Monte Carlo.
Improved version of the PHOBOS Glauber Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Loizides, C.; Nagle, J.; Steinberg, P.
2015-09-01
"Glauber" models are used to calculate geometric quantities in the initial state of heavy ion collisions, such as impact parameter, number of participating nucleons, and initial eccentricity. Experimental heavy-ion collaborations, in particular at RHIC and the LHC, use Glauber model calculations of various geometric observables to determine the collision centrality. In this document, we describe the assumptions inherent to the approach and provide an updated implementation (v2) of the Monte Carlo based Glauber model calculation, which originally was used by the PHOBOS collaboration. The main improvement with respect to the earlier version (v1) (Alver et al. 2008) is the inclusion of tritium, helium-3, and uranium, as well as the treatment of deformed nuclei and Glauber-Gribov fluctuations of the proton in p+A collisions. A users' guide (updated to reflect changes in v2) is provided for running various calculations.
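The Monte Carlo Glauber procedure summarized above can be illustrated with a heavily simplified sketch: sample nucleon positions from a Woods-Saxon density, shift the two nuclei apart by the impact parameter in the transverse plane, and count a nucleon as a participant if it passes within a black-disk distance of any nucleon in the other nucleus. This is not the PHOBOS code; the Pb-208 Woods-Saxon parameters and the 6.4 fm² inelastic nucleon-nucleon cross section are assumed illustrative values, and v2 features such as deformed nuclei and Glauber-Gribov fluctuations are omitted:

```python
import numpy as np

def transverse_positions(A, R=6.62, a=0.546, rng=None):
    """Sample A nucleon radii from a Woods-Saxon density by rejection
    (illustrative Pb-208 parameters, fm), then project to the transverse plane."""
    rng = np.random.default_rng() if rng is None else rng
    r = np.empty(A)
    filled, rmax = 0, R + 10.0 * a
    while filled < A:
        rt = rmax * rng.random() ** (1.0 / 3.0)   # proposal density ~ r^2
        if rng.random() < 1.0 / (1.0 + np.exp((rt - R) / a)):
            r[filled] = rt
            filled += 1
    cos_th = rng.uniform(-1.0, 1.0, A)
    phi = rng.uniform(0.0, 2.0 * np.pi, A)
    sin_th = np.sqrt(1.0 - cos_th ** 2)
    return np.column_stack([r * sin_th * np.cos(phi), r * sin_th * np.sin(phi)])

def n_participants(b, A=208, sigma_nn=6.4, rng=None):
    """One Glauber event at impact parameter b (fm): two nucleons collide when
    their transverse separation squared is below sigma_nn / pi (sigma_nn in fm^2)."""
    rng = np.random.default_rng() if rng is None else rng
    d2max = sigma_nn / np.pi
    nucl_a = transverse_positions(A, rng=rng) + np.array([b / 2.0, 0.0])
    nucl_b = transverse_positions(A, rng=rng) - np.array([b / 2.0, 0.0])
    d2 = ((nucl_a[:, None, :] - nucl_b[None, :, :]) ** 2).sum(axis=-1)
    hit = d2 < d2max
    return int(hit.any(axis=1).sum() + hit.any(axis=0).sum())

# Central collisions wound most nucleons; very peripheral ones almost none
npart_central = n_participants(0.0, rng=np.random.default_rng(0))
npart_peripheral = n_participants(20.0, rng=np.random.default_rng(1))
```

Averaging such event-by-event participant counts over many impact parameters is what links the model's geometry to experimentally measured centrality classes.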
Modeling granular phosphor screens by Monte Carlo methods
Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.
2006-12-15
The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties, and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced, using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previous published experimental data on Gd{sub 2}O{sub 2}S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd{sub 2}O{sub 2}S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than the conventional Gd{sub 2}O{sub 2}S:Tb screens, under similar conditions (x-ray incident energy, screen thickness).
SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans
Enright, S; Asprinio, A; Lu, L
2014-06-01
Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm will improve dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic™ EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse™ radiotherapy treatment planning system. These measurements were obtained for 6MeV, 9MeV and 12MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid water template with holes for bone-like and lung-like plugs was used. Different configurations were used with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. Accuracy of the Electron Monte Carlo algorithm was also compared to that of Pencil Beam. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film. The pass rate for Monte Carlo was in the 80%–99% range, whereas the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.
Tesfamicael, B; Gueye, P; Lyons, D; Mahesh, M; Avery, S
2014-06-01
Purpose: To construct a dose monitoring system based on an endorectal balloon coupled to thin scintillating fibers to study the dose delivered to the rectum during prostate cancer proton therapy. Methods: The Geant4 Monte Carlo toolkit version 9.6p02 was used to simulate prostate cancer proton therapy treatments of an endorectal balloon (for immobilization of a 2.9 cm diameter prostate gland) and a set of 34 scintillating fibers symmetrically placed around the balloon and perpendicular to the proton beam direction (for dosimetry measurements). Results: A linear response of the fibers to the dose delivered was observed within <2%, a property that makes them good candidates for real time dosimetry. Results obtained show that the closest fiber recorded about 1/3 of the dose to the target, with a 1/r{sup 2} decrease in the dose distribution as one goes toward the frontal and distal top fibers. Very low dose was recorded by the bottom fibers (about 45 times lower), which is a clear indication that the overall volume of the rectal wall that is exposed to a higher dose is relatively minimized. Further analysis indicated a simple scaling relationship between the dose to the prostate and the dose to the top fibers (a linear fit gave a slope of ?0.07±0.07 MeV per treatment Gy). Conclusion: Thin (1 mm × 1 mm × 100 cm) long scintillating fibers were found to be ideal for real time in-vivo dose measurement to the rectum for prostate cancer proton therapy. The linear response of the fibers to the dose delivered makes them good candidates for dosimeters. With thorough calibration and the ability to define a good correlation between the dose to the target and the dose to the fibers, such dosimeters can be used for real time dose verification to the target.
Monte Carlo analysis of neutron slowing-down-time spectrometer for fast reactor spent fuel assay
Chen, Jianwei; Lineberry, Michael
2007-07-01
Using the neutron slowing-down-time method as a nondestructive assay tool to improve input material accountancy for fast reactor spent fuel reprocessing is under investigation at Idaho State University. Monte Carlo analyses were performed to simulate the neutron slowing down process in different slowing down spectrometers, namely, lead and graphite, and to determine their main parameters. The {sup 238}U threshold fission chamber response was simulated in the Monte Carlo model to represent the spent fuel assay signals; the signature (fission/time) signals of {sup 235}U, {sup 239}Pu, and {sup 241}Pu were simulated as a convolution of fission cross sections and neutron flux inside the spent fuel. The {sup 238}U detector signals were analyzed using a linear regression model based on the signatures of fissile materials in the spent fuel to determine the weight fractions of fissile materials in the Advanced Burner Test Reactor spent fuel. The preliminary results show that, even though the lead spectrometer showed better assay performance than graphite, the graphite spectrometer could accurately determine the weight fractions of {sup 239}Pu and {sup 241}Pu provided a proper assay energy range was chosen. (authors)
penORNL: a parallel monte carlo photon and electron transport package using PENELOPE
Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.
2015-01-01
The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.
Ibrahim, Ahmad M; Wilson, P.; Sawan, M.; Mosher, Scott W; Peplow, Douglas E.; Grove, Robert E
2013-01-01
Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class super computer.
MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD
K. HANSON
2001-02-01
The Markov Chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy {var_phi}, where {var_phi} is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of {var_phi} and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of {var_phi}, is proposed to measure the convergence of the MCMC sequence.
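The algorithm described above can be sketched in a few lines: draw a fresh momentum vector, follow a leapfrog trajectory of nearly constant H, then accept or reject on the change in total energy. This is an illustrative implementation for the multidimensional isotropic Gaussian case mentioned in the abstract, not the paper's code; the function names and tuning parameters are assumptions:

```python
import numpy as np

def hmc_sample(log_pdf_grad, log_pdf, x0, n_samples=2000,
               step=0.15, n_leapfrog=20, rng=None):
    """Hamiltonian Monte Carlo for a target pdf; phi = -log pdf,
    H = 0.5 * |p|^2 + phi(x)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)        # fresh momentum each trajectory
        x_new, p_new = x.copy(), p.copy()
        # leapfrog integration along an approximately constant-H trajectory
        p_new += 0.5 * step * log_pdf_grad(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += step * p_new
            p_new += step * log_pdf_grad(x_new)
        x_new += step * p_new
        p_new += 0.5 * step * log_pdf_grad(x_new)
        # Metropolis accept/reject on the total energy H = K + phi
        h_old = 0.5 * (p @ p) - log_pdf(x)
        h_new = 0.5 * (p_new @ p_new) - log_pdf(x_new)
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Isotropic Gaussian target: phi(x) = 0.5 * |x|^2
log_pdf = lambda x: -0.5 * (x @ x)
grad = lambda x: -x
chain = hmc_sample(grad, log_pdf, x0=np.zeros(5))
```

For this Gaussian target the gradient is trivially cheap; in a real Bayesian posterior each trajectory's cost is dominated by gradient evaluations of phi, which is why taking large jumps per evaluation matters.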
Sci-Thur AM: YIS-04: Gold Nanoparticle Enhanced Arc Radiotherapy: A Monte Carlo Feasibility Study
Koger, B; Kirkby, C
2014-08-15
Introduction: The use of gold nanoparticles (GNPs) in radiotherapy has shown promise for therapeutic enhancement. In this study, we explore the feasibility of enhancing radiotherapy with GNPs in an arc-therapy context. We use Monte Carlo simulations to quantify the macroscopic dose-enhancement ratio (DER) and tumour to normal tissue ratio (TNTR) as functions of photon energy over various tumour and body geometries. Methods: GNP-enhanced arc radiotherapy (GEART) was simulated using the PENELOPE Monte Carlo code and penEasy main program. We simulated 360° arc-therapy with monoenergetic photon energies of 50-1000 keV and several clinical spectra used to treat a spherical tumour containing uniformly distributed GNPs in a cylindrical tissue phantom. Various geometries were used to simulate different tumour sizes and depths. Voxel dose was used to calculate DERs and TNTRs. Inhomogeneity effects were examined through skull dose in brain tumour treatment simulations. Results: Below 100 keV, DERs greater than 2.0 were observed. Compared to 6 MV, tumour dose at low energies was more conformal, with lower normal tissue dose and higher TNTRs. Both the DER and TNTR increased with increasing cylinder radius and decreasing tumour radius. The inclusion of bone showed excellent tumour conformality at low energies, though with an increase in skull dose (40% of tumour dose with 100 keV compared to 25% with 6 MV). Conclusions: Even in the presence of inhomogeneities, our results show promise for the treatment of deep-seated tumours with low-energy GEART, with greater tumour dose conformality and lower normal tissue dose than 6 MV.
SU-E-T-239: Monte Carlo Modelling of SMC Proton Nozzles Using TOPAS
Chung, K; Kim, J; Shin, J; Han, Y; Ju, S; Hong, C; Kim, D; Kim, H; Shin, E; Ahn, S; Chung, S; Choi, D
2014-06-01
Purpose: To expedite and cross-check the commissioning of the proton therapy nozzles at Samsung Medical Center using TOPAS. Methods: We have two different types of nozzles at Samsung Medical Center (SMC), a multi-purpose nozzle and a pencil beam scanning dedicated nozzle. Both nozzles have been modelled in Monte Carlo simulation by using TOPAS based on the vendor-provided geometry. The multi-purpose nozzle is mainly composed of wobbling magnets, scatterers, ridge filters and multi-leaf collimators (MLC). Including patient-specific apertures and compensators, all the parts of the nozzle have been implemented in TOPAS following the geometry information from the vendor. The dedicated scanning nozzle has a simpler structure than the multi-purpose nozzle, with a vacuum pipe at the downstream end of the nozzle. A simple water tank volume has been implemented to measure the dosimetric characteristics of proton beams from the nozzles. Results: We have simulated the two proton beam nozzles at SMC. Two different ridge filters have been tested for the spread-out Bragg peak (SOBP) generation of wobbling mode in the multi-purpose nozzle. The spot sizes and lateral penumbra in the two nozzles have been simulated and analyzed using a double Gaussian model. Using parallel geometry, both the depth dose curve and dose profile have been measured simultaneously. Conclusion: The proton therapy nozzles at SMC have been successfully modelled in Monte Carlo simulation using TOPAS. We will perform a validation with measured base data and then use the MC simulation to interpolate/extrapolate the measured data. We believe it will expedite the commissioning process of the proton therapy nozzles at SMC.
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Monte Carlo Modeling of High-Energy Film Radiography
Office of Scientific and Technical Information (OSTI)
High-energy film radiography methods, adapted in the past to performing specific tasks, must now meet increasing demands to identify defects and perform critical measurements in a wide variety of manufacturing processes. Although film provides unequaled resolution for most components and assemblies, image quality must be enhanced
Monte-Carlo particle dynamics in a variable specific impulse magnetoplasma rocket
Office of Scientific and Technical Information (OSTI)
The self-consistent mathematical model in a Variable Specific Impulse Magnetoplasma Rocket (VASIMR) is examined. Of particular importance is the effect of a magnetic nozzle in enhancing the axial momentum of the exhaust. Also, different
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
Quantum Monte Carlo methods and lithium cluster properties
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self-consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance-sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and the bias is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity to the anisotropic harmonic oscillator model shape for the given number of valence electrons.
Energy density matrix formalism for interacting quantum systems: a quantum Monte Carlo study
Krogel, Jaron T; Kim, Jeongnim; Reboredo, Fernando A
2014-01-01
We develop an energy density matrix that parallels the one-body reduced density matrix (1RDM) for many-body quantum systems. Just as the density matrix gives access to the number density and occupation numbers, the energy density matrix yields the energy density and orbital occupation energies. The eigenvectors of the matrix provide a natural orbital partitioning of the energy density while the eigenvalues comprise a single particle energy spectrum obeying a total energy sum rule. For mean-field systems the energy density matrix recovers the exact spectrum. When correlation becomes important, the occupation energies resemble quasiparticle energies in some respects. We explore the occupation energy spectrum for the finite 3D homogeneous electron gas in the metallic regime and an isolated oxygen atom with ground state quantum Monte Carlo techniques implemented in the QMCPACK simulation code. The occupation energy spectrum for the homogeneous electron gas can be described by an effective mass below the Fermi level. Above the Fermi level evanescent behavior in the occupation energies is observed in similar fashion to the occupation numbers of the 1RDM. A direct comparison with total energy differences demonstrates a quantitative connection between the occupation energies and electron addition and removal energies for the electron gas. For the oxygen atom, the association between the ground state occupation energies and particle addition and removal energies becomes only qualitative. The energy density matrix provides a new avenue for describing energetics with quantum Monte Carlo methods which have traditionally been limited to total energies.
Size and habit evolution of PETN crystals - a lattice Monte Carlo study
Zepeda-Ruiz, L A; Maiti, A; Gee, R; Gilmer, G H; Weeks, B
2006-02-28
Starting from an accurate inter-atomic potential, we develop a simple scheme for generating an ''on-lattice'' molecular potential of short range, which is then incorporated into a lattice Monte Carlo code for simulating the size and shape evolution of nanocrystallites. As a specific example, we test this procedure on the morphological evolution of a molecular crystal of interest to us, Pentaerythritol Tetranitrate (PETN), and obtain realistic faceted structures in excellent agreement with experimental morphologies. We investigate several interesting effects, including the evolution of the initial shape of a ''seed'' to an equilibrium configuration and the variation of growth morphology as a function of the rate of particle addition relative to diffusion.
Markov Chain Monte Carlo Sampling Methods for 1D Seismic and EM Data Inversion
Energy Science and Technology Software Center (OSTI)
2008-09-22
This software provides several Markov chain Monte Carlo sampling methods for the Bayesian model developed for inverting 1D marine seismic and controlled source electromagnetic (CSEM) data. The current software can be used for individual inversion of seismic AVO and CSEM data and for joint inversion of both seismic and EM data sets. The structure of the software is very general and flexible, and it allows users to incorporate their own forward simulation codes and rock physics model codes easily into this software. Although the software was developed using the C and C++ computer languages, the user-supplied codes can be written in C, C++, or various versions of Fortran. The software provides clear interfaces for users to plug in their own codes. The output of this software is in a format that the free R package CODA can directly read to build MCMC objects.
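The plug-in forward-model structure described above can be illustrated with a generic random-walk Metropolis sampler in which the user's forward simulation code is passed as a callable (the linear toy model, flat prior, and all names below are illustrative assumptions, not the package's actual C/C++ interface):

```python
import numpy as np

def metropolis_invert(forward, data, sigma, m0, step, n_iter, rng=None):
    """Random-walk Metropolis over model parameters m, where forward(m)
    predicts the data. Gaussian likelihood with noise level sigma and a
    flat prior (a deliberate simplification)."""
    rng = np.random.default_rng(rng)
    m = np.asarray(m0, dtype=float)

    def loglike(m):
        r = forward(m) - data
        return -0.5 * np.sum((r / sigma) ** 2)

    ll = loglike(m)
    chain = []
    for _ in range(n_iter):
        m_prop = m + step * rng.standard_normal(m.shape)  # random-walk proposal
        ll_prop = loglike(m_prop)
        if np.log(rng.random()) < ll_prop - ll:           # Metropolis accept/reject
            m, ll = m_prop, ll_prop
        chain.append(m.copy())
    return np.array(chain)

# Toy linear "forward simulator": d = G m
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
true_m = np.array([1.0, -2.0])
data = G @ true_m
chain = metropolis_invert(lambda m: G @ m, data, sigma=0.1,
                          m0=np.zeros(2), step=0.05, n_iter=5000, rng=1)
```

In the real package the callable would wrap a seismic or CSEM forward code; the chain itself is what a tool such as CODA would then summarize.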
SU-E-T-578: MCEBRT, A Monte Carlo Code for External Beam Treatment Plan Verifications
Chibani, O; Ma, C; Eldib, A
2014-06-01
Purpose: Present a new Monte Carlo code (MCEBRT) for patient-specific dose calculations in external beam radiotherapy. The code's MLC model is benchmarked and real patient plans are re-calculated using MCEBRT and compared with a commercial TPS. Methods: MCEBRT is based on the GEPTS system (Med. Phys. 29 (2002) 835–846). Phase space data generated for Varian linac photon beams (6–15 MV) are used as the source term. MCEBRT uses a realistic MLC model (tongue and groove, rounded ends). Patient CT and DICOM RT files are used to generate a 3D patient phantom and simulate the treatment configuration (gantry, collimator and couch angles; jaw positions; MLC sequences; MUs). MCEBRT dose distributions and DVHs are compared with those from the TPS in an absolute way (Gy). Results: Calculations based on the developed MLC model closely match transmission measurements (pin-point ionization chamber at selected positions and film for the lateral dose profile). See Fig. 1. Dose calculations for two clinical cases (whole brain irradiation with opposed beams and a lung case with eight fields) are carried out and outcomes are compared with the Eclipse AAA algorithm. Good agreement is observed for the brain case (Figs. 2-3) except at the surface, where the MCEBRT dose can be higher by 20%. This is due to better modeling of electron contamination by MCEBRT. For the lung case an overall good agreement (91% gamma index passing rate with 3%/3mm DTA criterion) is observed (Fig. 4), but dose in lung can be over-estimated by up to 10% by AAA (Fig. 5). CTV and PTV DVHs from the TPS and MCEBRT are nevertheless close (Fig. 6). Conclusion: A new Monte Carlo code is developed for plan verification. Contrary to phantom-based QA measurements, MCEBRT simulates the exact patient geometry and tissue composition. MCEBRT can be used as an extra verification layer for plans where surface dose and tissue heterogeneity are an issue.
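The 3%/3 mm gamma passing rate quoted in the Results can be computed, in one dimension and with global normalization, by a brute-force search over evaluated points (a simplified sketch; clinical gamma analysis is performed in 2D/3D with interpolation):

```python
import numpy as np

def gamma_pass_rate(ref, eval_, dx, dose_crit=0.03, dist_crit=3.0):
    """1D gamma analysis (global 3%/3 mm by default): for each reference
    point, take the minimum over evaluated points of the combined
    dose-difference / distance-to-agreement metric; a point passes if
    that minimum gamma is <= 1."""
    x = np.arange(len(ref)) * dx          # positions in mm
    dmax = ref.max()                      # global dose normalization
    gammas = np.empty(len(ref))
    for i, (xi, di) in enumerate(zip(x, ref)):
        dd = (eval_ - di) / (dose_crit * dmax)   # dose-difference term
        dr = (x - xi) / dist_crit                # distance-to-agreement term
        gammas[i] = np.sqrt(dd ** 2 + dr ** 2).min()
    return float(np.mean(gammas <= 1.0))
```

A profile compared against a copy of itself, or against a copy shifted by less than the 3 mm distance criterion, passes at 100%.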
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
Farlotti, M.; Larsen, E. W.
2013-07-01
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
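The core of the fission matrix method, extracting the dominant eigenpair of the tallied matrix, can be sketched with a plain power iteration (the 8-assembly coupling numbers below are invented for illustration; in the real method the entries of F are Monte Carlo tallies):

```python
import numpy as np

def fission_matrix_keff(F, tol=1e-12, max_iter=10000):
    """Power iteration for the dominant eigenpair of a fission matrix F,
    where F[i, j] is roughly the number of fission neutrons born in
    region i per fission neutron started in region j."""
    s = np.full(F.shape[0], 1.0 / F.shape[0])  # flat initial fission source
    k = 0.0
    for _ in range(max_iter):
        s_new = F @ s
        k_new = s_new.sum()        # eigenvalue estimate (s is normalized)
        s_new /= k_new             # renormalize the fission source
        if abs(k_new - k) < tol:
            break
        k, s = k_new, s_new
    return k_new, s_new

# Mock 8-assembly pool: strong self-multiplication, weak neighbor coupling
F = 0.9 * np.eye(8) + 0.05 * (np.eye(8, k=1) + np.eye(8, k=-1))
keff, source = fission_matrix_keff(F)
```

The weak off-diagonal coupling mimics the strong inter-assembly absorption described above; the iteration count needed for convergence grows as the dominance ratio approaches one.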
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, William P. (Tracy, CA); Hartmann-Siantar, Christine L. (San Ramon, CA); Rathkopf, James A. (Livermore, CA)
1999-01-01
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
A Proposal for a Standard Interface Between Monte Carlo Tools And One-Loop Programs
Binoth, T.; Boudjema, F.; Dissertori, G.; Lazopoulos, A.; Denner, A.; Dittmaier, S.; Frederix, R.; Greiner, N.; Hoeche, Stefan; Giele, W.; Skands, P.; Winter, J.; Gleisberg, T.; Archibald, J.; Heinrich, G.; Krauss, F.; Maitre, D.; Huber, M.; Huston, J.; Kauer, N.; Maltoni, F.; /Louvain U., CP3 /Milan Bicocca U. /INFN, Turin /Turin U. /Granada U., Theor. Phys. Astrophys. /CERN /NIKHEF, Amsterdam /Heidelberg U. /Oxford U., Theor. Phys.
2011-11-11
Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently been undergoing enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarises the discussion of the next-to-leading order multi-leg (NLM) working group on this issue which has been taking place during the workshop on Physics at TeV Colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs.
Lopez-Pino, N.; Padilla-Cabal, F.; Garcia-Alvarez, J. A.; Vazquez, L.; D'Alessandro, K.; Correa-Alfonso, C. M.; Godoy, W.; Maidana, N. L.; Vanin, V. R.
2013-05-06
A detailed characterization of an X-ray Si(Li) detector was performed to obtain the energy dependence of the efficiency in the photon energy range of 6.4 - 59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters for the detector were used in the simulation. A complete Computerized Tomography (CT) scan of the detector allowed us to find the correct crystal dimensions and position inside the capsule. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.
Monte Carlo modeling of transport in PbSe nanocrystal films
Carbone, I. Carter, S. A.; Zimanyi, G. T.
2013-11-21
A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance and that site-energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using a nearly identical set of parameter values as the electron simulations, hole mobility simulations reproduce measured mobilities that increase monotonically with particle size over two orders of magnitude.
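A stripped-down version of such a hopping model, one carrier with Metropolis acceptance and Gaussian site-energy disorder, can be sketched as follows (all parameter values are illustrative, not those of the paper):

```python
import numpy as np

def hopping_msd(n_steps, L=20, sigma=0.05, kT=0.025, rng=None):
    """Single-carrier hopping on an L^3 cubic lattice of nanocrystal sites
    with Gaussian site-energy disorder (sigma and kT in eV). A hop to a
    random neighboring site is accepted with Metropolis probability
    min(1, exp(-dE/kT)); returns the squared displacement after n_steps,
    a crude proxy for mobility."""
    rng = np.random.default_rng(rng)
    energy = rng.normal(0.0, sigma, size=(L, L, L))  # static energetic disorder
    pos = np.zeros(3, dtype=int)                     # lattice index (periodic)
    disp = np.zeros(3, dtype=int)                    # unwrapped displacement
    moves = np.vstack([np.eye(3, dtype=int), -np.eye(3, dtype=int)])
    for _ in range(n_steps):
        step = moves[rng.integers(6)]
        new = (pos + step) % L                       # periodic boundaries
        dE = energy[tuple(new)] - energy[tuple(pos)]
        if dE <= 0 or rng.random() < np.exp(-dE / kT):   # thermally activated hop
            pos, disp = new, disp + step
    return float(disp @ disp)
```

With no disorder every hop is accepted and the mean squared displacement equals the step count; turning disorder on suppresses transport, the qualitative effect reported for small-nanoparticle films.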
Çatlı, Serap; Tanır, Güneş
2013-10-01
The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS), at 6 and 18 MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and to ionization chamber measurements. The present findings show that if a high-Z material is used in a prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased, backscattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method. The findings show that for accurate dose calculation a Monte Carlo-based TPS should be used in patients with hip prostheses.
Monte Carlo Implementation Of Up- Or Down-Scattering Due To Collisions With Material At Finite Temperature
Quaglioni, S; Beck, B R
2011-06-03
Technical report LLNL-TR-488174 (OSTI Identifier 1113914, DOE Contract W-7405-ENG-48).
A Geant4 Implementation of a Novel Single-Event Monte Carlo Method for Electron Dose Calculations
Franke, Brian Claude; Dixon, David A.; Prinja, Anil K.
2013-11-01
Conference report SAND2013-9631C (OSTI Identifier 1118160). Abstract not provided.
SHIFT: A Massively Parallel Monte Carlo Radiation Transport Package
Pandya, Tara M.; Johnson, Seth R.; Davidson, Gregory G.; Evans, Thomas M.; Hamilton, Steven P.
2015-04-19
Oak Ridge National Laboratory report CASL-U-2015-0170-000. ANS MC2015 - Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method, Nashville, Tennessee, April 19-23, 2015, on CD-ROM, American Nuclear Society, LaGrange Park, IL (2015).
Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files
Office of Scientific and Technical Information (OSTI)
Zori 1.0: A Parallel Quantum Monte Carlo Electronic Structure Package
Office of Scientific and Technical Information (OSTI)
Aspuru-Guzik, Alan; Salomon-Ferrer, Romelia; Austin, Brian; Perusquia-Flores, Raul; Griffin, Mary A.; Oliva, Ricardo A.; Skinner, David; Domin, Dominik; Lester Jr., William A.
MO-G-BRF-09: Investigating Magnetic Field Dose Effects in Mice: A Monte Carlo Study
Rubinstein, A; Guindani, M; Followill, D; Melancon, A; Hazle, J; Court, L
2014-06-15
Purpose: In MRI-linac treatments, radiation dose distributions are affected by magnetic fields, especially at high-density/low-density interfaces. Radiobiological consequences of magnetic field dose effects are presently unknown; therefore, preclinical studies are needed to ensure the safe clinical use of MRI-linacs. This study investigates the optimal combination of beam energy and magnetic field strength needed for preclinical murine studies. Methods: The Monte Carlo code MCNP6 was used to simulate the effects of a magnetic field when irradiating a mouse-sized lung phantom with a 1.0 cm x 1.0 cm photon beam. Magnetic field effects were examined using various beam energies (225 kVp, 662 keV [Cs-137], and 1.25 MeV [Co-60]) and magnetic field strengths (0.75 T, 1.5 T, and 3 T). The resulting dose distributions were compared to Monte Carlo results for humans with various field sizes and patient geometries using a 6 MV/1.5 T MRI-linac. Results: In human simulations, the addition of a 1.5 T magnetic field caused an average dose increase of 49% (range: 36%–60%) to lung at the soft tissue-to-lung interface and an average dose decrease of 30% (range: 25%–36%) at the lung-to-soft tissue interface. In mouse simulations, the magnetic fields had no effect on the 225 kVp dose distribution. The dose increases for the Cs-137 beam were 12%, 33%, and 49% for 0.75 T, 1.5 T, and 3.0 T magnetic fields, respectively, while the dose decreases were 7%, 23%, and 33%. For the Co-60 beam, the dose increases were 14%, 45%, and 41%, and the dose decreases were 18%, 35%, and 35%. Conclusion: The magnetic field dose effects observed in mouse phantoms using a Co-60 beam with 1.5 T or 3 T fields and a Cs-137 beam with a 3 T field compare well with those seen in simulated human treatments with an MRI-linac. These irradiator/magnet combinations are suitable for preclinical studies investigating potential biological effects of delivering radiation therapy in the presence of a magnetic field. Partially funded by Elekta.
Da, B.; Li, Z. Y.; Chang, H. C.; Ding, Z. J.; Mao, S. F.
2014-09-28
It has been experimentally found that carbon surface contamination strongly influences the spectrum signals in reflection electron energy loss spectroscopy (REELS), especially at low primary electron energy. However, there is still little theoretical work dealing with the carbon contamination effect in REELS. Such work is required to predict the REELS spectrum for a layered structural sample, providing an understanding of the experimental phenomena observed. In this study, we present a numerical calculation of the spatially varying differential inelastic mean free path for a sample made of a carbon contamination layer of varied thickness on a SrTiO3 substrate. A Monte Carlo simulation model for electron interaction with a layered structural sample is built by combining this inelastic scattering cross-section with the Mott cross-section for electron elastic scattering. The simulation results clearly show that the contribution of the electron energy loss from carbon surface contamination increases with decreasing primary energy, owing to the increased number of individual scattering processes along the trajectory segments inside the carbon contamination layer. Comparison of the simulated spectra for different thicknesses of the carbon contamination layer and for different primary electron energies with experimental spectra clearly identifies that the carbon contamination on the measured sample took the form of discontinuous islands rather than a uniform film.
Alcouffe, R.E.
1985-01-01
A difficult class of problems for the discrete-ordinates neutral particle transport method is to accurately compute the flux due to a spatially localized source. Because the transport equation is solved for discrete directions, the so-called ray effect causes the flux at space points far from the source to be inaccurate. Thus, in general, discrete ordinates would not be the method of choice to solve such problems; it is better suited to calculating problems with significant scattering. The Monte Carlo method is suited to localized source problems, particularly if the amount of collisional interaction is minimal. However, if there are many scattering collisions and the flux at all space points is desired, then the Monte Carlo method becomes expensive. To take advantage of the attributes of both approaches, we have devised a first collision source method to combine the Monte Carlo and discrete-ordinates solutions. That is, particles are tracked from the source to their first scattering collision and tallied to produce a source for the discrete-ordinates calculation. A scattered flux is then computed by discrete ordinates, and the total flux is the sum of the Monte Carlo and discrete-ordinates calculated fluxes. In this paper, we present calculational results using the MCNP and TWODANT codes for selected two-dimensional problems that show the effectiveness of this method.
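The first leg of the hybrid scheme, tracking particles from the source to their first collision and tallying the result as a distributed source, can be sketched in 1D slab geometry (an illustrative toy, not the MCNP/TWODANT coupling itself):

```python
import numpy as np

def first_collision_source(sigma_t, dx, n_bins, n_particles, rng=None):
    """Tally the first-collision source for an isotropic point source at
    x = 0 on the face of a 1D slab [0, n_bins*dx] with total cross section
    sigma_t (1/cm). The binned tally is exactly what would then drive a
    deterministic (discrete-ordinates) scattered-flux solve."""
    rng = np.random.default_rng(rng)
    tally = np.zeros(n_bins)
    width = n_bins * dx
    for _ in range(n_particles):
        mu = rng.uniform(-1.0, 1.0)             # isotropic direction cosine
        if mu <= 0.0:
            continue                            # emitted away from the slab
        s = -np.log(rng.random()) / sigma_t     # path length to first collision
        x = mu * s                              # depth of the first collision
        if x < width:
            tally[int(x / dx)] += 1.0           # collision inside: tally it
    return tally / n_particles
```

Roughly half the source particles never enter the slab, and the tallied first-collision density falls off with depth; a ray-effect-free deterministic sweep would then transport the scattered flux from this source.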
Green's function Monte Carlo calculation for the ground state of helium trimers
Cabral, F.; Kalos, M.H.
1981-02-01
The ground state energy of weakly bound boson trimers interacting via Lennard-Jones (12,6) pair potentials is calculated using a Monte Carlo Green's Function Method. Threshold coupling constants for self binding are obtained by extrapolation to zero binding.
Saha, Krishnendu; Straus, Kenneth J.; Glick, Stephen J.; Chen, Yu.
2014-08-28
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated to imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation toward the periphery of the breast. The GATE Monte Carlo simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system-element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at a 45% lower noise level and a 1.5- to 3-fold improvement in resolution when compared to MLEM reconstruction using a simple line-integral model. The GATE-based system matrix reconstruction technique promises to improve resolution and noise performance and to reduce image distortion at the FOV periphery compared to line-integral-based system matrix reconstruction.
Monte Carlo modeling of neutron and gamma-ray imaging systems
Hall, J.
1996-04-01
Detailed numerical prototypes are essential to the design of efficient and cost-effective neutron and gamma-ray imaging systems. We have exploited the unique capabilities of an LLNL-developed radiation transport code (COG) to develop code modules capable of simulating the performance of neutron and gamma-ray imaging systems over a wide range of source energies. COG allows us to simulate complex energy-, angle-, and time-dependent radiation sources, model 3-dimensional system geometries with "real world" complexity, specify detailed elemental and isotopic distributions, and predict the responses of various types of imaging detectors with full Monte Carlo accuracy. COG references detailed, evaluated nuclear interaction databases, allowing users to account for multiple scattering, energy straggling, and secondary particle production phenomena which may significantly affect the performance of an imaging system but may be difficult or even impossible to estimate using simple analytical models. This work presents examples illustrating the use of these routines in the analysis of industrial radiographic systems for thick-target inspection, nonintrusive luggage and cargo scanning systems, and international treaty verification.
Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem
Du, X.; Liu, T.; Ji, W.; Xu, X. G.; Brown, F. B.
2013-07-01
Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', meaning that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely affects performance. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - was used for this study. ARCHER facilitates the development and testing of MC codes based on the vectorized MC algorithm implemented on GPUs using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER{sub GPU} code was designed to solve a neutron eigenvalue problem and was tested on an NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances warp execution efficiency, the overall simulation speed is ten times slower than the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large number of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed. (authors)
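The history-based versus event-based distinction can be made concrete with a toy transmission problem. Both functions below estimate the same quantity, exp(-sigma_t * slab); the difference is purely in control flow: a branchy per-particle loop versus one vector operation per event, which is the pattern that keeps GPU warps coherent. All numbers are invented for illustration.

```python
import numpy as np

sigma_t, slab = 1.0, 2.0   # toy total cross section (1/cm) and slab width (cm)

def transmit_history(n, seed=7):
    """History-based MC: one particle at a time (branchy control flow,
    the pattern that causes thread divergence on a GPU)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n):
        if rng.exponential(1.0 / sigma_t) > slab:
            hits += 1
    return hits / n

def transmit_event(n, seed=7):
    """Event-based MC: the same flight event applied to all particles in
    one vectorized operation."""
    rng = np.random.default_rng(seed)
    return float(np.mean(rng.exponential(1.0 / sigma_t, n) > slab))

t_hist = transmit_history(20_000)
t_event = transmit_event(200_000)     # both estimate exp(-sigma_t * slab)
```

As the abstract notes, vectorization alone does not guarantee a speedup: the event-based layout trades divergence for extra memory traffic, which on the tested hardware dominated.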
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte Carlo-deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^{3} - 10^{5} times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained here, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
The hydrophobic effect in a simple isotropic water-like model: Monte Carlo study
Huš, Matej; Urbic, Tomaz
2014-04-14
Using Monte Carlo computer simulations, we show that a simple isotropic water-like model with two characteristic lengths can reproduce the hydrophobic effect and the solvation properties of small and large non-polar solutes. The influence of temperature, pressure, and solute size on the thermodynamic properties of apolar solute solvation in the water model was systematically studied, revealing two different solvation regimes. Small particles can fit into the cavities around the solvent particles, inducing additional order in the system and lowering the overall entropy. Large particles force the solvent to disrupt its network, increasing the entropy of the system. At low temperatures, the ordering effect of small solutes is very pronounced. Above the cross-over temperature, which strongly depends on the solute size, the entropy change becomes strictly positive. Pressure dependence was also investigated, showing a "cross-over pressure" where the entropy and enthalpy of solvation are the lowest. These results suggest two fundamentally different solvation mechanisms, as observed experimentally in water and computationally in various water-like models.
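To give a flavor of the kind of simulation involved, the sketch below runs a bare-bones 2-D Metropolis Monte Carlo with an isotropic core-softened pair potential possessing two characteristic lengths: a steep repulsive core near r = 1 and an attractive shoulder near r = 1.7. The potential form, temperature, and system size are invented for illustration and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L, T = 30, 6.0, 1.0                 # particles, box side, temperature (toy)

def pair_u(r):
    """Isotropic potential with two length scales: a steep core (~r = 1)
    and a Gaussian attractive shoulder (~r = 1.7). Illustrative only."""
    return 4.0 * r ** -12 - np.exp(-((r - 1.7) ** 2) / 0.1)

# start from a loose grid so no pair overlaps initially
g = np.linspace(0.5, L - 0.5, 6)
pos = np.array([(a, b) for a in g for b in g], dtype=float)[:N]

def total_energy(p):
    d = p[:, None, :] - p[None, :, :]
    d -= L * np.round(d / L)           # minimum-image periodic boundaries
    r = np.sqrt((d ** 2).sum(-1))
    iu = np.triu_indices(N, 1)         # each pair counted once
    return pair_u(r[iu]).sum()

E, accepted = total_energy(pos), 0
for step in range(2000):               # single-particle Metropolis moves
    i = rng.integers(N)
    trial = pos.copy()
    trial[i] = (trial[i] + rng.normal(0.0, 0.2, 2)) % L
    dE = total_energy(trial) - E
    if dE < 0.0 or rng.random() < np.exp(-dE / T):
        pos, E, accepted = trial, E + dE, accepted + 1
```

A production study would add sampling of solute insertions across temperatures and pressures to extract the solvation entropy and enthalpy discussed above.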
Collapse transitions in thermosensitive multi-block copolymers: A Monte Carlo study
Rissanou, Anastassia N.; Tzeli, Despoina S.; Anastasiadis, Spiros H.; Bitsanis, Ioannis A.
2014-05-28
Monte Carlo simulations are performed on a simple cubic lattice to investigate the behavior of a single linear multiblock copolymer chain of various lengths N. The chain of type (A{sub n}B{sub n}){sub m} consists of alternating A and B blocks, where A is solvophilic and B solvophobic, and N = 2nm. The conformations are classified into five cases of globule formation by the solvophobic blocks of the chain. The dependence of globule characteristics on the molecular weight and on the number of blocks participating in their formation is examined. The focus is on relatively high molecular weight chains (i.e., N in the range of 500–5000 units) and very different energetic conditions for the two blocks (a very good, almost athermal, solvent for A and a bad solvent for B). A rich phase behavior is observed as a result of the alternating architecture of the multiblock copolymer chain. We trust that thermodynamic equilibrium has been reached for chains of N up to 2000 units; however, for longer chains kinetic entrapments are observed. The comparison among equivalent globules consisting of different numbers of B-blocks shows that the more solvophobic blocks constitute a globule, the larger its radius of gyration and the looser its structure. Comparisons between globules formed by the solvophobic blocks of the multiblock copolymer chain and their homopolymer analogs highlight the important role of the solvophilic A-blocks.
A Monte Carlo Analysis of Gas Centrifuge Enrichment Plant Process Load Cell Data
Garner, James R; Whitaker, J Michael
2013-01-01
As uranium enrichment plants increase in number, capacity, and types of separative technology deployed (e.g., gas centrifuge, laser, etc.), more automated safeguards measures are needed to enable the IAEA to maintain safeguards effectiveness in a fiscally constrained environment. Monitoring load cell data can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards objective of confirming operations as declared (i.e., no undeclared activities), but care must be taken to fully protect the operator's proprietary and classified information related to operations. Staff at ORNL, LANL, JRC/ISPRA, and the University of Glasgow are investigating monitoring of the process load cells at feed and withdrawal (F/W) stations to improve international safeguards at enrichment plants. A key question that must be resolved is how frequently data from the process F/W stations need to be recorded. Several studies have analyzed data collected at a fixed frequency. This paper contributes to load cell process monitoring research by presenting an analysis of Monte Carlo simulations to determine the expected errors caused by low-frequency sampling and their impact on material balance calculations.
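The sampling-frequency question can be posed in miniature: simulate a noisy load-cell record during a steady feed withdrawal, subsample it at different intervals, and compare the Monte Carlo average error in the recovered feed rate. The feed rate, noise level, and record length below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 10.0                           # kg/h, assumed steady feed rate
t = np.linspace(0.0, 24.0, 24 * 60 + 1)    # one day at 1-minute resolution

def mean_rate_error(sample_every, trials=300):
    """Mean |error| in the fitted feed rate when the load cell is read
    only every `sample_every` minutes (Monte Carlo over read noise)."""
    ts = t[::sample_every]
    errs = []
    for _ in range(trials):
        reading = 1000.0 - true_rate * ts + rng.normal(0.0, 0.5, ts.size)
        slope = np.polyfit(ts, reading, 1)[0]      # fitted kg/h (negative)
        errs.append(abs(-slope - true_rate))
    return float(np.mean(errs))

err_1min = mean_rate_error(1)              # 1441 samples per day
err_1hr = mean_rate_error(60)              # 25 samples per day
```

With fewer samples the fitted rate is noisier, and sparse sampling can additionally miss short excursions between reads, which is the effect the material balance analysis has to bound.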
Bauge, E.
2015-01-15
The "Full model" evaluation process, used at CEA DAM DIF to evaluate nuclear data in the continuum region, makes extended use of nuclear models implemented in the TALYS code to account for experimental data (both differential and integral) by varying the parameters of these models until a satisfactory description of the experimental data is reached. For the evaluation of the covariance data associated with this evaluated data, the Backward-Forward Monte Carlo (BFMC) method was devised in such a way that it mirrors the "Full model" evaluation process. When coupled with the Total Monte Carlo (TMC) method via the T6 system developed by NRG Petten, the BFMC method makes it possible to use integral experiments to constrain the distribution of model parameters, and hence the distribution of derived observables and their covariance matrix. Together, TALYS, TMC, BFMC, and T6 constitute a powerful integrated tool for nuclear data evaluation that allows evaluation of nuclear data and the associated covariance matrix all at once, making good use of all the available experimental information to drive the distribution of the model parameters and the derived observables.
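The backward-forward idea, sample model parameters forward and then weight each sample by its agreement with experiment to constrain the parameter and observable distributions, can be sketched with a toy exponential "model" standing in for TALYS. The model, data, prior widths, and simple chi-square weighting below are all invented; the actual BFMC weighting is built from the evaluation's experimental covariances.

```python
import numpy as np

rng = np.random.default_rng(4)

# toy "model" y(x) = a * exp(-b x), standing in for a TALYS observable
xs = np.array([1.0, 2.0, 3.0])
y_exp = 3.0 * np.exp(-xs)              # synthetic "experimental" data
sig = 0.1 * y_exp                      # assumed 10% experimental errors

# forward step: sample model parameters from a broad prior distribution
a = rng.normal(2.0, 0.5, 5000)
b = rng.normal(1.0, 0.3, 5000)
y = a[:, None] * np.exp(-b[:, None] * xs)

# backward step: weight each parameter vector by its fit to the data
chi2 = (((y - y_exp) / sig) ** 2).sum(axis=1)
w = np.exp(-0.5 * (chi2 - chi2.min()))
w /= w.sum()

# weighted moments give the constrained parameter distribution, from
# which covariances of any derived observable follow
mean_a = float((w * a).sum())
var_a = float((w * (a - mean_a) ** 2).sum())
```

The weighted mean pulls the broad prior toward the parameter values that reproduce the data, which is the sense in which integral experiments "drive the distribution of the model parameters."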
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction, such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
Miura, Shinichi [Institute for Molecular Science, 38 Myodaiji, Okazaki 444-8585 (Japan)
2007-03-21
In this paper, we present a path integral hybrid Monte Carlo (PIHMC) method for rotating molecules in quantum fluids. This is an extension of our PIHMC for correlated Bose fluids [S. Miura and J. Tanaka, J. Chem. Phys. 120, 2160 (2004)] to handle molecular rotation quantum mechanically. A novel technique, referred to as an effective potential of quantum rotation, is introduced to incorporate the rotational degree of freedom into the path integral molecular dynamics or hybrid Monte Carlo algorithm. For a permutation move to satisfy Bose statistics, we devise a multilevel Metropolis method combined with a configurational-bias technique for efficiently sampling the permutation and the associated atomic coordinates. We have then applied the PIHMC to a helium-4 cluster doped with a carbonyl sulfide molecule. The effects of quantum rotation on the solvation structure and energetics were examined. Translational and rotational fluctuations of the dopant in the superfluid cluster were also analyzed.
Hart, S. W. D.; Maldonado, G. Ivan; Celik, Cihangir; Leal, Luiz C
2014-01-01
For many Monte Carlo codes, cross sections are generally only created at a set of predetermined temperatures. This causes an increase in error as one moves further and further away from these temperatures in the Monte Carlo model. This paper discusses recent progress in the SCALE Monte Carlo module KENO to create problem-dependent, Doppler-broadened cross sections. Currently only broadening of the 1D cross sections and probability tables is addressed. The approach uses a finite difference method to calculate the temperature-dependent cross sections for the 1D data, and a simple linear-logarithmic interpolation in the square root of temperature for the probability tables. Work is also ongoing to address broadening of the S(alpha, beta) tables. With the current approach the temperature-dependent cross sections are Doppler broadened before transport starts, and, for all but a few isotopes, the impact on cross-section loading is negligible. Results can be compared with those obtained by using multigroup libraries, as KENO currently interpolates on the multigroup cross sections to determine temperature-dependent cross sections. Current results compare favorably with these expected results.
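The probability-table interpolation mentioned above is simple enough to state directly. Below is a generic sketch of linear-logarithmic interpolation in the square root of temperature between two tabulated values; the function name and the exact placement of the logarithm are our assumptions, not the KENO implementation.

```python
import math

def interp_sqrt_T(T, T0, s0, T1, s1):
    """Interpolate a tabulated quantity between temperatures T0 and T1:
    linear in sqrt(T), logarithmic in the tabulated value."""
    w = (math.sqrt(T) - math.sqrt(T0)) / (math.sqrt(T1) - math.sqrt(T0))
    return math.exp((1.0 - w) * math.log(s0) + w * math.log(s1))
```

At the endpoints the tabulated values are recovered exactly; halfway between the table points in sqrt(T), the result is the geometric mean of the two entries.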
Silva-Rodríguez, Jesús; Aguiar, Pablo; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor; Cortés, Julia; Garrido, Miguel; Pombar, Miguel; Ruibal, Álvaro
2014-05-15
Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) values for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with higher extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%) with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify this fraction of patients that might be corrected for paravenous injection effect. Conclusions: The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correct SUV quantification in those patients fulfilling the proposed criterion.
SU-E-T-238: Monte Carlo Estimation of Cerenkov Dose for Photo-Dynamic Radiotherapy
Chibani, O; Price, R; Ma, C; Eldib, A; Mora, G
2014-06-01
Purpose: Estimation of the Cerenkov dose from high-energy megavoltage photon and electron beams in tissue and its impact on radiosensitization using Protoporphyrin IX (PpIX) for tumor-targeting enhancement in radiotherapy. Methods: The GEPTS Monte Carlo code is used to generate dose distributions from an 18-MV Varian photon beam and generic high-energy (45-MV) photon and (45-MeV) electron beams in a voxel-based tissue-equivalent phantom. In addition to calculating the ionization dose, the code scores the Cerenkov energy released in the wavelength range 375–425 nm, corresponding to the peak of the PpIX absorption spectrum (Fig. 1), using the Frank-Tamm formula. Results: The simulations show that the produced Cerenkov dose suitable for activating PpIX is 4000 to 5500 times lower than the overall radiation dose for all considered beams (18 MV, 45 MV, and 45 MeV). These results contradict the recent experimental studies by Axelsson et al. (Med. Phys. 38 (2011) p 4127), where the Cerenkov dose was reported to be only two orders of magnitude lower than the radiation dose. Note that our simulation results can be corroborated by a simple model in which the Frank-Tamm formula is applied for electrons with 2 MeV/cm stopping power generating Cerenkov photons in the 375–425 nm range, assuming these photons have less than 1 mm penetration in tissue. Conclusion: The Cerenkov dose generated by high-energy photon and electron beams may produce minimal clinical effect in comparison with the photon fluence (or dose) commonly used for photodynamic therapy. At the present time, it is unclear whether Cerenkov radiation is a significant contributor to the recently observed tumor regression for patients receiving radiotherapy and PpIX versus patients receiving radiotherapy only. The ongoing study will include animal experimentation and investigation of dose-rate effects on PpIX response.
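The "simple model" invoked in the abstract can be reproduced as a back-of-envelope check. The snippet below applies the Frank-Tamm formula to count Cerenkov photons per centimetre in the 375-425 nm band for a relativistic electron, then compares the locally deposited optical energy with a 2 MeV/cm stopping power. Taking beta = 1 and a refractive index of 1.33 (water as a tissue stand-in) are our assumptions; the result lands in the 10^3-10^4 range, the same order as the quoted 4000-5500.

```python
import math

alpha = 1.0 / 137.036         # fine-structure constant
n_ref = 1.33                  # refractive index (water, assumed for tissue)
beta = 1.0                    # relativistic electron
lam1, lam2 = 375e-7, 425e-7   # PpIX absorption band edges, in cm

# Frank-Tamm: photons emitted per cm of path in the band [lam1, lam2]
sin2_theta_c = 1.0 - 1.0 / (beta * n_ref) ** 2
photons_per_cm = 2.0 * math.pi * alpha * (1.0 / lam1 - 1.0 / lam2) * sin2_theta_c

# deposit the optical energy locally (<1 mm penetration, per the abstract)
e_photon_eV = 1240.0 / 400.0              # hc/lambda at the band centre
cerenkov_eV_per_cm = photons_per_cm * e_photon_eV
ratio = 2.0e6 / cerenkov_eV_per_cm        # vs 2 MeV/cm stopping power
```

Roughly 60 photons/cm at ~3 eV each gives a few hundred eV/cm of Cerenkov energy in the band, some four orders of magnitude below the collisional energy loss.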
Minibeam radiation therapy for the management of osteosarcomas: A Monte Carlo study
Martínez-Rovira, I.; Prezado, Y.
2014-06-15
Purpose: Minibeam radiation therapy (MBRT) exploits the well-established tissue-sparing effect provided by the combination of submillimetric field sizes and a spatial fractionation of the dose. The aim of this work is to evaluate the feasibility and potential therapeutic gain of MBRT, in comparison with conventional radiotherapy, for osteosarcoma treatments. Methods: Monte Carlo simulations (PENELOPE/PENEASY code) were used to study the dose distributions resulting from MBRT irradiations of rat femur and realistic human femur phantoms. As figures of merit, peak and valley doses and peak-to-valley dose ratios (PVDR) were assessed. Conversion of absorbed dose to normalized total dose (NTD) was performed in the human case. Several field sizes and irradiation geometries were evaluated. Results: It is feasible to deliver a uniform dose distribution in the target while the healthy tissue benefits from a spatial fractionation of the dose. Very high PVDR values (≥20) were achieved in the entrance beam path in the rat case. PVDR values ranged from 2 to 9 in the human phantom. An NTD{sub 2.0} of 87 Gy might be reached in the tumor in the human femur while the healthy tissues might receive valley NTD{sub 2.0} lower than 20 Gy. The doses in the tumor and healthy tissues might thus be significantly higher and lower, respectively, than the ones commonly delivered in conventional radiotherapy. Conclusions: The obtained dose distributions indicate that a gain in normal tissue sparing might be expected. This would allow the use of higher (and potentially curative) doses in the tumor. Biological experiments are warranted.
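The peak-to-valley dose ratio is a simple figure of merit to compute once a lateral dose profile is in hand. The toy profile below (Gaussian minibeams on a flat valley floor; the pitch, width, and valley level are all invented, not the paper's geometry) shows the bookkeeping.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)          # lateral position, mm
pitch, fwhm = 2.0, 0.6                    # toy minibeam spacing and width, mm
centres = np.arange(-4.0, 4.1, pitch)
sigma = fwhm / 2.355                      # Gaussian sigma from FWHM

# spatially fractionated profile: minibeam peaks over a small valley dose
dose = 0.05 + sum(np.exp(-0.5 * ((x - c) / sigma) ** 2) for c in centres)

peak = dose.max()
valley_x = centres[0] + pitch / 2.0       # midway between two minibeams
valley = dose[np.argmin(np.abs(x - valley_x))]
pvdr = peak / valley
```

With these made-up numbers the PVDR comes out around 20, i.e., in the "very high" regime the abstract reports for the rat entrance path; widening the beams or shrinking the pitch fills the valleys and drives the PVDR down toward the human-phantom range.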
SU-D-19A-03: Monte Carlo Investigation of the Mobetron to Perform Modulated Electron Beam Therapy
Emam, I; Eldib, A; Hosini, M; AlSaeed, E; Ma, C
2014-06-01
Purpose: Modulated electron radiotherapy (MERT) has been proposed as a means of delivering conformal dose to shallow tumors while sparing distal structures and surrounding tissues. In intraoperative radiotherapy (IORT) utilizing the Mobetron, an applicator is placed as closely as possible to the suspected cancerous tissues to be treated. In this study we investigate the characteristics of Mobetron electron beams collimated by an in-house prospective electron multileaf collimator (eMLC) and its feasibility for MERT. Methods: An IntraOp Mobetron™ dedicated to performing radiotherapy during surgery was used in the study. It provides several energies (6, 9, and 12 MeV). Dosimetry measurements were performed to obtain percentage depth dose (PDD) curves and profiles for a 10-cm diameter applicator using the PTW MP3/XS 3D scanning system and a semiflex ion chamber. The MCBEAM/MCSIM Monte Carlo codes were used for the treatment head simulation and phantom dose calculation. The design of electron beam collimation by an eMLC attached to the Mobetron head was also investigated using Monte Carlo simulations. Isodose distributions resulting from eMLC-collimated beams were compared to those collimated using cutouts. The design for our Mobetron eMLC is based on our previous experience with eMLCs designed for clinical linear accelerators. For the Mobetron, the eMLC is attached to the end of a spacer-mounted rectangular applicator at 50 cm SSD. Steel will be used as the leaf material because other materials would be toxic and unsuitable for intraoperative applications. Results: Good agreement (within 2%) was achieved between measured and calculated PDD curves and profiles for all available energies. Dose distributions provided by the eMLC showed reasonable agreement (≤3%/1 mm) with those obtained with conventional cutouts. Conclusion: Monte Carlo simulations are capable of modeling Mobetron electron beams with reliable accuracy. An eMLC attached to the Mobetron treatment head will allow better treatment options with those machines.
W/Z + b bbar/Jets at NLO Using the Monte Carlo MCFM
John M. Campbell
2001-05-29
We summarize recent progress in next-to-leading QCD calculations made using the Monte Carlo MCFM. In particular, we focus on the calculations of p{bar p} {r_arrow} Wb{bar b}, Zb{bar b} and highlight the significant corrections to background estimates for Higgs searches in the channels WH and ZH at the Tevatron. We also report on the current progress of, and strategies for, the calculation of the process p{bar p} {r_arrow} W/Z + 2 jets.
Shafer, J.D.; Shepard, J.R.
1997-04-01
We derive an approximate renormalization group (RG) flow equation for the local effective potential of single-component {phi}{sup 4} field theory at finite temperature. Previous zero-temperature RG equations are recovered in the low- and high-temperature limits, in the latter case, via the phenomenon of dimensional reduction. We numerically solve our RG equations to obtain local effective potentials at finite temperature. These are found to be in excellent agreement with Monte Carlo results, especially when lattice artifacts are accounted for in the RG treatment. {copyright} {ital 1997} {ital The American Physical Society}
Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo
Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.
2014-10-01
We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble-gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and indicate that the high-pressure melt curve is well described by a Lindemann behavior up to at least 80 GPa, a finding in stark contrast to DAC data.
Monte Carlo generators for studies of the 3D structure of the nucleon
Avakian, Harut; D'Alesio, U.; Murgia, F.
2015-01-23
In this study, extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires the development of a self-consistent analysis framework, accounting for evolution effects and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte Carlo generators, accounting for TMD evolution effects and spin-orbit and quark-gluon correlations, will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.
Study of DCX reaction on medium nuclei with Monte-Carlo Shell Model
Wu, H. C.; Gibbs, W. R.
2010-08-04
In this work a method is introduced to calculate the DCX reaction in the framework of the Monte-Carlo Shell Model (MCSM). To facilitate the use of the zero-temperature formalism of the MCSM, the Double-Isobaric-Analog State (DIAS) is derived from the ground state by using an isospin shifting operator. The validity of this method is tested by comparing the MCSM results to those of the SU(3) symmetry case. Application of this method to DCX on {sup 56}Fe and {sup 93}Nb is discussed.
Quantized vortices in {sup 4}He droplets: A quantum Monte Carlo study
Sola, E.; Casulleras, J.; Boronat, J.
2007-08-01
We present a diffusion Monte Carlo study of a vortex line excitation attached to the center of a {sup 4}He droplet at zero temperature. The vortex energy is estimated for droplets of increasing numbers of atoms, from N=70 up to 300, showing a monotonic increase with N. The evolution of the core radius and its associated energy, the core energy, is also studied as a function of N. The core radius is {approx}1 Å in the center and increases when approaching the droplet surface; the core energy per unit volume stabilizes at a value of 2.8 K{sigma}{sup -3} ({sigma}=2.556 Å) for N{>=}200.
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
Pecchia, M.; D'Auria, F.; Mazzantini, O.
2012-07-01
Atucha-2 is a Siemens-designed PHWR under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Core models of the Atucha-2 PHWR were therefore developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of obliquely inserted control rods on the neutron flux in order to validate the RELAP5-3D{sup C}/NESTLE three-dimensional neutron kinetics coupled thermal-hydraulics model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)
Looking for Auger signatures in III-nitride light emitters: A full-band Monte Carlo perspective
Bertazzi, Francesco Goano, Michele; Zhou, Xiangyu; Calciati, Marco; Ghione, Giovanni; Matsubara, Masahiko; Bellotti, Enrico
2015-02-09
Recent experiments of electron emission spectroscopy (EES) on III-nitride light-emitting diodes (LEDs) have shown a correlation between droop onset and hot electron emission at the cesiated surface of the LED p-cap. The observed hot electrons have been interpreted as a direct signature of Auger recombination in the LED active region, as highly energetic Auger-excited electrons would be collected in long-lived satellite valleys of the conduction band and thus would not decay on their journey to the surface across the highly doped p-contact layer. We discuss this interpretation by using a full-band Monte Carlo model based on first-principles electronic structure and lattice dynamics calculations. The results of our analysis suggest that Auger-excited electrons cannot be unambiguously detected in the LED structures used in the EES experiments. Additional experimental and simulation work is necessary to unravel the complex physics of GaN cesiated surfaces.
Zink, K.; Czarnecki, D.; Voigts-Rhetz, P. von; Looe, H. K.; Harder, D.
2014-11-01
Purpose: The electron fluence inside a parallel-plate ionization chamber positioned in a water phantom and exposed to a clinical electron beam deviates from the unperturbed fluence in water in the absence of the chamber. One reason for the fluence perturbation is the well-known “inscattering effect,” whose physical cause is the lack of electron scattering in the gas-filled cavity. Correction factors for this effect have long been recommended. However, more recent Monte Carlo calculations have cast some doubt on the range of validity of these corrections. Therefore, the aim of the present study is to reanalyze the development of the fluence perturbation with depth and to review the function of the guard rings. Methods: Spatially resolved Monte Carlo simulations of the dose profiles within gas-filled cavities of various radii in clinical electron beams were performed in order to determine the radial variation of the fluence perturbation in a coin-shaped cavity, to study the influences of the radius of the collecting electrode and of the width of the guard ring upon the indicated value of the ionization chamber formed by the cavity, and to investigate the development of the perturbation as a function of depth in an electron-irradiated phantom. The simulations were performed for a primary electron energy of 6 MeV. Results: The Monte Carlo simulations clearly demonstrated a surprisingly large in- and outward electron transport across the lateral cavity boundary. This results in a strong influence of the depth-dependent development of the electron field in the surrounding medium upon the chamber reading. In the buildup region of the depth-dose curve, the in–out balance of the electron fluence is positive and shows the well-known dose oscillation near the cavity/water boundary. At the depth of the dose maximum the in–out balance is equilibrated, and in the falling part of the depth-dose curve it is negative, as shown here for the first time.
The influences of both the collecting electrode radius and the width of the guard ring reflect the deep radial penetration of the electron transport processes into the gas-filled cavities and the need for appropriate corrections of the chamber reading. New values for these corrections have been established in two forms, one converting the indicated value into the absorbed dose to water in the front plane of the chamber, the other converting it into the absorbed dose to water at the depth of the effective point of measurement of the chamber. In the Appendix, the in–out imbalance of electron transport across the lateral cavity boundary is demonstrated in the approximation of classical small-angle multiple scattering theory. Conclusions: The in–out electron transport imbalance at the lateral boundaries of parallel-plate chambers in electron beams has been studied with Monte Carlo simulation over a range of depths in water, and new correction factors, covering all depths and implementing the effective point of measurement concept, have been developed.
SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations
Ono, T; Araki, F
2014-06-01
Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using GMctdospp (IMPS, Germany), based on the EGSnrc user code. The X-ray spectra and a bowtie filter for the MC simulations were determined to coincide with measurements of the half-value layer (HVL) and the off-center ratio (OCR) profile in air. The MC dose was calibrated against absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images, and organ doses were evaluated from dose-volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for the brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for the brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for the lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), the mean doses for the kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for the kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses were considerably different from the CTDIvol values. Conclusion: MC dose distributions calculated using patient CT images are useful for evaluating the organ doses absorbed by individual patients.
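The last step above, turning a calculated dose grid into organ doses via dose-volume histograms, is mechanical. The sketch below uses a random toy dose grid and a cubic "organ" mask (both invented, standing in for the MC dose and the segmentation) to show the mean organ dose computed directly and from a differential DVH.

```python
import numpy as np

rng = np.random.default_rng(3)
dose = rng.gamma(4.0, 5.0, size=(16, 16, 16))   # toy MC dose grid, mGy
organ = np.zeros(dose.shape, dtype=bool)
organ[4:8, 4:8, 4:8] = True                     # toy organ segmentation

d = dose[organ]                                 # doses in the organ voxels
mean_direct = float(d.mean())

# differential DVH: voxel counts per dose bin; mean dose from bin centres
counts, edges = np.histogram(d, bins=50)
bin_centres = 0.5 * (edges[:-1] + edges[1:])
mean_from_dvh = float((counts * bin_centres).sum() / counts.sum())
```

The two estimates agree up to the binning resolution; in practice the mask comes from the patient segmentation and the grid from the calibrated MC calculation.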
Tsvetkov, Pavel V.; Ames II, David E.; Alajo, Ayodeji B.; Pritchard, Megan L.
2006-07-01
Partitioning and transmutation of minor actinides are expected to have a positive impact on the future of nuclear technology. Their deployment would lead to incineration of hazardous nuclides and could potentially provide additional fuel supply. The U.S. DOE NERI Project assesses the possibility, advantages and limitations of involving minor actinides as a fuel component. The analysis takes into consideration and compares capabilities of actinide-fueled VHTRs with pebble-bed and prismatic cores to approach a reactor lifetime long operation without intermediate refueling. A hybrid Monte Carlo-deterministic methodology has been adopted for coupled neutronics-thermal hydraulics design studies of VHTRs. Within the computational scheme, the key technical issues are being addressed and resolved by implementing efficient automated modeling procedures and sequences, combining Monte Carlo and deterministic approaches, developing and applying realistic 3D coupled neutronics-thermal-hydraulics models with multi-heterogeneity treatments, developing and performing experimental/computational benchmarks for model verification and validation, analyzing uncertainty effects and error propagation. This paper introduces the suggested modeling approach, discusses benchmark results and the preliminary analysis of actinide-fueled VHTRs. The presented up-to-date results are in agreement with the available experimental data. Studies of VHTRs with minor actinides suggest promising performance. (authors)
A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code
Energy Science and Technology Software Center (OSTI)
1998-06-12
TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
Berg, John M.; Veirs, D. Kirk; Vaughn, Randolph B.; Cisneros, Michael R.; Smith, Coleman A.
2000-06-01
Standard modeling approaches can produce the most likely values of the formation constants of metal-ligand complexes if a particular set of species containing the metal ion is known or assumed to exist in solution equilibrium with complexing ligands. Identifying the most likely set of species when more than one set is plausible is a more difficult problem to address quantitatively. A Monte Carlo method of data analysis is described that measures the relative abilities of different speciation models to fit optical spectra of open-shell actinide ions. The best model(s) can be identified from among a larger group of models initially judged to be plausible. The method is demonstrated by analyzing the absorption spectra of aqueous Pu(IV) titrated with nitrate ion at constant 2 molal ionic strength in aqueous perchloric acid. The best speciation model supported by the data is shown to include three Pu(IV) species with nitrate coordination numbers 0, 1, and 2. Formation constants are β₁ = 3.2 ± 0.5 and β₂ = 11.2 ± 1.2, where the uncertainties are 95% confidence limits estimated by propagating raw data uncertainties using Monte Carlo methods. Principal component analysis independently indicates three Pu(IV) complexes in equilibrium. (c) 2000 Society for Applied Spectroscopy.
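The Monte Carlo uncertainty-propagation strategy used for the formation constants can be sketched generically: perturb the raw data with its measurement noise, refit, and take percentiles of the fitted parameters. A linear toy model stands in here for the actual speciation fit; everything below is our illustration, not the authors' analysis:

```python
import numpy as np

def mc_confidence_interval(x, y, sigma, fit, n_trials=2000, level=95, seed=0):
    """Propagate per-point Gaussian measurement noise of width sigma into
    fitted parameters by repeated perturb-and-refit; returns the central
    `level`% interval for each parameter."""
    rng = np.random.default_rng(seed)
    params = np.array([fit(x, y + rng.normal(0.0, sigma, size=y.shape))
                       for _ in range(n_trials)])
    q_lo, q_hi = (100 - level) / 2, 100 - (100 - level) / 2
    lo, hi = np.percentile(params, [q_lo, q_hi], axis=0)
    return lo, hi

# toy stand-in for the speciation model: a straight line y = 3x + 1
x = np.linspace(0.0, 10.0, 50)
y = 3.0 * x + 1.0
lo, hi = mc_confidence_interval(x, y, sigma=0.5,
                                fit=lambda x, y: np.polyfit(x, y, 1))
```

The true slope and intercept should fall inside the reported 95% intervals, since the synthetic noise is zero-mean.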
O'Brien, M J; Brantley, P S
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^{21} = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e., the calculation is already load balanced. We also examine load-imbalanced calculations where each domain’s replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
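The idea of setting each domain's replication level proportional to its particle workload can be sketched with a simple largest-remainder allocation. This is our illustration of the proportionality rule only, not the authors' algorithm:

```python
def replication_levels(workloads, n_procs):
    """Assign each spatial domain a processor count proportional to its
    particle workload, with at least one processor per domain.
    Note: when many tiny domains each get the guaranteed minimum, the
    total can exceed n_procs; a production scheme would rebalance."""
    total = sum(workloads)
    base = [max(1, int(n_procs * w / total)) for w in workloads]
    # hand leftover processors to the largest fractional remainders
    leftovers = n_procs - sum(base)
    order = sorted(range(len(workloads)),
                   key=lambda i: n_procs * workloads[i] / total - base[i],
                   reverse=True)
    for i in order[:max(0, leftovers)]:
        base[i] += 1
    return base
```

For example, domains with particle counts 100, 300, and 600 on 10 processors get replication levels 1, 3, and 6.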
TU-F-18A-03: Improving Tissue Segmentation for Monte Carlo Dose Calculation Using DECT Data
Di Salvio, A; Bedwani, S; Carrier, J
2014-06-15
Purpose: To develop a new segmentation technique using dual energy CT (DECT) to overcome limitations related to segmentation from a standard Hounsfield unit (HU) to electron density (ED) calibration curve. Both methods are compared with a Monte Carlo analysis of dose distribution. Methods: DECT allows a direct calculation of both ED and effective atomic number (EAN) within a given voxel. The EAN is here defined as a function of the total electron cross-section of a medium. These values can be effectively acquired using a calibrated method from scans at two different energies. A prior stoichiometric calibration on a Gammex RMI phantom allows us to find the parameters to calculate EAN and ED within a voxel. Scans from a Siemens SOMATOM Definition Flash dual source system provided the data for our study. A Monte Carlo analysis compares dose distributions simulated by DOSXYZnrc, considering a head phantom defined by both segmentation techniques. Results: Results from depth dose and dose profile calculations show that materials with different atomic compositions but similar EAN present differences of less than 1%. Therefore, it is possible to define a short list of basis materials from which density can be adapted to imitate the interaction behavior of any tissue. Comparison of the dose distributions on both segmentations shows a difference of 50% in dose in areas surrounding bone at low energy. Conclusion: The presented segmentation technique allows a more accurate medium definition in each voxel, especially in areas of tissue transition. Since the behavior of human tissues is highly sensitive at low energies, this reduces the errors on calculated dose distribution. This method could be further developed to optimize the tissue characterization based on anatomic site.
Pilati, S.; Giorgini, S.; Sakkos, K.; Boronat, J.; Casulleras, J.
2006-10-15
By using exact path-integral Monte Carlo methods we calculate the equation of state of an interacting Bose gas as a function of temperature both below and above the superfluid transition. The universal character of the equation of state for dilute systems and low temperatures is investigated by modeling the interatomic interactions using different repulsive potentials corresponding to the same s-wave scattering length. The results obtained for the energy and the pressure are compared to the virial expansion for temperatures larger than the critical temperature. At very low temperatures we find agreement with the ground-state energy calculated using the diffusion Monte Carlo method.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Mayers, Matthew Z.; Berkelbach, Timothy C.; Hybertsen, Mark S.; Reichman, David R.
2015-10-09
Ground-state diffusion Monte Carlo is used to investigate the binding energies and intercarrier radial probability distributions of excitons, trions, and biexcitons in a variety of two-dimensional transition-metal dichalcogenide materials. We compare these results to approximate variational calculations, as well as to analogous Monte Carlo calculations performed with simplified carrier interaction potentials. Our results highlight the successes and failures of approximate approaches as well as the physical features that determine the stability of small carrier complexes in monolayer transition-metal dichalcogenide materials. In conclusion, we discuss points of agreement and disagreement with recent experiments.
Monte Carlo calculations of electron beam quality conversion factors for several ion chamber types
Muir, B. R.; Rogers, D. W. O.
2014-11-01
Purpose: To provide a comprehensive investigation of electron beam reference dosimetry using Monte Carlo simulations of the response of 10 plane-parallel and 18 cylindrical ion chamber types. Specific emphasis is placed on the determination of the optimal shift of the chambers’ effective point of measurement (EPOM) and beam quality conversion factors. Methods: The EGSnrc system is used for calculations of the absorbed dose to gas in ion chamber models and the absorbed dose to water as a function of depth in a water phantom on which cobalt-60 and several electron beam source models are incident. The optimal EPOM shifts of the ion chambers are determined by comparing calculations of R₅₀ converted from I₅₀ (calculated using ion chamber simulations in phantom) to R₅₀ calculated using simulations of the absorbed dose to water vs depth in water. Beam quality conversion factors are determined as the calculated ratio of the absorbed dose to water to the absorbed dose to air in the ion chamber at the reference depth in a cobalt-60 beam to that in electron beams. Results: For most plane-parallel chambers, the optimal EPOM shift is inside the active cavity but different from the shift determined with water-equivalent scaling of the front window of the chamber. These optimal shifts for plane-parallel chambers also reduce the scatter of beam quality conversion factors, k_Q, as a function of R₅₀. The optimal shift of cylindrical chambers is found to be less than the 0.5 r_cav recommended by current dosimetry protocols. In most cases, the values of the optimal shift are close to 0.3 r_cav. Values of k_ecal are calculated and compared to those from the TG-51 protocol, and differences are explained using accurate individual correction factors for a subset of ion chambers investigated. High-precision fits to beam quality conversion factors normalized to unity in a beam with R₅₀ = 7.5 cm (k′_Q) are provided.
These factors avoid the use of gradient correction factors as used in the TG-51 protocol, although a chamber-dependent optimal shift in the EPOM is required when using plane-parallel chambers, while no shift is needed with cylindrical chambers. The sensitivity of these results to parameters used to model the ion chambers is discussed and the uncertainty related to the practical use of these results is evaluated. Conclusions: These results will prove useful as electron beam reference dosimetry protocols are updated. The analysis of this work indicates that cylindrical ion chambers may be appropriate for use in low-energy electron beams, but measurements are required to characterize their use in these beams.
Clay, Raymond C.; Mcminis, Jeremy; McMahon, Jeffrey M.; Pierleoni, Carlo; Ceperley, David M.; Morales, Miguel A.
2014-05-01
The ab initio phase diagram of dense hydrogen is very sensitive to errors in the treatment of electronic correlation. Recently, it has been shown that the choice of the density functional has a large effect on the predicted location of both the liquid-liquid phase transition and the solid insulator-to-metal transition in dense hydrogen. To identify the most accurate functional for dense hydrogen applications, we systematically benchmark some of the most commonly used functionals using quantum Monte Carlo. By considering several measures of functional accuracy, we conclude that the van der Waals and hybrid functionals significantly outperform local density approximation and Perdew-Burke-Ernzerhof. We support these conclusions by analyzing the impact of functional choice on structural optimization in the molecular solid, and on the location of the liquid-liquid phase transition.
Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Velizhanin, Kirill A.; Saxena, Avadh
2015-11-11
Among the most striking features of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is a strong Coulomb interaction between charge carriers resulting in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which can remain stable at near-room temperatures and contribute significantly to the optical properties of such materials. In this work we have used the path integral Monte Carlo methodology to numerically study properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and benchmarking of theoretical and computational models.
Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.
2014-10-01
We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and that the high-pressure melt curve is well described by a Lindemann behavior up to at least 80 GPa, a finding in stark contrast to DAC data.
Sunny, E. E.; Martin, W. R. [University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor MI 48109 (United States)
2013-07-01
Current Monte Carlo codes use one of three models to model neutron scattering in the epithermal energy range: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. The free gas scattering model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that using the free gas scattering model in the vicinity of the resonances in the lower epithermal range can under-predict resonance absorption due to the up-scattering phenomenon. Existing methods all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame. In this paper, we will present a new sampling methodology that (1) accounts for the energy-dependent scattering cross sections in the collision analysis and (2) acts in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on-the-fly during the random walk of the neutron. Results for criticality studies on fuel pin and fuel assembly calculations using these methods showed very close comparison to results using the reference Doppler-broadened rejection correction (DBRC) scheme. (authors)
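For contrast with the paper's energy-dependent scheme, the baseline constant-cross-section free gas model can be sampled with a textbook rejection step: draw a target velocity from the Maxwellian and accept with probability proportional to the relative speed. A minimal sketch under those assumptions (names and unit conventions are ours):

```python
import math
import random

def sample_target_velocity(v_n, kT_over_m, rng=random.Random(0)):
    """Rejection-sample a target-nucleus velocity for free-gas scattering
    with a constant scattering cross section.

    The target-velocity PDF is a Maxwellian weighted by the relative speed
    |v_n - v_t|. Since |v_n - v_t| <= |v_n| + |v_t|, we can sample the plain
    Maxwellian (three Gaussian components) and accept with probability
    |v_n - v_t| / (|v_n| + |v_t|)."""
    sigma = math.sqrt(kT_over_m)  # 1-D thermal speed spread
    speed_n = math.sqrt(sum(c * c for c in v_n))
    while True:
        v_t = [rng.gauss(0.0, sigma) for _ in range(3)]
        speed_t = math.sqrt(sum(c * c for c in v_t))
        v_rel = math.sqrt(sum((a - b) ** 2 for a, b in zip(v_n, v_t)))
        if rng.random() * (speed_n + speed_t) < v_rel:
            return v_t
```

The accepted velocity would then feed the usual center-of-mass collision analysis, which is exactly the step the paper's laboratory-frame method avoids.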
SU-E-T-277: Raystation Electron Monte Carlo Commissioning and Clinical Implementation
Allen, C; Sansourekidou, P; Pavord, D
2014-06-01
Purpose: To evaluate the Raystation v4.0 Electron Monte Carlo algorithm for an Elekta Infinity linear accelerator and commission it for clinical use. Methods: A total of 199 tests were performed (75 Export and Documentation, 20 PDD, 30 Profiles, 4 Obliquity, 10 Inhomogeneity, 55 MU Accuracy, and 5 Grid and Particle History). Export and documentation tests were performed with respect to MOSAIQ (Elekta AB) and RadCalc (Lifeline Software Inc). Mechanical jaw parameters and cutout magnifications were verified. PDD and profiles for open cones and cutouts were extracted and compared with water tank measurements. Obliquity and inhomogeneity for bone and air calculations were compared to film dosimetry. MU calculations for open cones and cutouts were performed and compared to both RadCalc and simple hand calculations. Grid size and particle histories were evaluated per energy for statistical uncertainty performance. Acceptability was categorized as follows: performs as expected, negligible impact on workflow, marginal impact, critical impact or safety concern, and catastrophic impact or safety concern. Results: Overall results are: 88.8% perform as expected, 10.2% negligible, 2.0% marginal, 0% critical and 0% catastrophic. Results per test category are as follows: Export and Documentation: 100% perform as expected, PDD: 100% perform as expected, Profiles: 66.7% perform as expected, 33.3% negligible, Obliquity: 100% marginal, Inhomogeneity: 50% perform as expected, 50% negligible, MU Accuracy: 100% perform as expected, Grid and particle histories: 100% negligible. To achieve distributions with a satisfactory smoothness level, 5,000,000 particle histories were used. Calculation time was approximately 1 hour. Conclusion: Raystation electron Monte Carlo is acceptable for clinical use. All of the issues encountered have acceptable workarounds. Known issues were reported to RaySearch and will be resolved in upcoming releases.
Neutrinos from WIMP annihilations obtained using a full three-flavor Monte Carlo approach
Blennow, Mattias; Ohlsson, Tommy; Edsjoe, Joakim E-mail: edsjo@physto.se
2008-01-15
Weakly interacting massive particles (WIMPs) are one of the main candidates for making up the dark matter in the Universe. If these particles make up the dark matter, then they can be captured by the Sun or the Earth, sink to the respective cores, annihilate, and produce neutrinos. Thus, these neutrinos can be a striking dark matter signature at neutrino telescopes looking towards the Sun and/or the Earth. Here, we improve previous analyses on computing the neutrino yields from WIMP annihilations in several respects. We include neutrino oscillations in a full three-flavor framework as well as all effects from neutrino interactions on the way through the Sun (absorption, energy loss, and regeneration from tau decays). In addition, we study the effects of non-zero values of the mixing angle θ₁₃ as well as the normal and inverted neutrino mass hierarchies. Our study is performed in an event-based setting which makes these results very useful both for theoretical analyses and for building a neutrino telescope Monte Carlo code. All our results for the neutrino yields, as well as our Monte Carlo code, are publicly available. We find that the yield of muon-type neutrinos from WIMP annihilations in the Sun is enhanced or suppressed, depending on the dominant WIMP annihilation channel. This effect is due to an effective flavor mixing caused by neutrino oscillations. For WIMP annihilations inside the Earth, the distance from source to detector is too small to allow for any significant amount of oscillations at the neutrino energies relevant for neutrino telescopes.
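For orientation, the familiar two-flavor vacuum limit of the oscillation probability, far simpler than the full three-flavor, matter-effect treatment used in the paper, reads P = sin²(2θ) sin²(1.27 Δm² L/E) with Δm² in eV², L in km, and E in GeV:

```python
import math

def p_transition(theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor vacuum oscillation probability
    P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E).
    Shown only to fix conventions; it neglects the three-flavor mixing
    and solar matter effects central to the paper's treatment."""
    return math.sin(2.0 * theta) ** 2 * \
        math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2
```

With θ = 0 the probability vanishes, and it is bounded by sin²(2θ) for any baseline and energy.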
Monte Carlo Simulation of Light Transport in Tissue, Beta Version
Energy Science and Technology Software Center (OSTI)
2003-12-09
Understanding light-tissue interaction is fundamental in the field of Biomedical Optics. It has important implications for both therapeutic and diagnostic technologies. In this program, light transport in scattering tissue is modeled by absorption and scattering events as each photon travels through the tissue. The path of each photon is determined statistically by calculating probabilities of scattering and absorption. Other measured quantities are total reflected light, total transmitted light, and total heat absorbed.
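The random walk described here can be sketched as a toy 1-D slab simulation: exponentially distributed free paths, absorption with probability μa/μt at each collision, isotropic rescattering otherwise. This is our illustration of the general technique, not the distributed code:

```python
import math
import random

def simulate_slab(mu_a, mu_s, thickness, n_photons=20000,
                  rng=random.Random(1)):
    """Toy 1-D Monte Carlo photon transport through a slab.
    Returns (reflected, absorbed, transmitted) fractions."""
    mu_t = mu_a + mu_s
    tally = {"R": 0, "A": 0, "T": 0}
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                         # enter at z=0 heading inward
        while True:
            # exponentially distributed free path; 1-U avoids log(0)
            z += mu * (-math.log(1.0 - rng.random()) / mu_t)
            if z < 0.0:
                tally["R"] += 1; break           # back-scattered out
            if z > thickness:
                tally["T"] += 1; break           # transmitted through
            if rng.random() < mu_a / mu_t:
                tally["A"] += 1; break           # absorbed at this collision
            mu = 2.0 * rng.random() - 1.0        # isotropic rescatter
    return tuple(tally[k] / n_photons for k in ("R", "A", "T"))
```

In a pure absorber (μs = 0) nothing is ever reflected, since the first collision always absorbs; the three fractions always sum to one.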
Monte Carlo Simulations for Homeland Security Using Anthropomorphic Phantoms
Burns, Kimberly A.
2008-01-01
A radiological dispersion device (RDD) is a device which deliberately releases radioactive material for the purpose of causing terror or harm. In the event that a dirty bomb is detonated, there may be airborne radioactive material that can be inhaled, as well as material that settles on individuals, leading to external contamination.
Prasad, Manish; Conforti, Patrick F.; Garrison, Barbara J.
2007-08-28
The coarse grained chemical reaction model is enhanced to build a molecular dynamics (MD) simulation framework with an embedded Monte Carlo (MC) based reaction scheme. The MC scheme utilizes predetermined reaction chemistry, energetics, and rate kinetics of materials to incorporate chemical reactions occurring in a substrate into the MD simulation. The kinetics information is utilized to set the probabilities for the types of reactions to perform based on radical survival times and reaction rates. Implementing a reaction involves changing the reactant species types, which alters their interaction potentials and thus produces the required energy change. We discuss the application of this method to study the initiation of ultraviolet laser ablation in poly(methyl methacrylate). The use of this scheme enables the modeling of all possible photoexcitation pathways in the polymer. It also permits a direct study of the role of thermal, mechanical, and chemical processes that can set off ablation. We demonstrate that the role of laser induced heating, thermomechanical stresses, pressure wave formation and relaxation, and thermochemical decomposition of the polymer substrate can be investigated directly by suitably choosing the potential energy and chemical reaction energy landscape. The results highlight the usefulness of such a modeling approach by showing that various processes in polymer ablation are intricately linked, leading to the transformation of the substrate and its ejection. The method, in principle, can be utilized to study systems where chemical reactions are expected to play a dominant role or interact strongly with other physical processes.
Fang, Yuan; Karim, Karim S.; Badano, Aldo
2014-01-15
Purpose: The authors describe the modification to a previously developed Monte Carlo model of semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package, ARTEMIS, developed byFang et al. [“Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se,” Med. Phys. 39(1), 308–319 (2012)]. The burst model generates a cloud of electron-hole pairs based on electron velocity, energy deposition, and material parameters distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). A simple first-hit (FH) and a more detailed but computationally expensive nearest-neighbor (NN) recombination algorithms are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/?m, while the recombination fraction decreased for SSA compared to the SUV model at 30 V/?m. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated with the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature. 
For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation of many electron-hole pairs. The SSA model is more sensitive to the effect of the electric field than the SUV model, and the NN and FH recombination algorithms did not significantly affect simulation results.
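The two burst geometries compared above (SUV vs SSA) differ only in how electron-hole pair thermalization sites are placed around the interaction point; a minimal sketch, with parameter names that are ours rather than ARTEMIS's:

```python
import math
import random

def burst_positions(n_pairs, r0, model="SUV", rng=random.Random(2)):
    """Place electron-hole pair sites either uniformly inside a sphere of
    radius r0 ("SUV") or on its surface ("SSA"). Directions come from a
    normalized Gaussian triple (isotropic); for SUV the radius uses the
    cube-root transform so volume density is uniform."""
    points = []
    for _ in range(n_pairs):
        x, y, z = (rng.gauss(0.0, 1.0) for _ in range(3))
        norm = math.sqrt(x * x + y * y + z * z)
        r = r0 if model == "SSA" else r0 * rng.random() ** (1.0 / 3.0)
        points.append((r * x / norm, r * y / norm, r * z / norm))
    return points
```

A recombination pass (first-hit or nearest-neighbor) would then pair up electrons and holes drawn from such clouds; only the placement step is sketched here.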
Cranmer-Sargison, G.; Weston, S.; Evans, J. A.; Sidhu, N. P.; Thwaites, D. I.
2011-12-15
Purpose: The goal of this work was to implement a recently proposed small field dosimetry formalism [Alfonso et al., Med. Phys. 35(12), 5179-5186 (2008)] for a comprehensive set of diode detectors and provide the required Monte Carlo generated factors to correct measurement. Methods: Jaw collimated square small field sizes of side 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, and 3.0 cm normalized to a reference field of 5.0 cm x 5.0 cm were used throughout this study. Initial linac modeling was performed with electron source parameters at 6.0, 6.1, and 6.2 MeV with the Gaussian FWHM decreased in steps of 0.010 cm from 0.150 to 0.100 cm. DOSRZnrc was used to develop models of the IBA stereotactic field diode (SFD) as well as the PTW T60008, T60012, T60016, and T60017 field diodes. Simulations were run and isocentric, detector specific, output ratios (OR_det) calculated at depths of 1.5, 5.0, and 10.0 cm. This was performed using the following source parameter subset: 6.1 and 6.2 MeV with a FWHM = 0.100, 0.110, and 0.120 cm. The source parameters were finalized by comparing experimental detector specific output ratios with simulation. Simulations were then run with the active volume and surrounding materials set to water and the replacement correction factors calculated according to the newly proposed formalism. Results: In all cases, the experimental field size widths (at the 50% level) were found to be smaller than the nominal, and therefore, the simulated field sizes were adjusted accordingly. At a FWHM = 0.150 cm simulation produced penumbral widths that were too broad. The fit improved as the FWHM was decreased, yet for all but the smallest field size worsened again at a FWHM = 0.100 cm. The simulated OR_det were found to be greater than, equivalent to and less than experiment for spot size FWHM = 0.100, 0.110, and 0.120 cm, respectively. This is due to the change in source occlusion as a function of FWHM and field size.
The corrections required for the 0.5 cm field size were 0.95 (±1.0%) for the SFD, T60012 and T60017 diodes and 0.90 (±1.0%) for the T60008 and T60016 diodes, indicating measured output ratios to be 5% and 10% high, respectively. Our results also revealed the correction factors to be the same within statistical variation at all depths considered. Conclusions: A number of general conclusions are evident: (1) small field OR_det are very sensitive to the simulated source parameters, and therefore, rigorous Monte Carlo linac model commissioning, with respect to measurement, must be pursued prior to use, (2) backscattered dose to the monitor chamber should be included in simulated OR_det calculations, (3) the corrections required for diode detectors are design dependent and therefore detailed detector modeling is required, and (4) the reported detector specific correction factors may be applied to experimental small field OR_det consistent with those presented here.
Wagner, John C; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Turner, John A
2011-01-01
This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. 
The hybrid method development is based on an extension of the FW-CADIS method, which attempts to achieve uniform statistical uncertainty throughout a designated problem space. The MC DD development is being implemented in conjunction with the Denovo deterministic radiation transport package to have direct access to the 3-D, massively parallel discrete-ordinates solver (to support the hybrid method) and the associated parallel routines and structure. This paper describes the hybrid method, its implementation, and initial testing results for a realistic 2-D quarter core pressurized-water reactor model and also describes the MC DD algorithm and its implementation.
Wagner, John C; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Turner, John A
2010-01-01
This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform "real" commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the "gold standard" for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation.
The hybrid method development is based on an extension of the FW-CADIS method, which attempts to achieve uniform statistical uncertainty throughout a designated problem space. The MC DD development is being implemented in conjunction with the Denovo deterministic radiation transport package to have direct access to the 3-D, massively parallel discrete-ordinates solver (to support the hybrid method) and the associated parallel routines and structure. This paper describes the hybrid method, its implementation, and initial testing results for a realistic 2-D quarter core pressurized-water reactor model and also describes the MC DD algorithm and its implementation.
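The core of the CADIS/FW-CADIS recipe described above is that a particle's statistical weight is set inversely proportional to its adjoint (importance) flux, so that weight times importance stays roughly constant across the problem. A minimal sketch of that bookkeeping follows; the region names, importance values, and window width are invented for illustration and are not taken from this work:

```python
def cadis_weight_windows(adjoint_flux, response, ratio=5.0):
    """Weight-window bounds per region from an adjoint (importance) map.

    adjoint_flux: estimated importance phi+ per region (invented values below)
    response:     estimated detector response used as normalisation
    ratio:        upper/lower window width
    """
    windows = {}
    for region, phi_adj in adjoint_flux.items():
        w_target = response / phi_adj           # w ~ R/phi+: weight x importance = const
        lower = 2.0 * w_target / (ratio + 1.0)  # centre the window on w_target
        windows[region] = (lower, lower * ratio)
    return windows

# invented importances: the detector region is most important, so particles
# there are split to low weight; the shield is least important, so they roulette
adj = {"core": 8.0, "shield": 0.4, "detector": 40.0}
ww = cadis_weight_windows(adj, response=2.0)
```

In FW-CADIS the adjoint source is additionally weighted by an estimate of the forward flux, so that relative uncertainties are equalized across many tallies rather than optimized for a single response.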
Kuss, M.; Markel, T.; Kramer, W.
2011-01-01
Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service-life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis reviews the distribution transformer load models developed in the IEC 60076 standard and applies them to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. PHEVs are expected to be charged primarily at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy and thus increasing costs to utilities and consumers. A Monte Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate is developed, based on the energy needs of plug-in vehicles loading a residential transformer.
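The aging model referenced above can be sketched with the IEC 60076-7 relative ageing rate for non-thermally-upgraded paper, V = 2^((θ_h − 98)/6). Everything else in this sketch (the linear hot-spot rise per charging vehicle, the 80% plug-in probability, the fleet sizes) is invented for illustration; a real study would drive the full IEC thermal model with the daily load profile:

```python
import random

def relative_aging_rate(hotspot_c):
    # IEC 60076-7 relative ageing rate for non-thermally-upgraded paper:
    # V = 2 ** ((theta_h - 98) / 6); V = 1 at the rated 98 C hot-spot
    return 2.0 ** ((hotspot_c - 98.0) / 6.0)

def simulate_day(n_vehicles, kw_per_vehicle, base_hotspot_c,
                 scenarios=100, seed=0):
    """Monte Carlo sweep over charging scenarios for one transformer-day.

    The 80% plug-in probability and the linear hot-spot rise per charging
    vehicle are made-up coefficients, not part of IEC 60076-7."""
    rng = random.Random(seed)
    rates = []
    for _ in range(scenarios):
        charging = sum(rng.random() < 0.8 for _ in range(n_vehicles))
        hotspot = base_hotspot_c + 4.0 * charging * (kw_per_vehicle / 3.3)
        rates.append(relative_aging_rate(hotspot))
    return sum(rates) / len(rates)  # mean ageing rate across scenarios

# faster charging accelerates ageing for the same fleet
slow = simulate_day(5, 3.3, 80.0)
fast = simulate_day(5, 6.6, 80.0)
```

Sweeping `n_vehicles`, transformer size (via `base_hotspot_c`), and `kw_per_vehicle` reproduces the kind of parameter scan the abstract describes.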
Krueger, Rachel A.; Haibach, Frederick G.; Fry, Dana L.; Gomez, Maria A.
2015-04-21
A centrality measure based on the time of first returns rather than the number of steps is developed and applied to finding proton traps and access points to proton highways in the doped perovskite oxides: AZr{sub 0.875}D{sub 0.125}O{sub 3}, where A is Ba or Sr and the dopant D is Y or Al. The high centrality region near the dopant is wider in the SrZrO{sub 3} systems than the BaZrO{sub 3} systems. In the aluminum-doped systems, a region of intermediate centrality (secondary region) is found in a plane away from the dopant. Kinetic Monte Carlo (kMC) trajectories show that this secondary region is an entry to fast conduction planes in the aluminum-doped systems in contrast to the highest centrality area near the dopant trap. The yttrium-doped systems do not show this secondary region because the fast conduction routes are in the same plane as the dopant and hence already in the high centrality trapped area. This centrality measure complements kMC by highlighting key areas in trajectories. The limiting activation barriers found via kMC are in very good agreement with experiments and related to the barriers to escape dopant traps.
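The kMC trajectories mentioned above follow the standard residence-time (BKL/Gillespie) algorithm: draw an exponential waiting time from the total escape rate of the current site, then pick a hop in proportion to its rate. A self-contained sketch on a toy three-site graph with one slow-escape "trap"; the rates are illustrative, not the published proton-hop barriers:

```python
import math
import random

def kmc_trajectory(rates, start, steps, seed=1):
    """Residence-time (BKL/Gillespie) kinetic Monte Carlo on a site graph.

    rates[site] is a list of (neighbour, rate) pairs."""
    rng = random.Random(seed)
    site, t = start, 0.0
    visits = {s: 0 for s in rates}
    for _ in range(steps):
        visits[site] += 1
        channels = rates[site]
        total = sum(r for _, r in channels)
        t += -math.log(rng.random()) / total   # exponential waiting time
        x = rng.random() * total               # choose a hop ~ its rate
        for nbr, r in channels:
            x -= r
            if x <= 0.0:
                site = nbr
                break
    return visits, t

# three-site ring with a slow-escape "trap" at site 0 (dopant-like)
rates = {0: [(1, 0.1), (2, 0.1)],
         1: [(0, 1.0), (2, 1.0)],
         2: [(0, 1.0), (1, 1.0)]}
visits, t = kmc_trajectory(rates, start=0, steps=20000)
```

Most of the simulated time `t` accumulates while the walker sits in the trap, which is exactly why the limiting activation barrier is the barrier to escape the dopant trap.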
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y.; Yun, S.
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5, into which a high-order Legendre scattering cross section generation module was added. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and both were applied in the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross section data were used. A hybrid MCNP5/DIF3D calculation was used to analyze the core model: the cross section data were generated using MCNP5, and the k{sub eff} and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k{sub eff}, the 9-group MCNP5/DIF3D analysis has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams
Vandervoort, Eric J.; Cygler, Joanna E.; The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5; Department of Physics, Carleton University, Ottawa, Ontario K1S 5B6; Tchistiakova, Ekaterina; Department of Medical Biophysics, University of Toronto, Ontario M5G 2M9; Heart and Stroke Foundation Centre for Stroke Recovery, Sunnybrook Research Institute, University of Toronto, Ontario M4N 3M5; La Russa, Daniel J.; The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5
2014-02-15
Purpose: In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Methods: Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measurements and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Results: Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 × 5 cm{sup 2}. Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (the minimum percentage of measurements passing a 2%/2 mm 2D gamma index criterion for any applicator or energy was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a 3-dimensional 3%/2 mm γ-criterion) provided that the steep dose gradient in the depth direction is considered. Conclusions: Clinically acceptable agreement (at the 2%/2 mm level) between measurements and calculated data in water is obtained for this dose calculation algorithm. Radiochromic film is a useful tool to evaluate the accuracy of electron MC treatment planning systems in heterogeneous media.
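The gamma index used for these comparisons combines a dose-difference and a distance-to-agreement criterion; a point passes when the minimum combined metric is at most 1. A simplified 1D global-gamma sketch (exhaustive search, no interpolation); the profile values in the test usage are made up:

```python
import math

def gamma_index(ref, evl, spacing_mm, dose_tol=0.02, dist_mm=2.0):
    """Simplified 1D global gamma analysis (default 2%/2 mm).

    For each reference point, search every evaluated point for the minimum
    combined dose-difference / distance-to-agreement metric."""
    d_max = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(evl):
            ddose = (de - dr) / (dose_tol * d_max)  # dose axis, in tolerances
            ddist = (j - i) * spacing_mm / dist_mm  # space axis, in tolerances
            best = min(best, math.hypot(ddose, ddist))
        gammas.append(best)
    return gammas

def pass_rate(gammas):
    # a point passes the criterion when gamma <= 1
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

Clinical implementations interpolate the evaluated distribution and work in 2D or 3D, but the pass/fail logic is the same.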
Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Hehr, Brian Douglas
2014-11-25
The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) "Blue Room" facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.
Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.; Morales, Maguel A.
2016-01-19
An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT-based first-principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC-based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP, and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
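The MLMC estimator referenced above telescopes the expectation across discretization levels: most samples are taken on cheap coarse levels, and only a few on expensive fine ones, with each level estimating the fine-minus-coarse correction from the same random input. A toy sketch; the "discretization" here is an invented stratified approximation of E[U²], not a streamline flow solver:

```python
import random

def mlmc(estimator, levels, samples_per_level, seed=2):
    """Multilevel Monte Carlo: telescoping sum of level-difference estimates.

    estimator(level, rng) must return (fine, coarse) evaluations of the
    quantity of interest computed from the SAME random input, with
    coarse = None on the coarsest level."""
    rng = random.Random(seed)
    total = 0.0
    for lvl, n in zip(levels, samples_per_level):
        acc = 0.0
        for _ in range(n):
            fine, coarse = estimator(lvl, rng)
            acc += fine - (coarse if coarse is not None else 0.0)
        total += acc / n   # few samples needed on expensive fine levels
    return total

def toy(level, rng):
    # invented "discretisation": estimate E[U^2] from the midpoint of the
    # stratum containing U, with 2**(level+1) strata (finer = more accurate)
    u = rng.random()
    def approx(m):
        k = min(int(u * m), m - 1)
        mid = (k + 0.5) / m
        return mid * mid
    fine = approx(2 ** (level + 1))
    coarse = approx(2 ** level) if level > 0 else None
    return fine, coarse

# most samples on the cheap coarse level, few on the fine ones
est = mlmc(toy, levels=[0, 1, 2, 3], samples_per_level=[4000, 1000, 400, 200])
```

Because the level differences have small variance, the fine levels need few samples, which is the source of the MLMC speedup over plain MC at the finest resolution.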
Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-03-15
Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multi leaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and treatment with a bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single segment electron treatment plan in terms of organs at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V{sub 95%} increased from 90% to 96% and V{sub 107%} decreased from 8% to nearly 0%. 
For a clinical breast boost irradiation, the MERT plan led to a similar homogeneity in the PTV compared to the standard treatment plan while the mean body dose was lower for the MERT plan. Regarding the second clinical case, a whole breast treatment, MERT resulted in a reduction of the lung volume receiving more than 45% of the prescribed dose when compared to the standard plan. On the other hand, the MERT plan leads to a larger low-dose lung volume and a degraded dose homogeneity in the PTV. For the clinical cases evaluated in this work, treatment plans using the BolusECT technique resulted in a more homogenous PTV and CTV coverage but higher doses to the OARs than the MERT plans. Conclusions: MERT treatments were successfully planned for phantom and clinical cases, applying a newly developed intuitive and efficient forward planning strategy that employs a MC based electron beam model for pMLC shaped electron beams. It is shown that MERT can lead to a dose reduction in OARs compared to other methods. The process of feathering MERT segments results in an improvement of the dose homogeneity in the PTV.
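The automated segment weight optimization in the planning chain above is not specified in detail here; as a stand-in, one common formulation is nonnegative least squares on the per-segment dose distributions, solved for example by projected gradient descent. The segment dose vectors and target below are invented toy values, not clinical data:

```python
def optimise_weights(seg_doses, target, iters=2000, lr=0.1):
    """Nonnegative least-squares segment weighting by projected gradient
    descent: minimise ||sum_s w_s * d_s - target||^2 subject to w_s >= 0."""
    n_seg, n_vox = len(seg_doses), len(target)
    w = [1.0] * n_seg
    for _ in range(iters):
        # residual r = sum_s w_s d_s - target
        r = [sum(w[s] * seg_doses[s][v] for s in range(n_seg)) - target[v]
             for v in range(n_vox)]
        for s in range(n_seg):
            grad = sum(r[v] * seg_doses[s][v] for v in range(n_vox))
            w[s] = max(0.0, w[s] - lr * grad)  # project onto w >= 0
    return w

# two toy segments covering opposite halves of a 4-voxel target
segs = [[1.0, 1.0, 0.2, 0.0],
        [0.0, 0.2, 1.0, 1.0]]
w = optimise_weights(segs, target=[1.0, 1.0, 1.0, 1.0])
```

In practice each `d_s` would be a Monte Carlo dose distribution for one feathered segment, and the objective would also penalize dose to organs at risk.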
BENCHMARK TESTS FOR MARKOV CHAIN MONTE CARLO FITTING OF EXOPLANET ECLIPSE OBSERVATIONS
Rogers, Justin; Lopez-Morales, Mercedes; Apai, Daniel; Adams, Elisabeth
2013-04-10
Ground-based observations of exoplanet eclipses provide important clues to the planets' atmospheric physics, yet systematics in light curve analyses are not fully understood. It is unknown if measurements suggesting near-infrared flux densities brighter than models predict are real, or artifacts of the analysis processes. We created a large suite of model light curves, using both synthetic and real noise, and tested the common process of light curve modeling and parameter optimization with a Markov Chain Monte Carlo algorithm. With synthetic white noise models, we find that input eclipse signals are generally recovered within 10% accuracy for eclipse depths greater than the noise amplitude, and to smaller depths for higher sampling rates and longer baselines. Red noise models see greater discrepancies between input and measured eclipse signals, often biased in one direction. Finally, we find that in real data, systematic biases result even with a complex model to account for trends, and significant false eclipse signals may appear in a non-Gaussian distribution. To quantify the bias and validate an eclipse measurement, we compare both the planet-hosting star and several of its neighbors to a separately chosen control sample of field stars. Re-examining the Rogers et al. Ks-band measurement of CoRoT-1b finds an eclipse 3190{sup +370}{sub -440} ppm deep centered at {phi}{sub me} = 0.50418{sup +0.00197}{sub -0.00203}. Finally, we provide and recommend the use of selected data sets we generated as a benchmark test for eclipse modeling and analysis routines, and propose criteria to verify eclipse detections.
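The light-curve fitting procedure being benchmarked can be sketched with a Metropolis sampler over a single parameter, the depth of a box-shaped eclipse model. The synthetic light curve (3000 ppm depth, white noise) and all numbers here are illustrative, not the paper's data sets, and real analyses fit several parameters plus detrending terms jointly:

```python
import math
import random

def log_like(depth, times, flux, sigma, t0=0.5, half_dur=0.05):
    # box-shaped eclipse: flux dips by `depth` inside the eclipse window
    ll = 0.0
    for t, f in zip(times, flux):
        model = 1.0 - depth if abs(t - t0) < half_dur else 1.0
        ll += -0.5 * ((f - model) / sigma) ** 2
    return ll

def mcmc_depth(times, flux, sigma, steps=4000, step_size=2e-4, seed=3):
    """Metropolis random-walk sampler for the eclipse depth."""
    rng = random.Random(seed)
    depth = 0.0
    ll = log_like(depth, times, flux, sigma)
    chain = []
    for _ in range(steps):
        prop = depth + rng.gauss(0.0, step_size)
        llp = log_like(prop, times, flux, sigma)
        if math.log(rng.random() + 1e-300) < llp - ll:  # accept/reject
            depth, ll = prop, llp
        chain.append(depth)
    return chain

# synthetic light curve: 3000 ppm eclipse plus white noise
noise = random.Random(4)
times = [i / 400.0 for i in range(400)]
flux = [1.0 - (0.003 if abs(t - 0.5) < 0.05 else 0.0) + noise.gauss(0.0, 0.001)
        for t in times]
chain = mcmc_depth(times, flux, sigma=0.001)
fit = sum(chain[2000:]) / 2000.0  # posterior mean after burn-in
```

With white noise the posterior mean recovers the injected depth; the paper's point is that correlated (red) noise breaks this clean behavior and biases the recovered depth.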
Structural Stability and Defect Energetics of ZnO from Diffusion Quantum Monte Carlo
Santana Palacio, Juan A; Krogel, Jaron T; Kim, Jeongnim; Kent, Paul R; Reboredo, Fernando A
2015-01-01
We have applied the many-body ab-initio diffusion quantum Monte Carlo (DMC) method to study Zn and ZnO crystals under pressure, and the energetics of the oxygen vacancy, zinc interstitial and hydrogen impurities in ZnO. We show that DMC is an accurate and practical method that can be used to characterize multiple properties of materials that are challenging for density functional theory approximations. DMC agrees with experimental measurements to within 0.3 eV, including the band-gap of ZnO, the ionization potential of O and Zn, and the atomization energy of O2, ZnO dimer, and wurtzite ZnO. DMC predicts the oxygen vacancy as a deep donor with a formation energy of 5.0(2) eV under O-rich conditions and thermodynamic transition levels located between 1.8 and 2.5 eV from the valence band maximum. Our DMC results indicate that the concentration of zinc interstitial and hydrogen impurities in ZnO should be low under n-type, and Zn- and H-rich conditions because these defects have formation energies above 1.4 eV under these conditions. Comparison of DMC and hybrid functionals shows that these DFT approximations can be parameterized to yield a generally correct qualitative description of ZnO. However, the formation energy of defects in ZnO evaluated with DMC and hybrid functionals can differ by more than 0.5 eV.
Interpretation of 3D void measurements with Tripoli4.6/JEFF3.1.1 Monte Carlo code
Blaise, P.; Colomba, A.
2012-07-01
The present work details the first analysis of the 3D void phase conducted during the EPICURE/UM17x17/7% mixed UOX/MOX configuration. This configuration is composed of a homogeneous central 17x17 MOX-7% assembly, surrounded by portions of 17x17 UO2 assemblies with guide-tubes. The void bubble is modelled by a small waterproof 5x5 fuel pin parallelepiped box of 11 cm height, placed in the centre of the MOX assembly. This bubble, initially placed at the core mid-plane, is then moved to different axial positions to study the evolution of the axial perturbation in the core. Then, to simulate the growth of this bubble and understand the effects of increased void fraction along the fuel pin, 3 and 5 bubbles have been stacked axially from the core mid-plane. The C/E comparisons obtained with the Monte Carlo code Tripoli4 for both radial and axial fission rate distributions, and in particular the reproduction of the very important flux gradients at the void/water interfaces as the bubble is displaced along the z-axis, are very satisfactory. This demonstrates both the capability of the code and its library to reproduce this kind of situation, as well as the very good quality of the experimental results, confirming the UM-17x17 as an excellent experimental benchmark for 3D code validation. This work has been performed within the frame of the V&V program for the future APOLLO3 deterministic code of CEA starting in 2012, and its V&V benchmarking database. (authors)
Su, L.; Du, X.; Liu, T.; Xu, X. G.
2013-07-01
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energy considered ranges from a few hundred keV to 30 MeV. Moller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the Continuously Slowing Down Approximation (CSDA). Photon transport was dealt with using the delta tracking method. Photoelectric effect, Compton scattering and pair production were modeled. Voxelised geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well tested MC codes. Using six GPU cards, 6x10{sup 6} histories of electrons were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a CPU with a single core used. (authors)
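The delta (Woodcock) tracking used for photon transport in ARCHER samples path lengths against a majorant cross section and then accepts a collision as real with probability σ_t(x)/σ_majorant, which avoids tracking to material boundaries and suits voxelised geometry. A 1D sketch with made-up cross sections; real collisions are treated as pure absorption for simplicity:

```python
import math
import random

def delta_track(slabs, sigma_majorant, n=20000, seed=5):
    """Woodcock (delta) tracking of photons through a 1D slab stack.

    slabs: list of (thickness_cm, sigma_t_per_cm); a tentative collision is
    accepted as real with probability sigma_t/sigma_majorant, otherwise it
    is a virtual collision and the flight continues."""
    rng = random.Random(seed)
    total = sum(th for th, _ in slabs)

    def sigma_at(x):
        for th, sig in slabs:
            if x < th:
                return sig
            x -= th
        return 0.0

    transmitted = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += -math.log(rng.random()) / sigma_majorant  # sample vs majorant
            if x >= total:
                transmitted += 1
                break
            if rng.random() < sigma_at(x) / sigma_majorant:  # real collision
                break
    return transmitted / n

# water-aluminium-water-like stack; expect exp(-1.3) ~ 0.27 transmission
p = delta_track([(2.0, 0.2), (1.0, 0.5), (2.0, 0.2)], sigma_majorant=0.5)
```

The branch-light inner loop is one reason delta tracking maps well onto GPUs: all particles sample against the same majorant regardless of which voxel they are in.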
Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin
2014-08-21
A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of this distribution function would be required. This numerical table can be generated a priori from the distribution function. This method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, that are good representatives of both linear and branched molecules. It has been shown from these test cases that reasonable approximations can be made especially for the highly branched molecules to reduce drastically the dimensionality and correspondingly the amount of the tabulated data that is needed to be stored. Despite these approximations, the dependencies between the various geometrical variables can be still well considered, as evident from a nearly perfect acceptance rate achieved. For all cases, the bending angles were shown to be sampled correctly by this method with an acceptance rate of at least 96% for 2,2-dimethylpropane to more than 99% for propane. Since only one trial is required to be generated for each bending angle (instead of thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. The profiling results of our Monte Carlo simulation code show that trial generation, which used to be the most time consuming process, is no longer the time dominating component of the simulation.
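The tabulated-distribution idea above amounts to precomputing the cumulative distribution of the bending-angle probability density and drawing each trial by inverse-CDF lookup, so essentially every trial is accepted. A sketch for a single harmonic bending term with a sin(θ) Jacobian; the force-field constants are placeholders, not the united-atom alkane parameters:

```python
import bisect
import math
import random

def build_table(beta, k_bend, theta0, n=2000):
    """Tabulate the CDF of p(theta) ~ sin(theta) exp(-0.5*beta*k*(theta-theta0)^2)
    so each bending-angle trial is drawn by inverse-CDF lookup."""
    thetas = [math.pi * (i + 0.5) / n for i in range(n)]
    w = [math.sin(t) * math.exp(-0.5 * beta * k_bend * (t - theta0) ** 2)
         for t in thetas]
    cdf, acc = [], 0.0
    for wi in w:
        acc += wi
        cdf.append(acc)
    return thetas, [c / acc for c in cdf]

def sample_angle(thetas, cdf, rng):
    # every draw lands in a tabulated bin: effectively a 100% acceptance rate
    return thetas[bisect.bisect_left(cdf, rng.random())]

# placeholder constants, not real united-atom force-field parameters
thetas, cdf = build_table(beta=1.0 / 2.5, k_bend=520.0,
                          theta0=math.radians(114.0))
rng = random.Random(6)
draws = [sample_angle(thetas, cdf, rng) for _ in range(5000)]
mean = sum(draws) / len(draws)
```

For branched molecules the density couples several bending angles, which is where the paper's dimensionality-reducing approximations to the tabulated distribution come in.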
Muir, B. R.; Rogers, D. W. O.
2013-12-15
Purpose: To investigate recommendations for reference dosimetry of electron beams and gradient effects for the NE2571 chamber and to provide beam quality conversion factors using Monte Carlo simulations of the PTW Roos and NE2571 ion chambers. Methods: The EGSnrc code system is used to calculate the absorbed dose-to-water and the dose to the gas in fully modeled ion chambers as a function of depth in water. Electron beams are modeled using realistic accelerator simulations as well as beams modeled as collimated point sources from realistic electron beam spectra or monoenergetic electrons. Beam quality conversion factors are calculated with ratios of the doses to water and to the air in the ion chamber in electron beams and a cobalt-60 reference field. The overall ion chamber correction factor is studied using calculations of water-to-air stopping power ratios. Results: The use of an effective point of measurement shift of 1.55 mm from the front face of the PTW Roos chamber, which places the point of measurement inside the chamber cavity, minimizes the difference between R{sub 50}, the beam quality specifier, calculated from chamber simulations compared to that obtained using depth-dose calculations in water. A similar shift minimizes the variation of the overall ion chamber correction factor with depth to the practical range and reduces the root-mean-square deviation of a fit to calculated beam quality conversion factors at the reference depth as a function of R{sub 50}. Similarly, an upstream shift of 0.34 r{sub cav} allows a more accurate determination of R{sub 50} from NE2571 chamber calculations and reduces the variation of the overall ion chamber correction factor with depth. The determination of the gradient correction using a shift of 0.22 r{sub cav} optimizes the root-mean-square deviation of a fit to calculated beam quality conversion factors if all beams investigated are considered.
However, if only clinical beams are considered, a good fit to results for beam quality conversion factors is obtained without explicitly correcting for gradient effects. The inadequacy of R{sub 50} to uniquely specify beam quality for the accurate selection of k{sub Q} factors is discussed. Systematic uncertainties in beam quality conversion factors are analyzed for the NE2571 chamber and amount to between 0.4% and 1.2% depending on assumptions used. Conclusions: The calculated beam quality conversion factors for the PTW Roos chamber obtained here are in good agreement with literature data. These results characterize the use of an NE2571 ion chamber for reference dosimetry of electron beams even in low-energy beams.
Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-02-15
Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm{sup 2} fields differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies. 
For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. Conclusions: The results of the dose comparisons suggest that the developed beam model is suitable to accurately reconstruct photon MLC shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.
Betzler, Benjamin R.; Kiedrowski, Brian C.; Brown, Forrest B.; Martin, William R.
2015-08-28
The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α eigenvalues and eigenvectors in an infinite medium. In this study, a research Monte Carlo code called "TORTE" (To Obtain Real Time Eigenvalues) was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum.
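Once a transition rate matrix A has been estimated, so that the spectrum evolves as dn/dt = A n, the fundamental α eigenvalue is the rightmost eigenvalue of A. A small sketch using shifted power iteration on an invented 2-group matrix; TORTE's tally-based estimation of A is not reproduced here:

```python
def alpha_fundamental(A, shift=10.0, iters=500):
    """Fundamental alpha eigenvalue of dn/dt = A n via shifted power iteration.

    Shifting by shift*I makes the rightmost eigenvalue of A the dominant
    eigenvalue of A + shift*I, which plain power iteration can find."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum((A[i][j] + (shift if i == j else 0.0)) * v[j]
                 for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam - shift

# invented 2-group rate matrix (removal on the diagonal, scattering and
# production off-diagonal); TORTE would estimate such elements from tallies
A = [[-3.0, 0.8],
     [ 2.0, -1.0]]
alpha = alpha_fundamental(A)  # exact rightmost eigenvalue: (-4 + sqrt(10.4)) / 2
```

The remaining eigenpairs, which drive the transient behavior of the spectrum, can be obtained by deflation or a full eigendecomposition of the estimated matrix.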
Astrakharchik, G. E.; Boronat, J.; Casulleras, J.; Kurbakov, I. L.; Lozovik, Yu. E.
2009-05-15
The equation of state of a weakly interacting two-dimensional Bose gas is studied at zero temperature by means of quantum Monte Carlo methods. Going down to densities as low as na{sup 2}{proportional_to}10{sup -100} permits us to obtain agreement at the beyond-mean-field level between predictions of perturbative methods and direct many-body numerical simulation, thus providing an answer to the fundamental question of the equation of state of a two-dimensional dilute Bose gas in the universal regime (i.e., entirely described by the gas parameter na{sup 2}). We also show that measurement of the frequency of a breathing collective oscillation in a trap at very low densities can be used to test the universal equation of state of a two-dimensional Bose gas.
MO-G-BRF-05: Determining Response to Anti-Angiogenic Therapies with Monte Carlo Tumor Modeling
Valentinuzzi, D; Simoncic, U; Jeraj, R; Titz, B
2014-06-15
Purpose: Patient response to anti-angiogenic therapies with vascular endothelial growth factor receptor - tyrosine kinase inhibitors (VEGFR TKIs) is heterogeneous. This study investigates key biological characteristics that drive differences in patient response via Monte Carlo computational modeling capable of simulating tumor response to therapy with VEGFR TKI. Methods: VEGFR TKIs potently block receptors responsible for promoting angiogenesis in tumors. The model incorporates drug pharmacokinetic and pharmacodynamic properties, as well as patient-specific data of cellular proliferation derived from [18F]FLT-PET data. Sensitivity of tumor response was assessed for multiple parameters, including initial partial oxygen tension (pO{sub 2}), cell cycle time, daily vascular growth fraction, and daily vascular regression fraction. Results were benchmarked to clinical data (patients: 2 weeks on VEGFR TKI, followed by a 1-week drug holiday). The tumor pO{sub 2} was assumed to be uniform. Results: Among the investigated parameters, the simulated proliferation was most sensitive to the initial tumor pO{sub 2}. An initial change of 5 mmHg can already result in significantly different levels of proliferation. The model reveals that hypoxic tumors (pO{sub 2} ≤ 20 mmHg) show the highest decrease of proliferation, experiencing a mean FLT standardized uptake value (SUVmean) decrease of at least 50% at the end of the clinical trial (day 21). Oxygenated tumors (pO{sub 2} > 20 mmHg) show a transient SUV decrease (30–50%) at the end of the treatment with VEGFR TKI (day 14) but experience a rapid SUV rebound close to the pre-treatment SUV levels (70–110%) during the drug holiday (day 14–21) - the phenomenon known as a proliferative flare.
Conclusion: The model's high sensitivity to initial pO{sub 2} clearly emphasizes the need for experimental assessment of the pretreatment tumor hypoxia status, as it might be predictive of response to anti-angiogenic therapies and of the occurrence of a proliferative flare. Experimental assessment of other model parameters would further improve understanding of patient response.
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
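The variance reduction parameters that ADVANTG produces are, in essence, weight-window bounds applied to particles during transport. The splitting/roulette game they drive can be illustrated in a few lines; this is a generic sketch of the weight-window mechanic, not ADVANTG's or MCNP5's implementation, and the function name and window bounds are made up for illustration:

```python
import random

def apply_weight_window(weight, w_low=0.5, w_high=2.0, rng=random.random):
    """Apply a weight-window check to one particle.

    Returns a list of surviving particle weights. Heavy particles are
    split into lighter copies; light particles play Russian roulette.
    Both games preserve the expected total weight, which is the key
    invariant of this variance reduction technique.
    """
    if weight > w_high:
        # Split: replace one heavy particle by n lighter copies.
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_low,
        # and the survivor is promoted to the lower window bound.
        if rng() < weight / w_low:
            return [w_low]
        return []
    return [weight]
```

In a CADIS-style scheme, the window bounds would vary in space and energy, set inversely proportional to the adjoint (importance) flux so that particle population is concentrated where it contributes most to the tally.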
Rota, R.; Casulleras, J.; Mazzanti, F.; Boronat, J.
2015-03-21
We present a method based on the path integral Monte Carlo formalism for the calculation of ground-state time correlation functions in quantum systems. The key point of the method is the consideration of time as a complex variable whose phase δ acts as an adjustable parameter. By using high-order approximations for the quantum propagator, it is possible to obtain Monte Carlo data all the way from purely imaginary time to δ values near the limit of real time. As a consequence, it is possible to infer accurately the spectral functions using simple inversion algorithms. We test this approach in the calculation of the dynamic structure function S(q, ω) of two one-dimensional model systems, harmonic and quartic oscillators, for which S(q, ω) can be exactly calculated. We notice a clear improvement in the calculation of the dynamic response with respect to the common approach based on the inverse Laplace transform of the imaginary-time correlation function.
Jiang, F.-J.; Nyfeler, M.; Kaempfer, F.
2009-07-15
Motivated by the possible mechanism for the pinning of the electronic liquid crystal direction in YBa{sub 2}Cu{sub 3}O{sub 6.45} as proposed by Pardini et al. [Phys. Rev. B 78, 024439 (2008)], we use the first-principles Monte Carlo method to study the spin-(1/2) Heisenberg model with antiferromagnetic couplings J{sub 1} and J{sub 2} on the square lattice. In particular, the low-energy constants spin stiffness {rho}{sub s}, staggered magnetization M{sub s}, and spin wave velocity c are determined by fitting the Monte Carlo data to the predictions of magnon chiral perturbation theory. Further, the spin stiffnesses {rho}{sub s1} and {rho}{sub s2} as a function of the ratio J{sub 2}/J{sub 1} of the couplings are investigated in detail. Although we find good agreement between our results and those obtained by the series expansion method in the weakly anisotropic regime, for strong anisotropy we observe discrepancies.
Faught, A; Davidson, S; Kry, S; Ibbott, G; Followill, D; Fontenot, J; Etzel, C
2014-06-01
Purpose: To commission a multiple-source Monte Carlo model of Elekta linear accelerator beams of nominal energies 6 MV and 10 MV. Methods: A three-source Monte Carlo model of Elekta 6 and 10 MV therapeutic x-ray beams was developed. Energy spectra of two photon sources, corresponding to primary photons created in the target and scattered photons originating in the linear accelerator head, were determined by an optimization process that fit the relative fluence of 0.25 MeV energy bins to the product of Fatigue-Life and Fermi functions so that calculated percent depth dose (PDD) data matched water-tank measurements for a 10×10 cm{sup 2} field. Off-axis effects were modeled with a 3rd-degree polynomial describing the off-axis half-value layer as a function of off-axis angle, and by fitting the off-axis fluence to a piecewise linear function to match calculated dose profiles with measured dose profiles for a 40×40 cm{sup 2} field. The model was validated by comparing calculated PDDs and dose profiles for field sizes ranging from 3×3 cm{sup 2} to 30×30 cm{sup 2} to those obtained from measurements. A benchmarking study compared calculated data to measurements for IMRT plans delivered to anthropomorphic phantoms. Results: Along the central axis of the beam, 99.6% and 99.7% of all data passed the 2%/2mm gamma criterion for the 6 and 10 MV models, respectively. Dose profiles at depths from dmax through 25 cm agreed with measured data for 99.4% and 99.6% of data tested for the 6 and 10 MV models, respectively. A comparison of calculated dose to film measurement in a head and neck phantom showed an average of 85.3% and 90.5% of pixels passing a 3%/2mm gamma criterion for the 6 and 10 MV models, respectively.
Conclusion: A Monte Carlo multiple-source model for Elekta 6 and 10MV therapeutic x-ray beams has been developed as a quality assurance tool for clinical trials.
Quantum Monte Carlo Study of the Ground-State Properties of a Fermi Gas in the BCS-BEC Crossover
Giorgini, S.; Astrakharchik, G. E.; Boronat, J.; Casulleras, J.
2006-11-07
The ground-state properties of a two-component Fermi gas with attractive short-range interactions are calculated using the fixed-node diffusion Monte Carlo method. The interaction strength is varied over a wide range by tuning the value of the s-wave scattering length of the two-body potential. We calculate the ground-state energy per particle and we characterize the equation of state of the system. Off-diagonal long-range order is investigated through the asymptotic behavior of the two-body density matrix. The condensate fraction of pairs is calculated in the unitary limit and on both sides of the BCS-BEC crossover.
Mayorga, P. A.; Departamento de Física Atómica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada ; Brualla, L.; Sauerwein, W.; Lallena, A. M.
2014-01-15
Purpose: Retinoblastoma is the most common intraocular malignancy in early childhood. Patients treated with external beam radiotherapy respond very well to the treatment. However, owing to the genotype of children suffering from hereditary retinoblastoma, the risk of secondary radio-induced malignancies is high. The University Hospital of Essen has successfully treated these patients on a daily basis for nearly 30 years using a dedicated “D”-shaped collimator. This collimator, which delivers a highly conformal small radiation field, gives very good results in the control of the primary tumor as well as in preserving visual function, while avoiding the devastating side effect of deformation of the midface bones. The purpose of the present paper is to propose a modified version of the “D”-shaped collimator that reduces the irradiation field even further, in order to further reduce the risk of radio-induced secondary malignancies. Concurrently, the new dedicated “D”-shaped collimator must be easier to build while producing dose distributions that differ only in field size from those obtained with the collimator currently in use. The former requirement is meant to facilitate the adoption of the authors' irradiation technique both at the authors' and at other hospitals. The fulfillment of the latter allows the authors to continue using the clinical experience gained over more than 30 years. Methods: The Monte Carlo code PENELOPE was used to study the effect that the different structural elements of the dedicated “D”-shaped collimator have on the absorbed dose distribution. To perform this study, the radiation transport through a Varian Clinac 2100 C/D operating at 6 MV was simulated in order to tally phase-space files, which were then used as radiation sources to simulate the considered collimators and the subsequent dose distributions.
With the knowledge gained in that study, a new, simpler, “D”-shaped collimator is proposed. Results: The proposed collimator delivers a dose distribution which is 2.4 cm wide along the inferior-superior direction of the eyeball. This width is 0.3 cm narrower than that of the dose distribution obtained with the collimator currently in clinical use. The other relevant characteristics of the dose distribution obtained with the new collimator, namely, depth doses at clinically relevant positions, penumbrae width, and shape of the lateral profiles, are statistically compatible with the results obtained for the collimator currently in use. Conclusions: The smaller field size delivered by the proposed collimator still fully covers the planning target volume with at least 95% of the maximum dose at a depth of 2 cm and provides a safety margin of 0.2 cm, so ensuring an adequate treatment while reducing the irradiated volume.
Kyriakou, Ioanna; Emfietzoglou, Dimitris; Nojeh, Alireza; Moscovitch, Marko
2013-02-28
A systematic study of electron-beam penetration and backscattering in multi-walled carbon nanotube (MWCNT) materials for beam energies of {approx}0.3 to 30 keV is presented, based on event-by-event Monte Carlo simulation of electron trajectories using state-of-the-art scattering cross sections. The importance of different analytic approximations for computing the elastic and inelastic electron-scattering cross sections for MWCNTs is emphasized. We offer a simple parameterization for the total and differential elastic-scattering Mott cross section, using appropriate modifications to the Browning formula and the Thomas-Fermi screening parameter. A discrete-energy-loss approach to inelastic scattering based on dielectric theory is adopted using different descriptions of the differential cross section. The sensitivity of electron penetration and backscattering parameters to the underlying scattering models is examined. Our simulations confirm the recent experimental backscattering data on MWCNT forests and, in particular, the steep increase of the backscattering yield at sub-keV energies as well as the sidewall escape effect at high beam energies.
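The unmodified Browning fit that serves as the starting point for the parameterization above can be sketched as follows. The constants are those of the original empirical fit (Browning et al., 1994), with energy in keV and the cross section in cm{sup 2}; the modifications described in the abstract are not reproduced here:

```python
def browning_sigma(Z, E_keV):
    """Total elastic Mott cross section (cm^2) from the Browning
    empirical fit for an element of atomic number Z and electron
    energy E_keV (unmodified form of the fit)."""
    z17 = Z ** 1.7
    return 3.0e-18 * z17 / (E_keV
                            + 0.005 * z17 * E_keV ** 0.5
                            + 0.0007 * Z ** 2 / E_keV ** 0.5)
```

The fit captures the two qualitative trends that matter for backscattering: the cross section falls with increasing energy and grows with atomic number, which is why low-Z carbon nanotube material backscatters weakly at high beam energies.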
Fang Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo
2012-01-15
Purpose: The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. Methods: A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to the external field and Coulombic interactions, and diffusion due to Brownian motion. Results: Pulse-height spectra (PHS) have been simulated with different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown, one using the entire PHS distribution, and the other using only the photopeak; the latter ignores contributions from Compton scattering and K-fluorescence. Simulations and experimental measurements differ by approximately 2%. Conclusions: The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. The trapping and recombination models are both shown to affect the Swank factor.
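The Swank factor referred to above is the standard moment ratio of the pulse-height spectrum, I = M{sub 1}{sup 2}/(M{sub 0}M{sub 2}). A minimal sketch of the full-distribution method (the first of the two methods described); the helper name and the list-of-pairs input format are illustrative, not the authors' code:

```python
def swank_factor(phs):
    """Swank factor from a pulse-height spectrum.

    phs: list of (pulse_height, count) pairs. With Mn the n-th moment
    of the spectrum, the Swank factor is M1^2 / (M0 * M2). A
    delta-function response gives exactly 1; any spectral broadening
    (trapping, recombination, K-escape) pushes it below 1.
    """
    m0 = sum(c for _, c in phs)
    m1 = sum(h * c for h, c in phs)
    m2 = sum(h * h * c for h, c in phs)
    return m1 * m1 / (m0 * m2)
```

The photopeak-only variant would apply the same moment ratio after windowing the spectrum to the photopeak, discarding the Compton and K-fluorescence contributions.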
Chibani, Omar; Ma, Charlie C-M
2014-05-15
Purpose: To present a new accelerated Monte Carlo code for CT-based dose calculations in high dose rate (HDR) brachytherapy. The new code (HDRMC) accounts for both tissue and nontissue heterogeneities (applicator and contrast medium). Methods: HDRMC uses a fast ray-tracing technique and detailed physics algorithms to transport photons through a 3D mesh of voxels representing the patient anatomy with applicator and contrast medium included. A precalculated phase space file for the {sup 192}Ir source is used as the source term. HDRMC is calibrated to calculate absolute dose for real plans. A postprocessing technique is used to include the exact density and composition of nontissue heterogeneities in the 3D phantom. Dwell positions and angular orientations of the source are reconstructed using data from the treatment planning system (TPS). Structure contours are also imported from the TPS to recalculate dose-volume histograms. Results: HDRMC was first benchmarked against the MCNP5 code for a single source in homogeneous water and for a loaded gynecologic applicator in water. The accuracy of the voxel-based applicator model used in HDRMC was also verified by comparing 3D dose distributions and dose-volume parameters obtained using 1-mm{sup 3} versus 2-mm{sup 3} phantom resolutions. HDRMC can calculate the 3D dose distribution for a typical HDR cervix case with 2-mm resolution in 5 min on a single CPU. Examples of heterogeneity effects for two clinical cases (cervix and esophagus) were demonstrated using HDRMC. Neglecting tissue heterogeneity for the esophageal case leads to overestimates of the CTV D90, CTV D100, and spinal cord maximum dose by 3.2%, 3.9%, and 3.6%, respectively. Conclusions: A fast Monte Carlo code for CT-based dose calculations which does not require a prebuilt applicator model is developed for those HDR brachytherapy treatments that use CT-compatible applicators.
Tissue and nontissue heterogeneities should be taken into account in modern HDR brachytherapy planning.
Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations (Conference)
Office of Scientific and Technical Information (OSTI)
Patrick ; Prinja, Anil K. Publication Date: 2013-09-01 OSTI Identifier: 1110389 Report Number(s): SAND2013-7609C 473868
Jung, Jae Won; Kim, Jong Oh; Yeo, Inhwan Jason; Cho, Young-Bin; Kim, Sun Mo; DiBiase, Steven
2012-12-15
Purpose: Fast and accurate transit portal dosimetry was investigated by developing a density-scaled layer model of an electronic portal imaging device (EPID) and applying it in a clinical environment. Methods: The model was developed for fast Monte Carlo dose calculation. It was validated through comparison with measurements of dose on the EPID, first using open beams of varying field sizes under a 20-cm-thick flat phantom. After this basic validation, the model was further tested by applying it to transit dosimetry and dose reconstruction using our predetermined dose-response-based algorithm developed earlier. The application employed clinical intensity-modulated beams irradiated on a Rando phantom. The clinical beams were obtained through planning on pelvic regions of the Rando phantom, simulating prostate and large-pelvis intensity-modulated radiation therapy. To enhance agreement between calculations and measurements of dose near penumbral regions, convolution conversion of acquired EPID images was alternatively used. In addition, thickness-dependent image-to-dose calibration factors were generated through measurements of image and calculations of dose in the EPID through flat phantoms of various thicknesses. The factors were used to convert acquired EPID images into dose. Results: For open beam measurements, the model agreed with measurements to better than 2% dose difference across open fields. For tests with the Rando phantom, the transit dosimetry measurements were compared with forwardly calculated doses in the EPID, showing gamma pass rates between 90.8% and 98.8% given 4.5 mm distance-to-agreement (DTA) and 3% dose difference (DD) for all individual beams tried in this study. The reconstructed dose in the phantom was compared with forwardly calculated doses, showing pass rates between 93.3% and 100% in isocentric planes perpendicular to the beam direction given 3 mm DTA and 3% DD for all beams.
On isocentric axial planes, the pass rates varied between 95.8% and 99.9% for all individual beams, and they were 98.2% and 99.9% for the composite beams of the small and large pelvis cases, respectively. Three-dimensional gamma pass rates were 99.0% and 96.4% for the small and large pelvis cases, respectively. Conclusions: The layer model of the EPID built for Monte Carlo calculations offered fast (less than 1 min) and accurate calculation for transit dosimetry and dose reconstruction.
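The gamma pass rates quoted above combine a dose-difference (DD) criterion with a distance-to-agreement (DTA) criterion. A simplified one-dimensional version of the gamma-index evaluation illustrates the idea; clinical implementations work in 2D/3D and interpolate the evaluated distribution, and the names and defaults here are illustrative only:

```python
import math

def gamma_index(ref, evl, dd=0.03, dta=3.0):
    """1D gamma index for each reference point.

    ref, evl: lists of (position_mm, dose) points.
    dd: dose-difference criterion as a fraction of the reference maximum.
    dta: distance-to-agreement criterion in mm.
    A point passes when its gamma value is <= 1, i.e. some evaluated
    point lies within the combined dose/distance ellipse.
    """
    dmax = max(d for _, d in ref)
    return [min(math.sqrt(((xe - xr) / dta) ** 2
                          + ((de - dr) / (dd * dmax)) ** 2)
                for xe, de in evl)
            for xr, dr in ref]

def pass_rate(gammas):
    """Percentage of reference points with gamma <= 1."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

With identical reference and evaluated curves every gamma value is zero and the pass rate is 100%; loosening DTA (e.g. the 4.5 mm used for the EPID comparisons above) makes spatially shifted but dosimetrically similar profiles pass.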
G. S. Chang; R. C. Pederson
2005-07-01
Mixed oxide (MOX) test capsules prepared with weapons-derived plutonium have been irradiated to a burnup of 50 GWd/t. The MOX fuel was fabricated at Los Alamos National Laboratory by a master-mix process and has been irradiated in the Advanced Test Reactor (ATR) at the Idaho National Laboratory (INL). Previous withdrawals of the same fuel have occurred at 9, 21, 30, and 40 GWd/t. Oak Ridge National Laboratory (ORNL) manages this test series for the Department of Energy's Fissile Materials Disposition Program (FMDP). The fuel burnup analyses presented in this study were performed using MCWO, a well-developed tool that couples the Monte Carlo transport code MCNP with the isotope depletion and buildup code ORIGEN-2. MCWO analysis yields time-dependent and neutron-spectrum-dependent minor actinide and Pu concentrations for the ATR small I-irradiation test position. The purpose of this report is to validate both the Weapons-Grade Mixed Oxide (WG-MOX) test assembly model and the new fuel burnup analysis methodology by comparing the computed results against the neutron monitor measurements.
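The coupling pattern behind tools like MCWO, alternating a transport solve with a depletion step, can be sketched abstractly. The toy loop below uses a fixed one-group flux and illustrative cross-section values in place of MCNP's spectrum calculation and ORIGEN-2's full decay chains; none of the numbers are ATR or MCWO data:

```python
import math

# Illustrative one-group cross sections (cm^2) -- assumptions, not data.
SIGMA_A_U235 = 600e-24   # U-235 absorption
SIGMA_C_U238 = 2.7e-24   # U-238 capture

def deplete(n_u235, n_u238, flux, dt):
    """One depletion step: analytic solution of dN/dt = -sigma*phi*N.
    Every U-238 capture is assumed to end up as Pu-239 (chain collapsed)."""
    new_u235 = n_u235 * math.exp(-SIGMA_A_U235 * flux * dt)
    new_u238 = n_u238 * math.exp(-SIGMA_C_U238 * flux * dt)
    pu_made = n_u238 - new_u238
    return new_u235, new_u238, pu_made

def burnup_loop(n_u235, n_u238, flux, dt, steps):
    """Alternate 'transport' (here just a fixed flux) with depletion,
    mimicking the MCNP/ORIGEN-2 coupling pattern described above."""
    pu = 0.0
    for _ in range(steps):
        # In MCWO, the transport code would recompute the
        # spectrum-dependent flux and cross sections here.
        n_u235, n_u238, dpu = deplete(n_u235, n_u238, flux, dt)
        pu += dpu
    return n_u235, n_u238, pu
```

The essential point of the coupling is the feedback: each depletion step changes the composition, which in the real tool changes the next transport solution, which in turn changes the reaction rates used for the next depletion step.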
Many-body ab-initio diffusion quantum Monte Carlo applied to the strongly correlated oxide NiO
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Mitra, Chandrima; Krogel, Jaron T.; Santana, Juan A.; Reboredo, Fernando A.
2015-10-28
We present a many-body diffusion quantum Monte Carlo (DMC) study of the bulk and defect properties of NiO. We find excellent agreement with experimental values, within 0.3%, 0.6%, and 3.5% for the lattice constant, cohesive energy, and bulk modulus, respectively. The quasiparticle bandgap was also computed, and the DMC result of 4.72 (0.17) eV compares well with the experimental value of 4.3 eV. Furthermore, DMC calculations of excited states at the L, Z, and the gamma point of the Brillouin zone reveal a flat upper valence band for NiO, in good agreement with Angle Resolved Photoemission Spectroscopy results. To study defect properties, we evaluated the formation energies of the neutral and charged vacancies of oxygen and nickel in NiO. A formation energy of 7.2 (0.15) eV was found for the oxygen vacancy under oxygen-rich conditions. For the Ni vacancy, we obtained a formation energy of 3.2 (0.15) eV under Ni-rich conditions. These results confirm that NiO occurs as a p-type material with the dominant intrinsic vacancy defect being the Ni vacancy.
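The vacancy formation energies quoted above follow the standard supercell definition. Schematically, for the oxygen vacancy in charge state q (finite-size correction terms omitted; this is the generic formalism, not a detail taken from the paper):

```latex
% Supercell formation energy of an oxygen vacancy in charge state q.
% mu_O is the oxygen chemical potential (oxygen-rich: mu_O = E[O2]/2),
% E_F is the Fermi level referenced to the valence-band maximum E_VBM.
E_f\!\left[V_\mathrm{O}^{\,q}\right]
  = E_\mathrm{tot}\!\left[V_\mathrm{O}^{\,q}\right]
  - E_\mathrm{tot}[\mathrm{NiO}]
  + \mu_\mathrm{O}
  + q\left(E_\mathrm{VBM} + E_F\right)
```

The dependence on the chemical potentials is why the results are reported "under oxygen-rich conditions" and "under Ni-rich conditions": shifting μ{sub O} between its O-rich and Ni-rich limits rigidly shifts the two vacancy formation energies in opposite directions.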
SU-E-T-584: Commissioning of the MC2 Monte Carlo Dose Computation Engine
Titt, U; Mirkovic, D; Liu, A; Ciangaru, G; Mohan, R; Anand, A; Perles, L
2014-06-01
Purpose: An automated system, MC2, was developed to convert DICOM proton therapy treatment plans into a sequence of MCNPX input files and submit these to a computing cluster. MC2 converts the results into DICOM format, and any treatment planning system can import the data for comparison with conventional dose predictions. This work describes the data and the efforts made to validate the MC2 system against measured dose profiles, and how the system was calibrated to predict the correct number of monitor units (MUs) to deliver the prescribed dose. Methods: A set of simulated lateral and longitudinal profiles was compared to data measured for commissioning purposes and during annual quality assurance efforts. Acceptance criteria were relative dose differences smaller than 3% and differences in range (in water) of less than 2 mm. For two of the three double scattering beam lines, validation results were already published; spot checks were performed to assure proper performance. For the small snout, all available measurements were used for validation against simulated data. To calibrate the dose per MU, the energy deposition per source proton at the center of the spread-out Bragg peaks (SOBPs) was recorded for a set of SOBPs from each option. These were subsequently scaled to the results of dose-per-MU determination based on published methods. The simulations of the doses in the magnetically scanned beam line were also validated against measured longitudinal and lateral profiles. The source parameters were fine-tuned to achieve maximum agreement with measured data. The dosimetric calibration was performed by scoring energy deposition per proton and scaling the results to a standard dose measurement of a 10×10×10 cm{sup 3} volume irradiated using 100 MU. Results: All simulated data passed the acceptance criteria. Conclusion: MC2 is fully validated and ready for clinical application.
Kim, Jeongnim; Reboredo, Fernando A
2014-01-01
The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the one spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest-energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
Les Houches Guidebook to Monte Carlo generators for hadron collider physics
Dobbs, M.A
2004-08-24
Recently the collider physics community has seen significant advances in the formalisms and implementations of event generators. This review is a primer of the methods commonly used for the simulation of high energy physics events at particle colliders. We provide brief descriptions, references, and links to the specific computer codes which implement the methods. The aim is to provide an overview of the available tools, allowing the reader to ascertain which tool is best for a particular application, but also making clear the limitations of each tool.
Monte Carlo modeling of electron density in hypersonic rarefied gas flows
Fan, Jin; Zhang, Yuhuai; Jiang, Jianzheng
2014-12-09
The electron density distribution around a vehicle employed in the RAM-C II flight test is calculated with the DSMC method. To resolve the mole fraction of electrons, which is several orders of magnitude lower than those of the primary species in the free stream, an algorithm named trace species separation (TSS) is utilized. The TSS algorithm solves the primary and trace species separately, which is similar to the DSMC overlay techniques; however, it generates new simulated molecules of trace species, such as ions and electrons, in each cell based directly on the ionization and recombination rates, which differs from the DSMC overlay techniques based on probabilistic models. The electron density distributions computed by TSS agree well with the flight data measured in the RAM-C II test along a descent trajectory at three altitudes: 81 km, 76 km, and 71 km.
Talamo, A.; Gohar, Y. (Nuclear Engineering Division)
2011-05-12
This study investigates the performance of the YALINA Booster subcritical assembly, located in Belarus, during operation with high (90%), medium (36%), and low (21%) enriched uranium fuels in the assembly's fast zone. The YALINA Booster is a zero-power, subcritical assembly driven by a conventional neutron generator. It was constructed for the purpose of investigating the static and dynamic neutronics properties of accelerator driven subcritical systems, and to serve as a fast neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor-actinides. The first part of this study analyzes the assembly's performance with several fuel types. The MCNPX and MONK Monte Carlo codes were used to determine effective and source neutron multiplication factors, effective delayed neutron fraction, prompt neutron lifetime, neutron flux profiles and spectra, and neutron reaction rates produced from the use of three neutron sources: californium, deuterium-deuterium, and deuterium-tritium. In the latter two cases, the external neutron source operates in pulsed mode. The results discussed in the first part of this report show that the use of low enriched fuel in the fast zone of the assembly diminishes neutron multiplication. Therefore, the discussion in the second part of the report focuses on finding alternative fuel loading configurations that enhance neutron multiplication while using low enriched uranium fuel. It was found that arranging the interface absorber between the fast and the thermal zones in a circular rather than a square array is an effective method of operating the YALINA Booster subcritical assembly without downgrading neutron multiplication relative to the original value obtained with the use of the high enriched uranium fuels in the fast zone.
Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)
2012-07-01
The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to GPU is usually very straightforward due to the 'embarrassingly parallel' nature of MC code. However, the situation is different for eigenvalue calculations, which proceed on a generation-by-generation basis, so thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of {approx}2 on a dual-GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)
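The generation-by-generation structure that forces explicit thread coordination on a GPU can be seen even in a zero-dimensional analog eigenvalue iteration: each generation must complete, and its fission bank be tallied, before the next can start. A toy sketch of that pattern (not the authors' CUDA code; the model and all parameters are illustrative):

```python
import random

def simulate_generation(n_neutrons, p_fission, nu, rng):
    """Track one fission generation in a zero-dimensional analog model:
    every neutron is absorbed, and with probability p_fission the
    absorption is a fission releasing an integer-sampled number of
    neutrons with mean nu."""
    produced = 0
    for _ in range(n_neutrons):
        if rng.random() < p_fission:
            n = int(nu) + (1 if rng.random() < (nu - int(nu)) else 0)
            produced += n
    return produced

def keff(n_start, p_fission, nu, generations=50, seed=1):
    """Average the generation-wise multiplication ratio. The barrier
    between generations (tally, then renormalize the bank) is exactly
    the synchronization point that GPU threads must coordinate on."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(generations):
        produced = simulate_generation(n_start, p_fission, nu, rng)
        estimates.append(produced / n_start)
    return sum(estimates) / len(estimates)
```

For this model the expected eigenvalue is simply p_fission × nu, so the estimator can be checked analytically; in a real transport code the per-generation tally additionally involves spatial source-site bookkeeping, which is where the memory-management choices discussed above come in.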
Chorin, Alexandre J.
2007-12-12
A sampling method for spin systems is presented. The spin lattice is written as the union of a nested sequence of sublattices, all but the last with conditionally independent spins, which are sampled in succession using their marginals. The marginals are computed concurrently by a fast algorithm; errors in the evaluation of the marginals are offset by weights. There are no Markov chains and each sample is independent of the previous ones; the cost of a sample is proportional to the number of spins (but the number of samples needed for good statistics may grow with array size). The examples include the Edwards-Anderson spin glass in three dimensions.
SU-E-T-585: Commissioning of Electron Monte Carlo in Eclipse Treatment Planning System for TrueBeam
Yang, X; Lasio, G; Zhou, J; Lin, M; Yi, B; Guerrero, M
2014-06-01
Purpose: To commission the electron Monte Carlo (eMC) algorithm in the Eclipse Treatment Planning System (TPS) for TrueBeam linacs, including the evaluation of dose calculation accuracy for small fields and oblique beams and comparison with the existing eMC model for Clinacs. Methods: Electron beam percent depth doses (PDDs) and profiles with and without applicators, as well as output factors, were measured on two Varian TrueBeam machines. Measured data were compared against the Varian TrueBeam Representative Beam Data (VTBRBD). The selected data set was transferred into Eclipse for beam configuration. Dose calculation accuracy of eMC was evaluated for open fields, small cut-out fields, and oblique beams at different incident angles. The TrueBeam data were compared to the existing Clinac data and eMC model to evaluate the differences among linac types. Results: Our measured data indicated that electron beam PDDs from our TrueBeam machines are well matched to those from our Varian Clinac machines, but in-air profiles, cone factors, and open-field output factors are significantly different. The data from our two TrueBeam machines were well represented by the VTBRBD. Variations of TrueBeam PDDs and profiles were within the 2%/2mm criteria for all energies, and the output factors for fields with and without applicators all agree within 2%. Obliquity factors for two clinically relevant applicator sizes (10×10 and 15×15 cm{sup 2}) and three oblique angles (15, 30, and 45 degrees) were measured at the nominal R100, R90, and R80 of each electron beam energy. Comparisons of eMC-calculated obliquity factors and cut-out factors versus measurements will be presented. Conclusion: The eMC algorithm in the Eclipse TPS can be configured using the VTBRBD. Significant differences between TrueBeam and Clinacs were found in in-air profiles and open-field output factors. The accuracy of the eMC algorithm was evaluated for a wide range of cut-out factors and oblique incidence.
Chrissanthopoulos, A.; Jovari, P.; Kaban, I.; Gruner, S.; Kavetskyy, T.; Borc, J.; Wang, W.; Ren, J.; Chen, G.; Yannopoulos, S.N.
2012-08-15
We report an investigation of the structure and vibrational modes of Ge-In-S-AgI bulk glasses using X-ray diffraction, EXAFS spectroscopy, reverse Monte Carlo (RMC) modelling, Raman spectroscopy, and density functional theory (DFT) calculations. The combination of these techniques made it possible to elucidate the short- and medium-range structural order of these glasses. Data interpretation revealed that the AgI-free glass structure is composed of a network where GeS{sub 4/2} tetrahedra are linked with trigonal InS{sub 3/2} units; S{sub 3/2}Ge-GeS{sub 3/2} ethane-like species linked with InS{sub 4/2}{sup -} tetrahedra form sub-structures which are dispersed in the network structure. The addition of AgI into the Ge-In-S glassy matrix causes appreciable structural changes, enriching the indium species with iodine terminal atoms. The existence of trigonal species InS{sub 2/2}I and tetrahedral units InS{sub 3/2}I{sup -} and InS{sub 2/2}I{sub 2}{sup -} is compatible with the EXAFS and RMC analysis. Their vibrational properties (harmonic frequencies and Raman activities) calculated by DFT are in very good agreement with the experimental values determined by Raman spectroscopy. - Graphical abstract: Experiment (XRD, EXAFS, RMC, Raman scattering) and density functional calculations are employed to study the structure of AgI-doped Ge-In-S glasses. The role of mixed structural units as illustrated in the figure is elucidated. Highlights: ► Doping Ge-In-S glasses with AgI causes significant changes in glass structure. ► Experiment and DFT are combined to elucidate short- and medium-range structural order. ► Indium atoms form both (InS{sub 4/2}){sup -} tetrahedra and InS{sub 3/2} planar triangles. ► (InS{sub 4/2}){sup -} tetrahedra bond to (S{sub 3/2}Ge-GeS{sub 3/2}){sup 2+} ethane-like units forming neutral sub-structures. ► Mixed chalcohalide species (InS{sub 3/2}I){sup -} offer vulnerable sites for the uptake of Ag{sup +}.
Wang, L; Fourkal, E; Hayes, S; Jin, L; Ma, C
2014-06-01
Purpose: To study the dosimetric differences resulting from using the pencil beam algorithm instead of Monte Carlo (MC) methods for tumors adjacent to the skull. Methods: We retrospectively calculated the dosimetric differences between ray-tracing (RT) and MC algorithms for brain tumors located adjacent to the skull and treated with CyberKnife in 18 patients (27 tumors in total). The median tumor size was 0.53-cc (range 0.018-cc to 26.2-cc). The absolute mean distance from the tumor to the skull was 2.11 mm (range -17.0 mm to 9.2 mm). The dosimetric variables examined include the mean, maximum, and minimum doses to the target, the target coverage (TC), and the conformality index. The MC calculation used the same MUs as the RT dose calculation without further normalization, with 1% statistical uncertainty. The differences were analyzed by tumor size and distance from the skull. Results: The TC was generally reduced with the MC calculation (24 out of 27 cases). The average difference in TC between RT and MC was 3.3% (range 0.0% to 23.5%). When the TC was deemed unacceptable, the plans were re-normalized in order to increase the TC to 99%. This resulted in a 6.9% maximum change in the prescription isodose line. The maximum changes in the mean, maximum, and minimum doses were 5.4%, 7.7%, and 8.4%, respectively, before re-normalization. When the TC was analyzed with regard to target size, the worst coverage occurred with the smallest targets (0.018-cc). When the TC was analyzed with regard to the distance to the skull, there was no correlation between proximity to the skull and the TC difference between the RT and MC plans. Conclusions: For smaller targets (< 4.0-cc), MC should be used to re-evaluate the dose coverage after RT is used for the initial dose calculation in order to ensure target coverage.
Spadea, Maria Francesca; Verburg, Joost Mathias; Seco, Joao; Baroni, Guido
2014-01-15
Purpose: The aim of the study was to evaluate the dosimetric impact of low-Z and high-Z metallic implants on IMRT plans. Methods: Computed tomography (CT) scans of three patients were analyzed to study effects due to the presence of titanium (low-Z), platinum, and gold (high-Z) inserts. To eliminate artifacts in the CT images, a sinogram-based metal artifact reduction algorithm was applied. IMRT dose calculations were performed on both the uncorrected and corrected images using a commercial planning system (convolution/superposition algorithm) and an in-house Monte Carlo platform. Dose differences between uncorrected and corrected datasets were computed and analyzed using the gamma-index passing rate (P{sub γ<1}), setting 2 mm and 2% as the distance-to-agreement and dose-difference criteria, respectively. Beam-specific depth dose profiles across the metal were also examined. Results: Dose discrepancies between corrected and uncorrected datasets were not significant for the low-Z material. High-Z materials caused underdosage of 20%–25% in the region surrounding the metal and overdosage of 10%–15% downstream of the hardware. The gamma-index test yielded P{sub γ<1} > 99% for all low-Z cases, while for high-Z cases it returned 91% < P{sub γ<1} < 99%. Analysis of the depth dose curve of a single beam for low-Z cases revealed that, although the dose attenuation is altered inside the metal, it does not differ downstream of the insert. However, for high-Z metal implants the dose is increased by up to 10%–12% around the insert. In addition, the Monte Carlo method was more sensitive to the presence of metal inserts than the superposition/convolution algorithm. Conclusions: The reduction of metal artifacts in CT images is dosimetrically relevant for high-Z implants. In this case, the dose distribution should be calculated using Monte Carlo algorithms, given their superior accuracy in dose modeling in and around the metal. In addition, knowledge of the composition of the metal inserts significantly improves the accuracy of the Monte Carlo dose calculation.
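The gamma-index analysis used in this study reduces to a simple search: for each reference point, find the evaluated point that minimizes the combined distance-to-agreement and dose-difference metric. A minimal 1D sketch with global normalization follows; the grids, names, and test profile are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=2.0, dd_frac=0.02):
    """Brute-force 1D gamma index: for each reference point, minimize the
    combined distance-to-agreement / dose-difference metric over all
    evaluated points. The dose difference is normalized to the reference
    maximum (global normalization). Illustrative sketch only."""
    d_max = d_ref.max()
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta_mm) ** 2          # DTA term
        dose2 = ((d_eval - dr) / (dd_frac * d_max)) ** 2  # dose-diff term
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas

# Identical distributions pass everywhere (gamma == 0 at every point)
x = np.linspace(0, 50, 101)                    # positions in mm
d = np.exp(-((x - 25.0) / 10.0) ** 2)          # toy dose profile
g = gamma_index_1d(x, d, x, d)
pass_rate = np.mean(g < 1.0)                   # fraction with gamma < 1
```

A point passes when its gamma value is below 1; the passing rates quoted in the abstract correspond to `pass_rate` computed over the patient dose grids.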
Barrera, C A; Moran, M J
2007-08-21
The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are: EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. 
The simulations show that all LOS configurations have a resolution of 7 microns or better. The 28 m LOS with a 7 x 7 array of 100-micron mini-penumbral apertures or 50-micron square pinholes meets the design requirements and is a very good design alternative.
Sharma, Diksha; Badano, Aldo
2013-03-15
Purpose: hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry based on a hybrid concept that maximizes the utilization of available CPU and graphics processing unit processors in a workstation. Methods: The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. Results: The comparison suggests that hybridMANTIS better matches the experimental data as compared to MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS with speed-ups up to 5260. Conclusions: hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.
2000-03-01
The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.
Random-Walk Monte Carlo Simulation of Intergranular Gas Bubble Nucleation in UO2 Fuel
Yongfeng Zhang; Michael R. Tonks; S. B. Biner; D.A. Andersson
2012-11-01
Using a random-walk particle algorithm, we investigate the clustering of fission gas atoms on grain boundaries in oxide fuels. The computational algorithm implemented in this work considers a planar surface representing a grain boundary on which particles appear at a rate dictated by the Booth flux, migrate two dimensionally according to their grain boundary diffusivity, and coalesce by random encounters. Specifically, the intergranular bubble nucleation density is the key variable we investigate using a parametric study in which the temperature, grain boundary gas diffusivity, and grain boundary segregation energy are varied. The results reveal that the grain boundary bubble nucleation density can vary widely due to these three parameters, which may be an important factor in the observed variability in intergranular bubble percolation among grain boundaries in oxide fuel during fission gas release.
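The algorithm described above (arrival, two-dimensional migration, coalescence on encounter) can be sketched as a toy model. All rates, box sizes, and the capture radius below are illustrative assumptions, not parameters from the study:

```python
import random

def simulate_bubbles(steps=2000, arrival_prob=0.05, hop=1.0,
                     capture=1.5, size=50.0, seed=1):
    """Toy 2D random-walk model in the spirit of the abstract: gas atoms
    arrive on a periodic square patch of grain boundary, hop isotropically,
    and coalesce when two walkers come within a capture radius. Returns the
    final number of clusters, a proxy for the bubble nucleation density."""
    rng = random.Random(seed)
    walkers = []  # list of [x, y] positions
    for _ in range(steps):
        if rng.random() < arrival_prob:            # Booth-flux-like arrivals
            walkers.append([rng.uniform(0, size), rng.uniform(0, size)])
        for w in walkers:                          # isotropic hops, periodic box
            w[0] = (w[0] + rng.uniform(-hop, hop)) % size
            w[1] = (w[1] + rng.uniform(-hop, hop)) % size
        merged = []
        for w in walkers:                          # absorb walkers that meet
            for m in merged:
                dx = min(abs(w[0] - m[0]), size - abs(w[0] - m[0]))
                dy = min(abs(w[1] - m[1]), size - abs(w[1] - m[1]))
                if dx * dx + dy * dy < capture * capture:
                    break                          # coalesce into existing cluster
            else:
                merged.append(w)
        walkers = merged
    return len(walkers)

n_clusters = simulate_bubbles()
```

Sweeping `arrival_prob` and `hop` (stand-ins for the Booth flux and grain boundary diffusivity) in such a model reproduces the qualitative sensitivity of the nucleation density discussed in the abstract.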
Ding, D.; Chen, X.; Minnich, A. J.
2014-04-07
Recently, a pump beam size dependence of thermal conductivity was observed in Si at cryogenic temperatures using time-domain thermal reflectance (TDTR). These observations were attributed to quasiballistic phonon transport, but the interpretation of the measurements has been semi-empirical. Here, we present a numerical study of the heat conduction that occurs in the full 3D geometry of a TDTR experiment, including an interface, using the Boltzmann transport equation. We identify the radial suppression function that describes the suppression in heat flux, compared to Fourier's law, that occurs due to quasiballistic transport and demonstrate good agreement with experimental data. We also discuss unresolved discrepancies that are important topics for future study.
Sheu, R; Tseng, T; Powers, A; Lo, Y
2014-06-01
Purpose: To provide commissioning and acceptance test data for the Varian Eclipse electron Monte Carlo model (eMC v.11) for the TrueBeam linac. We also investigated the uncertainties in beam model parameters and dose calculation results for different geometric configurations. Methods: For beam commissioning, a PTW CC13 thimble chamber and an IBA Blue Phantom2 were used to collect PDDs and dose profiles in air. Cone factors were measured with a parallel plate chamber (PTW N23342) in solid water. GafChromic EBT3 films were used for dose calculation verification, compared with parallel plate chamber results, in the following test geometries: oblique incidence, extended distance, small cutouts, elongated cutouts, irregular surface, and heterogeneous layers. Results: Four electron energies (6e, 9e, 12e, and 15e) and five cones (6×6, 10×10, 15×15, 20×20, and 25×25) with standard cutouts were calculated for different grid sizes (1, 1.5, 2, and 2.5 mm) and compared with chamber measurements. The results showed that calculations performed with a coarse grid size underestimated the absolute dose, and the underestimation decreased as energy increased. For 6e, the underestimation (max 3.3%) was greater than the statistical uncertainty level (3%) and was observed systematically for all cone sizes. With a 1 mm grid size, all calculation results agreed with measurements within 5% for all test configurations. The calculations took 21 s and 46 s for 6e and 15e, respectively (2.5 mm grid size), distributed over 4 calculation servers. Conclusion: In general, commissioning the eMC dose calculation model on TrueBeam is straightforward, and the dose calculation is in good agreement with measurements for all test cases. Monte Carlo dose calculation provides more accurate results, which improves treatment planning quality. However, the normally acceptable grid size (2.5 mm) causes systematic underestimation of the absolute dose for lower energies, such as 6e; users need to be cautious in this situation.
MaGe - a GEANT4-based Monte Carlo Application Framework for Low-background Germanium Experiments
Boswell, M.; Chan, Yuen-Dat; Detwiler, Jason A.; Finnerty, P.; Henning, R.; Gehman, Victor; Johnson, Robert A.; Jordan, David V.; Kazkaz, Kareem; Knapp, Markus; Kroninger, Kevin; Lenz, Daniel; Leviner, L.; Liu, Jing; Liu, Xiang; MacMullin, S.; Marino, Michael G.; Mokhtarani, A.; Pandola, Luciano; Schubert, Alexis G.; Schubert, J.; Tomei, Claudia; Volynets, Oleksandr
2011-06-13
We describe a physics simulation software framework, MaGe, that is based on the GEANT4 simulation toolkit. MaGe is used to simulate the response of ultra-low radioactive background radiation detectors to ionizing radiation, specifically the MAJORANA and GERDA neutrinoless double-beta decay experiments. MAJORANA and GERDA use high-purity germanium technology to search for the neutrinoless double-beta decay of the {sup 76}Ge isotope, and MaGe is jointly developed between these two collaborations. The MaGe framework contains simulated geometries of common objects, prototypes, test stands, and the actual experiments. It also implements customized event generators, GEANT4 physics lists, and output formats. All of these features are available as class libraries that are typically compiled into a single executable. The user selects the particular experimental setup implementation at run-time via macros. The combination of all these common classes into one framework reduces duplication of effort, eases comparison between simulated data and experiment, and simplifies the addition of new detectors to be simulated. This paper focuses on the software framework, custom event generators, and physics lists.
Cox, Stephen J.; Michaelides, Angelos; Department of Chemistry, University College London, 20 Gordon Street, London WC1H 0AJ ; Towler, Michael D.; Theory of Condensed Matter Group, Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge CB3 0HE ; Alfè, Dario; Department of Earth Sciences, University College London Gower Street, London WC1E 6BT
2014-05-07
High quality reference data from diffusion Monte Carlo calculations are presented for bulk sI methane hydrate, a complex crystal exhibiting both hydrogen-bond and dispersion dominated interactions. The performance of some commonly used exchange-correlation functionals and all-atom point charge force fields is evaluated. Our results show that none of the exchange-correlation functionals tested are sufficient to describe both the energetics and the structure of methane hydrate accurately, while the point charge force fields perform badly in their description of the cohesive energy but fare well for the dissociation energetics. By comparing to ice I{sub h}, we show that a good prediction of the volume and cohesive energies for the hydrate relies primarily on an accurate description of the hydrogen bonded water framework, but that to correctly predict the stability of the hydrate with respect to dissociation to ice I{sub h} and methane gas, accuracy in the water-methane interaction is also required. Our results highlight the difficulty that density functional theory faces in describing both the hydrogen bonded water framework and the dispersion bound methane.
Khledi, Navid; Sardari, Dariush; Arbabi, Azim; Ameri, Ahmad; Mohammadi, Mohammad
2015-02-24
Depending on the location and depth of the tumor, electron or photon beams may be used for treatment. Electron beams have some advantages over photon beams for the treatment of shallow tumors, sparing the normal tissues beyond the tumor; photon beams, on the other hand, are used to treat deep targets. Both beam types have limitations, for example the depth dependence of the penumbra and the lack of lateral equilibrium for small electron beam fields. First, we simulated the conventional head configuration of the Varian 2300 for 16 MeV electrons, and the results were validated by benchmarking the simulated Percent Depth Dose (PDD) and profiles against measurement. In the next step, a perforated lead (Pb) sheet of 1 mm thickness was placed at the top of the applicator holder tray. This layer produces bremsstrahlung x-rays while a portion of the electrons pass through the holes, yielding a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10×10 and 4×4 cm{sup 2} field sizes. This study showed the advantages of mixing electron and photon beams: the depth dependence of the pure electron penumbra is reduced, especially for small fields, and the dramatic changes of the PDD curve with irradiation field size are decreased.
EMAM, M; Eldib, A; Lin, M; Li, J; Chibani, O; Ma, C
2014-06-01
Purpose: An in-house Monte Carlo based treatment planning system (MC TPS) has been developed for modulated electron radiation therapy (MERT). Our preliminary MERT planning experience called for a more user-friendly graphical user interface. The current work aimed to design graphical windows and tools to facilitate the contouring and planning process. Methods: Our in-house GUI MC TPS is built on a set of EGS4 user codes, namely MCPLAN and MCBEAM, in addition to an in-house optimization code named MCOPTIM. The patient virtual phantom is constructed using the tomographic images in DICOM format exported from clinical treatment planning systems (TPS). Treatment target volumes and critical structures are usually contoured on the clinical TPS and then sent as a structure set file. In our GUI program we developed a visualization tool to allow the planner to visualize the DICOM images and delineate the various structures. We implemented an option in our code for automatic contouring of the patient body and lungs. We also created an interface window displaying a three-dimensional representation of the target and a graphical representation of the treatment beams. Results: The new GUI features helped streamline the planning process. The implemented contouring option eliminated the need for performing this step on the clinical TPS. The auto-detection option for contouring the outer patient body and lungs was tested on patient CTs and shown to be as accurate as that of the clinical TPS. The three-dimensional representation of the target and the beams allows better selection of the gantry, collimator, and couch angles. Conclusion: An in-house GUI program has been developed for more efficient MERT planning. The aiding tools implemented in the program save time and give better control of the planning process.
Dupuy, Nicolas; Bouaouli, Samira; Mauri, Francesco; Casula, Michele; Sorella, Sandro
2015-06-07
We study the ionization energy, electron affinity, and the π → π{sup *} ({sup 1}L{sub a}) excitation energy of the anthracene molecule, by means of variational quantum Monte Carlo (QMC) methods based on a Jastrow correlated antisymmetrized geminal power (JAGP) wave function, developed on molecular orbitals (MOs). The MO-based JAGP ansatz allows one to rigorously treat electron transitions, such as the HOMO → LUMO one, which underlies the {sup 1}L{sub a} excited state. We present a QMC optimization scheme able to preserve the rank of the antisymmetrized geminal power matrix, thanks to a constrained minimization with projectors built upon symmetry selected MOs. We show that this approach leads to stable energy minimization and geometry relaxation of both ground and excited states, performed consistently within the correlated QMC framework. Geometry optimization of excited states is needed to make a reliable and direct comparison with experimental adiabatic excitation energies. This is particularly important in π-conjugated and polycyclic aromatic hydrocarbons, where there is a strong interplay between low-lying energy excitations and structural modifications, playing a functional role in many photochemical processes. Anthracene is an ideal benchmark to test these effects. Its geometry relaxation energies upon electron excitation are of up to 0.3 eV in the neutral {sup 1}L{sub a} excited state, while they are of the order of 0.1 eV in electron addition and removal processes. Significant modifications of the ground state bond length alternation are revealed in the QMC excited state geometry optimizations. Our QMC study yields benchmark results for both geometries and energies, with values below chemical accuracy if compared to experiments, once zero point energy effects are taken into account.
Besemer, A; Bednarz, B; Titz, B; Grudzinski, J; Weichert, J; Hall, L
2014-06-01
Purpose: Combination targeted radionuclide therapy (TRT) is appealing because it can potentially exploit different mechanisms of action from multiple radionuclides as well as the variable dose rates due to the different radionuclide half-lives. This work describes the development of a multi-objective optimization algorithm to calculate the optimal ratio of radionuclide injection activities for delivery of combination TRT. Methods: The ‘diapeutic’ (diagnostic and therapeutic) agent, CLR1404, was used as a proof-of-principle compound in this work. Isosteric iodine substitution in CLR1404 creates a molecular imaging agent when labeled with I-124 or a targeted radiotherapeutic agent when labeled with I-125 or I-131. PET/CT images of high grade glioma patients were acquired at 4.5, 24, and 48 hours post injection of 124I-CLR1404. The therapeutic 131I-CLR1404 and 125I-CLR1404 absorbed dose (AD) and biological effective dose (BED) were calculated for each patient using a patient-specific Monte Carlo dosimetry platform. The optimal ratio of injection activities for each radionuclide was calculated with a multi-objective optimization algorithm using the weighted sum method. Objective functions such as the tumor dose heterogeneity and the ratio of the normal tissue to tumor doses were minimized, and the relative importance weights of the optimization functions were varied. Results: For each optimization function, the program outputs a Pareto surface map representing all possible combinations of radionuclide injection activities so that values that minimize the objective function can be visualized. A Pareto surface map of the weighted sum given a set of user-specified importance weights is also displayed. Additionally, the ratio of optimal injection activities as a function of all possible importance weights is generated so that the user can select the optimal ratio based on the desired weights.
Conclusion: Multi-objective optimization of radionuclide injection activities can provide an invaluable tool for maximizing the dosimetric benefits in multi-radionuclide combination TRT. BT, JG, and JW are affiliated with Cellectar Biosciences which owns the licensing rights to CLR1404 and related compounds.
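The weighted sum method referred to above scalarizes several normalized objectives into one score and minimizes it over the candidate injection-activity ratios. A hypothetical sketch with two toy quadratic objectives follows; none of the functions or values are the paper's dosimetric quantities:

```python
import numpy as np

def weighted_sum_optimum(a_grid, objectives, weights):
    """Weighted-sum scalarization over a grid of candidate activity ratios:
    each objective is min-max normalized to [0, 1] and the candidate that
    minimizes the weighted sum is returned. Purely illustrative."""
    scores = np.zeros_like(a_grid)
    for w, f in zip(weights, objectives):
        vals = f(a_grid)
        vals = (vals - vals.min()) / (vals.max() - vals.min())  # normalize
        scores += w * vals
    i = int(np.argmin(scores))
    return a_grid[i], scores[i]

# Two toy objectives over the fraction x of activity given to radionuclide A:
# a "heterogeneity" term favoring x near 0.8 and a "normal-tissue" term
# favoring x near 0.2. The optimum moves with the importance weights.
x = np.linspace(0.0, 1.0, 1001)
f_het = lambda a: (a - 0.8) ** 2
f_nt = lambda a: (a - 0.2) ** 2
x_equal, _ = weighted_sum_optimum(x, [f_het, f_nt], [0.5, 0.5])  # compromise
x_het, _ = weighted_sum_optimum(x, [f_het, f_nt], [1.0, 0.0])    # one objective
```

Evaluating `scores` over the full weight range, rather than a single weight pair, yields the Pareto-style maps the abstract describes.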
Forbang, R Teboh
2014-06-01
Purpose: MultiPlan, the treatment planning system for the CyberKnife robotic radiosurgery system, offers two approaches to dose computation: Ray-Tracing (RT), the default technique, and Monte Carlo (MC), an option. RT is deterministic, but it accounts for primary heterogeneity only. MC, on the other hand, has an uncertainty associated with the calculation results; the advantage is that it also accounts for heterogeneity effects on the scattered dose. Not all sites will benefit from MC. The goal of this work was to focus on central nervous system (CNS) tumors and dosimetrically compare treatment plans computed with RT versus MC. Methods: Treatment plans were computed using both RT and MC for sites covering (a) the brain, (b) C-spine, (c) upper T-spine, (d) lower T-spine, (e) L-spine, and (f) sacrum. RT was first used to compute clinically valid treatment plans. Then the same treatment parameters (monitor units, beam weights, etc.) were used in the MC algorithm to compute the dose distribution. The plans were then compared for tumor coverage to illustrate the difference, if any. All MC calculations were performed at 1% uncertainty. Results: Using the RT technique, the tumor coverage for the brain, C-spine (C3–C7), upper T-spine (T4–T6), lower T-spine (T10), L-spine (L2), and sacrum was 96.8%, 93.1%, 97.2%, 87.3%, 91.1%, and 95.3%, respectively. The corresponding tumor coverage based on the MC approach was 98.2%, 95.3%, 87.55%, 88.2%, 92.5%, and 95.3%. It should be noted that the acceptable planning target coverage for our clinical practice is >95%. The coverage can be compromised for spine tumors to spare normal tissues such as the spinal cord. Conclusion: For treatment planning involving the CNS, RT and MC appear to be similar for most sites except for the T-spine area, where most of the beams traverse lung tissue. In this case, MC is highly recommended.
Stochastic Parallel PARticle Kinetic Simulator
Energy Science and Technology Software Center (OSTI)
2008-07-01
SPPARKS is a kinetic Monte Carlo simulator which implements kinetic and Metropolis Monte Carlo solvers in a general way so that they can be hooked to applications of various kinds. Specific applications are implemented in SPPARKS as physical models which generate events (e.g., a diffusive hop or chemical reaction) and execute them one-by-one. Applications can run in parallel so long as the simulation domain can be partitioned spatially so that multiple events can be invoked simultaneously. SPPARKS is used to model various kinds of mesoscale materials science scenarios such as grain growth, surface deposition and growth, and reaction kinetics. It can also be used to develop new Monte Carlo models that hook into the existing solver and parallel infrastructure provided by the code.
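The event-by-event execution described above is the classic rejection-free kinetic Monte Carlo loop: select an event with probability proportional to its rate, then advance time by an exponentially distributed increment. A self-contained sketch follows; the two-event catalog is a stand-in for a real application's event list, not SPPARKS code:

```python
import math
import random

def kmc_run(rates, n_events, seed=0):
    """Minimal rejection-free kinetic Monte Carlo loop: repeatedly pick one
    event with probability proportional to its rate, then advance the clock
    by an exponential waiting time with mean 1/(total rate).
    Returns (per-event counts, elapsed simulated time)."""
    rng = random.Random(seed)
    total = sum(rates)
    counts = [0] * len(rates)
    t = 0.0
    for _ in range(n_events):
        r = rng.random() * total               # select event by cumulative rate
        acc = 0.0
        for i, rate in enumerate(rates):
            acc += rate
            if r < acc:
                counts[i] += 1
                break
        t += -math.log(1.0 - rng.random()) / total  # exponential time step
    return counts, t

# Two events: a fast diffusive hop (rate 9) and a slow reaction (rate 1);
# the hop should fire roughly nine times as often as the reaction.
counts, t_final = kmc_run([9.0, 1.0], n_events=10000)
```

In a production code the inner linear scan is replaced by a tree or binned search so that event selection stays fast for large event catalogs.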
Praveen, E.; Satyanarayana, S. V. M.
2014-04-24
The traditional definition of a phase transition involves an infinitely large system in the thermodynamic limit. Finite systems such as biological proteins exhibit cooperative behavior similar to phase transitions. We employ the recently discovered analysis of inflection points of the microcanonical entropy to estimate the transition temperature of the phase transition in the q-state Potts model on a finite two-dimensional square lattice for q=3 (second order) and q=8 (first order). The difference of the energy density of states (DOS), Δln g(E) = ln g(E+ΔE) − ln g(E), exhibits a point of inflection at a value corresponding to the inverse transition temperature. This feature is common to systems exhibiting both first and second order transitions. While the difference of the DOS registers a monotonic variation around the point of inflection for systems exhibiting a second order transition, it has an S-shape with a minimum and maximum around the point of inflection in the case of a first order transition.
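The inflection-point criterion can be illustrated on a synthetic density of states: the forward difference of ln g(E) estimates the inverse microcanonical temperature, and its inflection point marks the transition. The toy entropy below has a known transition at β = 1 and is purely illustrative; it is not the Potts-model DOS from the paper:

```python
import numpy as np

def transition_beta(ln_g, dE):
    """Microcanonical inflection-point analysis: the forward difference
    b(E) = [ln g(E+dE) - ln g(E)] / dE estimates the inverse temperature,
    and the point where |db/dE| is largest locates the inflection of b(E).
    Returns (index into b, beta at that point)."""
    b = np.diff(ln_g) / dE          # inverse-temperature estimate
    db = np.diff(b) / dE            # slope of b(E)
    i = int(np.argmax(np.abs(db)))  # steepest slope <-> inflection of b
    return i, b[i]

# Toy entropy s(E) = E - 2 ln cosh(E/2): b(E) = 1 - tanh(E/2), which has
# its inflection at E = 0 where b = 1, i.e. a transition at beta = 1.
E = np.linspace(-10.0, 10.0, 401)
ln_g = E - 2.0 * np.log(np.cosh(E / 2.0))
i, beta_t = transition_beta(ln_g, dE=E[1] - E[0])
```

Applied to a sampled Potts-model DOS, the same finite differences distinguish the monotonic (second order) from the S-shaped (first order) behavior of Δln g(E) around the inflection point.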
Burke, Timothy Patrick; Kiedrowski, Brian; Martin, William R.; Brown, Forrest B.
2015-08-27
Kernel density estimators (KDEs) show potential for reducing variance in global solutions (flux, reaction rates) when compared to histogram solutions.
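The idea behind a KDE tally is that each scored event contributes a smooth kernel rather than a histogram bin count, which can lower the variance of global estimates. A minimal Gaussian-kernel sketch follows; the sample distribution and bandwidth are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kde_estimate(samples, grid, bandwidth):
    """Gaussian kernel density estimate of a tally from Monte Carlo samples:
    each sample spreads its score over nearby grid points via a normalized
    Gaussian kernel, and the average over samples is the density estimate."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z ** 2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return kernels.mean(axis=1)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=2000)     # stand-in for scored events
grid = np.linspace(-5.0, 5.0, 201)
density = kde_estimate(samples, grid, bandwidth=0.3)
mass = density.sum() * (grid[1] - grid[0])    # should be close to 1
```

Because every sample informs every grid point, the resulting estimate is smooth where a histogram with the same samples would show bin-to-bin noise.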
Fubiani, G.; Boeuf, J. P. [Université de Toulouse, UPS, INPT, LAPLACE (Laboratoire Plasma et Conversion d'Energie), 118 route de Narbonne, F-31062 Toulouse cedex 9 (France); CNRS, LAPLACE, F-31062 Toulouse (France)]
2013-11-15
Results from a 3D self-consistent Particle-In-Cell Monte Carlo Collisions (PIC MCC) model of a high power fusion-type negative ion source are presented for the first time. The model is used to calculate the plasma characteristics of the ITER prototype BATMAN ion source developed in Garching. Special emphasis is put on the production of negative ions on the plasma grid surface. The question of the relative roles of the impact of neutral hydrogen atoms and positive ions on the cesiated grid surface has attracted much attention recently, and the 3D PIC MCC model is used to address this question. The results show that the production of negative ions by positive ion impact on the plasma grid is small (less than 10%) with respect to the production by atomic hydrogen or deuterium bombardment.
Fission Particle Emission Multiplicity Simulation
Energy Science and Technology Software Center (OSTI)
2006-09-27
Simulates discrete neutron and gamma-ray emission from the fission of heavy nuclei, either spontaneous or neutron induced. This is a function library that encapsulates the fission physics and is intended to be called by a Monte Carlo transport code.
Eersel, H. van, E-mail: h.v.eersel@tue.nl; Coehoorn, R. [Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Philips Research Laboratories, High Tech Campus 4, 5656 AE Eindhoven (Netherlands); Bobbert, P. A.; Janssen, R. A. J. [Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands)
2014-10-06
We present an advanced molecular-scale organic light-emitting diode (OLED) model, integrating both electronic and excitonic processes. Using this model, we can reproduce the measured efficiency roll-off for prototypical phosphorescent OLED stacks based on the green dye tris[2-phenylpyridine]iridium (Ir(ppy){sub 3}) and the red dye octaethylporphine platinum (PtOEP) and study the cause of the roll-off as a function of the current density. Both the voltage versus current density characteristics and roll-off agree well with experimental data. Surprisingly, the results of the simulations lead us to conclude that, contrary to what is often assumed, not triplet-triplet annihilation but triplet-polaron quenching is the dominant mechanism causing the roll-off under realistic operating conditions. Simulations for devices with an optimized recombination profile, achieved by carefully tuning the dye trap depth, show that it will be possible to fabricate OLEDs with a drastically reduced roll-off. It is envisaged that J{sub 90}, the current density at which the efficiency is reduced to 90%, can be increased by almost one order of magnitude as compared to the experimental state-of-the-art.
Chen Huixiao; Lohr, Frank; Fritz, Peter; Wenz, Frederik; Dobler, Barbara; Lorenz, Friedlieb; Muehlnickel, Werner
2010-11-01
Purpose: Dose calculation based on pencil beam (PB) algorithms has shortcomings in predicting dose in tissue heterogeneities. The aim of this study was to compare dose distributions of clinically applied non-intensity-modulated radiotherapy 15-MV plans for stereotactic body radiotherapy between voxel Monte Carlo (XVMC) calculation and PB calculation for lung lesions. Methods and Materials: To validate XVMC, one treatment plan was verified in an inhomogeneous thorax phantom with EDR2 film (Eastman Kodak, Rochester, NY). Both measured and calculated (PB and XVMC) dose distributions were compared regarding profiles and isodoses. Then, 35 lung plans originally created for clinical treatment by PB calculation with the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) were recalculated by XVMC (investigational implementation in PrecisePLAN [Elekta AB, Stockholm, Sweden]). Clinically relevant dose-volume parameters for target and lung tissue were compared and analyzed statistically. Results: The XVMC calculation agreed well with film measurements (<1% difference in lateral profile), whereas the deviation between PB calculation and film measurements was up to +15%. On analysis of 35 clinical cases, the mean dose, minimal dose, and coverage dose value for 95% volume of gross tumor volume were 1.14 ± 1.72 Gy, 1.68 ± 1.47 Gy, and 1.24 ± 1.04 Gy lower by XVMC compared with PB, respectively (prescription dose, 30 Gy). The volume covered by the 9 Gy isodose of lung was 2.73% ± 3.12% higher when calculated by XVMC compared with PB. The largest differences were observed for small lesions circumferentially encompassed by lung tissue. Conclusions: Pencil beam dose calculation consistently overestimates dose to the tumor and underestimates lung volumes exposed to a given dose for 15-MV photons. The degree of difference between XVMC and PB depends on tumor size and location. Therefore, XVMC calculation is helpful to further optimize treatment planning.
Quantum Process Matrix Computation by Monte Carlo
Energy Science and Technology Software Center (OSTI)
2012-09-11
The software package, processMC, is a Python script that allows for the rapid modeling of small, noisy quantum systems and the computation of the averaged quantum evolution map.
Statistical assessment of Monte Carlo distributional tallies
Kiedrowski, Brian C; Solomon, Clell J
2010-12-09
Four tests are developed to assess the statistical reliability of distributional or mesh tallies. To this end, the relative variance density function is developed and its moments are studied using simplified, non-transport models. The statistical tests are performed upon the results of MCNP calculations of three different transport test problems and appear to show that the tests are appropriate indicators of global statistical quality.
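The per-bin statistic underlying such assessments can be sketched as below. This is a generic illustration of mesh-tally statistics (sample mean, standard error, relative error, and a simple global convergence indicator), not the paper's four specific tests; the function names and the 10% threshold are assumptions.

```python
import math

def tally_relative_errors(samples_per_bin):
    """Per-bin relative error R = sem / mean for a mesh tally, where
    sem is the standard error of the sample mean in that bin."""
    errors = []
    for samples in samples_per_bin:
        n = len(samples)
        mean = sum(samples) / n
        var = sum((x - mean) ** 2 for x in samples) / (n - 1)
        sem = math.sqrt(var / n)
        errors.append(sem / mean if mean > 0 else float("inf"))
    return errors

def fraction_converged(errors, threshold=0.10):
    """One simple global quality indicator: the fraction of mesh bins
    whose relative error falls below the chosen threshold."""
    ok = sum(1 for r in errors if r < threshold)
    return ok / len(errors)
```

A distributional test would then look at how these relative errors are spread over the mesh, rather than at any single bin in isolation.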
Cao, M; Tenn, S; Lee, C; Yang, Y; Lamb, J; Agazaryan, N; Lee, P; Low, D
2014-06-01
Purpose: To evaluate performance of three commercially available treatment planning systems for stereotactic body radiation therapy (SBRT) of lung cancer using the following algorithms: the Boltzmann transport equation based algorithm Acuros XB (AXB), the convolution based Anisotropic Analytic Algorithm (AAA), and the Monte Carlo based algorithm XVMC. Methods: A total of 10 patients with early stage non-small cell peripheral lung cancer were included. The initial clinical plans were generated using the XVMC based treatment planning system with a prescription of 54 Gy in 3 fractions following the RTOG 0613 protocol. The plans were recalculated with the same beam parameters and monitor units using the AAA and AXB algorithms. A calculation grid size of 2 mm was used for all algorithms. The dose distribution, conformity, and dosimetric parameters for the targets and organs at risk (OAR) were compared between the algorithms. Results: The average PTV volume was 19.6 mL (range 4.2–47.2 mL). The volume of PTV covered by the prescribed dose (PTV-V100) was 93.97±2.00%, 95.07±2.07% and 95.10±2.97% for the XVMC, AXB and AAA algorithms, respectively. There was no significant difference in high dose conformity index; however, XVMC predicted slightly higher values (p=0.04) for the ratio of the 50% prescription isodose volume to the PTV (R50%). The percentage volume of total lung receiving dose >20 Gy (LungV20Gy) was 4.03±2.26%, 3.86±2.22% and 3.85±2.21% for the XVMC, AXB and AAA algorithms, respectively. Examination of dose volume histograms (DVH) revealed small differences in targets and OARs for most patients. However, the AAA algorithm was found to predict considerably higher PTV coverage compared with the AXB and XVMC algorithms in two cases. The dose difference was found to be primarily located at the periphery of the target. Conclusion: For clinical SBRT lung treatment planning, the dosimetric differences between the three commercially available algorithms are generally small except at the target periphery.
XVMC and AXB algorithms are recommended for accurate dose estimation at tissue boundaries.
Mesoscale Simulations of Coarsening in GB Networks
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Mukul Kumar is the Principal Investigator for Mesoscale Simulations of Coarsening in GB Networks LLNL BES Programs Highlight Mesoscale Simulations of Coarsening in GB Networks The Phase Field Model evolves a grain boundary network with realistic network correlations, as seeded by a group-theory-based Monte Carlo model M. Tang, B. W. Reed, and M. Kumar, J. Appl. Phys. 112, 043505 (2012) V. Bulatov, B. W. Reed, and M. Kumar; "Grain boundary energy function for FCC metals," Physical
Park, Su-Jung (Bonn U.)
2004-02-01
The measurement of the tt̄ production cross section at √s = 1.96 TeV using the final state with an electron and jets is studied with Monte Carlo event samples. All methods used in the real data analysis to measure efficiencies and to estimate the background contributions are examined. The studies focus on measuring the electron reconstruction efficiencies as well as on improving the electron identification and background suppression. With a generated input cross section of 7 pb the following result is obtained: σ(tt̄) = (7 ± 1.63 (stat) +0.94/−1.14 (syst)) pb.
Simulating variable source problems via post processing of individual particle tallies
Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.; Vujic, J.
2000-10-20
Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
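The core re-weighting idea described above, multiplying each recorded tally contribution by the ratio of the new source probability to the probability under which the particle was actually sampled, can be sketched as below. The event format, the binning, and the two spectra are illustrative assumptions, not the paper's data layout.

```python
# Each recorded event: the source variable that was sampled (here an
# energy-bin index) and that particle's contribution to the tally of
# interest. The numbers below are purely illustrative.
events = [(0, 1.2), (1, 0.4), (0, 0.9), (2, 0.1), (1, 0.6)]

sampled_pdf = [0.5, 0.3, 0.2]   # spectrum the particles were drawn from
new_pdf     = [0.2, 0.3, 0.5]   # candidate spectrum to evaluate

def reweighted_tally(events, sampled_pdf, new_pdf):
    """Estimate the tally under new_pdf without re-running transport:
    each recorded event is re-weighted by the likelihood ratio
    new_pdf / sampled_pdf of its source bin."""
    total = 0.0
    for bin_index, contribution in events:
        total += contribution * new_pdf[bin_index] / sampled_pdf[bin_index]
    return total / len(events)
```

Because the transport step is never repeated, scanning many candidate source spectra reduces to many cheap passes over the same event file, which is exactly the speedup the abstract describes.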
Loading relativistic Maxwell distributions in particle simulations
Zenitani, Seiji
2015-04-15
Numerical algorithms to load relativistic Maxwell distributions in particle-in-cell (PIC) and Monte Carlo simulations are presented. For the stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain a relativistic shifted Maxwellian, two rejection methods are proposed in a physically transparent manner. Their acceptance efficiencies are ≈50% for generic cases and 100% for symmetric distributions. They can be combined with arbitrary base algorithms.
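The Sobol algorithm reviewed above can be sketched as follows for the stationary relativistic Maxwellian (Maxwell-Jüttner distribution), with θ the temperature in units of mc². The function name is ours; the acceptance test follows the standard formulation of the method, which is efficient for relativistic temperatures θ ≳ 1.

```python
import math
import random

def sobol_juttner_momentum(theta, rng=random):
    """Draw the momentum magnitude u = |p|/(mc) from a stationary
    Maxwell-Juttner distribution with temperature theta = kT/(mc^2),
    using the Sobol rejection algorithm: generate four uniforms,
    form trial variables u and eta, and accept when eta^2 - u^2 > 1
    (i.e. eta exceeds the Lorentz factor sqrt(1 + u^2))."""
    while True:
        x1, x2, x3, x4 = (rng.random() for _ in range(4))
        u = -theta * math.log(x1 * x2 * x3)          # trial momentum
        eta = -theta * math.log(x1 * x2 * x3 * x4)   # trial energy variable
        if eta * eta - u * u > 1.0:
            return u
```

The accepted u would then be assigned an isotropic direction; boosting to a shifted Maxwellian requires one of the rejection steps the paper proposes on top of this base sampler.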
A Hybrid Variance Reduction Method Based on Gaussian Process for Core Simulation
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Hybrid Variance Reduction Method Based on Gaussian Process for Core Simulation Zeyun Wu, Qiong Zhang and Hany S. Abdel-Khalik Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 {zwu3, qzhang7, abdelkhalik}@ncsu.edu INTRODUCTION Variance reduction techniques are usually employed to accelerate the convergence of Monte Carlo (MC) simulation. Hybrid deterministic-MC methods [1, 2, 3] have been recently developed to achieve the goal of global variance reduction.
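The payoff of seeding MC with a cheap approximate solution can be illustrated with a toy importance-sampling example. Everything here (the integrand standing in for a tally score, the biasing density standing in for a deterministic importance estimate) is an assumption for illustration; in the limiting case where the biasing density matches the score exactly, the estimator's variance vanishes.

```python
import math
import random

# Toy global-tally problem: score f(x) = exp(-5x) for x uniform on [0, 1].
# Analog MC samples x uniformly; the hybrid idea is to bias sampling toward
# the important region with an approximate solution g(x) ~ exp(-5x) and
# correct each score with a statistical weight.

def analog_estimate(n, rng):
    """Plain analog Monte Carlo estimate of the mean score."""
    return sum(math.exp(-5 * rng.random()) for _ in range(n)) / n

def biased_estimate(n, rng):
    """Importance-sampled estimate: draw x from g(x) = exp(-5x)/norm on
    [0, 1] by inverse transform, weight by (uniform pdf) / g(x)."""
    a = 5.0
    norm = (1 - math.exp(-a)) / a          # normalization of exp(-a x) on [0, 1]
    total = 0.0
    for _ in range(n):
        x = -math.log(1 - rng.random() * (1 - math.exp(-a))) / a
        weight = norm / math.exp(-a * x)   # = 1 / g(x)
        total += math.exp(-5 * x) * weight
    return total / n
```

Here every biased sample contributes exactly norm = (1 − e⁻⁵)/5, so the biased estimator is deterministic: a perfect importance function yields zero variance, which is the ideal the hybrid deterministic-MC methods approximate.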
Quantum simulations of strongly coupled quark-gluon plasma
Filinov, V. S.; Ivanov, Yu. B.; Bonitz, M.; Levashov, P. R.; Fortov, V. E.
2012-06-15
A strongly coupled quark-gluon plasma (QGP) of heavy constituent quasi-particles is studied by a path-integral Monte Carlo method. This approach is a quantum generalization of the classical molecular dynamics by Gelman, Shuryak, and Zahed. It is shown that this method is able to reproduce the QCD lattice equation of state. The results indicate that the QGP reveals liquid-like rather than gas-like properties. Quantum effects turned out to be of prime importance in these simulations.
CASL-U-2015-0155-000 VERA Core Simulator
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
5-000 VERA Core Simulator Methodology for PWR Cycle Depletion Brendan Kochunas, Daniel Jabaay, Shane Stimpson, Aaron Graham, and Thomas Downar University of Michigan Benjamin Collins, Kang Seog Kim, William Wieselquist, Kevin Clarno, and Jess Gehin Oak Ridge National Laboratory Scott Palmtag Core Physics, Inc. March 29, 2015 CASL-U-2015-0155-000 ANS MC2015 - Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo
Huš, Matej; Urbic, Tomaz; Munaò, Gianmarco
2014-10-28
Thermodynamic and structural properties of a coarse-grained model of methanol are examined by Monte Carlo simulations and reference interaction site model (RISM) integral equation theory. Methanol particles are described as dimers formed from an apolar Lennard-Jones sphere, mimicking the methyl group, and a sphere with a core-softened potential as the hydroxyl group. Different closure approximations of the RISM theory are compared and discussed. The liquid structure of methanol is investigated by calculating site-site radial distribution functions and static structure factors for a wide range of temperatures and densities. Results obtained show a good agreement between RISM and Monte Carlo simulations. The phase behavior of methanol is investigated by employing different thermodynamic routes for the calculation of the RISM free energy, drawing gas-liquid coexistence curves that match the simulation data. Preliminary indications for a putative second critical point between two different liquid phases of methanol are also discussed.
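The Metropolis machinery behind such fluid simulations can be sketched for a plain Lennard-Jones system (a simplified stand-in for the paper's two-site methanol model; all names and parameters are illustrative). Each move displaces one particle and accepts with probability min(1, exp(−βΔE)).

```python
import math
import random

def lj_energy(positions, box):
    """Total Lennard-Jones energy (epsilon = sigma = 1) with minimum-image
    periodic boundaries. Recomputed in full for clarity; a production code
    would only update the moved particle's pair terms."""
    e = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            d = [positions[i][k] - positions[j][k] for k in range(3)]
            d = [dk - box * round(dk / box) for dk in d]
            r2 = sum(dk * dk for dk in d)
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

def metropolis_step(positions, box, beta, delta, rng=random):
    """Attempt one random single-particle displacement of size <= delta
    and accept with the Metropolis criterion; returns True on acceptance."""
    i = rng.randrange(len(positions))
    old = positions[i]
    e_old = lj_energy(positions, box)
    positions[i] = [old[k] + delta * (2 * rng.random() - 1) for k in range(3)]
    e_new = lj_energy(positions, box)
    if e_new <= e_old or rng.random() < math.exp(-beta * (e_new - e_old)):
        return True
    positions[i] = old  # reject: restore the old coordinates
    return False
```

Radial distribution functions and structure factors of the kind compared against RISM theory are then accumulated as averages over many such equilibrated configurations.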
Consortium for Advanced Simulation of Light Water Reactors (CASL...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
and Monte Carlo transport applications. Exnihilo is based on a package architecture model such that each package provides well-defined capabilities. Exnihilo currently...
Cluster computing software for GATE simulations
Beenhouwer, Jan de; Staelens, Steven; Kruecker, Dirk; Ferrer, Ludovic; D'Asseler, Yves; Lemahieu, Ignace; Rannou, Fernando R.
2007-06-15
Geometry and tracking (GEANT4) is a Monte Carlo package designed for high energy physics experiments. It is used as the basis layer for Monte Carlo simulations of nuclear medicine acquisition systems in GEANT4 Application for Tomographic Emission (GATE). GATE allows the user to realistically model experiments using accurate physics models and time synchronization for detector movement through a script language contained in a macro file. The downside of this high accuracy is long computation time. This paper describes a platform independent computing approach for running GATE simulations on a cluster of computers in order to reduce the overall simulation time. Our software automatically creates fully resolved, nonparametrized macros accompanied with an on-the-fly generated cluster specific submit file used to launch the simulations. The scalability of GATE simulations on a cluster is investigated for two imaging modalities, positron emission tomography (PET) and single photon emission computed tomography (SPECT). Due to a higher sensitivity, PET simulations are characterized by relatively high data output rates that create rather large output files. SPECT simulations, on the other hand, have lower data output rates but require a long collimator setup time. Both of these characteristics hamper scalability as a function of the number of CPUs. The scalability of PET simulations is improved here by the development of a fast output merger. The scalability of SPECT simulations is improved by greatly reducing the collimator setup time. Accordingly, these two new developments result in higher scalability for both PET and SPECT simulations and reduce the computation time to more practical values.
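The two cluster-side bookkeeping tasks described above, splitting the requested statistics across CPUs and merging the per-CPU outputs afterwards, can be sketched as follows. The function names are ours, not GATE's, and the "output" is reduced to a simple count histogram for illustration.

```python
def split_events(total_events, n_cpus):
    """Partition a requested number of events across n_cpus jobs so that
    the per-job counts sum exactly to the total (each job would also get
    its own distinct random-number seed)."""
    base, extra = divmod(total_events, n_cpus)
    return [base + (1 if i < extra else 0) for i in range(n_cpus)]

def merge_outputs(histograms):
    """Fast output merger: combine per-CPU tally histograms (equal-length
    lists of counts) by bin-wise summation."""
    merged = [0] * len(histograms[0])
    for hist in histograms:
        for i, count in enumerate(hist):
            merged[i] += count
    return merged
```

Real PET/SPECT output files carry per-event records rather than bare histograms, which is why an efficient merger matters once the per-CPU files grow large.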
Sergeeva, Ekaterina A; Katichev, A R; Kirillin, M Yu
2011-01-24
Using the radiative transfer theory and Monte Carlo simulations, we analyse the effect of scattering in a medium and of the size of the detector pinhole on the formation of the fluorescent signal in standard two-photon fluorescence microscopy (TPFM) systems. The theoretical analysis is based on a small-angle diffusion approximation of the radiative transfer equation, adapted to calculate the propagation of focused infrared radiation in media similar to biological tissues in their optical properties. The accuracy of the model is evaluated by comparing the calculated excitation intensity in a highly scattering medium with the results of Monte Carlo simulations. To simulate a tightly focused Gaussian beam by the Monte Carlo method, the so-called 'ray-optics' approach, which correctly takes into account the finite size and shape of the beam waist, is applied. It is shown that in combined confocal and two-photon scanning microscopy systems not equipped with an external 'nondescanned' detector, the scattering significantly affects both the nonlinear excitation efficiency in the medium and the fluorescence collection efficiency of the system. In such systems, the in-depth decay rate of the useful TPFM signal is 1.5-2 times higher than in systems equipped with a 'nondescanned' detector. (application of lasers and laser-optical methods in life sciences)
Burlon, Alejandro A.; Valda, Alejandro A.; Girola, Santiago; Minsky, Daniel M.; Kreiner, Andres J.
2010-08-04
In the framework of the construction of a Tandem Electrostatic Quadrupole Accelerator facility devoted to Accelerator-Based Boron Neutron Capture Therapy, a Beam Shaping Assembly has been characterized by means of Monte Carlo simulations and measurements. The neutrons were generated via the ⁷Li(p,n)⁷Be reaction by irradiating a thick LiF target with a 2.3 MeV proton beam delivered by the TANDAR accelerator at CNEA. The emerging neutron flux was measured by means of activation foils, while the beam quality and directionality were evaluated by means of Monte Carlo simulations. The parameters comply with those suggested by the IAEA. Finally, an improvement adding a beam collimator has been evaluated.
Yan, Xin -Hu; Ye, Yun -Xiu; Chen, Jian -Ping; Lu, Hai -Jiang; Zhu, Peng -Jia; Jiang, Feng -Jian
2015-07-17
The radiation and ionization energy losses are presented for a single-arm Monte Carlo simulation of the GDH sum rule experiment in Hall A at Jefferson Lab. Radiation and ionization energy loss are discussed for the $^{12}C$ elastic scattering simulation. The relative momentum ratio $\frac{\Delta p}{p}$ and the $^{12}C$ elastic cross section are compared with and without radiation energy loss, and a reasonable shape is obtained by the simulation. The total energy loss distribution is obtained, showing a Landau shape for $^{12}C$ elastic scattering. This simulation work will give good support for the radiation correction analysis of the GDH sum rule experiment.
Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator...
Office of Scientific and Technical Information (OSTI)
Presently the code is maintained on Linux. The validity of the physical models implemented ... in particle accelerators, radiation protection and dosimetry, including the specific ...
Fast Monte Carlo for radiation therapy: the PEREGRINE Project...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Uncertainty Quantification with Monte Carlo Hauser-Feshbach Calculatio...
Office of Scientific and Technical Information (OSTI)
LANL Country of Publication: United States Language: English Subject: Atomic and Nuclear Physics; Nuclear Fuel Cycle & Fuel Materials(11); Nuclear Physics & Radiation Physics(73)...
Monte-Carlo particle dynamics in a variable specific impulse...
Office of Scientific and Technical Information (OSTI)
Authors: Ilin, A.V.; Diaz, F.R.C.; Squire, J.P.; Carter, M.D. (Lockheed Martin Space Mission Systems and Services, Houston, TX (United ...
Monte Carlo Hauser-Feshbach Calculations of Prompt Fission Neutrons...
Office of Scientific and Technical Information (OSTI)
Org: DOELANL Country of Publication: United States Language: English Subject: Atomic and Nuclear Physics; Nuclear Fuel Cycle & Fuel Materials(11); Nuclear Physics & Radiation...
Monte Carlo Solution for Uncertainty Propagation in Particle Transport with
Office of Scientific and Technical Information (OSTI)
a Stochastic Galerkin Method. (Conference) | SciTech Connect Authors: Franke, Brian C. ; Prinja, Anil K. Publication Date: 2013-01-01 OSTI Identifier: 1063492 Report Number(s): SAND2013-0204C DOE Contract Number: AC04-94AL85000 Resource Type: Conference Resource Relation: Conference: Proposed for presentation at the International Conference on Math. and Comp. Methods Applied to Nucl. Sci. and Engg. (M&C 2013) held May 5-9, 2013 in Sun Valley, ID. Research Org: Sandia National
Quantum Monte Carlo Calculations of Light Nuclei Using Chiral...
Office of Scientific and Technical Information (OSTI)
GrantContract Number: AC02-05CH11231 Type: Publisher's Accepted Manuscript Journal Name: Physical Review Letters Additional Journal Information: Journal Volume: 113; Journal ...
Tests of Monte Carlo Independent Column Approximation With a...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Järvenoja, Heikki Järvinen, Räisänen, Finnish Meteorological Institute. Figure 1. Root-mean-square sampling errors in local instantaneous total (LW+SW) net flux at the surface...
A Monte Carlo Approach To Generator Portfolio Planning And Carbon...
solar thermal, and rooftop photovoltaics, as well as hydroelectric, geothermal, and natural gas plants. The portfolios produced by the model take advantage of the aggregation of...
The Monte Carlo Independent Column Approximation Model Intercomparison...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Service of Canada Räisänen, Petri Finnish Meteorological Institute Pincus, Robert NOAA-CIRES Climate Diagnostics Center Morcrette, Jean-Jacques European Centre for...
In the OSTI Collections: Monte Carlo Methods | OSTI, US Dept...
Office of Scientific and Technical Information (OSTI)
... and acetic acid if the reaction is catalyzed on the surface of a gold-palladium alloy. ... calculation was used to work out how the gold and palladium atoms are likely to be ...
Diagnostic Mass-Consistent Wind Field Monte Carlo Dispersion Model
Energy Science and Technology Software Center (OSTI)
1991-01-01
MATHEW generates a diagnostic, mass-consistent, three-dimensional wind field based on point measurements of wind speed and direction. It accounts for changes in topography within its calculational domain. The modeled wind field is used by the Lagrangian ADPIC dispersion model. This code is designed to predict the atmospheric boundary layer transport and diffusion of neutrally buoyant, non-reactive species, as well as first-order chemical reactions and radioactive decay (including daughter products).
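A Lagrangian dispersion step of the kind ADPIC performs, advection by the local wind, a Gaussian diffusive kick, and first-order radioactive decay, can be sketched in one dimension under assumed constant-wind conditions. All names and parameter values are illustrative; the real code uses the full 3-D mass-consistent wind field from MATHEW.

```python
import math
import random

def disperse(n_particles, n_steps, dt, u_wind, k_diff, decay_const, rng=random):
    """1-D Lagrangian random-walk sketch: each step advects a particle by
    u_wind*dt, adds a diffusive kick with std sqrt(2*K*dt), and removes it
    by first-order decay with survival probability exp(-lambda*dt).
    Returns the final positions of the surviving particles."""
    sigma = math.sqrt(2.0 * k_diff * dt)
    survive = math.exp(-decay_const * dt)
    positions = []
    for _ in range(n_particles):
        x, alive = 0.0, True
        for _ in range(n_steps):
            if rng.random() > survive:
                alive = False  # particle decayed this step
                break
            x += u_wind * dt + rng.gauss(0.0, sigma)
        if alive:
            positions.append(x)
    return positions
```

Concentration fields are then recovered by binning the surviving particle positions; daughter products would be handled by converting decayed particles to a new species rather than discarding them.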
Effects of self-seeding and crystal post-selection on the quality of Monte
Office of Scientific and Technical Information (OSTI)
Carlo-integrated SFX data (Journal Article) | SciTech Connect Journal Article: Effects of self-seeding and crystal post-selection on the quality of Monte Carlo-integrated SFX data Citation Details In-Document Search Title: Effects of self-seeding and crystal post-selection on the quality of Monte Carlo-integrated SFX data Abstract is not provided Authors: Barends, Thomas ; White, Thomas A. ; Barty, Anton ; Foucar, Lutz ; Messerschmidt, Marc ; Alonso-Mori, Roberto [1] ; Botha, Sabine ;
Tikare, Veena; Hernandez-Rivera, Efrain; Madison, Jonathan D.; Holm, Elizabeth Ann; Patterson, Burton R.; Homer, Eric R.
2013-09-01
Most microstructural evolution processes in materials progress with multiple mechanisms occurring simultaneously. In this work, we have concentrated on the processes that are active in nuclear materials, in particular nuclear fuels. These processes are coarsening, nucleation, differential diffusion, phase transformation, and radiation-induced defect formation and swelling, often with temperature gradients present. All of these couple and contribute to evolution that is unique to nuclear fuels and materials. Hybrid models that combine elements from Potts Monte Carlo, phase-field models, and others have been developed to address these multiple physical processes. These models are described and applied to several processes in this report. An important feature of the models developed is that they are coded as applications within SPPARKS, a Sandia-developed framework for mesoscale simulation of microstructural evolution processes by kinetic Monte Carlo methods. This makes these codes readily accessible and adaptable for future applications.
Multi-physics microstructural simulation of sintering.
Tikare, Veena
2010-06-01
Simulating the detailed evolution of microstructure at the mesoscale is increasingly being addressed by a number of methods. Discrete element modeling and Potts kinetic Monte Carlo have each achieved success in capturing different aspects of sintering well. Discrete element modeling cannot treat the details of neck formation and other shape evolution, especially when considering particles of arbitrary shapes. Potts kMC treats the microstructural evolution very well, but cannot incorporate the complex stress states that form especially during differential sintering. A model that is capable of simulating microstructural evolution during sintering at the mesoscale and can incorporate differential stresses is being developed. This multi-physics model, which can treat both interfacial energies and the inter-particle stresses, will be introduced. It will be applied to simulate microstructural evolution while resolving individual particles and the stresses that develop between them due to local shrinkage. Results will be presented and the future development of this model will be discussed.
Nexus: a modular workflow management system for quantum simulation codes
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Krogel, Jaron T.
2015-08-24
The management of simulation workflows is a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.
Computer simulation of beam steering by crystal channeling
Biryukov, V.
1995-04-01
The Monte Carlo computer program CATCH for the simulation of planar channeling in bent crystals is presented. The program tracks a charged particle through the deformed crystal lattice with the use of the continuous-potential approximation and by taking into account the processes of both single and multiple scattering on electrons and nuclei. The output consists of the exit angular distributions, the energy loss spectra, and the spectra of any close-encounter process of interest. The program predictions for the feed-out and feed-in rates, energy loss spectra, and beam bending efficiency are compared with the recent experimental data.
Structural simulations of nanomaterials self-assembled from ionic macrocycles.
van Swol, Frank B.; Medforth, Craig John
2010-10-01
Recent research at Sandia has discovered a new class of organic binary ionic solids with tunable optical, electronic, and photochemical properties. These nanomaterials, consisting of a novel class of organic binary ionic solids, are currently being developed at Sandia for applications in batteries, supercapacitors, and solar energy technologies. They are composed of self-assembled oligomeric arrays of very large anions and large cations, but their crucial internal arrangement is thus far unknown. This report describes (a) the development of a relevant model of nonconvex particles decorated with ions interacting through short-ranged Yukawa potentials, and (b) the results of initial Monte Carlo simulations of the self-assembly of binary ionic solids.
ARM - Publications: Science Team Meeting Documents: Evaluation of the Monte
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Carlo Independent Column Approximation (McICA) implementation in the GEOS-5 Single Column Model Evaluation of the Monte Carlo Independent Column Approximation (McICA) implementation in the GEOS-5 Single Column Model Oreopoulos, Lazaros JCET/UMBC and NASA/GSFC Bacmeister, Julio GEST/UMBC and NASA/GSFC Cahalan, Robert NASA/Goddard Space Flight Ctr/913 Barker, Howard Meteorological Service of Canada The McICA method ( Barker et al., 2002; Pincus et al., 2003) has been recently implemented in
Comparison of Gas Puff Imaging Data in NSTX with the DEGAS 2 Simulation
Cao, B.; Stotler, D. P.; Zweben, S. J.; Bell, M.; Diallo, A.; Leblanc, B.
2012-10-27
Gas-Puff-Imaging (GPI) is a two-dimensional diagnostic which measures the edge Dα light emission from a neutral D2 gas puff near the outer mid-plane of NSTX. DEGAS 2 is a 3-D Monte Carlo code used to model neutral transport and atomic physics in tokamak plasmas. In this paper we compare measurements of the Dα light emission obtained by GPI on NSTX with DEGAS 2 simulations of Dα light emission for specific experiments. Both the simulated spatial distribution and absolute intensity of the Dα light emission agree well with the experimental data obtained between ELMs in H-mode.
Pedestal Fueling Simulations with a Coupled Kinetic-kinetic Plasma-neutral Transport Code
D.P. Stotler, C.S. Chang, S.H. Ku, J. Lang and G.Y. Park
2012-08-29
A Monte Carlo neutral transport routine, based on DEGAS2, has been coupled to the guiding center ion-electron-neutral neoclassical PIC code XGC0 to provide a realistic treatment of neutral atoms and molecules in the tokamak edge plasma. The DEGAS2 routine allows detailed atomic physics and plasma-material interaction processes to be incorporated into these simulations. The spatial profile of the neutral particle source used in the DEGAS2 routine is determined from the fluxes of XGC0 ions to the material surfaces. The kinetic-kinetic plasma-neutral transport capability is demonstrated with example pedestal fueling simulations.
Assessment of Molecular Modeling & Simulation
2002-01-03
This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.
Kruschwitz, Craig; Wu, M.; Rochau, G. A.
2013-06-13
We present results of Monte Carlo simulations of microchannel plate (MCP) response to x-rays in the 250 eV to 20 keV energy range as a function of both x-ray energy and impact angle. Our model builds on that of Rochau et al. (2006); however, while that model was two-dimensional and its results extended only to 5 keV, ours extends to 20 keV and has been incorporated into a three-dimensional Monte Carlo MCP model developed over the past several years (Kruschwitz et al. 2011). X-ray penetration through multiple MCP pore walls becomes increasingly important above 5 keV. The effect of x-ray penetration through multiple pores on MCP performance was studied and is presented.
Liang, Faming; Cheng, Yichen; Lin, Guang
2014-06-13
Simulated annealing has been widely used in the solution of optimization problems. As is well known, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used; however, the logarithmic schedule is so slow that the required CPU time is prohibitive. This paper proposes a new stochastic optimization algorithm, the simulated stochastic approximation annealing algorithm, which combines simulated annealing with the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic schedule, e.g., a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
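The contrast between logarithmic and square-root cooling schedules can be sketched in a toy minimization. The sketch below is illustrative only and is not the paper's simulated stochastic approximation annealing algorithm: it runs plain Metropolis-style annealing on a rugged one-dimensional function under each schedule, with the test function, step size, and temperature prefactors chosen arbitrarily.

```python
import math
import random

def anneal(f, x0, schedule, steps=20000, step_size=0.5, seed=1):
    """Generic simulated annealing on a 1-D function f.

    `schedule(t)` returns the temperature at iteration t (t >= 1).
    Returns the best point and best value seen during the run.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for t in range(1, steps + 1):
        T = schedule(t)
        y = x + rng.uniform(-step_size, step_size)
        fy = f(y)
        # Metropolis rule: always accept downhill, uphill with prob e^{-dF/T}
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# A rugged 1-D test function (many local minima on top of a parabola).
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) + 2.0

log_schedule = lambda t: 2.0 / math.log(t + 1.0)  # classical, provably convergent but slow
sqrt_schedule = lambda t: 2.0 / math.sqrt(t)      # faster decay, as in SAA-type schemes

x_log, f_log = anneal(f, x0=4.0, schedule=log_schedule)
x_sqrt, f_sqrt = anneal(f, x0=4.0, schedule=sqrt_schedule)
```

Note how the square-root schedule spends far fewer iterations at high temperature; the paper's contribution is proving that, combined with stochastic approximation, such a schedule still reaches the global optima.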
Measurement of the $B^-$ lifetime using a simulation free approach for trigger bias correction
Aaltonen, T.; Adelman, J.; Alvarez Gonzalez, B.; Amerio, S.; Amidei, D.; Anastassov, A.; Annovi, A.; Antos, J.; Apollinari, G.; Appel, J.; Apresyan, A.
2010-04-01
The collection of a large number of B hadron decays to hadronic final states at the CDF II detector is possible due to the presence of a trigger that selects events based on track impact parameters. However, the nature of the trigger's selection requirements introduces a large bias in the observed proper decay time distribution. A lifetime measurement must correct for this bias, and the conventional approach has been to use a Monte Carlo simulation. The leading sources of systematic uncertainty in the conventional approach are due to differences between the data and the Monte Carlo simulation. In this paper we present an analytic method for bias correction without using simulation, thereby removing any uncertainty between data and simulation. The method is presented in the form of a measurement of the lifetime of the $B^-$ using the mode $B^- \to D^0 \pi^-$. The $B^-$ lifetime is measured as $\tau_{B^-} = 1.663 \pm 0.023 \pm 0.015$ ps, where the first uncertainty is statistical and the second systematic. This new method results in a smaller systematic uncertainty in comparison to methods that use simulation to correct for the trigger bias.
GEANT4 Simulation of Hadronic Interactions at 8-GeV/C to 10-GeV/C: Response
Office of Scientific and Technical Information (OSTI)
to the HARP-CDP Group. The results of the HARP-CDP group on the comparison of GEANT4 Monte Carlo predictions versus experimental data are discussed. It is shown that the problems observed by the group are caused by an …
Modeling and Simulating Blast Effects on Electric Substations
Lyle G. Roybal; Robert F. Jeffers; Kent E. McGillivary; Tony D. Paul; Ryan Jacobson
2009-05-01
A software simulation tool was developed at the Idaho National Laboratory to estimate the fragility of electric substation components subject to an explosive blast. Damage caused by explosively driven fragments on a generic electric substation was estimated by using a ray-tracing technique to track and tabulate fragment impacts and penetrations of substation components. This technique is based on methods used for assessing vulnerability of military aircraft and ground vehicles to explosive blasts. An open-source rendering and ray-trace engine was used for geometric modeling and interactions between fragments and substation components. Semi-empirical material interactions models were used to calculate blast parameters and simulate high-velocity material interactions between explosively driven fragments and substation components. Finally, a Monte Carlo simulation was added to model the random nature of fragment generation allowing a skilled analyst to predict failure probabilities of substation components.
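The fragment-impact tallying described above can be illustrated with a minimal Monte Carlo sketch, under assumptions chosen for clarity rather than fidelity: fragments leave a point blast isotropically, a component is idealized as a flat rectangular panel, and a "hit" is any ray crossing it. The geometry and function names are hypothetical, not taken from the INL tool.

```python
import math
import random

def isotropic_direction(rng):
    """Uniformly distributed unit vector (isotropic fragment spray)."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return r * math.cos(phi), r * math.sin(phi), z

def hit_probability(d, w, h, n=200000, seed=7):
    """MC estimate of the probability that a fragment ray from the origin
    strikes a w-by-h rectangular panel centered on the x-axis at distance d."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        vx, vy, vz = isotropic_direction(rng)
        if vx <= 0.0:
            continue  # fragment flies away from the panel
        t = d / vx  # ray parameter where the trajectory crosses the plane x = d
        if abs(vy * t) <= w / 2 and abs(vz * t) <= h / 2:
            hits += 1
    return hits / n

p = hit_probability(d=10.0, w=2.0, h=2.0)
```

The estimate converges to the panel's solid angle divided by 4π, which gives a cheap analytic cross-check; a real fragility analysis would add penetration physics and fragment mass/velocity distributions on top of such ray tallies.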
Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; Gur, Sourav; Danielson, Thomas L.; Hin, Celine N.; Pannala, Sreekanth; Frantziskonis, George N.
2016-01-28
We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two kinetic Monte Carlo simulations with different characteristic behaviors typical of heterogeneous chemical reactions.
A new dipolar potential for numerical simulations of polar fluids on the 4D hypersphere
Caillol, Jean-Michel; Trulsson, Martin
2014-09-28
We present a new method for Monte Carlo or Molecular Dynamics numerical simulations of three-dimensional polar fluids. The simulation cell is defined to be the surface of the northern hemisphere of a four-dimensional (hyper)sphere. The point dipoles are constrained to remain tangent to the sphere, and their interactions are derived from the basic laws of electrostatics in this geometry. The dipole-dipole potential has two singularities, which correspond to the following boundary condition: when a dipole leaves the northern hemisphere at some point of the equator, it reappears at the antipodal point bearing the same dipole moment. We derive all the formal expressions needed to obtain the thermodynamic and structural properties of a polar liquid at thermal equilibrium in actual numerical simulations. We notably establish the expression of the static dielectric constant of the fluid as well as the behavior of the pair correlation function at large distances. We report and discuss the results of extensive numerical Monte Carlo simulations for two reference states of a fluid of dipolar hard spheres and compare these results with previous methods, with a special emphasis on finite-size effects.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Wang, Lin -Lin; Tan, Teck L.; Johnson, Duane D.
2015-02-27
We simulate the adsorption isotherms for alloyed nanoparticles (nanoalloys) with adsorbates to determine cyclic voltammetry (CV) during electrocatalysis. The effect of alloying on nanoparticle adsorption isotherms is provided by a hybrid-ensemble Monte Carlo simulation that uses the cluster expansion method extended to non-exchangeable coupled lattices for nanoalloys with adsorbates. Exemplified here for the hydrogen evolution reaction, a 2-dimensional CV is mapped for Pd–Pt nanoalloys as a function of both electrochemical potential and the global Pt composition, and shows a highly non-linear alloying effect on CV. Detailed features in CV arise from the interplay among the H-adsorption in multiple sites that is closely correlated with alloy configurations, which are in turn affected by the H-coverage. The origins of specific features in CV curves are assigned. As a result, the method provides a more complete means to design nanoalloys for electrocatalysis.
PIC simulation of electrodeless plasma thruster with rotating electric field
Nomura, Ryosuke; Ohnishi, Naofumi; Nishida, Hiroyuki
2012-11-27
For longer lifetime of electric propulsion systems, an electrodeless plasma thruster with a rotating electric field has been proposed, utilizing a helicon plasma source. The rotating electric field may produce so-called Lissajous acceleration of the helicon plasma in the presence of a diverging magnetic field, through a complicated mechanism governed by many parameters. Two-dimensional simulations of the Lissajous acceleration were conducted with a code based on the Particle-In-Cell (PIC) and Monte Carlo Collision (MCC) methods, in order to understand the plasma motion in the acceleration region and to find the optimal condition. The obtained results show that the azimuthal current depends on the ratio of electron drift radius to plasma region length, the AC frequency, and the axial magnetic field. When the ratio of cyclotron frequency to AC frequency is higher than unity, the reduction of the azimuthal current by collisional effects is negligible.
Quantum simulations of strongly coupled quark-gluon plasma
Filinov, V. S.; Ivanov, Yu. B.; Bonitz, M.; Levashov, P. R.; Fortov, V. E.
2011-09-15
A strongly coupled quark-gluon plasma (QGP) of heavy constituent quasiparticles is studied by a path-integral Monte Carlo method. This approach is a quantum generalization of the model developed by B.A. Gelman, E.V. Shuryak, and I. Zahed. It is shown that this method is able to reproduce the QCD lattice equation of state and also yields valuable insight into the internal structure of the QGP. The results indicate that the QGP reveals liquid-like rather than gas-like properties. At temperatures just above the critical one, it was found that bound quark-antiquark states still survive. These states are bound by effective string-like forces and turn out to be colorless. At temperatures as high as twice the critical one, no bound states are observed. Quantum effects turned out to be of prime importance in these simulations.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Teich-McGoldrick, Stephanie L.; Greathouse, Jeffery A.; Jove-Colon, Carlos F.; Cygan, Randall Timothy
2015-08-27
In this study, the swelling properties of smectite clay minerals are relevant to many engineering applications including environmental remediation, repository design for nuclear waste disposal, borehole stability in drilling operations, and additives for numerous industrial processes and commercial products. We used molecular dynamics and grand canonical Monte Carlo simulations to study the effects of layer charge location, interlayer cation, and temperature on intracrystalline swelling of montmorillonite and beidellite clay minerals. For a beidellite model with layer charge exclusively in the tetrahedral sheet, strong ion–surface interactions shift the onset of the two-layer hydrate to higher water contents. In contrast, for a montmorillonite model with layer charge exclusively in the octahedral sheet, weaker ion–surface interactions result in the formation of fully hydrated ions (two-layer hydrate) at much lower water contents. Clay hydration enthalpies and interlayer atomic density profiles are consistent with the swelling results. Water adsorption isotherms from grand canonical Monte Carlo simulations are used to relate interlayer hydration states to relative humidity, in good agreement with experimental findings.
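As a generic illustration of how grand canonical Monte Carlo produces adsorption isotherms (a minimal sketch only, not the clay force-field simulations described above), consider a non-interacting lattice gas: sites fill or empty according to the grand canonical Metropolis rule, and the mean coverage traces a Langmuir-like isotherm as a function of the chemical potential.

```python
import math
import random

def gcmc_lattice_gas(mu, T, n_sites=400, sweeps=2000, seed=3):
    """Grand canonical MC for a non-interacting lattice gas.

    Each step picks a site and proposes to flip its occupancy; acceptance
    follows the grand canonical Metropolis rule with energy E = -mu * N.
    Returns the average coverage <N>/n_sites over the second half of the run.
    """
    rng = random.Random(seed)
    occ = [0] * n_sites
    n_occ = 0
    samples = []
    for sweep in range(sweeps):
        for _ in range(n_sites):
            i = rng.randrange(n_sites)
            if occ[i] == 0:
                # insertion: energy change dE = -mu
                if rng.random() < min(1.0, math.exp(mu / T)):
                    occ[i] = 1
                    n_occ += 1
            else:
                # deletion: energy change dE = +mu
                if rng.random() < min(1.0, math.exp(-mu / T)):
                    occ[i] = 0
                    n_occ -= 1
        if sweep >= sweeps // 2:
            samples.append(n_occ / n_sites)
    return sum(samples) / len(samples)

# Coverage rises with chemical potential; for the non-interacting gas the
# exact isotherm is theta = 1 / (1 + exp(-mu/T)).
coverage = [gcmc_lattice_gas(mu, T=1.0) for mu in (-2.0, 0.0, 2.0)]
```

In the clay studies the same ensemble is used, but with water molecules inserted into an interlayer under a realistic force field and the chemical potential mapped to relative humidity.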
TOPAS Tool for Particle Simulation
Energy Science and Technology Software Center (OSTI)
2013-05-30
TOPAS lets users simulate the passage of subatomic particles moving through any kind of radiation therapy treatment system, can import a patient geometry, can record dose and other quantities, has advanced graphics, and is fully four-dimensional (3D plus time) to handle the most challenging time-dependent aspects of modern cancer treatments. TOPAS unlocks the power of the most accurate particle transport simulation technique, the Monte Carlo (MC) method, while removing the painstaking coding work such methods used to require. Research physicists can use TOPAS to improve delivery systems towards safer and more effective radiation therapy treatments, easily setting up and running complex simulations that previously took months of preparation. Clinical physicists can use TOPAS to increase accuracy while reducing side effects, simulating patient-specific treatment plans at the touch of a button. TOPAS is designed as a "user code" layered on top of the Geant4 Simulation Toolkit. TOPAS includes the standard Geant4 toolkit, plus additional code to make Geant4 easier to control and to extend Geant4 functionality. TOPAS aims to make proton simulation both "reliable" and "repeatable." "Reliable" means both accurate physics and a high likelihood of simulating precisely what the user intended to simulate, reducing issues of wrong units, wrong materials, wrong scoring locations, etc. "Repeatable" means not just getting the same result from one simulation to another, but being able to easily restore a previously used setup and reducing sources of error when a setup is passed from one user to another. The TOPAS control system incorporates key lessons from safety management, proactively removing possible sources of user error such as line-ordering mistakes in control files.
TOPAS has been used to model proton therapy treatment examples including the UCSF eye treatment head, the MGH stereotactic alignment in radiosurgery treatment head and the MGH gantry treatment heads in passive scattering and scanning modes, and has demonstrated dose calculation based on patient-specific CT data.
Simulation studies of self-organization of microtubules and molecular motors.
Jian, Z.; Karpeev, D.; Aranson, I. S.; Bates, P. W.; Michigan State Univ.
2008-05-01
We perform Monte Carlo type simulation studies of self-organization of microtubules interacting with molecular motors. We model microtubules as stiff polar rods of equal length exhibiting anisotropic diffusion in the plane. The molecular motors are implicitly introduced by specifying certain probabilistic collision rules resulting in realignment of the rods. This approximation of the complicated microtubule-motor interaction by a simple instant collision allows us to bypass the 'computational bottlenecks' associated with the details of the diffusion and the dynamics of motors and the reorientation of microtubules. Consequently, we are able to perform simulations of large ensembles of microtubules and motors on a very large time scale. This simple model reproduces all important phenomenology observed in in vitro experiments: Formation of vortices for low motor density and raylike asters and bundles for higher motor density.
Three-body interactions in complex fluids: Virial coefficients from simulation finite-size effects
Ashton, Douglas J.; Wilding, Nigel B.
2014-06-28
A simulation technique is described for quantifying the contribution of three-body interactions to the thermodynamic properties of coarse-grained representations of complex fluids. The method is based on a new approach for determining virial coefficients from the measured volume-dependent asymptote of a certain structural function. By comparing the third virial coefficient B3 for a complex fluid with that of an approximate coarse-grained model described by a pair potential, three-body effects can be quantified. The strategy is applicable to both Molecular Dynamics and Monte Carlo simulation. Its utility is illustrated via measurements of three-body effects in models of star polymers and in highly size-asymmetric colloid-polymer mixtures.
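For background, the second virial coefficient B2, the simpler cousin of the B3 targeted above, can itself be estimated by straightforward Monte Carlo integration of the Mayer function. The sketch below uses a square-well pair potential with arbitrary parameters; it illustrates the quantity being discussed, not the paper's finite-size method.

```python
import math
import random

def square_well(r, sigma=1.0, lam=1.5, eps=1.0):
    """Hard core of diameter sigma with an attractive well of depth eps out to lam*sigma."""
    if r < sigma:
        return float("inf")
    if r < lam * sigma:
        return -eps
    return 0.0

def b2_monte_carlo(u, beta, r_max=2.0, n=200000, seed=11):
    """Estimate B2 = -2*pi * Int_0^inf (e^{-beta u(r)} - 1) r^2 dr by sampling r
    uniformly on [0, r_max] (the Mayer function vanishes beyond r_max here)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        r = rng.uniform(0.0, r_max)
        # Mayer function; math.exp(-inf) = 0 handles the hard core cleanly.
        mayer = math.exp(-beta * u(r)) - 1.0
        acc += mayer * r * r
    return -2.0 * math.pi * r_max * acc / n

B2 = b2_monte_carlo(square_well, beta=0.5)
```

For the square well, B2 has a closed form, which makes a convenient check on the estimator; B3 involves a three-particle integral and is exactly where clever techniques like the finite-size approach above pay off.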
Loyalka, Sudarshan
2015-04-09
The purpose of this project was to develop methods and tools that will aid in the safety evaluation of nuclear fuels and the licensing of nuclear reactors in relation to accidents. The objectives were to develop more detailed and faster computations of fission product transport and aerosol evolution as they generally relate to nuclear fuel and/or nuclear reactor accidents. The two tasks in the project concerned molecular transport in nuclear fuel and aerosol transport in the reactor vessel and containment. For both tasks, explorations of coupling Direct Simulation Monte Carlo with Navier-Stokes solvers or the sectional method were not successful. However, mesh-free methods for the Direct Simulation Monte Carlo method were successfully explored. These explorations permit applications to porous and fractured media, and arbitrary geometries. The computations were carried out in Mathematica and are fully parallelized. The project has resulted in new computational tools (algorithms and programs) that will improve the fidelity of computations to the actual physics, chemistry, and transport of fission products in the nuclear fuel and of aerosols in the reactor primary and secondary containments.
Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert; Gao, Fei; Sun, Xin; Tonks, Michael; Biner, Bullent; Millet, Paul; Tikare, Veena; Radhakrishnan, Balasubramaniam; Andersson, David
2012-04-11
A study was conducted to evaluate the capabilities of different numerical methods used to represent microstructure behavior at the mesoscale for irradiated material using an idealized benchmark problem. The purpose of the mesoscale benchmark problem was to provide a common basis to assess several mesoscale methods with the objective of identifying the strengths and areas of improvement in the predictive modeling of microstructure evolution. In this work, mesoscale models (phase-field, Potts, and kinetic Monte Carlo) developed by PNNL, INL, SNL, and ORNL were used to calculate the evolution kinetics of intra-granular fission gas bubbles in UO2 fuel under post-irradiation thermal annealing conditions. The benchmark problem was constructed to include important microstructural evolution mechanisms on the kinetics of intra-granular fission gas bubble behavior such as the atomic diffusion of Xe atoms, U vacancies, and O vacancies, the effect of vacancy capture and emission from defects, and the elastic interaction of non-equilibrium gas bubbles. An idealized set of assumptions was imposed on the benchmark problem to simplify the mechanisms considered. The capability and numerical efficiency of different models are compared against selected experimental and simulation results. These comparisons find that the phase-field methods, by the nature of the free energy formulation, are able to represent a larger subset of the mechanisms influencing the intra-granular bubble growth and coarsening mechanisms in the idealized benchmark problem as compared to the Potts and kinetic Monte Carlo methods. It is recognized that the mesoscale benchmark problem as formulated does not specifically highlight the strengths of the discrete particle modeling used in the Potts and kinetic Monte Carlo methods. 
Future efforts are recommended to construct increasingly more complex mesoscale benchmark problems to further verify and validate the predictive capabilities of the mesoscale modeling methods used in this study.
Valone, S.M.; Hanson, D.E.; Kress, J.D.
1998-05-08
Simulations of Cl plasma etching of Si surfaces with MD techniques agree reasonably well with the available experimental information on yields and surface morphologies. This information has been supplied to a Monte Carlo etch profile model, resulting in substantial agreement with comparable inputs provided through controlled experiments. To the extent that more recent measurements of etch rates are more reliable than older ones, preliminary MD simulations using bond-order corrections to the atomic interactions between neighboring Si atoms on the surface improve agreement with experiment through an increase in etch rate and improved agreement with XPS measurements of surface stoichiometry. Thermochemical and geometric analysis of small Si-Br molecules is consistent with current notions of the effects of including brominated species in etchant gases.
Schach Von Wittenau, Alexis E. (Livermore, CA)
2003-01-01
A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
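A sampling efficiency figure like the 80% quoted above typically refers to acceptance-rejection sampling, where efficiency is the fraction of proposed draws that are accepted. The sketch below draws photon energies from a made-up bremsstrahlung-like spectrum; the spectrum shape, energy range, and function names are purely illustrative, and the flat envelope used here is deliberately loose, so its efficiency is far below 80%.

```python
import random

def sample_spectrum(pdf, e_min, e_max, pdf_max, n, seed=5):
    """Acceptance-rejection sampling of photon energies from an un-normalized
    spectrum pdf(E) on [e_min, e_max], bounded above by pdf_max.
    Returns the accepted energies and the overall sampling efficiency."""
    rng = random.Random(seed)
    accepted, trials = [], 0
    while len(accepted) < n:
        trials += 1
        e = rng.uniform(e_min, e_max)
        # Accept with probability pdf(e) / pdf_max.
        if rng.random() * pdf_max <= pdf(e):
            accepted.append(e)
    return accepted, n / trials

# Illustrative bremsstrahlung-like shape (roughly 1/E with a high-energy
# cutoff at 6 units); a stand-in, not a measured accelerator spectrum.
spectrum = lambda e: (1.0 / e) * (1.0 - e / 6.0)

energies, eff = sample_spectrum(
    spectrum, e_min=0.25, e_max=6.0, pdf_max=spectrum(0.25), n=5000
)
```

High overall efficiency, as reported for the phase-space method, comes from choosing envelopes that hug the actual source distributions much more tightly than this flat bound does.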
Final report for LDRD13-0130 : exponentially convergent Monte Carlo for electron transport.
Franke, Brian Claude
2013-09-01
This is the final report on the LDRD, though the interested reader is referred to the ANS Transactions paper which more thoroughly documents the technical work of this project.
Monte Carlo Implementation Of Up- Or Down-Scattering Due To Collisions...
Office of Scientific and Technical Information (OSTI)
Zori 1.0: A Parallel Quantum Monte Carlo Electronic StructurePackage...
Office of Scientific and Technical Information (OSTI)
Aspuru-Guzik, Alan; Salomon-Ferrer, Romelia; Austin, Brian; Perusquia-Flores, Raul; Griffin, Mary A.; Oliva, Ricardo A.; Skinner, David; Domin, Dominik; Lester Jr., …
Structure of Cu64.5Zr35.5 Metallic glass by reverse Monte Carlo...
Office of Scientific and Technical Information (OSTI)
Ames Laboratory; University of Science and Technology of China. Publication date: 2014-02-07. Report number: IS-J 8231 …
Monte Carlo Fundamentals, F. B. Brown and T. M. Sutton
Office of Scientific and Technical Information (OSTI)
CASL-U-2015-0247-000 The OpenMC Monte Carlo Particle Transport...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Ellis, Nick Horelik, Paul Romano, Benoit Forget, and Kord Smith; Massachusetts Institute of Technology …
CASL-U-2015-0170-000-a SHIFT: A New Monte Carlo Package Seth...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Shift: MC RT Code. What makes Shift an HPC transport code? A modern framework enabling rapid development with C++11, Python, CMake, CTest, …; integration …
A Geant4 Implementation of a Novel Single-Event Monte Carlo Method...
Office of Scientific and Technical Information (OSTI)
Conference: 2013 American Nuclear Society Winter Meeting, November 10-14, 2013, Washington, DC. Contract number: AC04-94AL85000.
Neutron Emission Characteristics of Two Mixed-Oxide Fuels: Simulations and Initial Experiments
D. L. Chichester; S. A. Pozzi; J. L. Dolan; M. Flaska; J. T. Johnson; E. H. Seabury; E. M. Gantz
2009-07-01
Simulations and experiments have been carried out to investigate the neutron emission characteristics of two mixed-oxide (MOX) fuels at Idaho National Laboratory (INL). These activities are part of a project studying advanced instrumentation techniques in support of the U.S. Department of Energy's Fuel Cycle Research and Development program and its Materials Protection, Accounting, and Control for Transmutation (MPACT) campaign. This analysis used the MCNP-PoliMi Monte Carlo simulation tool to determine the relative strength and energy spectra of the different neutron source terms within these fuels, and then used this data to simulate the detection and measurement of these emissions using an array of liquid scintillator neutron spectrometers. These calculations accounted for neutrons generated from the spontaneous fission of the actinides in the MOX fuel as well as neutrons created via (alpha,n) reactions with oxygen in the MOX fuel. The analysis was carried out to allow for characterization of both neutron energy as well as neutron coincidences between multiple detectors. Coincidences between prompt gamma rays and neutrons were also analyzed. Experiments were performed at INL with the same materials used in the simulations to benchmark and begin validation tests of the simulations. Data were collected in these experiments using an array of four liquid scintillators and a high-speed waveform digitizer. Advanced digital pulse-shape discrimination algorithms were developed and used to collect this data. Results of the simulation and modeling studies are presented together with preliminary results from the experimental campaign.
A study of astrometric distortions due to "tree rings" in CCD sensors using the LSST Photon Simulator
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Beamer, Benjamin; Nomerotski, Andrei; Tsybychev, Dmitri
2015-05-22
Imperfections in the production process of thick CCDs lead to circularly symmetric dopant concentration variations, which in turn produce electric fields transverse to the surface of the fully depleted CCD that displace the photogenerated charges. We use PhoSim, a Monte Carlo photon simulator, to explore and examine the likely impacts these dopant concentration variations will have on astrometric measurements in LSST. The scale and behavior of both the astrometric shifts imparted to point sources and the intensity variations in flat field images that result from these doping imperfections are similar to those previously observed in Dark Energy Camera CCDs, giving initial confirmation of PhoSim's model for these effects. In addition, organized shape distortions were observed as a result of the symmetric nature of these dopant variations, causing nominally round sources to be imparted with a measurable ellipticity either aligned with or transverse to the radial direction of this dopant variation pattern.
Simulation of radiation damping in rings, using stepwise ray-tracing methods
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Meot, F.
2015-06-26
The ray-tracing code Zgoubi computes particle trajectories in arbitrary magnetic and/or electric field maps or analytical field models. It includes a built-in fitting procedure, spin tracking, and many Monte Carlo processes. The accuracy of the integration method makes it an efficient tool for multi-turn tracking in periodic machines. Energy loss by synchrotron radiation, based on Monte Carlo techniques, had been introduced in Zgoubi in the early 2000s for studies regarding the linear collider beam delivery system. However, only recently has this Monte Carlo tool been used for systematic beam dynamics and spin diffusion studies in rings, including the eRHIC electron-ion collider project at the Brookhaven National Laboratory. Some beam dynamics aspects of this recent use of Zgoubi capabilities, including considerations of accuracy as well as further benchmarking in the presence of synchrotron radiation in rings, are reported here.
Detecting vapour bubbles in simulations of metastable water
González, Miguel A.; Abascal, Jose L. F.; Valeriani, Chantal E-mail: cvaleriani@quim.ucm.es; Menzl, Georg; Geiger, Philipp; Dellago, Christoph E-mail: cvaleriani@quim.ucm.es; Aragones, Juan L.; Caupin, Frederic
2014-11-14
The investigation of cavitation in metastable liquids with molecular simulations requires an appropriate definition of the volume of the vapour bubble forming within the metastable liquid phase. Commonly used approaches for bubble detection exhibit two significant flaws: first, when applied to water they often identify voids within the hydrogen-bond network as bubbles, thus masking the signature of emerging bubbles; second, they lack thermodynamic consistency. Here, we present two grid-based methods, the M-method and the V-method, specifically designed to address these shortcomings, to detect bubbles in metastable water. The M-method incorporates information about neighbouring grid cells to distinguish between liquid- and vapour-like cells, which allows for very sensitive detection of small bubbles and high spatial resolution of the detected bubbles. The V-method is calibrated such that its estimates for the bubble volume correspond to the average change in system volume and are thus thermodynamically consistent. Both methods are computationally inexpensive, such that they can be used in molecular dynamics and Monte Carlo simulations of cavitation. We illustrate them by computing the free energy barrier and the size of the critical bubble for cavitation in water at negative pressure.
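The grid-based detection idea above can be sketched in a few lines. The following toy Python example (not the authors' M- or V-method; the cell count, configurations, and the zero-occupancy criterion are illustrative assumptions) labels empty grid cells as vapour-like and sums their volume:

```python
import numpy as np

def bubble_volume(positions, box, n_cells=20):
    """Toy grid-based bubble detector: cells of a uniform n_cells^3 grid
    that contain no molecules are labelled vapour-like, and their total
    volume is returned.  The M-/V-method refinements described above
    (neighbour information, thermodynamic calibration) are not reproduced."""
    # Assign each molecule to a grid cell.
    cell = np.floor(positions / box * n_cells).astype(int) % n_cells
    counts = np.zeros((n_cells, n_cells, n_cells), dtype=int)
    np.add.at(counts, (cell[:, 0], cell[:, 1], cell[:, 2]), 1)
    n_empty = np.count_nonzero(counts == 0)
    return n_empty * (box / n_cells) ** 3

rng = np.random.default_rng(0)
liquid = rng.uniform(0.0, 10.0, size=(100_000, 3))   # dense uniform "liquid"
r = np.linalg.norm(liquid - 5.0, axis=1)
cavity = liquid[r > 2.0]                             # carve out a spherical void
print(bubble_volume(liquid, 10.0), bubble_volume(cavity, 10.0))
```

For the dense configuration essentially no cells are empty, while the carved-out void registers a bubble volume comparable to that of the removed sphere.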
San Carlos Apache Tribe- 2012 Project
Broader source: Energy.gov [DOE]
Under this project, the San Carlos Apache Tribe will study the feasibility of solar energy projects within the reservation with the potential to generate a minimum of 1 megawatt (MW).
ARM - Carlos Sousa Interview (English Version)
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Gheisari, R.; Firoozabadi, M. M.; Mohammadi, H.
2014-01-15
A new approach to calculating ultracold neutron (UCN) production is presented, using a Monte Carlo simulation to obtain the cold neutron (CN) flux and an analytical method to calculate the UCN production from the simulated CN flux. A super-thermal UCN source was modeled based on an arrangement of D2O and solid D2 (sD2), with D2O investigated as the neutron moderator and sD2 as the converter. In order to determine the required parameters, a two-dimensional (2D) neutron balance equation written in MATLAB was combined with the MCNPX simulation code. The 2D neutron-transport equation in cylindrical (r–z) geometry was considered for 330 neutron energy groups in the sD2. The 2D balance equation for UCN and CN was solved using the simulated CN flux as a boundary value. The UCN source dimensions were calculated for the development of the next UCN source. Under optimal conditions, the UCN flux and the UCN production rate (averaged over the sD2 volume) equal 6.79 × 10^6 cm^-2 s^-1 and 2.20 × 10^5 cm^-3 s^-1, respectively.
Particle simulation of collision dynamics for ion beam injection into a rarefied gas
Giuliano, Paul N.; Boyd, Iain D.
2013-03-15
This study details a comparison of ion beam simulations with experimental data from a simplified plasma test cell in order to study and validate numerical models and environments representative of electric propulsion devices and their plumes. The simulations employ a combination of the direct simulation Monte Carlo and particle-in-cell methods representing xenon ions and atoms as macroparticles. An anisotropic collision model is implemented for momentum exchange and charge exchange interactions between atoms and ions in order to validate the post-collision scattering behaviors of dominant collision mechanisms. Cases are simulated in which the environment is either collisionless or non-electrostatic in order to prove that the collision models are the dominant source of low- and high-angle particle scattering and current collection within this environment. Additionally, isotropic cases are run in order to show the importance of anisotropy in these collision models. An analysis of beam divergence leads to better characterization of the ion beam, a parameter that requires careful analysis. Finally, suggestions based on numerical results are made to help guide the experimental design in order to better characterize the ion environment.
Mattsson, Thomas R.; Root, Seth; Mattsson, Ann E.; Shulenburger, Luke; Magyar, Rudolph J.; Flicker, Dawn G.
2014-11-11
We use Sandia's Z machine and magnetically accelerated flyer plates to shock compress liquid krypton to 850 GPa and compare with results from density-functional theory (DFT) based simulations using the AM05 functional. We also employ quantum Monte Carlo calculations to motivate the choice of AM05. We conclude that the DFT results are sensitive to the quality of the pseudopotential in terms of scattering properties at high energy/temperature. A new Kr projector augmented wave potential was constructed with improved scattering properties which resulted in excellent agreement with the experimental results to 850 GPa and temperatures above 10 eV (110 kK). In conclusion, we present comparisons of our data from the Z experiments and DFT calculations to current equation of state models of krypton to determine the best model for high energy-density applications.
A Hybrid Method for Accelerated Simulation of Coulomb Collisions in a Plasma
Caflisch, R; Wang, C; Dimarco, G; Cohen, B; Dimits, A
2007-10-09
If the collisional time scale for Coulomb collisions is comparable to the characteristic time scales for a plasma, then simulation of Coulomb collisions may be important for computation of kinetic plasma dynamics. This can be a computational bottleneck because of the large number of simulated particles and collisions (or phase-space resolution requirements in continuum algorithms), as well as the wide range of collision rates over the velocity distribution function. This paper considers Monte Carlo simulation of Coulomb collisions using the binary collision models of Takizuka & Abe and of Nanbu. It presents a hybrid method for accelerating the computation of Coulomb collisions. The hybrid method represents the velocity distribution function as a combination of a thermal component (a Maxwellian distribution) and a kinetic component (a set of discrete particles). Collisions between particles from the thermal component preserve the Maxwellian; collisions between particles from the kinetic component are performed using the method of Takizuka & Abe or of Nanbu. Collisions between the kinetic and thermal components are performed by sampling a particle from the thermal component and selecting a particle from the kinetic component. Particles are also transferred between the two components according to thermalization and dethermalization probabilities, which are functions of phase space.
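The binary collision step referenced above can be illustrated with a simplified Takizuka–Abe-style update. This sketch assumes equal masses and a fixed, user-supplied collision parameter `nu_dt` (the velocity-dependent scattering variance of the full model is omitted); it rotates each pair's relative velocity by a random angle, which conserves momentum and kinetic energy exactly:

```python
import numpy as np

def binary_collisions(v, nu_dt, rng):
    """One collision step in the style of Takizuka & Abe: particles are
    randomly paired and each pair's relative velocity is rotated by a
    random scattering angle whose variance grows with nu_dt.
    Simplifications: equal masses, constant nu_dt (no velocity dependence).
    The rotation preserves |u|, so momentum and energy are conserved."""
    n = len(v)
    idx = rng.permutation(n)
    for a, b in idx[: n - n % 2].reshape(-1, 2):
        u = v[a] - v[b]                       # relative velocity
        umag = np.linalg.norm(u)
        if umag == 0.0:
            continue
        uperp = np.hypot(u[0], u[1])
        # tan(theta/2) = delta, with delta ~ N(0, nu_dt) as in Takizuka & Abe.
        theta = 2.0 * np.arctan(rng.normal(0.0, np.sqrt(nu_dt)))
        phi = rng.uniform(0.0, 2.0 * np.pi)
        st, ct = np.sin(theta), np.cos(theta)
        sp, cp = np.sin(phi), np.cos(phi)
        if uperp > 1e-12 * umag:
            du = np.array([
                (u[0] * u[2] / uperp) * st * cp - (u[1] * umag / uperp) * st * sp - u[0] * (1 - ct),
                (u[1] * u[2] / uperp) * st * cp + (u[0] * umag / uperp) * st * sp - u[1] * (1 - ct),
                -uperp * st * cp - u[2] * (1 - ct),
            ])
        else:                                 # u nearly parallel to z axis
            du = np.array([umag * st * cp, umag * st * sp, -u[2] * (1 - ct)])
        v[a] += 0.5 * du                      # equal masses: split the change
        v[b] -= 0.5 * du
    return v
```

Because only the direction of the relative velocity changes, total momentum and total kinetic energy are invariants of each step, which makes them useful correctness checks.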
Physical layer simulation study for the coexistence of WLAN standards
Howlader, M. K.; Keiger, C.; Ewing, P. D.; Govan, T. V.
2006-07-01
This paper presents the results of a study on the performance of wireless local area network (WLAN) devices in the presence of interference from other wireless devices. To understand the coexistence of these wireless protocols, simplified physical-layer-system models were developed for the Bluetooth, Wireless Fidelity (WiFi), and Zigbee devices, all of which operate within the 2.4-GHz frequency band. The performances of these protocols were evaluated using Monte-Carlo simulations under various interference and channel conditions. The channel models considered were basic additive white Gaussian noise (AWGN), Rayleigh fading, and site-specific fading. The study also incorporated the basic modulation schemes, multiple access techniques, and channel allocations of the three protocols. This research is helping the U.S. Nuclear Regulatory Commission (NRC) understand the coexistence issues associated with deploying wireless devices and could prove useful in the development of a technical basis for guidance to address safety-related issues with the implementation of wireless systems in nuclear facilities. (authors)
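As a minimal illustration of the kind of physical-layer Monte Carlo evaluation described above, the following Python sketch estimates the bit-error rate of BPSK over an AWGN channel and compares it with the analytic value 0.5·erfc(√(Eb/N0)). (The Bluetooth/WiFi/Zigbee models in the study are far more detailed; the modulation, sample counts, and SNR points here are illustrative assumptions.)

```python
import numpy as np
from math import erfc, sqrt

def ber_bpsk_awgn(ebn0_db, n_bits=200_000, seed=0):
    """Monte Carlo bit-error rate of BPSK over an AWGN channel."""
    rng = np.random.default_rng(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                     # map 0/1 -> -1/+1
    # Unit-energy symbols: per-sample noise variance is 1/(2*Eb/N0).
    noise = rng.normal(0.0, np.sqrt(0.5 / ebn0), n_bits)
    decisions = (symbols + noise) > 0.0            # threshold detector
    return np.mean(decisions != bits)

for snr_db in (0, 4, 8):
    theory = 0.5 * erfc(sqrt(10.0 ** (snr_db / 10.0)))
    print(snr_db, ber_bpsk_awgn(snr_db), theory)
```

The same loop structure extends naturally to fading channels by multiplying the symbols by random channel gains before adding noise, which is how Rayleigh and site-specific fading cases are typically handled.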
Event-by-Event Simulation of Induced Fission
Vogt, R; Randrup, J
2007-12-13
We are developing a novel code that treats induced fission by statistical (or Monte Carlo) simulation of individual decay chains. After its initial excitation, the fissionable compound nucleus may either deexcite by evaporation or undergo binary fission into a large number of fission channels, each with different energetics involving both energy dissipation and deformed scission prefragments. After separation and Coulomb acceleration, each fission fragment undergoes a succession of individual (neutron) evaporations, leading to two bound but still excited fission products (that may further decay electromagnetically and, ultimately, weakly), as well as typically several neutrons. (The inclusion of other possible ejectiles is planned.) This kind of approach makes it possible to study more detailed observables than could be addressed with previous treatments, which have tended to focus on average quantities. In particular, any type of correlation observable can readily be extracted from a generated set of events. With a view towards making the code practically useful in a variety of applications, emphasis is being put on making it numerically efficient so that large event samples can be generated quickly. In its present form, the code can generate one million full events in about 12 seconds on a MacBook laptop computer. The development of this qualitatively new tool is still at an early stage, and quantitative reproduction of existing data should not be expected until a number of detailed refinements have been implemented.
Muon simulations for Super-Kamiokande, KamLAND, and CHOOZ
Tang, Alfred; Horton-Smith, Glenn; Kudryavtsev, Vitaly A.; Tonazzo, Alessandra
2006-09-01
Muon backgrounds at Super-Kamiokande, KamLAND, and CHOOZ are calculated using MUSIC. A modified version of the Gaisser sea-level muon distribution and a well-tested Monte Carlo integration method are introduced. Average muon energy, flux, and rate are tabulated. Plots of average energy and angular distributions are given. Implications for muon tracker design in future experiments are discussed.
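A Monte Carlo integration of a sea-level muon flux, of the kind mentioned above, can be sketched as follows. This uses the standard (unmodified) Gaisser parameterization and log-uniform importance sampling in energy; the energy range and sample count are illustrative assumptions:

```python
import numpy as np

def gaisser_flux(E, cos_theta):
    """Standard Gaisser sea-level muon flux in cm^-2 s^-1 sr^-1 GeV^-1
    (valid at high energy and near-vertical angles; the modified
    distribution used in the paper is not reproduced here)."""
    return 0.14 * E ** -2.7 * (
        1.0 / (1.0 + 1.1 * E * cos_theta / 115.0)
        + 0.054 / (1.0 + 1.1 * E * cos_theta / 850.0)
    )

def vertical_intensity(e_min=1.0, e_max=1000.0, n=200_000, seed=0):
    """Monte Carlo integral of the vertical flux over muon energy,
    sampling uniformly in ln E (importance sampling suited to the
    steep power-law spectrum)."""
    rng = np.random.default_rng(seed)
    lnE = rng.uniform(np.log(e_min), np.log(e_max), n)
    E = np.exp(lnE)
    # Change of variables: dE = E d(lnE), so each sample carries
    # weight E times the width of the ln E interval.
    return np.mean(E * (np.log(e_max) - np.log(e_min)) * gaisser_flux(E, 1.0))

print(vertical_intensity())   # integrated vertical intensity, cm^-2 s^-1 sr^-1
```

Averaging over zenith angle and convolving with an overburden transport code such as MUSIC is what turns this surface flux into the underground rates tabulated in the paper.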
Consortium for Advanced Simulation of Light Water Reactors (CASL)
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
White, Claire; Bloomer, Breaunnah E.; Provis, John L.; Henson, Neil J.; Page, Katharine L.
2012-05-16
With the ever increasing demands for technologically advanced structural materials, together with emerging environmental consciousness due to climate change, geopolymer cement is fast becoming a viable alternative to traditional cements due to proven mechanical engineering characteristics and the reduction in CO2 emitted (approximately 80% less CO2 emitted compared to ordinary Portland cement). Nevertheless, much remains unknown regarding the kinetics of the molecular changes responsible for nanostructural evolution during the geopolymerization process. Here, in-situ total scattering measurements in the form of X-ray pair distribution function (PDF) analysis are used to quantify the extent of reaction of metakaolin/slag alkali-activated geopolymer binders, including the effects of various activators (alkali hydroxide/silicate) on the kinetics of the geopolymerization reaction. Restricting quantification of the kinetics to the initial ten hours of reaction does not enable elucidation of the true extent of the reaction, but using X-ray PDF data obtained after 128 days of reaction enables more accurate determination of the initial extent of reaction. The synergies between the in-situ X-ray PDF data and simulations conducted by multiscale density functional theory-based coarse-grained Monte Carlo analysis are outlined, particularly with regard to the potential for the X-ray data to provide a time scale for kinetic analysis of the extent of reaction obtained from the multiscale simulation methodology.
Differential Die-Away Instrument: Report on Initial Simulations of Spent Fuel Experiment
Goodsell, Alison V.; Henzl, Vladimir; Swinhoe, Martyn T.
2014-04-01
New Monte Carlo simulations of the differential die-away (DDA) instrument response to the assay of spent and fresh fuel helped to redefine the signal-to-background ratio and the effects of source-neutron tailoring on system performance. Previously, burst neutrons from the neutron generator, together with all neutrons from a fission chain started by a fast fission of ^{238}U, were considered to contribute to active background counts. However, additional simulations showed that the magnitude of the ^{238}U first-fission contribution does not affect the DDA performance in reconstructing ^{239}Pu_{eff}. As a result, the newly adopted definition of the DDA active background now counts as background any neutron within a branch of the fission chain that does not include at least one thermal-neutron-induced fission event before detection. The active background, thus consisting of neutrons from fission chains (or their individual branches) composed entirely of a sequence of fast fissions on any fissile or fissionable nuclei, is not expected to change significantly between different fuel assemblies. Additionally, while source-tailoring materials surrounding the neutron generator were found to influence and possibly improve the instrument performance, the effect was not substantial.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Barrows, Wesley; Dingreville, Rémi; Spearot, Douglas
2015-10-19
A statistical approach combined with molecular dynamics simulations is used to study the influence of hydrogen on intergranular decohesion. This methodology is applied to a Ni Σ3(112)[11̄0] symmetric tilt grain boundary. Hydrogenated grain boundaries with different H concentrations are constructed using an energy minimization technique with initial H atom positions guided by Monte Carlo simulation results. Decohesion behavior is assessed through extraction of a traction–separation relationship during steady-state crack propagation in a statistically meaningful approach, building upon prior work employing atomistic cohesive zone volume elements (CZVEs). A sensitivity analysis is performed on the numerical approach used to extract the traction–separation relationships, clarifying the role of CZVE size, the threshold parameters necessary to differentiate elastic and decohesion responses, and the numerical averaging technique. Results show that increasing H coverage at the Ni Σ3(112)[11̄0] grain boundary asymmetrically influences the crack tip velocity during propagation, leads to a general decrease in the work of separation required for crack propagation, and reduces the peak stress in the extracted traction–separation relationship. Furthermore, the present framework offers a meaningful vehicle for passing atomistically derived interfacial behavior to higher-length-scale formulations for intergranular fracture.
A spacecraft's own ambient environment: The role of simulation-based research
Ketsdever, Andrew D.; Gimelshein, Sergey
2014-12-09
Spacecraft contamination has long been a subject of study in the rarefied gas dynamics community. Professor Mikhail Ivanov coined the term a spacecraft's 'own ambient environment' to describe the effects of natural and satellite-driven processes on the conditions encountered by a spacecraft in orbit. Outgassing, thruster firings, and gas and liquid dumps all contribute to the spacecraft's contamination environment. Rarefied gas dynamic modeling techniques, such as Direct Simulation Monte Carlo, are well suited to investigate these space-based environments. However, many advances were necessary to fully characterize the extent of this problem. A better understanding of modeling flows over large pressure ranges, for example hybrid continuum and rarefied numerical schemes, was required. Two-phase flow modeling under rarefied conditions was necessary. And the ability to model plasma flows for a new era of propulsion systems was also required. Through the work of Professor Ivanov and his team, we now have a better understanding of the processes that create a spacecraft's own ambient environment and are able to better characterize these environments. Advances in numerical simulation have also spurred the development of experimental facilities to study these effects. The relationship between numerical results and experimental advances is explored in this manuscript.
Optimization of Depletion Modeling and Simulation for the High Flux Isotope Reactor
Betzler, Benjamin R; Ade, Brian J; Chandler, David; Ilas, Germina; Sunny, Eva E
2015-01-01
Monte Carlo based depletion tools used for the high-fidelity modeling and simulation of the High Flux Isotope Reactor (HFIR) come at a great computational cost; finding sufficient approximations is necessary to make the use of these tools feasible. The optimization of the neutronics and depletion model for the HFIR is based on two factors: (i) the explicit representation of the involute fuel plates with sets of polyhedra and (ii) the treatment of depletion mixtures and control element position during depletion calculations. A very fine representation (i.e., more polyhedra in the involute plate approximation) does not significantly improve simulation accuracy. The recommended representation closely represents the physical plates and ensures sufficient fidelity in regions with high flux gradients. Including the fissile targets in the central flux trap of the reactor as depletion mixtures has the greatest effect on the calculated cycle length, while localized effects (e.g., the burnup of specific isotopes or the power distribution evolution over the cycle) are more noticeable consequences of including a critical control element search or depleting burnable absorbers outside the fuel region.
Nguyen, Van T.; Nguyen, Phuong T.; Dang, Liem X.; Mei, Donghai; Wick, Collin D.; Do, Duong D.
2014-09-15
Grand Canonical Monte Carlo (GCMC) simulations were carried out to study the equilibrium adsorption concentration of methanol and water in all-silica zeolite BEA over wide temperature and pressure ranges. For both water and methanol, the adsorptive capacity increases with increasing pressure and decreasing temperature. The onset of methanol adsorption occurs at much lower pressures than water adsorption at all temperatures. Our GCMC simulation results also indicate that the adsorption isotherms of methanol exhibit a gradual change with pressure, while water adsorption shows a sharp first-order phase transition at low temperatures. To explore the effects of the Si/Al ratio on adsorption, a series of GCMC simulations of water and methanol adsorption in zeolites HBEA with Si/Al = 7, 15, 31, and 63 were performed. As the Si/Al ratio decreases, the onsets of both water and methanol adsorption dramatically shift to lower pressures. The type V isotherm obtained for water adsorption in hydrophobic BEA progressively changes to a type I isotherm with decreasing Si/Al ratio in hydrophilic HBEA. This work was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle.
An adaptive multi-level simulation algorithm for stochastic biological systems
Lester, C.; Giles, M. B.; Baker, R. E.; Yates, C. A.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. While potentially more efficient, the system statistics they generate suffer from significant bias unless τ is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths, where one path of each pair is generated at a higher accuracy than the other (and is therefore more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient, as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
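The tau-leap approximation underlying the multi-level method can be sketched for the simple decay reaction X → ∅ (illustrative only; the level coupling and adaptive step selection of the paper are not reproduced, and the rates and counts below are arbitrary assumptions):

```python
import numpy as np

def tau_leap_decay(x0, c, t_end, tau, rng):
    """Tau-leap simulation of the decay reaction X -> 0 with propensity
    c*x: each step fires a Poisson-distributed number of reactions, with
    the propensity frozen at the start of the step."""
    x, t = x0, 0.0
    while t < t_end and x > 0:
        k = rng.poisson(c * x * tau)
        x = max(x - k, 0)       # guard against overshooting to negative counts
        t += tau
    return x

rng = np.random.default_rng(0)
final = [tau_leap_decay(1000, 1.0, 1.0, 0.01, rng) for _ in range(2000)]
# The sample mean lies near the exact value 1000*exp(-1) ~ 368,
# up to the O(tau) bias discussed above.
print(np.mean(final))
```

In the multi-level scheme, pairs of such paths are generated at step sizes τ and τ/2 with shared random variables, and the correction E[X_fine − X_coarse] is estimated from relatively few pairs.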
Fundamental Science-Based Simulation of Nuclear Waste Forms
Devanathan, Ramaswami; Gao, Fei; Sun, Xin; Khaleel, Mohammad A.
2010-10-04
This report presents a hierarchical multiscale modeling scheme based on two-way information exchange. To account for all essential phenomena in waste forms over geological time scales, the models have to span length scales from nanometer to kilometer and time scales from picoseconds to millennia. A single model cannot cover this wide range, and a multi-scale approach that integrates a number of different at-scale models is called for. The approach outlined here involves integration of quantum mechanical calculations, classical molecular dynamics simulations, kinetic Monte Carlo and phase field methods at the mesoscale, and continuum models. The ultimate aim is to provide science-based input in the form of constitutive equations to integrated codes. The atomistic component of this scheme is demonstrated in the promising waste form xenotime. Density functional theory calculations have yielded valuable information about defect formation energies. These data can be used to develop interatomic potentials for molecular dynamics simulations of radiation damage. Potentials developed in the present work show a good match for the equilibrium lattice constants, elastic constants, and thermal expansion of xenotime. In novel waste forms, such as xenotime, a considerable amount of the data needed to validate the models is not available. Integration of multiscale modeling with experimental work is essential to generate missing data needed to validate the modeling scheme and the individual models. Density functional theory can also be used to fill knowledge gaps. Key challenges lie in the areas of uncertainty quantification, verification, and validation, which must be performed at each level of the multiscale model and across scales. The approach used to exchange information between different levels must also be rigorously validated. The outlook for multiscale modeling of waste forms is quite promising.
Bergstrom, Paul M. (Livermore, CA); Daly, Thomas P. (Livermore, CA); Moses, Edward I. (Livermore, CA); Patterson, Jr., Ralph W. (Livermore, CA); Schach von Wittenau, Alexis E. (Livermore, CA); Garrett, Dewey N. (Livermore, CA); House, Ronald K. (Tracy, CA); Hartmann-Siantar, Christine L. (Livermore, CA); Cox, Lawrence J. (Los Alamos, NM); Fujino, Donald H. (San Leandro, CA)
2000-01-01
A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
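The common-volume step in the method above reduces, for axis-aligned grids, to intersecting boxes. A minimal sketch (the box-pair representation is a hypothetical simplification of the disclosed voxel/dosel grid handling):

```python
def overlap_volume(a, b):
    """Common volume of two axis-aligned boxes, each given as a pair
    ((xlo, ylo, zlo), (xhi, yhi, zhi)); returns 0.0 if disjoint."""
    lo = [max(a[0][i], b[0][i]) for i in range(3)]   # lower corner of overlap
    hi = [min(a[1][i], b[1][i]) for i in range(3)]   # upper corner of overlap
    edges = [hi[i] - lo[i] for i in range(3)]
    if any(e <= 0.0 for e in edges):
        return 0.0
    return edges[0] * edges[1] * edges[2]

voxel = ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
dosel = ((0.5, 0.5, 0.5), (1.5, 1.5, 1.5))
print(overlap_volume(voxel, dosel))   # 0.125
# Incremental dosel mass = common volume x voxel mass density,
# summed over all overlapping voxels to obtain the dosel mass.
```

Multiplying each common volume by the corresponding voxel mass density and summing, as the dosel mass module does, then yields the mass over which deposited energy is averaged into dose.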
Carlos Hernandez Faham LBNL NERSC@40
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
kinetic equation in the sheath and presheath region. The materials modeling is based on molecular dynamics, accelerated molecular dynamics, and kinetic Monte Carlo simulations....
Neutron Production by Muon Spallation I: Theory (Technical Report...
Office of Scientific and Technical Information (OSTI)
Monte Carlo package MCNPX. We calculate simulated energy spectra, multiplicities, and angular distributions of direct neutrons and pions from muon spallation. Authors: Luu, T ;...
NRAP-TRS-III-002-2012: Modeling the Performance...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
as a function of time; it illustrates the considerable variability in results stemming from the uncertainty in model inputs. Figure 5: Monte-Carlo simulation results; time...
Search for: All records | SciTech Connect
Office of Scientific and Technical Information (OSTI)
The distribution of galaxies mapped by the Sloan Digital Sky Survey shows that this region ... 'proton-dominated' GRBs in the internal shock scenario through Monte Carlo simulations, ...
High Island Densities and Long Range Repulsive Interactions:...
Office of Scientific and Technical Information (OSTI)
long range repulsive interactions. Kinetic Monte Carlo simulations and density functional theory calculations support this conclusion. In addition to answering an outstanding...
Numerical evaluation of effective unsaturated hydraulic properties...
Office of Scientific and Technical Information (OSTI)
To represent a heterogeneous unsaturated fractured rock by its homogeneous equivalent, Monte Carlo simulations are used to obtain upscaled (effective) flow properties. In this ...
Probabilistic evaluation of shallow groundwater resources at...
Office of Scientific and Technical Information (OSTI)
atmosphere. This study first develops an integrated Monte Carlo method for simulating CO2 and brine leakage from carbon sequestration and subsequent geochemical interactions in...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
constraint) Recognition of cluster (fragment) formation (R- 2.4 fm) Simulation of a large number of events (Monte Carlo approach) 1 M. Papa, A. Bonasera et...
ARM - Publications: Science Team Meeting Documents
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
low sun elevations. Simulations were made for different aerosol models using Monte Carlo method. It was found that a simple relation exists between the products of aerosol optical...
San Carlos, California: Energy Resources | Open Energy Information
Energy companies in San Carlos, California: Cleeves Engines, Inc.; LiveFuels Inc; Tesla Motors Inc. References: US Census Bureau Incorporated place and minor...
Mont Vernon, New Hampshire: Energy Resources | Open Energy Information
Mont Vernon, New Hampshire: Energy Resources. Coordinates: 42.8945294, -71.6742393.
South El Monte, California: Energy Resources | Open Energy Information
El Monte, California: Energy Resources. Coordinates: 34.0519548, -118.0467339.
North El Monte, California: Energy Resources | Open Energy Information
El Monte, California: Energy Resources. Coordinates: 34.1027861, -118.0242333.
A Many-Task Parallel Approach for Multiscale Simulations of Subsurface Flow and Reactive Transport
Scheibe, Timothy D.; Yang, Xiaofan; Schuchardt, Karen L.; Agarwal, Khushbu; Chase, Jared M.; Palmer, Bruce J.; Tartakovsky, Alexandre M.
2014-12-16
Continuum-scale models have long been used to study subsurface flow, transport, and reactions but lack the ability to resolve processes that are governed by pore-scale mixing. Recently, pore-scale models, which explicitly resolve individual pores and soil grains, have been developed to more accurately model pore-scale phenomena, particularly reaction processes that are controlled by local mixing. However, pore-scale models are prohibitively expensive for modeling application-scale domains. This motivates the use of a hybrid multiscale approach in which continuum- and pore-scale codes are coupled either hierarchically or concurrently within an overall simulation domain (time and space). This approach is naturally suited to an adaptive, loosely-coupled many-task methodology with three potential levels of concurrency. Each individual code (pore- and continuum-scale) can be implemented in parallel; multiple semi-independent instances of the pore-scale code are required at each time step providing a second level of concurrency; and Monte Carlo simulations of the overall system to represent uncertainty in material property distributions provide a third level of concurrency. We have developed a hybrid multiscale model of a mixing-controlled reaction in a porous medium wherein the reaction occurs only over a limited portion of the domain. Loose, minimally-invasive coupling of pre-existing parallel continuum- and pore-scale codes has been accomplished by an adaptive script-based workflow implemented in the Swift workflow system. We describe here the methods used to create the model system, adaptively control multiple coupled instances of pore- and continuum-scale simulations, and maximize the scalability of the overall system. We present results of numerical experiments conducted on NERSC supercomputing systems; our results demonstrate that loose many-task coupling provides a scalable solution for multiscale subsurface simulations with minimal overhead.
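The third level of concurrency described above, Monte Carlo over uncertain material-property realizations, maps naturally onto independent tasks. The following is a minimal illustration of that many-task pattern, not the authors' Swift workflow; `run_realization` and its mock response are invented stand-ins for one coupled continuum/pore-scale simulation.

```python
# Minimal many-task Monte Carlo sketch (hypothetical): each sampled
# realization of an uncertain material property becomes its own task.
import random
from concurrent.futures import ThreadPoolExecutor

def run_realization(seed):
    """Stand-in for one coupled continuum/pore-scale simulation: sample a
    hypothetical log-permeability and return a mock model response."""
    rng = random.Random(seed)
    log_perm = rng.gauss(-12.0, 0.5)      # uncertain material property
    return 10.0 ** (-12.0 - log_perm)     # mock breakthrough-time proxy

def monte_carlo_ensemble(n_tasks):
    """One task per Monte Carlo realization. A production workflow would
    dispatch these across nodes; a local pool shows the same pattern."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(run_realization, range(n_tasks)))
    return sum(results) / len(results), results

mean, results = monte_carlo_ensemble(32)
print(f"ensemble mean response over {len(results)} realizations: {mean:.3f}")
```

Because the realizations share nothing, this outermost level scales embarrassingly, which is why it sits naturally on top of the two inner levels of parallelism.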
Simulation of gross and net erosion of high-Z materials in the DIII-D divertor
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Wampler, William R.; Ding, R.; Stangeby, P. C.; Elder, J. D.; Tskhakaya, D.; Kirschner, A.; Guo, H. Y.; Chan, V. S.; McLean, A. G.; Snyder, P. B.; et al
2015-12-17
The three-dimensional Monte Carlo code ERO has been used to simulate dedicated DIII-D experiments in which Mo and W samples of different sizes were exposed to controlled and well-diagnosed divertor plasma conditions to measure the gross and net erosion rates. Experimentally, the net erosion rate is significantly reduced due to the high local redeposition probability of eroded high-Z materials, which according to the modelling is mainly controlled by the electric field and plasma density within the Chodura sheath. Similar redeposition ratios were obtained from ERO modelling with three different sheath models for small angles between the magnetic field and the material surface, mainly because of their similar mean ionization lengths. The modelled redeposition ratios are close to the measured value. Decreasing the potential drop across the sheath can suppress both gross and net erosion, because the sputtering yield is decreased by the lower incident energy while the redeposition ratio is not reduced, owing to the higher electron density in the Chodura sheath. Taking into account material mixing in the ERO surface model, the net erosion rate of high-Z materials is shown to be strongly dependent on the carbon impurity concentration in the background plasma; higher carbon concentration can suppress net erosion. As a result, the principal experimental results, such as the net erosion rate and profile and the redeposition ratio, are well reproduced by the ERO simulations.
Dykin, V.; Pazsit, I.
2012-07-01
The possibility of reconstructing the axial void profile from simulated in-core neutron noise caused by density fluctuations in a Boiling Water Reactor (BWR) heated channel is considered. For this purpose, a self-contained model of the two-phase flow regime is constructed which has quantitatively and qualitatively similar properties to those observed in real BWRs. The model is then used to simulate the signals of neutron detectors induced by the corresponding perturbations in the flow density. The bubbles are generated randomly in both space and time using Monte Carlo techniques. The axial distribution of the bubble production is chosen such that the mean axial void fraction and void velocity follow the actual values of BWRs. The induced neutron noise signals are calculated and then processed by standard signal analysis methods such as the Auto-Power Spectral Density (APSD) and Cross-Power Spectral Density (CPSD). Two methods for reconstructing the axial void and velocity profiles are discussed: the first is based on the change of the break frequency of the neutron auto-power spectrum with axial core elevation, while the second estimates the transit times of propagating steam fluctuations between different axial detector positions. This paper summarizes the principles of the model and presents a numerical test of its qualitative applicability for estimating the parameters required to reconstruct the void fraction profile from neutron noise measurements. (authors)
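The second reconstruction method above, transit times of propagating fluctuations between axial detector positions, can be sketched with a time-domain estimator. The paper works in the frequency domain via APSD/CPSD; the lag that maximizes the cross-correlation used below is the equivalent textbook time-domain view. The signals, detector separation, and sampling period are invented for illustration.

```python
# Sketch: estimate the transit time of a propagating fluctuation as the lag
# maximizing the cross-correlation between two axially separated detectors.
# All signal parameters below are hypothetical.
import math, random

def cross_correlation_lag(x, y, max_lag):
    """Lag (in samples) maximizing sum_n x[n] * y[n + lag]."""
    best_lag, best_val = 0, -math.inf
    for lag in range(max_lag + 1):
        val = sum(x[n] * y[n + lag] for n in range(len(x) - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

rng = random.Random(1)
n, true_delay = 2000, 25
source = [rng.gauss(0.0, 1.0) for _ in range(n + true_delay)]
# the lower detector sees each fluctuation first; the upper one true_delay later
lower = [source[t + true_delay] + 0.1 * rng.gauss(0.0, 1.0) for t in range(n)]
upper = [source[t] + 0.1 * rng.gauss(0.0, 1.0) for t in range(n)]

lag = cross_correlation_lag(lower, upper, 60)
dz, dt = 0.5, 0.002   # hypothetical detector separation (m) and sample period (s)
print(f"estimated transit time: {lag * dt * 1e3:.0f} ms, "
      f"void velocity: {dz / (lag * dt):.1f} m/s")
```

Dividing the known detector separation by the recovered transit time gives the propagation (void) velocity, which is exactly the quantity the CPSD-phase method extracts.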
San Carlos Apache Tribe - Energy Organizational Analysis
Rapp, James; Albert, Steve
2012-04-01
The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late 2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded:
- The analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA").
- Start-up staffing and other costs associated with the Phase 1 SCAT energy organization.
- An intern program.
- Staff training.
- Tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012).
This report documents the analysis and selection of preferred form(s) of a tribal energy organization.
Electrolyte pore/solution partitioning by expanded grand canonical ensemble
Office of Scientific and Technical Information (OSTI)
Monte Carlo simulation (Journal Article) | SciTech Connect Title: Electrolyte pore/solution partitioning by expanded grand canonical ensemble Monte Carlo simulation Using a newly developed grand canonical Monte Carlo approach based on fractional exchanges of dissolved ions and water molecules, we studied equilibrium partitioning of both components between laterally extended apolar confinements and surrounding electrolyte solution. Accurate calculations of the Hamiltonian and tensorial
Not Available
1992-12-01
The effort of the experimental group has been concentrated on the CERN ALEPH and Fermilab D0 collider experiments and the completion of two fixed-target experiments. The BNL fixed-target experiment 771 took the world's largest sample of D(1285) and E/iota(1420) events, using pion, kaon, and antiproton beams, observing the following resonances: 0^{-+}(1280), 1^{++}(1280), 0^{-+}(1420), 0^{-+}(1470), and 1^{+-}(1415). The Fermilab fixed-target experiment E711, dihadron production in pN interactions at 800 GeV, completed data reduction and analysis. The atomic-weight dependence, when parameterized as σ(A) = σ₀A^α, yielded a value of α = 1.043 ± 0.011 ± 0.012. The cross section per nucleon and the angular distributions were also measured as a function of two-particle mass and agree very well with QCD calculations. The D0 Fermilab collider experiment E740 began its first data-taking run in April 1992. The CERN collider experiment ALEPH at LEP is presently taking more data. The Z mass and width, the couplings to the upper and lower components of the hadronic isospin doublet, forward-backward asymmetries of hadronic events, and measurements of the fragmentation process have been made. The effort on detector development for the SSC has substantially increased, with particular emphasis on scintillators, both in fibers and plates. Work has continued on higher-order QCD calculations using the Monte Carlo technique developed previously. This year, results for WW, ZZ, WZ, and γγ production have been published. A method for incorporating parton showering in such calculations was developed and applied to W production. The multicanonical Monte Carlo algorithm has stood up to the promises anticipated; it was used in multicanonical simulations of first-order phase transitions and for spin glass systems.
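The quoted atomic-weight parameterization σ(A) = σ₀A^α is recovered from per-target cross sections with an ordinary log-log least-squares fit. The target list and σ₀ below are illustrative, with α set to the quoted central value 1.043.

```python
# Sketch: fit sigma(A) = sigma0 * A**alpha by least squares on log-log data.
# The synthetic "measurements" below are hypothetical, generated with
# alpha = 1.043, the central value quoted in the abstract.
import math

def fit_power_law(A, sigma):
    """Least-squares slope (alpha) and intercept (sigma0) of log(sigma) vs log(A)."""
    xs = [math.log(a) for a in A]
    ys = [math.log(s) for s in sigma]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    sigma0 = math.exp(ybar - alpha * xbar)
    return sigma0, alpha

A = [9, 27, 64, 184]                      # illustrative target mass numbers
sigma = [2.5 * a ** 1.043 for a in A]     # noiseless synthetic cross sections
s0, alpha = fit_power_law(A, sigma)
print(round(s0, 3), round(alpha, 3))
```

With noiseless input the fit returns the generating parameters exactly (up to floating-point error); on real data the scatter of the residuals feeds the quoted statistical uncertainty on α.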
MCNP6 Simulation of Light and Medium Nuclei Fragmentation at Intermediate Energies
Mashnik, Stepan Georgievich; Kerby, Leslie Marie
2015-05-22
MCNP6, the latest and most advanced LANL Monte Carlo transport code, representing a merger of MCNP5 and MCNPX, is actually much more than the sum of those two computer codes; MCNP6 is available to the public via RSICC at Oak Ridge, TN, USA. In the present work, MCNP6 was validated and verified (V&V) against different experimental data on intermediate-energy fragmentation reactions, and against results from several other codes, using mainly the latest modifications of the Cascade-Exciton Model (CEM) and of the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators, CEM03.03 and LAQGSM03.03. It was found that MCNP6 using CEM03.03 and LAQGSM03.03 describes well fragmentation reactions induced on light and medium target nuclei by protons and light nuclei at energies around 1 GeV/nucleon and below, and can serve as a reliable simulation tool for different applications, like cosmic-ray-induced single event upsets (SEUs), radiation protection, and cancer therapy with proton and ion beams, to name just a few. Future improvements of the predictive capabilities of MCNP6 for such reactions are possible, and are discussed in this work.
Thermal performance simulation of a solar cavity receiver under windy conditions
Fang, J.B.; Wei, J.J.; Dong, X.W.; Wang, Y.S.
2011-01-15
The solar cavity receiver plays a dominant role in light-to-heat conversion; its performance directly affects the efficiency of the whole power generation system. A combined calculation method for evaluating the thermal performance of a solar cavity receiver is presented in this paper. The method couples the Monte Carlo method, correlations for flow boiling heat transfer, and the calculation of the air flow field, and iteratively solves for the surface heat flux inside the cavity, the wall temperature of the boiling tubes, and the heat loss of the receiver. With this method, the thermal performance of a solar cavity receiver producing saturated steam is simulated under different wind environments. The highest wall temperature of the boiling tubes is about 150 °C higher than the water saturation temperature and appears in the upper middle parts of the absorbing panels. Changing the wind angle or velocity can markedly affect the air velocity inside the receiver; the air velocity reaches its maximum when the wind comes from the side of the receiver (flow angle α = 90°). The heat loss of the solar cavity receiver also reaches a maximum for side-on wind. (author)
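The Monte Carlo ingredient of the combined method, ray sampling of radiative exchange, can be illustrated on a geometry with a closed-form answer. This sketch estimates the view factor between two coaxial disks (not the cavity-receiver geometry of the paper) by cosine-weighted ray casting and checks it against the analytic value; all parameters are illustrative.

```python
# Sketch: Monte Carlo radiative view factor between two coaxial parallel
# disks, the same ray-sampling idea used to obtain heat-flux maps in a
# cavity receiver, but on a geometry with an analytic answer.
import math, random

def view_factor_mc(radius1, radius2, h, n, seed=0):
    """Sample emission points uniformly on disk 1 (z=0) and cosine-weighted
    diffuse directions; count rays crossing z=h inside disk 2."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # uniform point on disk 1
        r = radius1 * math.sqrt(rng.random())
        phi = 2.0 * math.pi * rng.random()
        x0, y0 = r * math.cos(phi), r * math.sin(phi)
        # cosine-weighted direction on the upper hemisphere
        u = rng.random()
        theta = 2.0 * math.pi * rng.random()
        s = math.sqrt(u)
        dx, dy, dz = s * math.cos(theta), s * math.sin(theta), math.sqrt(1.0 - u)
        t = h / dz
        if (x0 + t * dx) ** 2 + (y0 + t * dy) ** 2 <= radius2 ** 2:
            hits += 1
    return hits / n

# analytic check for equal coaxial disks with r = h = 1:  F = (3 - sqrt(5)) / 2
est = view_factor_mc(1.0, 1.0, 1.0, 200_000)
print(f"MC view factor: {est:.4f}  (analytic: {(3 - math.sqrt(5)) / 2:.4f})")
```

In the receiver problem the same tally, binned over wall panels instead of a single disk, yields the surface heat-flux distribution that feeds the boiling-tube temperature calculation.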
Burst wait time simulation of CALIBAN reactor at delayed super-critical state
Humbert, P.; Authier, N.; Richard, B.; Grivot, P.; Casoli, P.
2012-07-01
In the past, the super-prompt-critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. These experiments were subsequently simulated, with very good agreement, by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA Valduc on CALIBAN at different delayed super-critical states [6]. In the delayed super-critical case, however, the non-extinction probability does not give access to the wait time distribution; it is instead necessary to compute the time-dependent evolution of the full neutron count number probability distribution. In this paper we present the point-model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level is reached, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time-dependent adjoint Kolmogorov master equations for the number of detections, using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The results are then compared to the measurements and to Monte Carlo calculations based on the algorithm presented in [7]. (authors)
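The quantity being computed, the distribution of the wait time before a prescribed count level is reached, can also be estimated by the brute-force route the paper compares against: direct stochastic simulation. The toy Gillespie sketch below uses invented rates (source, detection, capture, fission) and omits delayed precursors; it is not the CALIBAN model, only the shape of such a simulation.

```python
# Toy Gillespie sketch (hypothetical rates, no delayed precursors): simulate
# a source-driven multiplying assembly and record the time at which the
# detector count first reaches a prescribed level.
import random

def wait_time(rng, source_rate=100.0, lam_det=1.0, lam_cap=9.0,
              lam_fis=5.0, nu=2, level=50, t_max=100.0):
    """One trial: neutrons appear from the source and are then detected,
    captured, or cause fission emitting `nu` prompt neutrons."""
    t, n, counts = 0.0, 0, 0
    per_neutron = lam_det + lam_cap + lam_fis
    while t < t_max:
        total = source_rate + n * per_neutron
        t += rng.expovariate(total)
        u = rng.random() * total
        if u < source_rate:
            n += 1                                   # source emission
        elif u < source_rate + n * lam_det:
            counts += 1                              # detection removes a neutron
            n -= 1
            if counts >= level:
                return t
        elif u < source_rate + n * (lam_det + lam_cap):
            n -= 1                                   # parasitic capture
        else:
            n += nu - 1                              # fission: nu out, one in
    return t_max

rng = random.Random(42)
times = [wait_time(rng) for _ in range(200)]
mean = sum(times) / len(times)
print(f"mean wait time to 50 counts over {len(times)} trials: {mean:.2f} s")
```

A histogram of `times` is the Monte Carlo counterpart of the distribution the paper obtains deterministically from the adjoint Kolmogorov equations and inverse discrete Fourier transforms.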
Large-scale Nanostructure Simulations from X-ray Scattering Data On Graphics Processor Clusters
Sarje, Abhinav; Pien, Jack; Li, Xiaoye; Chan, Elaine; Chourou, Slim; Hexemer, Alexander; Scholz, Arthur; Kramer, Edward
2012-01-15
X-ray scattering is a valuable tool for measuring the structural properties of materials used in the design and fabrication of energy-relevant nanodevices (e.g., photovoltaic, energy storage, battery, fuel, and carbon capture and sequestration devices) that are key to the reduction of carbon emissions. Although today's ultra-fast X-ray scattering detectors can provide tremendous information on the structural properties of materials, a primary challenge remains in the analysis of the resulting data. We are developing novel high-performance computing algorithms, codes, and software tools for the analysis of X-ray scattering data. In this paper we describe two such HPC algorithm advances. Firstly, we have implemented a flexible and highly efficient Grazing Incidence Small Angle Scattering (GISAXS) simulation code based on the Distorted Wave Born Approximation (DWBA) theory with C++/CUDA/MPI on a cluster of GPUs. Our code can compute the scattered light intensity from any given sample in all directions of space, thus allowing full construction of the GISAXS pattern. Preliminary tests on a single GPU show speedups of over 125x compared to the sequential code, and almost linear speedup when executing across a GPU cluster with 42 nodes, resulting in an additional 40x speedup compared to using one GPU node. Secondly, for the structural fitting problems in inverse modeling, we have implemented a Reverse Monte Carlo simulation algorithm with C++/CUDA using one GPU. Since there are large numbers of fitting parameters in the X-ray scattering simulation model, the earlier single-CPU code required weeks of runtime. Deploying the AccelerEyes Jacket/Matlab wrapper to use the GPU gave around a 100x speedup over the pure CPU code. Our further C++/CUDA optimization delivered an additional 9x speedup.
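Reverse Monte Carlo fitting, mentioned above for the inverse-modeling problem, amounts to a Metropolis-style walk in parameter space that favors moves reducing the misfit to the measured pattern. A scalar toy version with an invented two-parameter model and synthetic "data" sketches the loop; the real code fits many parameters against full GISAXS patterns on a GPU.

```python
# Toy Reverse-Monte-Carlo-style fit (hypothetical model and data): random
# parameter moves, accepted when they reduce chi^2 (or, rarely, via a
# Metropolis criterion with a small temperature).
import math, random

def model(q, amp, width):
    """Invented two-parameter scattering-intensity stand-in."""
    return amp * math.exp(-(q * width) ** 2)

def chi2(params, qs, data):
    return sum((model(q, *params) - d) ** 2 for q, d in zip(qs, data))

def reverse_monte_carlo(qs, data, start, steps=5000, step_size=0.05,
                        temp=1e-4, seed=0):
    rng = random.Random(seed)
    params = list(start)
    cost = chi2(params, qs, data)
    for _ in range(steps):
        trial = [p + rng.uniform(-step_size, step_size) for p in params]
        c = chi2(trial, qs, data)
        # accept downhill moves always, uphill moves with Metropolis probability
        if c < cost or rng.random() < math.exp(-(c - cost) / temp):
            params, cost = trial, c
    return params, cost

qs = [0.05 * i for i in range(40)]
data = [model(q, 2.0, 1.5) for q in qs]      # synthetic "measured" pattern
fit, cost = reverse_monte_carlo(qs, data, start=[1.0, 1.0])
print([round(p, 2) for p in fit], f"chi2 = {cost:.3g}")
```

The expensive part in practice is `chi2` itself (a full forward simulation per proposal), which is why the paper moves the evaluation onto the GPU rather than changing the accept/reject loop.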
Hou, Zhangshuan; Huang, Maoyi; Leung, Lai-Yung R.; Lin, Guang; Ricciuto, Daniel M.
2012-08-10
Uncertainties in hydrologic parameters could have significant impacts on the simulated water and energy fluxes and land surface states, which will in turn affect atmospheric processes and the carbon cycle. Quantifying such uncertainties is an important step toward better understanding and quantification of uncertainty of integrated earth system models. In this paper, we introduce an uncertainty quantification (UQ) framework to analyze sensitivity of simulated surface fluxes to selected hydrologic parameters in the Community Land Model (CLM4) through forward modeling. Thirteen flux tower footprints spanning a wide range of climate and site conditions were selected to perform sensitivity analyses by perturbing the parameters identified. In the UQ framework, prior information about the parameters was used to quantify the input uncertainty using the Minimum-Relative-Entropy approach. The quasi-Monte Carlo approach was applied to generate samples of parameters on the basis of the prior pdfs. Simulations corresponding to sampled parameter sets were used to generate response curves and response surfaces and statistical tests were used to rank the significance of the parameters for output responses including latent (LH) and sensible heat (SH) fluxes. Overall, the CLM4 simulated LH and SH show the largest sensitivity to subsurface runoff generation parameters. However, study sites with deep root vegetation are also affected by surface runoff parameters, while sites with shallow root zones are also sensitive to the vadose zone soil water parameters. Generally, sites with finer soil texture and shallower rooting systems tend to have larger sensitivity of outputs to the parameters. Our results suggest the necessity of and possible ways for parameter inversion/calibration using available measurements of latent/sensible heat fluxes to obtain the optimal parameter set for CLM4. 
This study also provided guidance on reduction of parameter set dimensionality and parameter calibration framework design for CLM4 and other land surface models under different hydrologic and climatic regimes.
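The quasi-Monte Carlo sampling step above can be sketched with a Halton low-discrepancy sequence, one common QMC generator; the paper does not specify which generator it uses, and the uniform prior bounds and parameter names below are hypothetical.

```python
# Sketch: Halton quasi-Monte Carlo samples scaled to per-parameter bounds,
# a stand-in for drawing CLM4 parameter sets from prior pdfs. Bounds and
# parameter meanings are invented for illustration.

def halton(index, base):
    """Van der Corput radical inverse of `index` in `base`, in [0, 1)."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def qmc_samples(n, bounds, bases=(2, 3)):
    """n quasi-random points scaled to per-parameter (lo, hi) bounds."""
    pts = []
    for i in range(1, n + 1):
        u = [halton(i, b) for b in bases[: len(bounds)]]
        pts.append([lo + ui * (hi - lo) for ui, (lo, hi) in zip(u, bounds)])
    return pts

# e.g. a subsurface-runoff decay factor and a saturated hydraulic conductivity
bounds = [(0.5, 10.0), (1e-6, 1e-3)]
samples = qmc_samples(8, bounds)
for s in samples:
    print(s)
```

Compared with pseudo-random draws, the low-discrepancy points cover the parameter box more evenly, so fewer forward model runs are needed for stable response surfaces.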
Project Reports for San Carlos Apache Tribe- 2012 Project
Broader source: Energy.gov [DOE]
Under this project, the San Carlos Apache Tribe will study the feasibility of solar energy projects within the reservation with the potential to generate a minimum of 1 megawatt (MW).
VWA-0021- In the Matter of Carlos M. Castillo
Broader source: Energy.gov [DOE]
This Decision involves a complaint filed by Carlos M. Castillo (Castillo or "the complainant") under the Department of Energy (DOE) Contractor Employee Protection Program, 10 C.F.R. Part 708....
Jefferson Lab finds its man Mont (Inside Business) | Jefferson Lab
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
https://www.jlab.org/news/articles/jefferson-lab-finds-its-man-mont-inside-business Jefferson Lab finds its man Mont Hugh Montgomery Hugh Montgomery, a British nuclear physicist selected from more than 50 candidates, takes over as lab director Sept. 2. By Michael Schwartz, Inside Business April 14, 2008 Replacing the head of a world-renowned nuclear physics facility is no easy feat. When Christoph Leemann announced his desire to retire in 2007 as director of the Department of Energy's Thomas
The first-principle coupled calculations using TMCC and CFX for the pin-wise simulation of LWR
Li, L.; Wang, K.
2012-07-01
The coupling of neutronics and thermal-hydraulics plays an important role in reactor safety, core design, and the operation of nuclear power facilities. This paper presents research on the coupling of the Monte Carlo method and the CFD method, specifically using TMCC and CFX. The coupling methods, including the coupling approach, data transfer, mesh mapping, and the transient coupling scheme, are studied first. The coupling of TMCC and CFX for steady-state calculations is studied and described for a single-rod model and a 3 x 3 rod bundle model. The results prove that the coupling method is feasible and that the coupled calculation can be used for steady-state analyses; however, an oscillation occurring during the coupled calculation indicates that the method's accuracy still needs improvement. The coupling for transient calculations is also studied and tested with two cases: a steady state and a loss of heat sink. The preliminary transient results indicate that the coupling of TMCC and CFX is able to simulate transients, but instabilities occur, and the transient coupling needs to be improved owing to the limitation of computational resources and the difference in time scales. (authors)
Ovanesyan, Zaven; Marucho, Marcelo; Medasani, Bharat; Fenley, Marcia O.; Guerrero-García, Guillermo Iván; Olvera de la Cruz, Mónica
2014-12-14
The ionic atmosphere around a nucleic acid regulates its stability in aqueous salt solutions. One major source of complexity in biological activities involving nucleic acids arises from the strong influence of the surrounding ions and water molecules on their structural and thermodynamic properties. Here, we implement a classical density functional theory for cylindrical polyelectrolytes embedded in aqueous electrolytes containing explicit (neutral hard sphere) water molecules at experimental solvent concentrations. Our approach allows us to include ion correlations as well as solvent and ion excluded volume effects for studying the structural and thermodynamic properties of highly charged cylindrical polyelectrolytes. Several models of size and charge asymmetric mixtures of aqueous electrolytes at physiological concentrations are studied. Our results are in good agreement with Monte Carlo simulations. Our numerical calculations display significant differences in the ion density profiles for the different aqueous electrolyte models studied. However, similar results regarding the excess number of ions adsorbed to the B-DNA molecule are predicted by our theoretical approach for different aqueous electrolyte models. These findings suggest that ion counting experimental data should not be used alone to validate the performance of aqueous DNA-electrolyte models.
Unwin, Stephen D.; Layton, Robert F.; Johnson, Kenneth I.; Lowry, Peter P.
2012-06-25
Abstract: The Next Generation Systems Analysis Code - referred to as R7 - is reactor systems simulation software being developed to support the Risk-Informed Safety Margin Characterization Pathway of the U.S. Department of Energy's Light Water Reactor Sustainability Program. It will provide an integrated multi-physics environment, implemented in an uncertainty quantification (UQ) framework that can produce risk and other performance insights on long-term reactor operations. An element of this simulation environment will be the performance of passive components and materials. Conventional models of component reliability are largely parametric, relying on plant service data to estimate component lifetimes and failure rates. This type of model has limited usefulness in the R7 environment where the intent is to explicitly determine the influence of physical stressors on component degradation. In this paper, we describe a new class of multi-state physics-based component models designed to be R7-compatible. These models capture the physics of materials degradation while also incorporating the effects of interventions and component rejuvenation. The models are implemented in a cumulative damage framework that allows the impact of an evolving physical environment to be addressed without recourse to resampling within the Monte Carlo-based UQ framework. The paper describes an application to stress corrosion cracking in dissimilar metal welds - a principal contributor to potential loss of coolant accidents. So while R7 will have the more conventional capability of reactor simulation codes to model the impact of degraded components and systems on plant performance, the methodology described here allows R7 to model the inverse effect; the impact of the physical environment on component degradation and performance.
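The cumulative damage idea above, sampling a component's stochastic variability once and then integrating damage through an evolving physical environment without resampling, can be sketched as follows. The damage-rate law, stressor history, and lognormal variability factor are all invented for illustration; only the pattern (one draw per trial, deterministic accumulation thereafter) mirrors the framework described.

```python
# Toy cumulative-damage Monte Carlo sketch (hypothetical model): each trial
# draws its stochastic variability once, then deterministically integrates
# damage over a prescribed, evolving stressor history. Failure at damage >= 1.
import math, random

def failure_time(rng, stressor, dt=1.0):
    """One trial: d <- d + rate(s) * X * dt; return first time with d >= 1."""
    x = math.exp(rng.gauss(0.0, 0.5))        # per-trial factor, sampled once
    damage, t = 0.0, 0.0
    for s in stressor:
        damage += (1e-3 * s ** 2) * x * dt   # toy stress-dependent damage rate
        t += dt
        if damage >= 1.0:
            return t
    return float("inf")                       # survived the whole history

rng = random.Random(7)
history = [1.0 + 0.5 * math.sin(0.01 * k) for k in range(5000)]  # evolving env
times = [failure_time(rng, history) for _ in range(500)]
finite = [t for t in times if t < float("inf")]
print(f"{len(finite)}/500 failures, mean failure time "
      f"{sum(finite) / len(finite):.0f}")
```

Because the randomness is drawn up front, changing the stressor history changes the failure-time distribution without any resampling inside the UQ loop, which is the property the R7 framework exploits.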
Physics validation studies for muon collider detector background simulations
Morris, Aaron Owen; Northern Illinois U.
2011-07-01
Within the broad discipline of physics, the study of the fundamental forces of nature and the most basic constituents of the universe belongs to the field of particle physics. While frequently referred to as 'high-energy physics,' or by the acronym 'HEP,' particle physics is not driven just by the quest for ever-greater energies in particle accelerators. Rather, particle physics is seen as having three distinct areas of focus: the cosmic, intensity, and energy frontiers. These three frontiers all provide different, but complementary, views of the basic building blocks of the universe. Currently, the energy frontier is the realm of hadron colliders like the Tevatron at Fermi National Accelerator Laboratory (Fermilab) or the Large Hadron Collider (LHC) at CERN. While the LHC is expected to be adequate for explorations up to 14 TeV for the next decade, the long development lead time for modern colliders necessitates research and development efforts in the present for the next generation of colliders. This paper focuses on one such next-generation machine: a muon collider. Specifically, it presents Monte Carlo simulations of beam-induced backgrounds vis-a-vis detector region contamination. Initial validation studies of a few muon collider physics background processes using G4beamline have been undertaken and results presented. While these investigations have revealed a number of hurdles to bringing G4beamline up to the level of more established simulation suites, such as MARS, the close communication between us, as users, and the G4beamline developer, Tom Roberts, has allowed for rapid implementation of user-desired features. The main example of user-desired feature implementation, as it applies to this project, is Bethe-Heitler muon production. Regarding the neutron interaction issues, we continue to study the specifics of how GEANT4 implements nuclear interactions.
The GEANT4 collaboration has been contacted regarding the minor discrepancies in the neutron interaction cross sections for boron. While corrections to the data files themselves are simple to implement and distribute, it is quite possible that coding changes may be required in G4beamline or even in GEANT4 to fully correct nuclear interactions. Regardless, these studies are ongoing and future results will be reflected in updated releases of G4beamline.
Duo at Santa Fe's Monte del Sol Charter
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Duo at Santa Fe's Monte del Sol Charter School takes top award in 25th New Mexico Supercomputing Challenge April 21, 2015 Using nanotechnology robots to kill cancer cells LOS ALAMOS, N.M., April 21, 2015 - Meghan Hill and Katelynn James of Santa Fe's Monte del Sol Charter School took the top prize in the 25th New Mexico Supercomputing Challenge Tuesday at Los Alamos National Laboratory for their research project, "Using Concentrated Heat Systems to Shock the P53 Protein to Direct Cancer into
Société d'exploitation du parc éolien de Mont d'Hézècques SARL...
Société d'exploitation du parc éolien de Mont d'Hézècques SARL. Name: Société d'exploitation du parc éolien de Mont d'Hézècques SARL. Place:...
Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Rajak, Pankaj; Vashishta, Priya; Kunaseth, Manaschai; Ohmura, Satoshi; Shimamura, Kohei (all: Collaboratory for Advanced Computing and Simulations, Department of Physics and Astronomy, Department of Computer Science, and Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, California 90089-0242, United States; some authors also: Department of Physics, Kumamoto University, Kumamoto 860-8555, Japan; Department of Physics, Kyoto University, Kyoto 606-8502, Japan; Department of Applied Quantum Physics and Nuclear Engineering, Kyushu University, Fukuoka 819-0395, Japan; National Nanotechnology Center, Pathumthani 12120, Thailand)
2014-05-14
We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786,432 cores for a 50.3 × 10{sup 6}-atom SiC system. As a test of production runs, an LDC-DFT-based QMD simulation involving 16,661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states.
A series of techniques are employed for efficiently calculating the long-range exact exchange correction and excited-state forces. The NAQMD trajectories are analyzed to extract the rates of various excitonic processes, which are then used in KMC simulation to study the dynamics of the global exciton flow network. This has allowed the study of large-scale photoexcitation dynamics in a 6400-atom amorphous molecular solid, reaching experimental time scales.
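The final step described above feeds NAQMD-derived rates into a kinetic Monte Carlo simulation of the exciton flow network. As an illustration only (the process names and rates below are invented, not taken from the paper), the core KMC loop over competing first-order processes can be sketched as:

```python
import random

def kmc_step(rates, rng=random):
    """One kinetic Monte Carlo step over competing first-order processes.

    rates: dict mapping process name -> rate. Returns (chosen process,
    time increment) sampled from the standard KMC rules.
    """
    total = sum(rates.values())
    # Exponentially distributed waiting time with mean 1/total.
    dt = rng.expovariate(total)
    # Pick a process with probability proportional to its rate.
    r = rng.uniform(0.0, total)
    acc = 0.0
    for name, k in rates.items():
        acc += k
        if r <= acc:
            return name, dt
    return name, dt  # numerical edge case

# Hypothetical excitonic processes with made-up rates (1/ps).
rates = {"hop": 5.0, "radiative_decay": 0.1, "nonradiative_decay": 0.4}
random.seed(1)
counts = {k: 0 for k in rates}
for _ in range(10000):
    name, _ = kmc_step(rates)
    counts[name] += 1
# Branching ratios should approach rate / total_rate (hop: ~5.0/5.5).
print(counts["hop"] / 10000)
```

Each step draws an exponential waiting time from the total rate and selects a process with probability proportional to its rate; long-run branching ratios then approach rate/total.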
Testa, Paola [Smithsonian Astrophysical Observatory, 60 Garden Street, MS 58, Cambridge, MA 02138 (United States); De Pontieu, Bart; Martinez-Sykora, Juan [Lockheed Martin Solar and Astrophysics Laboratory, Org. A021S, Building 252, 3251 Hanover Street, Palo Alto, CA 94304 (United States); Hansteen, Viggo; Carlsson, Mats, E-mail: ptesta@cfa.harvard.edu [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029, Blindern, NO-0315 Oslo (Norway)
2012-10-10
Determining the temperature distribution of coronal plasmas can provide stringent constraints on coronal heating. Current observations with the Extreme ultraviolet Imaging Spectrograph (EIS) on board Hinode and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory provide diagnostics of the emission measure distribution (EMD) of the coronal plasma. Here we test the reliability of temperature diagnostics using three-dimensional radiative MHD simulations. We produce synthetic observables from the models and apply the Markov chain Monte Carlo EMD diagnostic. By comparing the derived EMDs with the 'true' distributions from the model, we assess the limitations of the diagnostics as a function of the plasma parameters and the signal-to-noise ratio of the data. We find that EMDs derived from EIS synthetic data reproduce some general characteristics of the true distributions, but usually show differences from the true EMDs that are much larger than the estimated uncertainties suggest, especially when structures with significantly different densities overlap along the line of sight. When using AIA synthetic data, the derived EMDs reproduce the true EMDs much less accurately, especially for broad EMDs. The differences between the two instruments are due to: (1) the smaller number of constraints provided by AIA data, and (2) the broad temperature response functions of the AIA channels, which place looser constraints on the temperature distribution. Our results suggest that EMDs derived from current observatories may often show significant discrepancies from the true EMDs, rendering their interpretation fraught with uncertainty. These inherent limitations of the method should be carefully considered when using these distributions to constrain coronal heating.
Lyα RADIATIVE TRANSFER WITH DUST: ESCAPE FRACTIONS FROM SIMULATED HIGH-REDSHIFT GALAXIES
Laursen, Peter; Sommer-Larsen, Jesper; Andersen, Anja C. E-mail: jslarsen@astro.ku.d
2009-10-20
The Lyα emission line is an essential diagnostic tool for probing galaxy formation and evolution. Not only is it commonly the strongest observable line from high-redshift galaxies, but its shape can reveal detailed information about its host galaxy. However, because the scattering nature of Lyα photons increases their path length in a nontrivial way, if dust is present in the galaxy the line may be severely suppressed and its shape altered. In order to interpret observations correctly, it is thus of crucial significance to know how much of the emitted light actually escapes the galaxy. In the present work, using a combination of high-resolution cosmological hydrosimulations and an adaptively refinable Monte Carlo Lyα radiative transfer code including an environment-dependent model of dust, the escape fractions f {sub esc} of Lyα radiation from high-redshift (z = 3.6) galaxies are calculated. In addition to the average escape fraction, the variation of f {sub esc} in different directions and from different parts of the galaxies is investigated, as well as the effect on the emergent spectrum. Escape fractions from a sample of simulated galaxies of representative physical properties are found to decrease with increasing galaxy virial mass M {sub vir}, from f {sub esc} approaching unity for M {sub vir} ≈ 10{sup 9} M {sub sun} to f {sub esc} less than 10% for M {sub vir} ≈ 10{sup 12} M {sub sun}. In spite of dust being almost gray, the emergent spectrum is found to be affected nonuniformly, with the escape fraction of photons close to the line center being much higher than that of photons in the wings, thus effectively narrowing the Lyα line.
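The escape-fraction calculation above relies on Monte Carlo radiative transfer of individually tracked photons. A heavily simplified sketch of the idea, assuming a uniform scattering sphere and a constant per-scattering dust destruction probability (a cartoon, not the paper's adaptively refined, environment-dependent dust model):

```python
import math, random

def escape_fraction(tau_s, p_abs, n_photons=2000, rng=random):
    """Toy Monte Carlo estimate of a line escape fraction.

    Photons random-walk in a uniform sphere of scattering optical depth
    tau_s (radius in units of the mean free path). At each scattering the
    photon is destroyed by dust with probability p_abs.
    """
    escaped = 0
    for _ in range(n_photons):
        x = y = z = 0.0
        while True:
            # Isotropic direction and exponential path length (1 mfp mean).
            mu = rng.uniform(-1.0, 1.0)
            phi = rng.uniform(0.0, 2.0 * math.pi)
            s = rng.expovariate(1.0)
            sin_t = math.sqrt(1.0 - mu * mu)
            x += s * sin_t * math.cos(phi)
            y += s * sin_t * math.sin(phi)
            z += s * mu
            if math.sqrt(x * x + y * y + z * z) > tau_s:
                escaped += 1
                break
            if rng.random() < p_abs:
                break  # photon absorbed by dust
    return escaped / n_photons

random.seed(0)
# More dust (higher p_abs) suppresses escape, as in the abstract.
f_lo, f_hi = escape_fraction(3.0, 0.01), escape_fraction(3.0, 0.1)
print(f_lo, f_hi)
```

Because scattering lengthens the path, even a modest per-event destruction probability suppresses escape strongly, which is the qualitative effect the abstract describes.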
Modeling and Simulations for the High Flux Isotope Reactor Cycle 400
Ilas, Germina; Chandler, David; Ade, Brian J; Sunny, Eva E; Betzler, Benjamin R; Pinkston, Daniel
2015-03-01
A concerted effort over the past few years has been focused on enhancing the core model for the High Flux Isotope Reactor (HFIR), as part of a comprehensive study for HFIR conversion from high-enriched uranium (HEU) to low-enriched uranium (LEU) fuel. At this time, the core model used to perform analyses in support of HFIR operation is an MCNP model for the beginning of Cycle 400, which was documented in detail in a 2005 technical report. A HFIR core depletion model based on current state-of-the-art methods and nuclear data was needed to serve as a reference for the design of an LEU fuel for HFIR. The recent enhancements in modeling and simulations for HFIR discussed in the present report include: (1) revision of the 2005 MCNP model for the beginning of Cycle 400 to improve the modeling data and assumptions as necessary, based on appropriate primary reference sources (HFIR drawings and reports); (2) improvement of the fuel region model, including an explicit representation of the involute fuel plate geometry that is characteristic of HFIR fuel; and (3) revision of the Monte Carlo-based depletion model for HFIR, in use since 2009 but never documented in detail, with the development of a new depletion model for the HFIR explicit fuel plate representation. The new HFIR models for Cycle 400 are used to determine various metrics of relevance to reactor performance and safety assessments. The calculated metrics are compared, where possible, with measurement data from preconstruction critical experiments at HFIR, data included in the current HFIR safety analysis report, and/or data from previous calculations performed with different methods or codes. The results of the analyses show that the models presented in this report provide a robust and reliable basis for HFIR analyses.
Report on International Collaboration Involving the FE Heater and HG-A Tests at Mont Terri
Houseworth, Jim; Rutqvist, Jonny; Asahina, Daisuke; Chen, Fei; Vilarrasa, Victor; Liu, Hui-Hai; Birkholzer, Jens
2013-11-06
Nuclear waste programs outside of the US have focused on different host rock types for geological disposal of high-level radioactive waste. Several countries, including France, Switzerland, Belgium, and Japan, are exploring the possibility of waste disposal in shale and other clay-rich rock that falls within the general classification of argillaceous rock. This rock type is also of interest for the US program because the US has extensive sedimentary basins containing large deposits of argillaceous rock. LBNL, as part of the DOE-NE Used Fuel Disposition Campaign, is collaborating on some of the underground research laboratory (URL) activities at the Mont Terri URL near Saint-Ursanne, Switzerland. The Mont Terri project, which began in 1995, has developed a URL at a depth of about 300 m in a stiff clay formation called the Opalinus Clay. Our current collaboration efforts include two test modeling activities, for the FE heater test and the HG-A leak-off test. This report documents results concerning our current modeling of these field tests. The overall objectives of these activities are an improved understanding of, and advanced relevant modeling capabilities for, EDZ evolution in clay repositories and the associated coupled processes, and the development of a technical basis for the maximum allowable temperature for a clay repository. The R&D activities documented in this report are part of the work package of natural system evaluation and tool development that directly supports the following Used Fuel Disposition Campaign (UFDC) objectives:
- Develop a fundamental understanding of disposal-system performance in a range of environments for potential wastes that could arise from future nuclear-fuel-cycle alternatives through theory, simulation, testing, and experimentation.
- Develop a computational modeling capability for the performance of storage and disposal options for a range of fuel-cycle alternatives, evolving from generic models to more robust models of performance assessment.
For the purpose of validating modeling capabilities for thermal-hydro-mechanical (THM) processes, we developed a suite of simulation models for the planned full-scale FE Experiment to be conducted in the Mont Terri URL, including a full three-dimensional model that will be used for direct comparison to experimental data once available. We performed for the first time a THM analysis involving the Barcelona Basic Model (BBM) in a full three-dimensional field setting for modeling the geomechanical behavior of the buffer material and its interaction with the argillaceous host rock. We have simulated a well-defined benchmark that will be used for code-to-code verification against modeling results from other international modeling teams. The analysis highlights the complex coupled geomechanical behavior in the buffer and its interaction with the surrounding rock, and the importance of a well-characterized buffer material in terms of THM properties. A new geomechanical fracture-damage model, TOUGH-RBSN, was applied to investigate damage behavior in the ongoing HG-A test at the Mont Terri URL. Two model modifications have been implemented so that the Rigid-Body-Spring-Network (RBSN) model can be used for analysis of fracturing around the HG-A microtunnel. These modifications are (1) a methodology to compute fracture generation under compressive stress conditions and (2) a method to represent anisotropic elastic and strength properties. The method for computing fracture generation under compressive load produces results that roughly follow trends expected for homogeneous and layered systems. Anisotropic properties for the bulk rock were represented in the RBSN model using layered heterogeneity and gave bulk material responses in line with expectations.
These model improvements were implemented for an initial model of fracture damage at the HG-A test. While the HG-A test model results show some similarities with the test observations, differences between the model results and observations remain.
San Carlos Apache Tribe Set to Break Ground on New Solar Project |
Department of Energy, March 13, 2014 - 1:05pm. The San Carlos Apache Tribe is making use of its extensive solar resources to power tribal facilities, including a 10-kilowatt (kW) solar PV system that generates energy to run the tribal radio tower. Photo from San Carlos Apache Tribe, NREL 29202.
Carlos Duarte Priya Gandhi Antony Kim Jared Landsman
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Carlos Duarte, Priya Gandhi, Antony Kim, Jared Landsman, Luis Santos, Sara Tepfer, Taoning Wang (Team Negawatt). Broader context: selected site Los Angeles, CA (Koreatown District); built in 1916 and designated a Historical Monument in 1998; 3,450 ft² single-family dwelling. Project site: CZ9 weather station, CZ8 weather station. Climate: climate zones 9, 8, and 6.
- Increase urban density
- Rehab an existing building
- Maintain historical preservation status
- Zero Net Energy (ZNE)
Solar Feasibility Study May 2013 - San Carlos Apache Tribe
Rapp, Jim; Duncan, Ken; Albert, Steve
2013-05-01
The San Carlos Apache Tribe (Tribe), in the interests of strengthening tribal sovereignty, becoming more energy self-sufficient, and providing improved services and economic opportunities to tribal members and San Carlos Apache Reservation (Reservation) residents and businesses, has explored a variety of options for renewable energy development. The development of renewable energy technologies and generation is consistent with the Tribe's 2011 Strategic Plan. This Study assessed the possibilities for both commercial-scale and community-scale solar development within the southwestern portions of the Reservation around the communities of San Carlos, Peridot, and Cutter, and in the southeastern Reservation around the community of Bylas. Based on the lack of any commercial-scale electric power transmission between the Reservation and the regional transmission grid, Phase 2 of this Study greatly expanded consideration of community-scale options. Three smaller sites (Point of Pines, Dudleyville/Winkleman, and Seneca Lake) were also evaluated for community-scale solar potential. Three building complexes were identified within the Reservation where the development of site-specific facility-scale solar power would be the most beneficial and cost-effective: Apache Gold Casino/Resort, Tribal College/Skill Center, and the Dudleyville (Winkleman) Casino.
Georgescu, Ionuț; Mandelshtam, Vladimir A.; Jitomirskaya, Svetlana
2013-11-28
Given a quantum many-body system, the Self-Consistent Phonons (SCP) method provides an optimal harmonic approximation by minimizing the free energy. In particular, the SCP estimate for the vibrational ground state (zero temperature) appears to be surprisingly accurate. We explore the possibility of going beyond the SCP approximation by considering the system Hamiltonian evaluated in the harmonic eigenbasis of the SCP Hamiltonian. It appears that the SCP ground state is already uncoupled from all singly- and doubly-excited basis functions, so in order to improve the SCP result at least triply-excited states must be included, which then reduces the error in the ground-state estimate substantially. For a multidimensional system two numerical challenges arise: evaluation of the potential energy matrix elements in the harmonic basis, and handling and diagonalizing the resulting Hamiltonian matrix, whose size grows rapidly with the dimensionality of the system. Using the example of the water hexamer, we demonstrate that such a calculation is feasible, i.e., constructing and diagonalizing the Hamiltonian matrix in a triply-excited SCP basis, without any additional assumptions or approximations. In particular, our results indicate that the ground-state energy differences between different isomers (e.g., cage and prism) of the water hexamer are already quite accurate within the SCP approximation.
Energy Science and Technology Software Center (OSTI)
2008-12-31
The Software consists of a spreadsheet written in Microsoft Excel that provides an hourly simulation of a wind energy system, which includes a calculation of wind turbine output as a power-curve fit of wind speed.
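A power-curve fit of wind speed, as described above, is just a lookup of turbine output on a speed-versus-power curve applied hour by hour. A minimal sketch, with illustrative curve points that are not the spreadsheet's actual data:

```python
def turbine_power(v, curve):
    """Piecewise-linear power-curve lookup: wind speed v (m/s) -> power (kW).

    curve is a sorted list of (speed, power) points; output is 0 outside
    the curve's range (below cut-in or above cut-out).
    """
    if v < curve[0][0] or v > curve[-1][0]:
        return 0.0
    for (v0, p0), (v1, p1) in zip(curve, curve[1:]):
        if v0 <= v <= v1:
            return p0 + (p1 - p0) * (v - v0) / (v1 - v0)
    return 0.0

# Illustrative 100 kW turbine: cut-in 3 m/s, rated 12 m/s, cut-out 25 m/s.
curve = [(3.0, 0.0), (12.0, 100.0), (25.0, 100.0)]
hourly_wind = [2.0, 5.0, 9.0, 13.0, 26.0]  # one toy run of hourly speeds
energy_kwh = sum(turbine_power(v, curve) for v in hourly_wind)  # kWh over 5 h
print(energy_kwh)  # → 188.888...
```

Summing the hourly powers (each held for one hour) gives the energy yield, which is the core of an hourly wind-system simulation.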
United States Environmental Protection Agency Environmental Monitoring
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Environmental Protection Agency, Environmental Monitoring Systems Laboratory, P.O. Box 93478, Las Vegas, NV 89193-5478. EPA/600/4-88-021; DOE/DP/00539-060; June 1988. EPA Research and Development: Off-Site Environmental Monitoring Report: Radiation Monitoring Around United States Nuclear Test Areas, 1987. Prepared for the United States Department of Energy under Interagency Agreement Number DE-AI08-86NV10622.
Hooper, David A; Henkel, James J; Whitaker, Michael
2012-01-01
This paper presents research into the adaptation of monitoring techniques from maintainability and reliability (M&R) engineering for remote unattended monitoring of gas centrifuge enrichment plants (GCEPs) for international safeguards. Two categories of techniques are discussed: the sequential probability ratio test (SPRT) for diagnostic monitoring, and sequential Monte Carlo (SMC or, more commonly, 'particle filtering') for prognostic monitoring. Development and testing of the application of condition-based monitoring (CBM) techniques was performed on the Oak Ridge Mock Feed and Withdrawal (F&W) facility as a proof of principle. CBM techniques have been extensively developed for M&R assessment of physical processes, such as manufacturing and power plants. These techniques are normally used to locate and diagnose the effects of mechanical degradation of equipment to aid in planning of maintenance and repair cycles. In a safeguards environment, however, the goal is not to identify mechanical deterioration, but to detect and diagnose (and potentially predict) attempts to circumvent normal, declared facility operations, such as through protracted diversion of enriched material. The CBM techniques are first explained from the traditional perspective of maintenance and reliability engineering. The adaptation of CBM techniques to inspector monitoring is then discussed, focusing on the unique challenges of decision-based effects rather than equipment degradation effects. These techniques are then applied to the Oak Ridge Mock F&W facility, a water-based physical simulation of the material feed and withdrawal process used at enrichment plants, which is used to develop and test online monitoring techniques for fully information-driven safeguards of GCEPs. Advantages and limitations of the CBM approach to online monitoring are discussed, as well as the potential challenges of adapting CBM concepts to safeguards applications.
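The SPRT mentioned above accumulates a log-likelihood ratio one observation at a time and stops as soon as it crosses thresholds set by the target error rates. A generic Gaussian-mean version (Wald's textbook formulation, not the authors' safeguards implementation; all numbers are illustrative):

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: mean=mu0 vs H1: mean=mu1, known sigma.

    Returns ("H0" | "H1" | "continue", number of samples used).
    Thresholds follow Wald's approximations A=(1-beta)/alpha, B=beta/(1-alpha).
    """
    upper = math.log((1.0 - beta) / alpha)  # accept H1 at or above this
    lower = math.log(beta / (1.0 - alpha))  # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment for one Gaussian observation.
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma ** 2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(samples)

# A process drifting around mean 1 (vs. declared mean 0) triggers H1 quickly.
decision, n = sprt([1.1, 0.9, 1.2, 1.0, 1.1, 0.8, 1.0, 1.2],
                   mu0=0.0, mu1=1.0, sigma=0.5)
print(decision, n)  # → H1 2
```

The appeal for unattended monitoring is that the test reaches a decision with close to the minimum number of observations for the specified false-alarm and missed-detection rates.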
On the electric micro-field in plasmas: statistics of the spatial derivatives
Guerricha, S.; Chihi, S.; Meftah, M. T.
2008-10-22
Using Monte Carlo simulation, we calculated, for some specific plasmas, the distribution functions of the spatial derivatives of the micro-field components. Some of them are compared with those calculated earlier by other authors.
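Micro-field distributions of this kind are built up by repeatedly sampling random ion configurations and evaluating the field at a test point (the derivatives follow the same sampling pattern). A toy version of the sampling step, assuming independent unit charges uniformly distributed in a sphere (a Holtsmark-type idealization, not the authors' plasma model):

```python
import math, random

def microfield_sample(n_ions, radius, rng=random):
    """One Monte Carlo realization of the electric micro-field at the origin.

    n_ions unit charges are placed uniformly in a sphere of given radius
    (charge and distance in arbitrary units) and their Coulomb fields at
    the origin are summed component-wise.
    """
    ex = ey = ez = 0.0
    for _ in range(n_ions):
        # Uniform point in a sphere via radius ~ U^(1/3).
        r = radius * rng.random() ** (1.0 / 3.0)
        mu = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - mu * mu)
        x, y, z = r * sin_t * math.cos(phi), r * sin_t * math.sin(phi), r * mu
        r3 = (x * x + y * y + z * z) ** 1.5 or 1e-12  # guard against r = 0
        # Field at the origin of a unit charge at (x, y, z).
        ex -= x / r3; ey -= y / r3; ez -= z / r3
    return ex, ey, ez

random.seed(2)
samples = [microfield_sample(50, 10.0)[2] for _ in range(4000)]
mean_ez = sum(samples) / len(samples)
print(mean_ez)  # symmetric distribution: mean near zero
```

Histogramming many such samples (or of finite-difference derivatives between nearby test points) yields the distribution functions the abstract refers to.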
DEGAS 2 Daren Stotler and Charles Karney | Princeton Plasma Physics Lab
DEGAS 2 Daren Stotler and Charles Karney This invention is a Monte Carlo simulation code designed to study the behavior of neutral particles in plasmas with an emphasis on fusion applications. No.: M-807 Inventor(s): Daren P Stotler
Electrolyte pore/solution partitioning by expanded grand canonical ensemble
Office of Scientific and Technical Information (OSTI)
Monte Carlo simulation (Journal Article) | SciTech Connect. This content will become publicly available on March 27, 2016. Title: Electrolyte pore/solution partitioning by expanded grand canonical ensemble Monte Carlo simulation. Authors: Moucka, Filip; Bratko, Dusan; Luzar, Alenka (ORCID: 0000-0003-1640-191X).
Energy Science and Technology Software Center (OSTI)
2005-10-15
HybSim (short for Hybrid Simulator) is a flexible, easy-to-use screening tool that allows the user to quantify the technical and economic benefits of installing a village hybrid generating system. It simulates systems with any combination of diesel generator sets, photovoltaic arrays, wind turbines, and battery energy storage systems. Most village systems (or small population sites such as villages, remote military bases, small communities, and independent or isolated buildings or centers) depend on diesel generation systems for their source of energy. HybSim allows the user to evaluate other sources of energy that can greatly reduce the dollar-per-kilowatt-hour ratio. Supported by the DOE Energy Storage Program, HybSim was initially developed to help analyze the benefits of energy storage systems in Alaskan villages. Soon after its development, other sources of energy were added, providing the user with a greater range of analysis opportunities and providing the village with potentially added savings. In addition to village systems, HybSim has generated interest from military institutions in energy provisions and from USAID for international village analysis.
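Screening tools of this kind typically step through hourly load and resource series, dispatching PV and storage before diesel. A toy one-hour dispatch rule, with invented numbers and no efficiency losses (not HybSim's actual algorithm):

```python
def dispatch(load, pv, batt_kwh, batt_cap):
    """One-hour dispatch for a toy PV/battery/diesel village system.

    PV serves load first; surplus charges the battery (up to batt_cap);
    deficits draw on the battery, then diesel. Returns (new battery
    state, diesel kWh this hour). Losses and rate limits are ignored.
    """
    net = load - pv
    if net <= 0:
        return min(batt_cap, batt_kwh - net), 0.0  # charge with PV surplus
    from_batt = min(batt_kwh, net)
    return batt_kwh - from_batt, net - from_batt

# Illustrative six hours: constant 10 kW load, midday PV bump, 20 kWh battery.
load = [10.0] * 6
pv = [0.0, 5.0, 18.0, 18.0, 5.0, 0.0]
soc, diesel = 10.0, 0.0
for l, p in zip(load, pv):
    soc, d = dispatch(l, p, soc, batt_cap=20.0)
    diesel += d
print(diesel)  # diesel kWh still needed over the six hours → 5.0
```

Multiplying the diesel kWh by a fuel cost, and comparing against a diesel-only run, gives exactly the kind of dollar-per-kilowatt-hour comparison the description mentions.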
Duo at Santa Fe's Monte del Sol Charter School takes top award in 25th
Meghan Hill and Katelynn James of Monte del Sol Charter School in Santa Fe took the top prize for their research project in the 25th New Mexico Supercomputing Challenge, April 21, 2015. Contact: Steve Sandoval, Los Alamos National Laboratory.
Kwon, Kyung; Fan, Liang-Shih; Zhou, Qiang; Yang, Hui
2014-09-30
A new and efficient direct numerical method with second-order convergence accuracy was developed for fully resolved simulations of incompressible viscous flows laden with rigid particles. The method combines the state-of-the-art immersed boundary method (IBM), the multi-direct forcing method, and the lattice Boltzmann method (LBM). First, the multi-direct forcing method is adopted in the improved IBM to better approximate the no-slip/no-penetration (ns/np) condition on the surface of particles. Second, a slight retraction of the Lagrangian grid from the surface towards the interior of particles, by a fraction of the Eulerian grid spacing, helps increase the convergence accuracy of the method. An over-relaxation technique in the multi-direct forcing procedure and the classical fourth-order Runge-Kutta scheme for the coupled fluid-particle interaction were applied. The use of the classical fourth-order Runge-Kutta scheme helps the overall IB-LBM achieve second-order accuracy and provides more accurate predictions of the translational and rotational motion of particles. The pre-existing code with a first-order convergence rate was updated so that the new code can resolve the translational and rotational motion of particles with a second-order convergence rate. The updated code has been validated with several benchmark applications. The efficiency of the IBM, and thus of the IB-LBM, was improved by reducing the number of Lagrangian markers on particles using a new formula for the number of Lagrangian markers on particle surfaces. The immersed boundary-lattice Boltzmann method (IB-LBM) has been shown to predict correctly the angular velocity of a particle. Prior to examining the drag force exerted on a cluster of particles, the updated IB-LBM code, along with the new formula for the number of Lagrangian markers, was further validated by solving several theoretical problems.
Moreover, the unsteadiness of the drag force is examined when a fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. The simulation results agree well with the theories for the short- and long-time behavior of the drag force. Flows through non-rotational and rotational spheres in simple cubic arrays and random arrays are simulated over the entire range of packing fractions, and both low and moderate particle Reynolds numbers to compare the simulated results with the literature results and develop a new drag force formula, a new lift force formula, and a new torque formula. Random arrays of solid particles in fluids are generated with Monte Carlo procedure and Zinchenko's method to avoid crystallization of solid particles over high solid volume fractions. A new drag force formula was developed with extensive simulated results to be closely applicable to real processes over the entire range of packing fractions and both low and moderate particle Reynolds numbers. The simulation results indicate that the drag force is barely affected by rotational Reynolds numbers. Drag force is basically unchanged as the angle of the rotating axis varies.
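Random arrays of non-overlapping particles, such as those used above for the drag-force study, can be generated at low-to-moderate packing fractions by simple Monte Carlo rejection (random sequential addition). A sketch of that generic procedure (not Zinchenko's method, which is what the authors use to reach high solid fractions without crystallization):

```python
import math, random

def random_packing(n, diameter, box, max_tries=200000, rng=random):
    """Random sequential addition of n non-overlapping spheres in a periodic box.

    Candidate centers are drawn uniformly and rejected if they overlap an
    already-accepted sphere (minimum-image distance < diameter). Works only
    for low-to-moderate packing fractions; dense packings need smarter moves.
    """
    centers = []
    tries = 0
    while len(centers) < n:
        tries += 1
        if tries > max_tries:
            raise RuntimeError("packing fraction too high for rejection sampling")
        c = (rng.uniform(0, box), rng.uniform(0, box), rng.uniform(0, box))
        ok = True
        for p in centers:
            # Minimum-image convention for periodic boundaries.
            d2 = sum(min(abs(a - b), box - abs(a - b)) ** 2 for a, b in zip(c, p))
            if d2 < diameter ** 2:
                ok = False
                break
        if ok:
            centers.append(c)
    return centers

random.seed(3)
centers = random_packing(n=100, diameter=1.0, box=10.0)
phi = 100 * (math.pi / 6.0) / 10.0 ** 3  # packing fraction of this array
print(len(centers), round(phi, 3))  # → 100 0.052
```

Such arrays then serve as the fixed particle configurations over which flow solvers like the IB-LBM code above compute drag, lift, and torque.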
A GENERAL TRANSFORM FOR VARIANCE REDUCTION IN MONTE CARLO SIMULATIONS T.L. Becker Knolls Atomic Power Laboratory Schenectady, New York 12301 troy.becker@unnpp.gov E.W. Larsen Department of Nuclear Engineering University of Michigan Ann Arbor, Michigan 48109-2104 edlarsen@umich.edu ABSTRACT This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance
A Generalized Boltzmann Fokker-Planck Method for Coupled Charged...
Office of Scientific and Technical Information (OSTI)
Monte Carlo methods but also the established condensed history Monte Carlo technique. ... Resource Type: Technical Report Research Org: University of New Mexico Sponsoring Org: ...
Weaver, Brian Phillip; Williams, Brian J.
2015-10-06
The purpose of this manuscript is to illustrate how to use the simulator we have developed to generate counts from simulated spectra.
Reframing Accelerator Simulations
Simulations Mori-1.png Key Challenges: Use advanced simulation tools to study the feasibility of plasma-based linear colliders and to optimize conceptual designs. Much of the...
Carlos Saenz Makes the Ultimate Sacrifice
In this issue: Carlos Saenz Makes the Ultimate Sacrifice; Agencies Collaborate to Tackle Fire Season; NTS Groups Garner P2 Best-in-Class Awards; NTS Security Contract Awarded to WSI; Offsites.... "Go Long-Term!"; E-mentors Meet and Greet E-Mentees; Occupational Medicine Focuses on Heat Stroke; Milestones; Calendar. A publication for all members of the NNSA/NSO family, Issue 117, June 2006. Sadly, on May 5, 2006, Wackenhut Services, Inc. - Nevada (WSI-NV) was informed that Carlos
Carbon Capture Simulation Initiative
The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry, and academic institutions that is developing, demonstrating, and deploying state-of-the-art computational modeling and simulation tools to accelerate the development of carbon capture technologies from discovery to development, demonstration, and ultimately the
Preliminary Benchmarking Efforts and MCNP Simulation Results for Homeland Security
Robert Hayes
2008-04-18
It is shown in this work that basic measurements made from well-defined source-detector configurations can be readily converted into benchmark-quality results by which Monte Carlo N-Particle (MCNP) input stacks can be validated. Specifically, a recent measurement made in support of national security at the Nevada Test Site (NTS) is described with sufficient detail to be submitted to the American Nuclear Society's (ANS) Joint Benchmark Committee (JBC) for consideration as a radiation measurement benchmark. From this very basic measurement, MCNP input stacks are generated and validated in both predicted signal amplitude and spectral shape. Not modeled at this time are perturbations from the more recent pulse height light (PHL) tally feature, although the spectral deviations that are seen can be largely attributed to omitting this small correction. The value of this work is as a proof-of-concept demonstration that well-documented historical testing can be converted into formal radiation measurement benchmarks. This effort would support virtual testing of algorithms and new detector configurations.
Electrical Circuit Simulation Code
Energy Science and Technology Software Center (OSTI)
2001-08-09
Massively-Parallel Electrical Circuit Simulation Code. CHILESPICE is a massively-parallel, distributed-memory electrical circuit simulation tool that contains many enhanced radiation, time-based, and thermal features and models. It supports large-scale electronic circuit simulation, shared-memory parallel processing, enhanced convergence, and Sandia-specific device models.
Burr, Melvin J. (Westminster, CO)
1990-01-30
An arc voltage simulator for an arc welder permits the welder response to a variation in arc voltage to be standardized. The simulator uses a linear potentiometer connected to the electrode to provide a simulated arc voltage at the electrode that changes as a function of electrode position.
San Carlos Apache Tribe Energy Organization Analysis & Solar Feasibility Study
Energy Organization Analysis & Solar Feasibility Study, 2012, funded by grants from the US Department of Energy Tribal Energy Program. San Carlos Apache Mission Statement: The Apache People will live a balanced life in harmony with spirituality, culture, language, and family unity in an ever-changing world and shall create a strategic framework for our tribe to grow and prosper. The Tribe and Reservation: * 90 miles from Phoenix. * 2,400' to 8,300' elevation. * 1.83 million acres.
Geiger, K.; Longacre, R.; Srivastava, D.K.
1999-02-01
VNI is a general-purpose Monte-Carlo event-generator, which includes the simulation of lepton-lepton, lepton-hadron, lepton-nucleus, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. It uses the real-time evolution of parton cascades in conjunction with a self-consistent hadronization scheme, as well as the development of hadron cascades after hadronization. The causal evolution from a specific initial state (determined by the colliding beam particles) is followed by the time-development of the phase-space densities of partons, pre-hadronic parton clusters, and final-state hadrons, in position-space, momentum-space and color-space. The parton-evolution is described in terms of a space-time generalization of the familiar momentum-space description of multiple (semi)hard interactions in QCD, involving 2 → 2 parton collisions, 2 → 1 parton fusion processes, and 1 → 2 radiation processes. The formation of color-singlet pre-hadronic clusters and their decays into hadrons, on the other hand, is treated by using a spatial criterion motivated by confinement and a non-perturbative model for hadronization. Finally, the cascading of produced prehadronic clusters and of hadrons includes a multitude of 2 → n processes, and is modeled in parallel to the parton cascade description. This paper gives a brief review of the physics underlying VNI, as well as a detailed description of the program itself. The latter program description emphasizes easy-to-use pragmatism and explains how to use the program (including simple examples), annotates input and control parameters, and discusses output data provided by it.
Reactor refueling machine simulator
Rohosky, T.L.; Swidwa, K.J.
1987-10-13
This patent describes in combination: a nuclear reactor; a refueling machine having a bridge, trolley and hoist each driven by a separate motor having feedback means for generating a feedback signal indicative of movement thereof. The motors are operable to position the refueling machine over the nuclear reactor for refueling the same. The refueling machine also has a removable control console including means for selectively generating separate motor signals for operating the bridge, trolley and hoist motors and for processing the feedback signals to generate an indication of the positions thereof, separate output leads connecting each of the motor signals to the respective refueling machine motor, and separate input leads for connecting each of the feedback means to the console; and a portable simulator unit comprising: a single simulator motor; a single simulator feedback signal generator connected to the simulator motor for generating a simulator feedback signal in response to operation of the simulator motor; means for selectively connecting the output leads of the console to the simulator unit in place of the refueling machine motors, and for connecting the console input leads to the simulator unit in place of the refueling machine motor feedback means; and means for driving the single simulator motor in response to any of the bridge, trolley or hoist motor signals generated by the console and means for applying the simulator feedback signal to the console input lead associated with the motor signal being generated by the control console.
Energy Science and Technology Software Center (OSTI)
002763MLTPL00 Quantum Process Matrix Computation by Monte Carlo https://development.sandia.gov/-kyoung/
Fast Analysis and Simulation Team | NISAC
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Modeling & Simulation. Research into alternative forms of energy, especially energy security, is one of the major national security imperatives of this century. Get Expertise: David Harradine (Physical Chemistry and Applied Spectroscopy); Josh Smith (Chemistry Communications). The inherent knowledge of transformation has beguiled sorcerers and scientists alike. Data Analysis and Modeling & Simulation for the Chemical Sciences. Project Description: Almost every
Modeling & Simulation publications
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Modeling & Simulation » Modeling & Simulation Publications. Research into alternative forms of energy, especially energy security, is one of the major national security imperatives of this century. Get Expertise: David Harradine (Physical Chemistry and Applied Spectroscopy); Josh Smith (Chemistry). The inherent knowledge of transformation has beguiled sorcerers and scientists alike. D.A. Horner, F. Lambert, J.D. Kress, and L.A. Collins,
Advanced Simulation Capability for Environmental Management
Office of Environmental Management (EM)
Advanced Simulation & Computing programs, as well as collaborating with the Offices of Science, Fossil Energy, and Nuclear Energy. Challenge: Current groundwater and soil...
Whole Building Energy Simulation
Broader source: Energy.gov [DOE]
Whole building energy simulation, also referred to as energy modeling, can and should be incorporated early during project planning to provide energy impact feedback for which design considerations...
Wallin, Erik; Gonoskov, Arkady; Marklund, Mattias
2015-03-15
We model the emission of high energy photons due to relativistic charged particle motion in intense laser-plasma interactions. This is done within a particle-in-cell code, for which high frequency radiation normally cannot be resolved due to finite time steps and grid size. A simple expression for the synchrotron radiation spectra is used together with a Monte-Carlo method for the emittance. We extend previous work by allowing for arbitrary fields, considering the particles to be in instantaneous circular motion due to an effective magnetic field. Furthermore, we implement noise reduction techniques and present validity estimates of the method. Finally, we perform a rigorous comparison to the mechanism of radiation reaction, and find the emitted energy to be in excellent agreement with the losses calculated using radiation reaction.
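The abstract above couples a simple spectral expression with Monte-Carlo sampling of photon emission. A hedged sketch of that combination, using the common x^(1/3)·exp(−x) approximation to the synchrotron function rather than the exact Bessel-function form (the function names, cutoff, and normalization are my assumptions, not the paper's):

```python
import math
import random

def synch_approx(x):
    """Approximate synchrotron spectral shape S(x) ~ x^(1/3) * exp(-x),
    a standard simplification of the exact Bessel-function expression."""
    return x ** (1.0 / 3.0) * math.exp(-x)

def sample_photon_energies(n, x_max=10.0, rng=random.Random(1)):
    """Rejection-sample n photon energies (in units of the critical
    energy) from the approximate spectrum, truncated at x_max."""
    peak = synch_approx(1.0 / 3.0)  # the shape's maximum is at x = 1/3
    out = []
    while len(out) < n:
        x = rng.uniform(0.0, x_max)
        if rng.uniform(0.0, peak) < synch_approx(x):
            out.append(x)
    return out
```

In a particle-in-cell loop, each emitting macro-particle would draw photon energies this way using its instantaneous effective magnetic field to set the critical energy.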
Radiation detector spectrum simulator
Wolf, M.A.; Crowell, J.M.
1985-04-09
A small battery operated nuclear spectrum simulator having a noise source generates pulses with a Gaussian distribution of amplitudes. A switched dc bias circuit cooperating therewith generates several nominal amplitudes of such pulses and a spectral distribution of pulses that closely simulates the spectrum produced by a radiation source such as Americium 241.
Radio Channel Simulator (RCSM)
Energy Science and Technology Software Center (OSTI)
2007-01-31
This is a simulation package for making site specific predictions of radio signal strength. The software computes received power at discrete grid points as a function of the transmitter location and propagation environment. It is intended for use with wireless network simulation packages and to support wireless network deployments.
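Received power at grid points as a function of transmitter location, as the record describes, can be sketched with a textbook log-distance path-loss model. This is a generic illustration, not RCSM's site-specific propagation model; the 2.4 GHz default and the path-loss exponent n are assumptions:

```python
import math

def received_power_dbm(tx_dbm, tx_xy, rx_xy, f_mhz=2400.0, n=2.0):
    """Log-distance path loss: free-space loss at a 1 m reference
    distance plus 10*n*log10(d) beyond it (textbook model)."""
    d = max(math.dist(tx_xy, rx_xy), 1.0)  # metres, clamped below 1 m
    # free-space path loss at 1 m: 20*log10(f_MHz) - 27.55 dB
    fspl_1m = 20 * math.log10(f_mhz) - 27.55
    return tx_dbm - fspl_1m - 10 * n * math.log10(d)

def power_grid(tx_dbm, tx_xy, xs, ys):
    """Received power (dBm) at each (x, y) grid point."""
    return [[received_power_dbm(tx_dbm, tx_xy, (x, y)) for x in xs]
            for y in ys]
```

With n = 2 the model reduces to free-space propagation; a site-specific tool like the one described would instead account for terrain and obstructions between transmitter and grid point.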
Radiation detector spectrum simulator
Wolf, Michael A. (Los Alamos, NM); Crowell, John M. (Los Alamos, NM)
1987-01-01
A small battery operated nuclear spectrum simulator having a noise source generates pulses with a Gaussian distribution of amplitudes. A switched dc bias circuit cooperating therewith generates several nominal amplitudes of such pulses and a spectral distribution of pulses that closely simulates the spectrum produced by a radiation source such as Americium 241.
Energy Science and Technology Software Center (OSTI)
2014-04-01
Damselfly is a model-based parallel network simulator. It can simulate communication patterns of High Performance Computing applications on different network topologies. It outputs steady-state network traffic for a communication pattern, which can help in studying network congestion and its impact on performance.
Converting DYNAMO simulations to Powersim Studio simulations
Walker, La Tonya Nicole; Malczynski, Leonard A.
2014-02-01
DYNAMO is a computer program for building and running 'continuous' simulation models. It was developed by the Industrial Dynamics Group at the Massachusetts Institute of Technology for simulating dynamic feedback models of business, economic, and social systems. The history of the system dynamics method since 1957 includes many classic models built in DYNAMO. DYNAMO was not supplanted until the late 1980s, when software was built to take advantage of the rise of personal computers and graphical user interfaces. There is much learning and insight to be gained from examining the DYNAMO models and their accompanying research papers. We believe that it is a worthwhile exercise to convert DYNAMO models to more recent software packages, and we have made an attempt to make it easier to turn these models into a more current system dynamics software language, Powersim Studio, produced by Powersim AS of Bergen, Norway. This guide shows how to convert DYNAMO syntax into Studio syntax.
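The core of a DYNAMO model is a set of level (stock) equations advanced with fixed-step Euler integration, written in DYNAMO syntax as L.K = L.J + DT*(IN.JK - OUT.JK); recognizing that pattern is most of the conversion work. A minimal sketch of one such level equation in modern code (the constant-inflow, first-order-drain example is mine, not from the guide):

```python
def simulate_level(level0, inflow, outflow_frac, dt, steps):
    """DYNAMO-style Euler update L.K = L.J + DT*(IN.JK - OUT.JK),
    with outflow proportional to the current level (first-order drain)."""
    level = level0
    history = [level]
    for _ in range(steps):
        inflow_rate = inflow                  # IN.JK (constant here)
        outflow_rate = outflow_frac * level   # OUT.JK
        level = level + dt * (inflow_rate - outflow_rate)
        history.append(level)
    return history
```

A tool like Powersim Studio expresses the same structure graphically as a stock with flow arrows; translating a DYNAMO listing means mapping each L/R equation pair onto such a stock-flow element.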
2008 - 2011 Energy Program Review & 2011 - 2012 ENERGY ORGANIZATION ANALYSIS Burden Basket San Carlos Apache Mission Statement The Apache People will live a balanced life in harmony with spirituality, culture, language, and family unity in an ever-changing world. The Apache People shall create a strategic framework for our tribe to grow and prosper. The Tribe and Reservation * 90 miles east of Phoenix. * 2,400' to 8,300'+. * 1.83 million acres. * 800,000+ acres wooded/forested. * 1M ac.
Energy Science and Technology Software Center (OSTI)
2015-10-29
GFS is a simulation engine that is used for the characterization of accelerator performance parameters based on the machine layout, configuration, and noise sources. It combines extensively tested feedback models with a longitudinal phase space tracking simulator, along with the interaction between the two via beam-based feedback, using a computationally efficient simulation engine. The models include beam instrumentation, considerations on loop delays in both the R and beam-based feedback loops, as well as the ability to inject noise (both correlated and uncorrelated) at different points of the machine, including a full characterization of the electron gun performance parameters.
Energy Science and Technology Software Center (OSTI)
2015-09-14
GridDyn is part of a power grid simulation toolkit. The code is designed using modern object-oriented C++ methods, utilizing C++11 and recent Boost libraries to ensure compatibility with multiple operating systems and environments.
Compressible Astrophysics Simulation Code
Energy Science and Technology Software Center (OSTI)
2007-07-18
This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.
Fundamentals of plasma simulation
Forslund, D.W.
1985-01-01
With the increasing size and speed of modern computers, the incredibly complex nonlinear properties of plasmas in the laboratory and in space are being successfully explored in increasing depth. Of particular importance have been numerical simulation techniques involving finite size particles on a discrete mesh. After discussing the importance of this means of understanding a variety of nonlinear plasma phenomena, we describe the basic elements of particle-in-cell simulation and their limitations and advantages. The differencing techniques, stability and accuracy issues, data management and optimization issues are discussed by means of a simple example of a particle-in-cell code. Recent advances in simulation methods allowing large space and time scales to be treated with minimal sacrifice in physics are reviewed. Various examples of nonlinear processes successfully studied by plasma simulation will be given.
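The basic particle-in-cell cycle this abstract describes — deposit particle charge on a discrete mesh, solve for the field, gather forces back to the particles, and push them — can be written compactly in 1D. A minimal periodic electrostatic sketch with nearest-grid-point weighting (the normalized units and FFT Poisson solve are my choices for brevity, not from the paper):

```python
import numpy as np

def pic_step(x, v, q, m, ng, L, dt):
    """One step of a 1D periodic electrostatic PIC cycle: NGP charge
    deposition, FFT Poisson solve, gather, and particle push."""
    dx = L / ng
    # 1) deposit charge with nearest-grid-point weighting
    idx = np.floor(x / dx + 0.5).astype(int) % ng
    rho = np.zeros(ng)
    np.add.at(rho, idx, q / dx)
    rho -= rho.mean()  # neutralizing background charge
    # 2) solve Poisson's equation -phi'' = rho (normalized units) via FFT
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E = np.real(np.fft.ifft(-1j * k * phi_k))  # E = -dphi/dx
    # 3) gather the field at particle positions and push
    v = v + (q / m) * E[idx] * dt
    x = (x + v * dt) % L
    return x, v, E
```

The finite-size-particle and stability issues the paper discusses show up directly here: the choice of weighting scheme, the grid spacing relative to the Debye length, and the explicit time step all constrain what this loop can resolve.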
Theory Modeling and Simulation
Shlachter, Jack
2012-08-23
Los Alamos has a long history in theory, modeling and simulation. We focus on multidisciplinary teams that tackle complex problems. Theory, modeling and simulation are tools to solve problems just like an NMR spectrometer, a gas chromatograph or an electron microscope. Problems should be used to define the theoretical tools needed and not the other way around. Best results occur when theory and experiments are working together in a team.
Xyce parallel electronic simulator.
Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.
2010-05-01
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.
Energy Simulation Games Lesson
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Ken Walz Unit Title: Energy Efficiency and Renewable Energy (EERE) Subject: Physical, Env, and Social Sciences Lesson Title: Energy Simulation Games Grade Level(s): 6-12 Lesson Length: 1 hour (+ optional time outside class) Date(s): 7/14/2014 * Learning Goal(s) By the end of this lesson, students will have a deeper understanding of Energy Management, Policy, and Decision Making. * Connection to Energy/ Renewable Energy In this assignment you will be using two different energy simulation tools
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
NISAC Modeling & Simulation. Overview: NISAC experts use modeling and simulation capabilities to analyze critical infrastructure, along with their interdependencies, vulnerabilities, and complexities. Their analyses are used to aid decision makers with policy assessment, mitigation planning, education, and training, and provide near-real-time assistance to crisis-response organizations. Infrastructure systems are large, complex,
Advanced Simulation and Computing
National Nuclear Security Administration (NNSA)
NA-ASC-117R-09-Vol.1-Rev.0 Advanced Simulation and Computing Program Plan FY09, October 2008. ASC Focal Point: Robert Meisner, Director, DOE/NNSA NA-121.2, 202-586-0908. Program Plan Focal Point for NA-121.2: Njema Frazier, DOE/NNSA NA-121.2, 202-586-5789. A Publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs.
Estimation of the Dynamic States of Synchronous Machines Using an Extended Particle Filter
Zhou, Ning; Meng, Da; Lu, Shuai
2013-11-11
In this paper, an extended particle filter (PF) is proposed to estimate the dynamic states of a synchronous machine using phasor measurement unit (PMU) data. A PF propagates the mean and covariance of states via Monte Carlo simulation, is easy to implement, and can be directly applied to a non-linear system with non-Gaussian noise. The extended PF modifies a basic PF to improve robustness. Using Monte Carlo simulations with practical noise and model uncertainty considerations, the extended PF’s performance is evaluated and compared with the basic PF and an extended Kalman filter (EKF). The extended PF results showed high accuracy and robustness against measurement and model noise.
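The propagate-weight-resample cycle of a bootstrap PF, the basic method the extended PF above builds on, can be sketched for a scalar toy model. The linear process model, Gaussian noise levels, and function names below are illustrative assumptions, not the paper's synchronous machine model:

```python
import math
import random

def particle_filter_step(particles, z, a=0.9, q_std=0.1, r_std=0.2,
                         rng=random.Random(0)):
    """One bootstrap-PF cycle for x_k = a*x_{k-1} + w, z_k = x_k + v:
    propagate, weight by the measurement likelihood, resample."""
    # 1) propagate each particle through the process model
    particles = [a * p + rng.gauss(0.0, q_std) for p in particles]
    # 2) importance weights from the Gaussian measurement likelihood
    w = [math.exp(-0.5 * ((z - p) / r_std) ** 2) for p in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    # 3) the state estimate is the weighted mean
    estimate = sum(wi * p for wi, p in zip(w, particles))
    # 4) multinomial resampling to combat weight degeneracy
    particles = rng.choices(particles, weights=w, k=len(particles))
    return particles, estimate
```

Because the particles are simply pushed through the nonlinear model, no linearization is needed, which is the advantage over an EKF that the abstract highlights; the "extended" PF adds robustness on top of this basic cycle.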
CASL-8-2015-0160-000 Ben Forget, C.Josey, P.Ducru, J.Walsh Massachusetts Institute of Technology
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
March 31, 2015. Progress on the Implementation and Testing of On-The-Fly Doppler Broadening for Monte Carlo Simulations (CASL-U-2015-0160-000; CASL milestone report L3:RTM.MCH.P10.02). B. Forget, C. Josey, P. Ducru, J. Walsh, MIT. Abstract: This report documents the progress made in the implementation and testing of on-the-fly Doppler broadening for Monte Carlo simulations. The content is a summary of work reported in three
Liu, Dajiang [Ames Laboratory; Evans, James W. [Ames Laboratory
2013-12-01
A realistic molecular-level description of catalytic reactions on single-crystal metal surfaces can be provided by stochastic multisite lattice-gas (msLG) models. This approach has general applicability, although in this report, we will focus on the example of CO-oxidation on the unreconstructed fcc metal (100) or M(100) surfaces of common catalyst metals M = Pd, Rh, Pt and Ir (i.e., avoiding regimes where Pt and Ir reconstruct). These models can capture the thermodynamics and kinetics of adsorbed layers for the individual reactants species, such as CO/M(100) and O/M(100), as well as the interaction and reaction between different reactant species in mixed adlayers, such as (CO + O)/M(100). The msLG models allow population of any of hollow, bridge, and top sites. This enables a more flexible and realistic description of adsorption and adlayer ordering, as well as of reaction configurations and configuration-dependent barriers. Adspecies adsorption and interaction energies, as well as barriers for various processes, constitute key model input. The choice of these energies is guided by experimental observations, as well as by extensive Density Functional Theory analysis. Model behavior is assessed via Kinetic Monte Carlo (KMC) simulation. We also address the simulation challenges and theoretical ramifications associated with very rapid diffusion and local equilibration of reactant adspecies such as CO. These msLG models are applied to describe adsorption, ordering, and temperature programmed desorption (TPD) for individual CO/M(100) and O/M(100) reactant adlayers. In addition, they are also applied to predict mixed (CO + O)/M(100) adlayer structure on the nanoscale, the complete bifurcation diagram for reactive steady-states under continuous flow conditions, temperature programmed reaction (TPR) spectra, and titration reactions for the CO-oxidation reaction. Extensive and reasonably successful comparison of model predictions is made with experimental data. 
Furthermore, we discuss the possible transition from traditional mean-field-type bistability and reaction kinetics for lower-pressure to multistability and enhanced fluctuation effects for moderate- or higher-pressure. Behavior in the latter regime reflects a stronger influence of adspecies interactions and also lower diffusivity in the higher-coverage mixed adlayer. We also analyze mesoscale spatiotemporal behavior including the propagation of reaction diffusion fronts between bistable reactive and inactive states, and associated nucleation-mediated transitions between these states. This behavior is controlled by complex surface mass transport processes, specifically chemical diffusion in mixed reactant adlayers for which we provide a precise theoretical formulation. The msLG models together with an appropriate treatment of chemical diffusivity enable equation-free heterogeneous coupled lattice-gas (HCLG) simulations of spatiotemporal behavior. In addition, msLG + HCLG modeling can describe coverage variations across polycrystalline catalysts surfaces, pressure variations across catalyst surfaces in microreactors, and could be incorporated into a multiphysics framework to describe mass and heat transfer limitations for high-pressure catalysis. (C) 2013 Elsevier Ltd. All rights reserved.
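Kinetic Monte Carlo, the workhorse used above to assess msLG model behavior, advances the system one event at a time with exponentially distributed waiting times. A minimal rejection-free sketch for a single-species adsorption/desorption lattice — a deliberately stripped-down illustration of the method, not the multisite CO-oxidation model itself:

```python
import math
import random

def kmc_run(n_sites, k_ads, k_des, t_end, rng=random.Random(42)):
    """Rejection-free KMC: pick an event class with probability
    proportional to its total rate, execute one instance of it, and
    advance time by an exponential waiting time."""
    occupied = [False] * n_sites
    t = 0.0
    while True:
        empty = [i for i, o in enumerate(occupied) if not o]
        full = [i for i, o in enumerate(occupied) if o]
        r_ads = k_ads * len(empty)   # total adsorption rate
        r_des = k_des * len(full)    # total desorption rate
        r_tot = r_ads + r_des
        if r_tot == 0:
            break
        t += -math.log(rng.random()) / r_tot  # exponential waiting time
        if t > t_end:
            break
        if rng.random() * r_tot < r_ads:
            occupied[rng.choice(empty)] = True
        else:
            occupied[rng.choice(full)] = False
    return sum(occupied) / n_sites  # final coverage
```

The simulation challenge the abstract raises — very rapid CO diffusion — appears here as a separation of rate scales: when one event class dominates r_tot, nearly all KMC steps are spent on it, which motivates the local-equilibration treatments the authors discuss.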
Simple Electric Vehicle Simulation
Energy Science and Technology Software Center (OSTI)
1993-07-29
SIMPLEV2.0 is an electric vehicle simulation code which can be used with any IBM-compatible personal computer. This general-purpose simulation program is useful for performing parametric studies of electric and series hybrid electric vehicle performance on user-input driving cycles. The program is run interactively and guides the user through all of the necessary inputs. Driveline components and the traction battery are described and defined by ASCII files which may be customized by the user. Scaling of these components is also possible. Detailed simulation results are plotted on the PC monitor and may also be printed on a printer attached to the PC.
Experiments + Simulations = Better Nuclear Power Research
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Experiments + Simulations = Better Nuclear Power Research. Atomic-level simulations enhance characterization of radiation...
Simulating neural systems with Xyce.
Schiek, Richard Louis; Thornquist, Heidi K.; Mei, Ting; Warrender, Christina E.; Aimone, James Bradley; Teeter, Corinne; Duda, Alex M.
2012-12-01
Sandia's parallel circuit simulator, Xyce, can address large-scale neuron simulations in a new way, extending the range within which one can perform high-fidelity, multi-compartment neuron simulations. This report documents the implementation of neuron devices in Xyce, and their use in the simulation and analysis of neuron systems.
2014-09-15
Two simulations show the differences between a battery being drained at a slower rate, over a full hour, versus a faster rate, only six minutes (a tenth of an hour). In both cases battery particles go from being fully charged (green) to fully drained (red), but there are significant differences in the patterns of discharge based on the rate.
Parallel Dislocation Simulator
Energy Science and Technology Software Center (OSTI)
2006-10-30
ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.
Silica separation from reinjection brines at Monte Amiata geothermal plants, Italy
Vitolo, S.; Cialdella, M.L. (Dipartimento di Ingegneria Chimica)
1994-06-01
A process for the separation of silica from geothermal reinjection brines is reported, in which the phases of coagulation, sedimentation, and filtration of silica are involved. The effectiveness of lime and calcium chloride as coagulating agents has been investigated and the separating operations have been set out. Attention has been focused on Monte Amiata reinjection geothermal brines, whose scaling effect causes serious problems in the operation and maintenance of reinjection facilities. The study has been conducted using different amounts of added coagulants and at different temperatures, to determine optimal operating conditions. Although calcium chloride proved effective as a coagulant of the polymeric silica fraction, lime has also proved capable of removing monomeric dissolved silica at high dosages. Investigation of the behavior of coagulated brine has revealed the feasibility of separating the coagulated silica by sedimentation and filtration.
Loop-cluster simulation of the zero- and one-hole sectors of the t-J model on the honeycomb lattice
Jiang, F.-J.; Kaempfer, F.; Nyfeler, M.; Wiese, U.-J.
2008-12-01
Inspired by the unhydrated variant of the superconducting material Na_xCoO_2·yH_2O at x = 1/3, we study the t-J model on a honeycomb lattice by using an efficient loop-cluster algorithm. The low-energy physics of the undoped system and of the single-hole sector is described by a systematic low-energy effective field theory. The staggered magnetization per spin M̃_s = 0.2688(3), the spin stiffness ρ_s = 0.102(2)J, the spin-wave velocity c = 1.297(16)Ja, and the kinetic mass M′ of a hole are obtained by fitting the numerical Monte Carlo data to the effective field theory predictions.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
COER HYDRODYNAMIC MODELING COMPETITION: MODELING THE DYNAMIC RESPONSE OF A FLOATING BODY USING THE WEC-SIM AND FAST SIMULATION TOOLS Michael Lawson Braulio Barahona Garzon Fabian Wendt Yi-Hsiang Yu National Renewable Energy Laboratory Golden, Colorado, USA Carlos Michelen Sandia National Laboratories Albuquerque, New Mexico, USA ABSTRACT The Center for Ocean Energy Research (COER) at the University of Maynooth in Ireland organized a hydrodynamic modeling competition in conjunction with OMAE2015.
PEBBLES Mechanics Simulation Speedup
Joshua J. Cogliati; Abderrafi M. Ougouag
2010-05-01
Pebble bed reactors contain large numbers of spherical fuel elements arranged randomly. Determining the motion and location of these fuel elements is required for calculating certain parameters of pebble bed reactor operation. These simulations involve hundreds of thousands of pebbles and require determining the motion of the entire core as pebbles are recirculated. Single-processor algorithms are insufficient for this, since they would take decades to centuries of wall-clock time. This paper describes the process of parallelizing and speeding up the PEBBLES pebble mechanics simulation code. Shared-memory programming with the Open Multi-Processing API and distributed-memory programming with the Message Passing Interface API are used simultaneously in this process. A new shared-memory, lock-less, linear-time collision detection algorithm is described. This method allows faster detection of pebbles in contact than generic methods. Together, these improvements make full recirculations on AVR-sized reactors possible in months of wall-clock time.
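The linear-time collision detection mentioned above rests on spatial binning: with cells at least one sphere diameter wide, contacts can only occur within a cell or its immediate neighbors, so each pebble is compared against a bounded number of candidates. A generic serial cell-list sketch of that principle (an illustration only, not the PEBBLES lock-less algorithm itself):

```python
import math
from collections import defaultdict
from itertools import product

def contacts(centers, radius):
    """Find all pairs of equal-radius spheres in contact using a cell
    list: O(n) binning, then only neighboring cells are compared."""
    cell = 2.0 * radius  # cell edge = one sphere diameter
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(centers):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    pairs = set()
    for (cx, cy, cz), members in grid.items():
        # check this cell and its 26 neighbors
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                for i in members:
                    if i < j and math.dist(centers[i], centers[j]) <= cell:
                        pairs.add((i, j))
    return pairs
```

For a roughly uniform packing the candidate count per pebble is constant, giving the linear scaling in the number of pebbles; the paper's contribution is making this style of search safe for concurrent shared-memory updates without locks.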
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
scramjet engine simulations - Sandia Energy
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Numerical Simulation - Sandia Energy