Quantum Monte Carlo methods for nuclear physics
Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.
2015-09-09
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E.; Wiringa, Robert B.
2014-10-19
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
DOE Science Showcase - Monte Carlo Methods | OSTI, US Dept of...
Office of Scientific and Technical Information (OSTI)
Learn about the ways these methods are used in DOE's research endeavors today in "Monte Carlo Methods" by Dr. William Watson, Physicist, OSTI staff.
Calculations of pair production by Monte Carlo methods
Bottcher, C.; Strayer, M.R.
1991-01-01
We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.
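The core numerical task this record describes, evaluating multi-dimensional integrals by random sampling, can be sketched in a few lines. This is an independent minimal illustration with a made-up integrand, not the authors' Feynman-diagram code:

```python
import random

def mc_integrate(f, dim, n_samples=100_000, seed=0):
    """Estimate the integral of f over the unit hypercube [0,1]^dim
    by averaging f at uniformly drawn points; the statistical error
    falls as 1/sqrt(n_samples) regardless of dim."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += f([rng.random() for _ in range(dim)])
    return total / n_samples

# Example: the integral of x*y*z over [0,1]^3 is (1/2)^3 = 0.125.
estimate = mc_integrate(lambda v: v[0] * v[1] * v[2], dim=3)
```

The dimension-independent error scaling is what makes this approach preferable to quadrature for the high-dimensional integrals arising from Feynman diagrams.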
Energy Science and Technology Software Center (OSTI)
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
Modeling granular phosphor screens by Monte Carlo methods
Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.
2006-12-15
The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties, and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced, using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previous published experimental data on Gd₂O₂S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd₂O₂S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than the conventional Gd₂O₂S:Tb screens, under similar conditions (x-ray incident energy, screen thickness).
Energy Science and Technology Software Center (OSTI)
2006-05-09
The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo Mathematical technique for calculating the ground state energy of the hydrogen atom.
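The variational Monte Carlo idea behind a program like VARHATOM can be illustrated compactly: sample electron positions from |ψ|² with the Metropolis algorithm and average the local energy. This is an independent sketch, not the VARHATOM source; for the trial wavefunction ψ = exp(-αr), the local energy in atomic units is E_L = -α²/2 + (α - 1)/r, which is exactly -0.5 Hartree at α = 1.

```python
import math, random

def vmc_hydrogen(alpha, n_steps=20000, step=0.5, seed=1):
    """Variational Monte Carlo energy of hydrogen (atomic units) for the
    trial wavefunction psi = exp(-alpha*r).  Positions are sampled from
    |psi|^2 by Metropolis moves; the local energy is
    E_L = -alpha**2/2 + (alpha - 1)/r, exactly -0.5 at alpha = 1."""
    rng = random.Random(seed)
    pos, r = [1.0, 0.0, 0.0], 1.0
    energies = []
    for i in range(n_steps):
        trial = [c + rng.uniform(-step, step) for c in pos]
        r_trial = math.sqrt(sum(c * c for c in trial))
        # accept with probability |psi(trial)|^2 / |psi(pos)|^2
        if rng.random() < math.exp(-2.0 * alpha * (r_trial - r)):
            pos, r = trial, r_trial
        if i > n_steps // 10:  # discard equilibration steps
            energies.append(-0.5 * alpha * alpha + (alpha - 1.0) / r)
    return sum(energies) / len(energies)
```

At α = 1 the local energy is constant, so the statistical variance vanishes; for any other α the variational estimate lies above the exact ground-state energy.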
Markov Chain Monte Carlo Posterior Sampling with the Hamiltonian Method
Hanson, K.
2001-02-01
The Markov Chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy φ, where φ is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of φ and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of φ, is proposed to measure the convergence of the MCMC sequence.
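The alternation described above, drawing a fresh momentum and then following an approximately constant-H trajectory, is usually implemented with the leapfrog integrator. A minimal one-dimensional sketch with illustrative parameter choices, not Hanson's code:

```python
import math, random

def hmc_sample(grad_phi, phi, x0, n_samples=2000, eps=0.2, n_leap=10, seed=0):
    """Hamiltonian Monte Carlo in one dimension for a target pdf
    proportional to exp(-phi(x)).  Each iteration draws a fresh
    momentum, integrates Hamilton's equations with the leapfrog
    scheme (approximately conserving H = p^2/2 + phi), and accepts
    or rejects on the residual change in H."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)               # fresh momentum draw
        x_new, p_new = x, p
        p_new -= 0.5 * eps * grad_phi(x_new)  # leapfrog: half kick
        for step in range(n_leap):
            x_new += eps * p_new              # drift
            if step < n_leap - 1:
                p_new -= eps * grad_phi(x_new)
        p_new -= 0.5 * eps * grad_phi(x_new)  # final half kick
        dH = 0.5 * (p_new * p_new - p * p) + phi(x_new) - phi(x)
        if dH < 0 or rng.random() < math.exp(-dH):
            x = x_new                         # accept trajectory endpoint
        samples.append(x)
    return samples

# Standard normal target: phi(x) = x^2/2, so grad_phi(x) = x.
draws = hmc_sample(lambda x: x, lambda x: 0.5 * x * x, x0=0.0)
```

Because the leapfrog error in H is small, nearly every long trajectory is accepted, which is what gives the method its efficiency relative to small-step Metropolis moves.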
Quantum Monte Carlo methods and lithium cluster properties
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, and is shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.
A Geant4 Implementation of a Novel Single-Event Monte Carlo Method for Electron Dose Calculations
Office of Scientific and Technical Information (OSTI)
Franke, Brian Claude; Dixon, David A.; Prinja, Anil K.
2013-11-01
Abstract not provided.
Alcouffe, R.E.
1985-01-01
A difficult class of problems for the discrete-ordinates neutral particle transport method is to accurately compute the flux due to a spatially localized source. Because the transport equation is solved for discrete directions, the so-called ray effect causes the flux at space points far from the source to be inaccurate. Thus, in general, discrete ordinates would not be the method of choice to solve such problems. It is better suited for calculating problems with significant scattering. The Monte Carlo method is suited to localized source problems, particularly if the amount of collisional interaction is minimal. However, if there are many scattering collisions and the flux at all space points is desired, then the Monte Carlo method becomes expensive. To take advantage of the attributes of both approaches, we have devised a first collision source method to combine the Monte Carlo and discrete-ordinates solutions. That is, particles are tracked from the source to their first scattering collision and tallied to produce a source for the discrete-ordinates calculation. A scattered flux is then computed by discrete ordinates, and the total flux is the sum of the Monte Carlo and discrete ordinates calculated fluxes. In this paper, we present calculational results using the MCNP and TWODANT codes for selected two-dimensional problems that show the effectiveness of this method.
On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems
Walsh, Jon
2015-08-31
The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
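As a rough illustration of table-based cross-section lookup during a Monte Carlo history (a plain linear-interpolation sketch with invented numbers, not the probability-table scheme of the summary or any evaluated nuclear data):

```python
import bisect

def interp_xs(energy_grid, xs_values, e):
    """Linearly interpolate a pointwise cross-section table at energy e.
    The table used below is invented for illustration; real transport
    codes use evaluated nuclear data and, in the unresolved resonance
    range, probability tables rather than smooth pointwise values."""
    i = bisect.bisect_right(energy_grid, e) - 1
    i = max(0, min(i, len(energy_grid) - 2))  # clamp to the table range
    e0, e1 = energy_grid[i], energy_grid[i + 1]
    f = (e - e0) / (e1 - e0)
    return (1.0 - f) * xs_values[i] + f * xs_values[i + 1]

# Hypothetical 3-point table (energies in MeV, cross sections in barns).
grid, xs = [1.0, 2.0, 4.0], [10.0, 20.0, 40.0]
```

Performing such lookups on the fly trades a small amount of computation per collision for a large reduction in precomputed, temperature-dependent data storage.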
Markov Chain Monte Carlo Sampling Methods for 1D Seismic and EM Data Inversion
Energy Science and Technology Software Center (OSTI)
2008-09-22
This software provides several Markov chain Monte Carlo sampling methods for the Bayesian model developed for inverting 1D marine seismic and controlled source electromagnetic (CSEM) data. The current software can be used for individual inversion of seismic AVO and CSEM data and for joint inversion of both seismic and EM data sets. The structure of the software is very general and flexible, and it allows users to incorporate their own forward simulation codes and rock physics model codes easily into this software. Although the software was developed using C and C++ computer languages, the user-supplied codes can be written in C, C++, or various versions of Fortran languages. The software provides clear interfaces for users to plug in their own codes. The output of this software is in the format that the R free software CODA can directly read to build MCMC objects.
Isotropic Monte Carlo Grain Growth
Energy Science and Technology Software Center (OSTI)
2013-04-25
IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
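The isotropic variant of such a simulation is essentially a zero-temperature Potts model: each site carries a grain label, and a site may adopt a neighbor's label when that does not increase the boundary energy. A small square-grid sketch (IMCGG itself uses a hexagonal grid; this is an independent illustration):

```python
import random

def mc_grain_growth(size=32, n_grains=10, n_sweeps=20, seed=0):
    """Zero-temperature Potts-model sketch of isotropic grain growth on
    a 2-D square grid with periodic boundaries.  A site may copy a
    neighbor's grain label when that does not increase the count of
    unlike-neighbor bonds, so boundary curvature drives coarsening."""
    rng = random.Random(seed)
    grid = [[rng.randrange(n_grains) for _ in range(size)] for _ in range(size)]

    def unlike(i, j, spin):
        # number of the four neighbors whose label differs from `spin`
        nbrs = (grid[(i - 1) % size][j], grid[(i + 1) % size][j],
                grid[i][(j - 1) % size], grid[i][(j + 1) % size])
        return sum(1 for s in nbrs if s != spin)

    for _ in range(n_sweeps * size * size):
        i, j = rng.randrange(size), rng.randrange(size)
        ni, nj = rng.choice([((i - 1) % size, j), ((i + 1) % size, j),
                             (i, (j - 1) % size), (i, (j + 1) % size)])
        new_spin = grid[ni][nj]
        if unlike(i, j, new_spin) <= unlike(i, j, grid[i][j]):
            grid[i][j] = new_spin
    return grid
```

A misorientation-dependent energy would replace the unlike-neighbor count with a bond energy that depends on the pair of grain labels.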
Wagner, John C; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Turner, John A
2011-01-01
This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. 
The hybrid method development is based on an extension of the FW-CADIS method, which attempts to achieve uniform statistical uncertainty throughout a designated problem space. The MC DD development is being implemented in conjunction with the Denovo deterministic radiation transport package to have direct access to the 3-D, massively parallel discrete-ordinates solver (to support the hybrid method) and the associated parallel routines and structure. This paper describes the hybrid method, its implementation, and initial testing results for a realistic 2-D quarter core pressurized-water reactor model and also describes the MC DD algorithm and its implementation.
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y.; Yun, S.
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross section data generation module was added to the MCNP5 code. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and both were applied to the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross section data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model. The cross section data were generated using MCNP5. The k-eff and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as a reference. In terms of k-eff, the 9-group MCNP5/DIF3D calculation has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-03-15
Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multi leaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and treatment with a bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single segment electron treatment plan in terms of organs at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V95% increased from 90% to 96% and V107% decreased from 8% to nearly 0%.
For a clinical breast boost irradiation, the MERT plan led to a similar homogeneity in the PTV compared to the standard treatment plan while the mean body dose was lower for the MERT plan. Regarding the second clinical case, a whole breast treatment, MERT resulted in a reduction of the lung volume receiving more than 45% of the prescribed dose when compared to the standard plan. On the other hand, the MERT plan leads to a larger low-dose lung volume and a degraded dose homogeneity in the PTV. For the clinical cases evaluated in this work, treatment plans using the BolusECT technique resulted in a more homogenous PTV and CTV coverage but higher doses to the OARs than the MERT plans. Conclusions: MERT treatments were successfully planned for phantom and clinical cases, applying a newly developed intuitive and efficient forward planning strategy that employs a MC based electron beam model for pMLC shaped electron beams. It is shown that MERT can lead to a dose reduction in OARs compared to other methods. The process of feathering MERT segments results in an improvement of the dose homogeneity in the PTV.
Correlated electron dynamics with time-dependent quantum Monte Carlo: Three-dimensional helium
Office of Scientific and Technical Information (OSTI)
Here the recently proposed time-dependent quantum Monte Carlo method is applied to three-dimensional para- and ortho-helium atoms subjected to an external electromagnetic field with amplitude sufficient ...
Chorin, Alexandre J.
2007-12-12
A sampling method for spin systems is presented. The spin lattice is written as the union of a nested sequence of sublattices, all but the last with conditionally independent spins, which are sampled in succession using their marginals. The marginals are computed concurrently by a fast algorithm; errors in the evaluation of the marginals are offset by weights. There are no Markov chains and each sample is independent of the previous ones; the cost of a sample is proportional to the number of spins (but the number of samples needed for good statistics may grow with array size). The examples include the Edwards-Anderson spin glass in three dimensions.
Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Hehr, Brian Douglas
2014-11-25
The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.
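The kinetic Monte Carlo machinery underlying such a code reduces to the residence-time (Gillespie-type) algorithm: sum the rates of all possible events, draw an exponential waiting time from the total rate, and execute one event. A toy sketch with a single first-order annealing channel (illustrative only, not the defect physics of the paper):

```python
import math, random

def kmc_decay(n0=2000, rate=1.0, seed=0):
    """Residence-time (Gillespie-type) kinetic Monte Carlo sketch:
    n0 identical defects anneal independently with first-order rate
    `rate`.  Each step draws an exponential waiting time from the
    total rate n*rate and removes one defect; the function returns
    the simulated time at which half the defects have annealed
    (analytically ln(2)/rate ~ 0.693 for rate = 1 and large n0)."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > n0 // 2:
        total_rate = n * rate
        # exponential waiting time; 1 - random() lies in (0, 1]
        t += -math.log(1.0 - rng.random()) / total_rate
        n -= 1
    return t
```

A production KMC code generalizes this loop to many event types (defect migration, recombination, dopant capture), each with its own rate, selecting which event fires in proportion to its share of the total rate.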
Energy Monte Carlo (EMCEE) | Open Energy Information
with a specific set of distributions. Both programs run as spreadsheet workbooks in Microsoft Excel. EMCEE and Emc2 require Crystal Ball, a commercially available Monte Carlo...
Quantum Monte Carlo by message passing
Bonca, J.; Gubernatis, J.E.
1993-01-01
We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.
Wirawan, Rahadi; Waris, Abdul; Djamal, Mitra; Handayani, Gunawan
2015-04-16
The spectrum of gamma energy absorption in a NaI crystal (scintillation detector) is the result of gamma-photon interactions with the crystal, and it is associated with the energy of the gamma photons incident on the detector. Through a simulation approach, we can obtain an early estimate of the gamma energy absorption spectrum in a scintillator crystal detector (NaI) before the experiment is conducted. In this paper, we present simulated gamma energy absorption spectra for energies of 100-700 keV (specifically 297 keV, 400 keV, and 662 keV). The simulation was developed based on the concept of a point-source photon beam distribution and photon interaction cross sections, using the Monte Carlo method. Our computational code successfully predicts the multiple absorption peaks in the energy spectrum that derive from multiple photon energy sources.
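The spectrum-building idea can be illustrated with a toy sampler that splits interactions into photoelectric (full-energy) and Compton (continuum) events. The uniform continuum below is a crude stand-in for the Klein-Nishina distribution that such codes actually sample, and the photoelectric probability is a free illustrative parameter.

```python
import random

MEC2 = 511.0  # electron rest energy in keV

def compton_edge(E):
    """Maximum energy transferred to an electron in a single Compton scatter."""
    return E * (2.0 * E / MEC2) / (1.0 + 2.0 * E / MEC2)

def simulate_spectrum(E, p_photo, n, rng=random.random):
    """Toy absorbed-energy spectrum for n photons of energy E (keV).

    p_photo: probability that an interaction is photoelectric (full
    absorption); otherwise a single Compton scatter deposits a uniform
    fraction of the continuum up to the Compton edge (a crude stand-in
    for the Klein-Nishina differential cross section).
    """
    deposits = []
    for _ in range(n):
        if rng() < p_photo:
            deposits.append(E)                         # photopeak event
        else:
            deposits.append(rng() * compton_edge(E))   # continuum event
    return deposits
```

Histogramming the returned deposits reproduces the qualitative photopeak-plus-continuum shape seen in measured NaI spectra.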
Generalizing the self-healing diffusion Monte Carlo approach...
Office of Scientific and Technical Information (OSTI)
Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: A path ... Title: Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: ...
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.
2014-10-01
We present a new (for plasma physics) highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau-Fokker-Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^-2) or O(ε^-2 (ln ε)^2), depending on whether the underlying discretization is Milstein or Euler-Maruyama, respectively. This is to be contrasted with a cost of O(ε^-3) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^-5. We discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
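The multilevel idea can be sketched on a toy SDE: couple fine and coarse Euler-Maruyama paths through the same Brownian increments so the level corrections have small variance. Geometric Brownian motion here is purely illustrative; the paper's application is the Langevin form of Coulomb collisions, and the sample counts are not optimized per level as a production MLMC code would do.

```python
import math
import random

def euler_path(T, dW, x0=1.0, mu=0.05, sigma=0.2):
    """Euler-Maruyama path of dX = mu*X dt + sigma*X dW, one step per increment."""
    dt = T / len(dW)
    x = x0
    for w in dW:
        x += mu * x * dt + sigma * x * w
    return x

def mlmc_estimate(T, L, n_samples):
    """Multilevel Monte Carlo estimate of E[X_T] for the toy SDE above.

    Level l uses 2**l timesteps.  The level-l correction E[P_l - P_{l-1}]
    couples fine and coarse paths through the same Brownian increments,
    which keeps the correction variance (and hence the total cost) small.
    """
    estimate = 0.0
    for l in range(L + 1):
        nf = 2 ** l
        dt = T / nf
        acc = 0.0
        for _ in range(n_samples):
            dW = [random.gauss(0.0, math.sqrt(dt)) for _ in range(nf)]
            fine = euler_path(T, dW)
            if l == 0:
                acc += fine
            else:
                # The coarse path reuses the fine increments, summed in pairs.
                dWc = [dW[2 * i] + dW[2 * i + 1] for i in range(nf // 2)]
                acc += fine - euler_path(T, dWc)
        estimate += acc / n_samples
    return estimate
```

The telescoping sum over levels leaves only the finest-level bias, while most samples are spent on the cheap coarse levels.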
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Kuss, M.; Markel, T.; Kramer, W.
2011-01-01
Concentrated purchasing patterns of plug-in vehicles may result in localized distribution-transformer overload scenarios. Prolonged periods of transformer overloading cause service-life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis reviews the distribution-transformer load models developed in the IEC 60076 standard and applies them to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. PHEVs are expected to be charged primarily at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy and thus increasing costs to utilities and consumers. A Monte Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate is developed, based on the energy needs of plug-in vehicles loading a residential transformer.
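A scenario sweep of the kind described can be sketched as follows. The hot-spot and aging relations below follow the general form of the IEC 60076-7 / IEEE C57.91 loading guides (aging rate doubling roughly every 6 °C above a 98 °C hot-spot reference), but all parameter values are illustrative, not the study's calibrated ones.

```python
import random

def hotspot_temp(load_pu, ambient=25.0, dtheta_rated=80.0, exponent=1.6):
    """Very simplified steady-state hot-spot temperature (C) vs per-unit load."""
    return ambient + dtheta_rated * (load_pu ** exponent)

def aging_factor(theta_hs):
    """Relative insulation aging rate; doubles roughly every 6 C above 98 C,
    the form used in the IEC 60076-7 / IEEE C57.91 loading guides."""
    return 2.0 ** ((theta_hs - 98.0) / 6.0)

def simulate_night(n_scenarios, n_households, p_ev, ev_kw, base_kw, rated_kw,
                   rng=random.random):
    """Monte Carlo sweep over overnight charging scenarios.

    Each scenario draws the number of vehicles charging, converts the total
    load to per-unit of the transformer rating, and accumulates the implied
    insulation aging factor.  Returns the mean aging factor.
    """
    total = 0.0
    for _ in range(n_scenarios):
        n_charging = sum(1 for _ in range(n_households) if rng() < p_ev)
        load_pu = (n_households * base_kw + n_charging * ev_kw) / rated_kw
        total += aging_factor(hotspot_temp(load_pu))
    return total / n_scenarios
```

Sweeping `p_ev`, `ev_kw`, and `rated_kw` reproduces the kind of vehicle-count / transformer-size / charging-rate scan the abstract describes.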
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Betzler, Benjamin R.; Kiedrowski, Brian C.; Brown, Forrest B.; Martin, William R.
2015-08-28
The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α eigenvalues and eigenvectors in an infinite medium. In this study, a research Monte Carlo code called “TORTE” (To Obtain Real Time Eigenvalues) was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum.
Wang, L; Fourkal, E; Hayes, S; Jin, L; Ma, C
2014-06-01
Purpose: To study the dosimetric differences resulting from using the pencil beam algorithm instead of Monte Carlo (MC) methods for tumors adjacent to the skull. Methods: We retrospectively calculated the dosimetric differences between the RT and MC algorithms for brain tumors treated with CyberKnife located adjacent to the skull for 18 patients (27 tumors in total). The median tumor size was 0.53 cc (range 0.018 cc to 26.2 cc). The absolute mean distance from the tumor to the skull was 2.11 mm (range -17.0 mm to 9.2 mm). The dosimetric variables examined included the mean, maximum, and minimum doses to the target, the target coverage (TC), and the conformality index. The MC calculation used the same MUs as the RT dose calculation, without further normalization, and a 1% statistical uncertainty. The differences were analyzed by tumor size and distance from the skull. Results: The TC was generally reduced with the MC calculation (24 out of 27 cases). The average difference in TC between RT and MC was 3.3% (range 0.0% to 23.5%). When the TC was deemed unacceptable, the plans were re-normalized in order to increase the TC to 99%. This resulted in a 6.9% maximum change in the prescription isodose line. The maximum changes in the mean, maximum, and minimum doses were 5.4%, 7.7%, and 8.4%, respectively, before re-normalization. When the TC was analyzed with regard to target size, the worst coverage occurred for the smallest targets (0.018 cc). When the TC was analyzed with regard to distance to the skull, there was no correlation between proximity to the skull and the TC difference between the RT and MC plans. Conclusions: For smaller targets (< 4.0 cc), MC should be used to re-evaluate the dose coverage after RT is used for the initial dose calculation, in order to ensure target coverage.
Status of Monte-Carlo Event Generators
Hoeche, Stefan; /SLAC
2011-08-11
Recent progress on general-purpose Monte-Carlo event generators is reviewed, with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing the full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple bremsstrahlung emissions off the initial- and final-state partons and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development nowadays often focuses on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons of perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA, and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo programs.
Monte Carlo simulation for the transport beamline
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have been already simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement a MC based 3D treatment planning in order to optimize shots number and dose delivery.
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E.; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi-Dirac distribution function, and scattering is via a Pauli-blocked binary collision approximation. The algorithm is tested against degenerate electron-ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first-order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
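The Pauli-blocked acceptance step can be sketched directly: a proposed binary collision is accepted with probability proportional to the unoccupied fraction of the two final states. Energies are in units with k_B = 1, and this is an illustrative acceptance rule only, not the paper's full collision algorithm.

```python
import math
import random

def fermi_dirac_occupation(E, mu, T):
    """Occupation probability f(E) for energy E, chemical potential mu,
    and temperature T (all in the same energy units, k_B = 1)."""
    return 1.0 / (1.0 + math.exp((E - mu) / T))

def pauli_blocked_accept(E_final_a, E_final_b, mu, T, rng=random.random):
    """Accept a proposed binary collision with probability
    (1 - f(E_a'))(1 - f(E_b')), i.e. reject scatters whose final states
    are already occupied."""
    block = (1.0 - fermi_dirac_occupation(E_final_a, mu, T)) * \
            (1.0 - fermi_dirac_occupation(E_final_b, mu, T))
    return rng() < block
```

In the degenerate limit this suppresses scattering into states deep below the chemical potential, which is what slows electron-ion equilibration relative to the classical result.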
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Hybrid Deterministic/Monte Carlo Solutions to the Neutron Transport k-Eigenvalue Problem with a Comparison to Pure Monte Carlo Solutions. Jeffrey A. Willert, Los Alamos National Laboratory, September 16, 2013. Joint work with Dana Knoll (LANL), Ryosuke Park (LANL), and C. T. Kelley (NCSU). CASL-U-2013-0309-000. Outline: introduction; nonlinear diffusion acceleration for k-eigenvalue problems; hybrid methods; classic Monte Carlo.
A Fast Monte Carlo Simulation for the International Linear Collider Detector (Technical Report)
The following paper contains details concerning the motivation for, implementation, and performance of a Java-based fast Monte Carlo simulation for a detector designed to be used in the International Linear Collider.
Tests of Monte Carlo Independent Column Approximation in the...
… Meteorological Institute; Jarvinen, Heikki, Finnish Meteorological Institute. Category: Modeling.
The Monte Carlo Independent Column Approximation (McICA) was recently introduced…
HILO: Quasi Diffusion Accelerated Monte Carlo on Hybrid Architectures
… fidelity simulation of a diverse range of kinetic systems.
Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations
Cluster expansion modeling and Monte Carlo simulation of alnico 5-7 permanent magnets (Accepted Manuscript)
This content will become publicly available on March 5, 2016.
Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin
2014-08-21
A new method has been developed to generate bending-angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function, so that each geometry generated is likely to be accepted. In practice, due to the complexity of this probability density function, a numerical representation of the distribution is required. This numerical table can be generated a priori from the distribution function. The method has been tested on united-atom models of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. These test cases show that reasonable approximations can be made, especially for the highly branched molecules, to drastically reduce the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables are still well accounted for, as evident from a nearly perfect acceptance rate. For all cases, the bending angles were shown to be sampled correctly by this method, with an acceptance rate ranging from at least 96% for 2,2-dimethylpropane to more than 99% for propane. Since only one trial needs to be generated for each bending angle (instead of the thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. Profiling of our Monte Carlo simulation code shows that trial generation, which used to be the most time-consuming process, is no longer the dominant component of the simulation time.
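The tabulated-distribution idea can be sketched as inverse-CDF sampling from a precomputed table: tabulate the (unnormalized) bending-angle density once, then draw each trial by inverting the cumulative table with linear interpolation. The Gaussian-like density below is a generic stand-in, not the united-atom alkane bending potential of the paper.

```python
import bisect
import math
import random

def build_table(pdf, theta_min, theta_max, n=512):
    """Tabulate the CDF of an (unnormalized) bending-angle density on a grid,
    using trapezoidal integration.  Done once, a priori."""
    thetas = [theta_min + (theta_max - theta_min) * i / (n - 1) for i in range(n)]
    cdf = [0.0]
    for i in range(1, n):
        h = thetas[i] - thetas[i - 1]
        cdf.append(cdf[-1] + 0.5 * h * (pdf(thetas[i]) + pdf(thetas[i - 1])))
    total = cdf[-1]
    return thetas, [c / total for c in cdf]

def sample_angle(thetas, cdf, rng=random.random):
    """Draw one bending angle by inverting the tabulated CDF (linear interp),
    so a single trial per angle suffices."""
    u = rng()
    j = bisect.bisect_left(cdf, u)
    j = max(1, min(j, len(cdf) - 1))
    frac = (u - cdf[j - 1]) / (cdf[j] - cdf[j - 1])
    return thetas[j - 1] + frac * (thetas[j] - thetas[j - 1])
```

Because each draw already follows the target density, the Monte Carlo acceptance step becomes nearly a formality, which is the source of the reported >96% acceptance rates.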
Astrakharchik, G. E.; Boronat, J.; Casulleras, J.; Kurbakov, I. L.; Lozovik, Yu. E.
2009-05-15
The equation of state of a weakly interacting two-dimensional Bose gas is studied at zero temperature by means of quantum Monte Carlo methods. Going down to densities as low as na^2 ∝ 10^-100 permits us to obtain agreement, at the beyond-mean-field level, between the predictions of perturbative methods and direct many-body numerical simulation, thus providing an answer to the fundamental question of the equation of state of a two-dimensional dilute Bose gas in the universal regime (i.e., entirely described by the gas parameter na^2). We also show that the measured frequency of a breathing collective oscillation in a trap at very low densities can be used to test the universal equation of state of a two-dimensional Bose gas.
Exploring theory space with Monte Carlo reweighting
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun
2014-10-13
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
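The reweighting described above amounts to rescaling each event's weight by the ratio of squared matrix elements evaluated on the event's kinematics, so one fully simulated sample can serve many points in theory parameter space. A minimal sketch, with toy stand-in functions for the squared matrix elements:

```python
def reweight(events, me2_old, me2_new):
    """Reweight generated events from an old model to a new one.

    events: list of (kinematic_configuration, weight) pairs.
    me2_old, me2_new: callables returning the squared matrix element
    |M|^2 evaluated on a kinematic configuration.
    The new weight is w * |M_new|^2 / |M_old|^2, leaving the phase-space
    sampling untouched.
    """
    return [(x, w * me2_new(x) / me2_old(x)) for (x, w) in events]
```

In practice the detector simulation attached to each event is also reused unchanged, which is where the computational saving comes from.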
SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans
Enright, S; Asprinio, A; Lu, L
2014-06-01
Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo-generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm improves dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse radiotherapy treatment planning system. These measurements were obtained for 6 MeV, 9 MeV, and 12 MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid-water template with holes for bone-like and lung-like plugs was used. Different configurations were used, with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. The accuracy of the Electron Monte Carlo algorithm was also compared to that of the Pencil Beam algorithm. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film. The pass rate for Monte Carlo was in the 80%-99% range, whereas the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.
Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?
Center for Gas Separations Relevant to Clean Energy Technologies
Jihan Kim, Jocelyn M. Rodgers, Manuel Athènes, and Berend Smit, J. Chem. Theory Comput., 2011, 7 (10), pp 3208-3222. DOI: 10.1021/ct200474j
Abstract: In the waste recycling Monte Carlo (WRMC) algorithm, multiple trial states may be simultaneously generated and utilized during Monte Carlo…
Properties of reactive oxygen species by quantum Monte Carlo
Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo
2014-07-07
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions play important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice-regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow antisymmetrized geminal power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully optimised basis sets and with a computational cost which scales as N^3 to N^4, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species from first principles.
Monte Carlo Hybrid Applied to Binary Stochastic Mixtures
Energy Science and Technology Software Center (OSTI)
2008-08-11
The purpose of this set of codes is to use an inexpensive, approximate deterministic flux distribution to generate weight windows, which will then be used to bound particle weights for the Monte Carlo code run. The process is not automated; the user must run the deterministic code and use its output file as a command-line argument for the Monte Carlo code. Two sets of text input files are included as test problems/templates.
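The weight-window idea these codes implement can be illustrated with a tiny sketch: window lower bounds are taken inversely proportional to an approximate deterministic flux, so particles in hard-to-reach (low-flux) regions are split into many low-weight copies. The 1/flux importance heuristic, the normalization, and the fixed upper-to-lower ratio are illustrative assumptions, not the codes' documented choices.

```python
def weight_windows(flux, ratio=5.0):
    """Build (lower, upper) weight-window bounds per cell from an
    approximate deterministic flux distribution.

    Window centers are chosen inversely proportional to the flux
    (a common importance heuristic: low-flux cells get low weights so
    splitting pushes more particles toward them), normalized so the
    most important cell has a lower bound of 1.0.  The upper bound is
    a fixed multiple of the lower bound.
    """
    norm = max(flux)
    lower = [norm / f for f in flux]      # importance ~ 1/flux (sketch)
    m = min(lower)
    lower = [lo / m for lo in lower]
    return [(lo, ratio * lo) for lo in lower]
```

During transport, a particle whose weight exceeds its cell's upper bound is split, and one below the lower bound plays Russian roulette, which is the "bounding of particle weights" the abstract refers to.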
Quantum Monte Carlo Calculations of Light Nuclei Using Chiral Potentials (Journal Article)
Lynn, J. E.; Carlson, J.; Epelbaum, E.; Gandolfi, S.; Gezerlis, A.; Schwenk, A.
2014-11-04
Published in Physical Review Letters. OSTI Identifier: 1181024; Grant/Contract Number: AC02-05CH11231.
Fast Monte Carlo for radiation therapy: the PEREGRINE Project (Conference)
Efficient Monte Carlo Simulations of Gas Molecules Inside Porous Materials
Center for Gas Separations Relevant to Clean Energy Technologies
J. Kim and B. Smit, J. Chem. Theory Comput. 8 (7), 2336 (2012). DOI: 10.1021/ct3003699
Abstract: Monte Carlo (MC) simulations are commonly used to obtain adsorption properties of gas molecules inside porous materials. In this work, we discuss various optimization strategies that lead to faster MC simulations with CO2…
Jung, Jae Won; Kim, Jong Oh; Yeo, Inhwan Jason; Cho, Young-Bin; Kim, Sun Mo; DiBiase, Steven
2012-12-15
Purpose: Fast and accurate transit portal dosimetry was investigated by developing a density-scaled layer model of an electronic portal imaging device (EPID) and applying it in a clinical environment. Methods: The model was developed for fast Monte Carlo dose calculation. It was validated through comparison with measured doses on the EPID, first using open beams of varying field sizes under a 20-cm-thick flat phantom. After this basic validation, the model was further tested by applying it to transit dosimetry and dose reconstruction employing our predetermined dose-response-based algorithm developed earlier. The application employed clinical intensity-modulated beams irradiated on a Rando phantom. The clinical beams were obtained through planning on pelvic regions of the Rando phantom, simulating prostate and large-pelvis intensity-modulated radiation therapy. To enhance agreement between calculated and measured doses near penumbral regions, convolution conversion of the acquired EPID images was alternatively used. In addition, thickness-dependent image-to-dose calibration factors were generated through measurements of images and calculations of dose in the EPID through flat phantoms of various thicknesses. The factors were used to convert images acquired in the EPID into dose. Results: For open-beam measurements, the model agreed with measurements to better than 2% dose difference across the open fields. For tests with the Rando phantom, the transit dosimetry measurements were compared with forwardly calculated doses in the EPID, showing gamma pass rates between 90.8% and 98.8% given 4.5 mm distance-to-agreement (DTA) and 3% dose difference (DD) for all individual beams tried in this study. The reconstructed dose in the phantom was compared with forwardly calculated doses, showing pass rates between 93.3% and 100% in isocentric planes perpendicular to the beam direction given 3 mm DTA and 3% DD for all beams.
On isocentric axial planes, the pass rates varied between 95.8% and 99.9% for all individual beams, and they were 98.2% and 99.9% for the composite beams of the small and large pelvis cases, respectively. Three-dimensional gamma pass rates were 99.0% and 96.4% for the small and large pelvis cases, respectively. Conclusions: The layer model of the EPID built for Monte Carlo calculations offered fast (less than 1 min) and accurate calculations for transit dosimetry and dose reconstruction.
Monte Carlo Modeling of High-Energy Film Radiography (Journal Article)
High-energy film radiography methods, adapted in the past to performing specific tasks, must now meet increasing demands to identify defects and perform critical measurements in a wide variety of manufacturing processes. Although film provides unequaled resolution for most components and assemblies, image quality must be enhanced…
Coupled Monte Carlo neutronics and thermal hydraulics for power reactors
Bernnat, W.; Buck, M.; Mattes, M.; Zwermann, W.; Pasichnyk, I.; Velkov, K.
2012-07-01
The availability of high-performance computing resources increasingly enables the use of detailed Monte Carlo models even for full-core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions and fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. A second problem arises with the preparation of the corresponding temperature-dependent cross sections and thermal scattering laws. Only if these problems are solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g., water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications are shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic modular high-temperature reactor using THERMIX for thermal-hydraulics. (authors)
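The temperature-interpolation strategy can be sketched as pointwise interpolation of a cross-section set between a few prepared temperature datasets. Linear interpolation in T is shown for illustration; production codes often interpolate in sqrt(T) or use dedicated Doppler-broadening schemes, and the grids/values here are purely hypothetical.

```python
import bisect

def interpolate_xs(temps, xs_sets, T):
    """Interpolate a pointwise cross-section set in temperature.

    temps: sorted temperatures (K) at which data sets were generated;
    xs_sets: one list of cross-section values per temperature, all on
    the same energy grid.  Values outside the prepared range clamp to
    the nearest data set.  Only a few prepared temperatures are needed
    to cover every cell temperature delivered by the thermal-hydraulics
    code.
    """
    if T <= temps[0]:
        return list(xs_sets[0])
    if T >= temps[-1]:
        return list(xs_sets[-1])
    j = bisect.bisect_right(temps, T)
    f = (T - temps[j - 1]) / (temps[j] - temps[j - 1])
    # Pointwise linear blend of the bracketing data sets.
    return [(1.0 - f) * a + f * b for a, b in zip(xs_sets[j - 1], xs_sets[j])]
```

At runtime, each cell's temperature from the coupled thermal-hydraulics solution selects the blended data, so the neutronics input never needs one explicit material per temperature.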
Bergstrom, Paul M. (Livermore, CA); Daly, Thomas P. (Livermore, CA); Moses, Edward I. (Livermore, CA); Patterson, Jr., Ralph W. (Livermore, CA); Schach von Wittenau, Alexis E. (Livermore, CA); Garrett, Dewey N. (Livermore, CA); House, Ronald K. (Tracy, CA); Hartmann-Siantar, Christine L. (Livermore, CA); Cox, Lawrence J. (Los Alamos, NM); Fujino, Donald H. (San Leandro, CA)
2000-01-01
A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method, voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And a dose calculation module, coupled to the common volume calculation module and the radiation transport module, calculates radiation doses received by the target mass within the dosel volumes.
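The common-volume bookkeeping in this record can be sketched in one dimension: a dosel's mass is accumulated from its overlaps with the density voxels, and dose is deposited energy divided by that mass. All names and numbers below are illustrative; the patented system operates on three-dimensional volumes.

```python
def overlap(a0, a1, b0, b1):
    """Length of the 1-D intersection of intervals [a0, a1] and [b0, b1]."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def dosel_dose(dosel, voxels, densities, deposits):
    """1-D sketch: dosel mass = sum of common volumes times voxel density;
    dose = energy deposited inside the dosel divided by that mass.
    deposits: list of (position, energy) from the transport step."""
    d0, d1 = dosel
    mass = sum(overlap(d0, d1, v0, v1) * rho
               for (v0, v1), rho in zip(voxels, densities))
    energy = sum(e for x, e in deposits if d0 <= x < d1)
    return energy / mass

voxels = [(0.0, 1.0), (1.0, 2.0)]
densities = [1.0, 0.5]                             # 1-D stand-in for g/cm^3
deposits = [(0.75, 2.0), (1.25, 1.0), (3.0, 5.0)]  # last one falls outside
dose = dosel_dose((0.5, 1.5), voxels, densities, deposits)
```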
Fang Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo
2012-01-15
Purpose: The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. Methods: A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to external field and Coulombic interactions, and diffusion due to Brownian motion. Results: Pulse-height spectra (PHS) have been simulated with different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown, one using the entire PHS distribution, and the other using the photopeak. The latter ignores contributions from Compton scattering and K-fluorescence. Experimental measurements and simulations differ by approximately 2%. Conclusions: The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. Trapping and recombination models are all shown to affect the Swank factor.
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
Farlotti, M.; Larsen, E. W.
2013-07-01
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
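Once the fission matrix has been tallied, the eigenvalue and source shape follow from a deterministic power iteration on that matrix. A minimal sketch is shown below; the Monte Carlo tally itself is omitted, and the 3-region matrix values are invented to mimic weakly coupled assemblies, not taken from the paper.

```python
def fission_matrix_eig(F, iters=200):
    """Power iteration on a fission matrix F, where F[i][j] is the expected
    number of fission neutrons produced in region i per fission neutron
    born in region j. Returns (k_eff, normalized fission source)."""
    n = len(F)
    s = [1.0 / n] * n
    k = 1.0
    for _ in range(iters):
        new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(new)            # since s is normalized to sum to 1
        s = [x / k for x in new]
    return k, s

# Illustrative 3-region matrix: strong diagonal, weak inter-assembly coupling
F = [[0.90, 0.05, 0.00],
     [0.05, 0.80, 0.05],
     [0.00, 0.05, 0.90]]
k, s = fission_matrix_eig(F)
```

Because the matrix is small, the iteration converges cheaply even when the underlying transport problem has the slow source convergence described above.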
Glaser, R E; Johannesson, G; Sengupta, S; Kosovic, B; Carle, S; Franz, G A; Aines, R D; Nitao, J J; Hanley, W G; Ramirez, A L; Newmark, R L; Johnson, V M; Dyer, K M; Henderson, K A; Sugiyama, G A; Hickling, T L; Pasyanos, M E; Jones, D A; Grimm, R J; Levine, R A
2004-03-11
Accurate prediction of complex phenomena can be greatly enhanced through the use of data and observations to update simulations. The ability to create these data-driven simulations is limited by error and uncertainty in both the data and the simulation. The stochastic engine project addressed this problem through the development and application of a family of Markov Chain Monte Carlo methods utilizing importance sampling driven by forward simulators to minimize time spent searching very large state spaces. The stochastic engine rapidly chooses among a very large number of hypothesized states and selects those that are consistent (within error) with all the information at hand. Predicted measurements from the simulator are used to estimate the likelihood of actual measurements, which in turn reduces the uncertainty in the original sample space via a conditional probability method called Bayesian inferencing. This highly efficient, staged Metropolis-type search algorithm allows us to address extremely complex problems and opens the door to solving many data-driven, nonlinear, multidimensional problems. A key challenge has been developing representation methods that integrate the local details of real data with the global physics of the simulations, enabling supercomputers to efficiently solve the problem. Development focused on large-scale problems, and on examining the mathematical robustness of the approach in diverse applications. Multiple data types were combined with large-scale simulations to evaluate systems with ~10^20,000 possible states (detecting underground leaks at the Hanford waste tanks). The probable uses of chemical process facilities were assessed using an evidence-tree representation and in-process updating.
Other applications included contaminant flow paths at the Savannah River Site, locating structural flaws in buildings, improving seismic travel-time models used in systems that monitor nuclear proliferation, characterizing the source of indistinct atmospheric plumes, and improving flash radiography. In the course of developing these applications, we also developed new methods to cluster and analyze the results of the state-space searches, as well as a number of algorithms to improve the search speed and efficiency. Our generalized solution contributes both a means to make more informed predictions of the behavior of very complex systems, and to improve those predictions as events unfold, using new data in real time.
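In its simplest form, the staged Metropolis-type search described in this record reduces to Metropolis sampling of a posterior whose likelihood compares forward-simulator predictions with measurements. The toy sketch below uses a scalar state and a stand-in "simulator"; every name and number is illustrative, not part of the stochastic engine itself.

```python
import math
import random

def metropolis(logpost, x0, step, n, seed=1):
    """Generic Metropolis sampler: propose a Gaussian move, accept with
    probability min(1, exp(delta log-posterior))."""
    random.seed(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(n):
        xp = x + random.gauss(0.0, step)
        lpp = logpost(xp)
        if math.log(random.random()) < lpp - lp:
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy "forward simulator": predicted measurement is 2*x; observed 3.0 +/- 0.5.
# The Gaussian log-likelihood plays the role of Bayesian inferencing here.
def logpost(x):
    return -0.5 * ((2.0 * x - 3.0) / 0.5) ** 2

samples = metropolis(logpost, 0.0, 0.5, 20000)
mean = sum(samples[5000:]) / len(samples[5000:])   # discard burn-in
```

The chain concentrates on states consistent (within error) with the measurement, here around x = 1.5.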
Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data
Zourari, K.; Peppa, V.; Papagiannis, P.; Ballester, Facundo; Siebert, Frank-André
2014-04-15
Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20-1090 keV, through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thicknesses of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20-1090 keV, in 10 keV intervals). The three-parameter empirical model introduced by Archer et al. ["Diagnostic x-ray shielding design based on an empirical model of photon attenuation," Health Phys. 44, 507-517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. ["Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities," Med. Phys. 34, 1398-1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. Good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed.
Conclusions: The data of this work allow for the accurate calculation of structural shielding thickness, taking into account the spectral variation with shield thickness, and broad beam conditions, in a realistic geometry. The simplicity of calculations also obviates the need for the use of crude transmission data estimates such as the half and tenth value layer indices. Although this study was primarily designed for brachytherapy, results might also be useful for radiology and nuclear medicine facility design, provided broad beam conditions apply.
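The superposition idea can be sketched directly from the Archer et al. three-parameter model, T(x) = [(1 + beta/alpha) exp(alpha*gamma*x) - beta/alpha]^(-1/gamma): fit (alpha, beta, gamma) per energy from the Monte Carlo data, then weight the monoenergetic curves by the source spectrum. The parameter values and spectrum weights below are placeholders for illustration, not the fitted tables of this work.

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Archer et al. three-parameter transmission model:
    T(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]**(-1/gamma)."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

def spectrum_transmission(x, lines):
    """Broad-beam transmission of a polyenergetic spectrum by superposition:
    weighted sum of monoenergetic transmission curves.
    lines: list of (weight, (alpha, beta, gamma)); weights sum to 1."""
    return sum(w * archer_transmission(x, *p) for w, p in lines)

# Two illustrative spectral lines; real fits come from the MC-generated tables
lines = [(0.6, (2.0, 1.0, 0.8)),
         (0.4, (0.5, 0.2, 1.0))]
t0 = spectrum_transmission(0.0, lines)   # should be 1 at zero thickness
```

Inverting `spectrum_transmission` numerically then gives the shield thickness required for a target transmission, which is what the record's GUI program tabulates.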
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry
Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; Mueller, Jonathon W.; Cody, Dianna D.; DeMarco, John J.
2015-02-15
Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
Monte Carlo event generators for hadron-hadron collisions
Knowles, I.G.; Protopopescu, S.D.
1993-06-01
A brief review of Monte Carlo event generators for simulating hadron-hadron collisions is presented. Particular emphasis is placed on comparisons of the approaches used to describe physics elements and identifying their relative merits and weaknesses. This review summarizes a more detailed report.
Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo.
Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
2009-10-01
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
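The core of a kinetic Monte Carlo code such as SPPARKS is the residence-time (BKL/Gillespie) algorithm: pick an event with probability proportional to its rate, then advance time by an exponentially distributed increment. A minimal serial sketch follows; the parallel machinery that is the subject of the report is deliberately omitted, and the two event rates are arbitrary illustrative values.

```python
import math
import random

def kmc_run(rates, t_end, seed=7):
    """Residence-time kinetic Monte Carlo over a fixed event catalog:
    each step selects event i with probability rates[i]/sum(rates) and
    advances time by dt = -ln(u)/R, where R is the total rate."""
    random.seed(seed)
    total = sum(rates)
    t = 0.0
    counts = [0] * len(rates)
    while True:
        t += -math.log(random.random()) / total
        if t > t_end:
            break
        u = random.random() * total
        acc = 0.0
        for i, r in enumerate(rates):
            acc += r
            if u < acc:
                counts[i] += 1
                break
    return counts

# Two event types with rates 3.0 and 1.0 (arbitrary units)
counts = kmc_run([3.0, 1.0], t_end=1000.0)
```

Over a long run the event counts approach the ratio of the rates, and the simulated clock advances in physically meaningful time rather than in sweeps.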
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, William P. (Tracy, CA); Hartmann-Siantar, Christine L. (San Ramon, CA); Rathkopf, James A. (Livermore, CA)
1999-01-01
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo
White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; Mozyrsky, Dmitry
2015-07-07
Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet as efficient as is needed for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm by up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms provide practical numerical solutions for simulating non-adiabatic dynamics in realistic molecular systems.
SHIFT: A Massively Parallel Monte Carlo Radiation Transport Package
Pandya, Tara M.; Johnson, Seth R.; Davidson, Gregory G.; Evans, Thomas M.; Hamilton, Steven P. (Oak Ridge National Laboratory)
2015-04-19
Report CASL-U-2015-0170-000. ANS MC2015 - Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method, Nashville, Tennessee, April 19-23, 2015, on CD-ROM, American Nuclear Society, LaGrange Park, IL (2015).
Quantum Monte Carlo for electronic structure: Recent developments and applications
Rodriquez, M. M.S.
1995-04-01
Quantum Monte Carlo (QMC) methods have been found to give excellent results when applied to chemical systems. The main goal of the present work is to use QMC to perform electronic structure calculations. In QMC, a Monte Carlo simulation is used to solve the Schroedinger equation, taking advantage of its analogy to a classical diffusion process with branching. In the present work the author focuses on how to extend the usefulness of QMC to more meaningful molecular systems. This study is aimed at questions concerning polyatomic and large atomic number systems. The accuracy of the solution obtained is determined by the accuracy of the trial wave function's nodal structure. Efforts in the group have given great emphasis to finding optimized wave functions for the QMC calculations. Little work had been done on systematically looking at a family of systems to see how the best wave functions evolve with system size. In this work the author presents a study of trial wave functions for C, CH, C{sub 2}H and C{sub 2}H{sub 2}. The goal is to study how to build wave functions for larger systems by accumulating knowledge from the wave functions of its fragments as well as gaining some knowledge on the usefulness of multi-reference wave functions. In an MC calculation of a heavy atom, for reasonable time steps most moves for core electrons are rejected. For this reason true equilibration is rarely achieved. A method proposed by Batrouni and Reynolds modifies the way the simulation is performed without altering the final steady-state solution. It introduces an acceleration matrix chosen so that all coordinates (i.e., of core and valence electrons) propagate at comparable speeds. A study of the results obtained using their proposed matrix suggests that it may not be the optimum choice. In this work the author has found that the desired mixing of coordinates between core and valence electrons is not achieved when using this matrix. A bibliography of 175 references is included.
Green's function Monte Carlo calculation for the ground state of helium trimers
Cabral, F.; Kalos, M.H.
1981-02-01
The ground state energy of weakly bound boson trimers interacting via Lennard-Jones (12,6) pair potentials is calculated using a Monte Carlo Green's Function Method. Threshold coupling constants for self binding are obtained by extrapolation to zero binding.
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon. Weather Rev. 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
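Whether trained by EM or by DREAM-MCMC, the quantity being optimized is the log-likelihood of the BMA predictive mixture over the training data. The sketch below evaluates that objective for a toy two-member ensemble with Gaussian member kernels; the weights, spreads, and data are invented for illustration.

```python
import math

def bma_log_likelihood(obs, forecasts, weights, sigmas):
    """Log-likelihood of observations under the BMA predictive mixture
    p(y) = sum_k w_k * Normal(y; f_k, sigma_k^2). This is the objective
    that EM or DREAM-MCMC maximizes over the weights and variances."""
    ll = 0.0
    for y, fs in zip(obs, forecasts):
        p = sum(w * math.exp(-0.5 * ((y - f) / s) ** 2)
                / (s * math.sqrt(2.0 * math.pi))
                for w, f, s in zip(weights, fs, sigmas))
        ll += math.log(p)
    return ll

# Two-member toy ensemble: member 0 tracks the observations more closely,
# so upweighting it should raise the likelihood.
obs = [1.0, 2.0, 3.0]
forecasts = [(1.1, 2.0), (2.0, 1.0), (3.1, 4.0)]
better = bma_log_likelihood(obs, forecasts, [0.8, 0.2], [0.5, 0.5])
worse = bma_log_likelihood(obs, forecasts, [0.2, 0.8], [0.5, 0.5])
```

An MCMC sampler explores this surface directly, which is how DREAM also yields uncertainty estimates for the weights and variances rather than a single point estimate.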
Complete Monte Carlo Simulation of Neutron Scattering Experiments
Drosg, M.
2011-12-13
In the far past, it was not possible to accurately correct for the finite geometry and the finite sample size of a neutron scattering set-up. The limited calculation power of the ancient computers, the lack of powerful Monte Carlo codes, and the limitations of the data bases available then prevented a complete simulation of the actual experiment. Using e.g. the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can be simulated almost completely with a high degree of precision using a modern PC, which has a computing power ten thousand times that of a supercomputer of the early 1970s. Thus, (better) corrections can also be obtained easily for previously published data, provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g., atomic mass, relativistic correction, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments from liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that the cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two thirds of all differential scattering cross section data of {sup 3}He(n,n){sup 3}He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interaction in the air, the interaction with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are at least a factor of five smaller, making this work relevant. Even more important are the corrections to the Karlsruhe data due to the inclusion of the previously missing outgoing self-attenuation, which amounts to as much as 15%.
The effects of mapping CT images to Monte Carlo materials on GEANT4 proton simulation accuracy
Barnes, Samuel; McAuley, Grant; Slater, James; Wroe, Andrew
2013-04-15
Purpose: Monte Carlo simulations of radiation therapy require conversion from Hounsfield units (HU) in CT images to an exact tissue composition and density. The number of discrete densities (or density bins) used in this mapping affects the simulation accuracy, execution time, and memory usage in GEANT4 and other Monte Carlo codes. The relationship between the number of density bins and CT noise was examined in general for all simulations that use HU conversion to density. Additionally, the effect of this on simulation accuracy was examined for proton radiation. Methods: Relative uncertainty from CT noise was compared with uncertainty from density binning to determine an upper limit on the number of density bins required in the presence of CT noise. Error propagation analysis was also performed on continuously slowing down approximation range calculations to determine the proton range uncertainty caused by density binning. These results were verified with Monte Carlo simulations. Results: In the presence of even modest CT noise (5 HU or 0.5%), 450 density bins were found to cause only a 5% increase in the density uncertainty (i.e., 95% of density uncertainty from CT noise, 5% from binning). Larger numbers of density bins are not required, as CT noise will prevent increased density accuracy; this applies across all types of Monte Carlo simulations. Examining uncertainty in proton range, only 127 density bins are required for a proton range error of <0.1 mm in most tissue and <0.5 mm in low density tissue (e.g., lung). Conclusions: By considering CT noise and actual range uncertainty, the number of required density bins can be restricted to a very modest 127, depending on the application. Reducing the number of density bins provides large memory and execution time savings in GEANT4 and other Monte Carlo packages.
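The binning step under discussion can be sketched as follows. Both the HU-to-density map and the bin limits here are illustrative placeholders, not the calibration used in the study; the point is only that quantization error is bounded by half the bin width, which is why a modest bin count suffices once CT noise dominates.

```python
def hu_to_density(hu):
    """Illustrative linear HU-to-density map (g/cm^3); real calibrations
    are piecewise and scanner-specific."""
    return max(0.001, 1.0 + hu / 1000.0)

def bin_density(rho, rho_min=0.001, rho_max=3.0, nbins=127):
    """Quantize a density into one of nbins discrete values, as a Monte
    Carlo code does when building its material table from CT data.
    Returns the bin-center density."""
    rho = min(max(rho, rho_min), rho_max)
    width = (rho_max - rho_min) / nbins
    i = min(int((rho - rho_min) / width), nbins - 1)
    return rho_min + (i + 0.5) * width

rho = hu_to_density(40)     # soft tissue, ~1.04 g/cm^3 in this toy map
binned = bin_density(rho)
```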
Communication: Water on hexagonal boron nitride from diffusion Monte Carlo
Al-Hamdani, Yasmine S.; Ma, Ming; Michaelides, Angelos; Alfè, Dario; Lilienfeld, O. Anatole von
2015-05-14
Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of −84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT.
A Post-Monte-Carlo Sensitivity Analysis Code
Energy Science and Technology Software Center (OSTI)
2000-04-04
SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e., with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
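A post-Monte-Carlo sensitivity ranking of the kind SATOOL performs can be approximated by ranking inputs on squared sample correlation with the output, a simple proxy for each input's contribution to output variance. SATOOL's actual measure may differ; the sketch below, with an invented two-input model, shows only the shape of the analysis.

```python
import random

def corr(xs, ys):
    """Pearson sample correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def rank_inputs(inputs, output):
    """Rank input variables by squared correlation with the output,
    a crude proxy for their contribution to output variance."""
    scores = {name: corr(vals, output) ** 2 for name, vals in inputs.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy Monte Carlo sample: y depends strongly on a, weakly on b
random.seed(0)
a = [random.random() for _ in range(500)]
b = [random.random() for _ in range(500)]
y = [3.0 * ai + 0.1 * bi for ai, bi in zip(a, b)]
ranking = rank_inputs({"a": a, "b": b}, y)
```

Here the ranking singles out `a` as the variable worth measuring with greater precision, which is exactly the variance-reduction decision the abstract describes.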
Element Agglomeration Algebraic Multilevel Monte-Carlo Library
Energy Science and Technology Software Center (OSTI)
2015-02-19
ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski, currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.
Quantum Monte Carlo Simulation of Overpressurized Liquid {sup 4}He
Vranjes, L.; Boronat, J.; Casulleras, J.; Cazorla, C.
2005-09-30
A diffusion Monte Carlo simulation of superfluid {sup 4}He at zero temperature and pressures up to 275 bar is presented. Increasing the pressure beyond freezing ({approx}25 bar), the liquid enters the overpressurized phase in a metastable state. In this regime, we report results of the equation of state and the pressure dependence of the static structure factor, the condensate fraction, and the excited-state energy corresponding to the roton. Along this large pressure range, both the condensate fraction and the roton energy decrease but do not become zero. The roton energies obtained are compared with recent experimental data in the overpressurized regime.
Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics.
Seker, V.; Thomas, J. W.; Downar, T. J.; Purdue Univ.
2007-01-01
A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STARCD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code DeCART. Good agreement in the k{sub eff} and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high fidelity modeling of nuclear reactor cores during the last several years. High-fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport method to CFD and their application to practical reactor analysis problems. 
One of the principal motivations of the work here was to use Monte Carlo methods to validate the coupled deterministic neutron transport and CFD solutions. Previous researchers have successfully performed Monte Carlo calculations with limited thermal feedback. In fact, much of the validation of the deterministic transport code DeCART was performed using the Monte Carlo code McCARD, which employs a limited thermal-feedback model. However, for a broader range of temperature/fluid applications it was desirable to couple Monte Carlo to a more sophisticated temperature and fluid solution such as CFD. This paper focuses on the methods used to couple Monte Carlo to CFD and their application to a series of simple test problems.
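At its core, the neutronics/thermal-hydraulics coupling described above is a fixed-point (Picard) iteration: the transport solve produces a power distribution from temperature-dependent cross sections, the fluid solve produces temperatures from that power, and the two are cycled until the fields stop changing. The sketch below illustrates only that iteration pattern with invented stand-in physics (a toy √T Doppler-feedback weighting and a linear heat-up model); it is not the McSTAR coupling itself.

```python
import math

def neutronics_power(temps, p_total=1.0):
    # Toy "neutronics": local fission rate falls like 1/sqrt(T)
    # (a stand-in for Doppler feedback), renormalized to fixed total power.
    w = [1.0 / math.sqrt(t) for t in temps]
    s = sum(w)
    return [p_total * wi / s for wi in w]

def thermal_solve(power, t_inlet=550.0, h=0.002):
    # Toy "CFD": each region's temperature rises in proportion to its power.
    return [t_inlet + p / h for p in power]

def coupled_solve(n_regions=4, tol=1e-9, max_iter=100):
    # Picard iteration: alternate the two solvers until the temperature
    # field is self-consistent with the power it produces.
    temps = [550.0] * n_regions
    for _ in range(max_iter):
        power = neutronics_power(temps)
        new_temps = thermal_solve(power)
        if max(abs(a - b) for a, b in zip(temps, new_temps)) < tol:
            return new_temps, power
        temps = new_temps
    raise RuntimeError("Picard iteration did not converge")
```

For symmetric regions the iteration converges in two passes to a uniform power split; real couplings converge more slowly and may need under-relaxation.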
Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator Physics
Office of Scientific and Technical Information (OSTI)
FLUKA is a general-purpose Monte Carlo code capable of handling all radiation components from thermal energies (for neutrons) or 1 keV (for all other particles) up to cosmic-ray energies, and can be applied in many different fields.
Miura, Shinichi [Institute for Molecular Science, 38 Myodaiji, Okazaki 444-8585 (Japan)
2007-03-21
In this paper, we present a path integral hybrid Monte Carlo (PIHMC) method for rotating molecules in quantum fluids. This is an extension of our PIHMC for correlated Bose fluids [S. Miura and J. Tanaka, J. Chem. Phys. 120, 2160 (2004)] that handles the molecular rotation quantum mechanically. A novel technique, referred to as an effective potential of quantum rotation, is introduced to incorporate the rotational degree of freedom in the path integral molecular dynamics or hybrid Monte Carlo algorithm. For a permutation move to satisfy Bose statistics, we devise a multilevel Metropolis method combined with a configurational-bias technique for efficiently sampling the permutation and the associated atomic coordinates. We then applied the PIHMC to a helium-4 cluster doped with a carbonyl sulfide molecule. The effects of the quantum rotation on the solvation structure and energetics were examined. Translational and rotational fluctuations of the dopant in the superfluid cluster were also analyzed.
Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando; Hernandez-Ortiz, Juan P.; de Pablo, Juan J.
2015-07-27
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
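The stochastic free-energy minimization described above can be illustrated on a much simpler functional. The sketch below anneals a one-dimensional, finite-difference Landau-type free energy (a double-well local term plus a gradient penalty) with single-site Metropolis moves; the coefficients are invented for illustration, and this scalar toy is not the tensorial Landau-de Gennes theory of the paper.

```python
import random, math

def free_energy(s, a=-1.0, b=1.0, kappa=0.5, dx=1.0):
    # Discretized Landau-type functional: double-well local term a*s^2 + b*s^4
    # plus a finite-difference gradient penalty kappa*(ds/dx)^2.
    f = sum(a * si**2 + b * si**4 for si in s) * dx
    f += kappa * sum((s[i + 1] - s[i]) ** 2 / dx for i in range(len(s) - 1))
    return f

def anneal(n=32, steps=20000, seed=1):
    # Single-site Metropolis moves with a linear cooling schedule; the
    # stochastic sampling lets the field hop between metastable basins
    # before settling near a minimum of the functional.
    rng = random.Random(seed)
    s = [rng.uniform(-0.2, 0.2) for _ in range(n)]
    f = free_energy(s)
    for k in range(steps):
        T = 0.5 * (1.0 - k / steps) + 1e-4
        i = rng.randrange(n)
        old = s[i]
        s[i] += rng.gauss(0.0, 0.1)
        f_new = free_energy(s)
        if f_new <= f or rng.random() < math.exp(-(f_new - f) / T):
            f = f_new
        else:
            s[i] = old
    return s, f
```

With a = -1, b = 1 the local minima sit at s = ±1/√2 (per-site free energy -0.25), so the annealed configuration should approach f ≈ -0.25 n, up to the cost of any residual domain walls.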
Particle-In-Cell/Monte Carlo Simulation of Ion Back Bombardment in Photoinjectors
Qiang, Ji; Corlett, John; Staples, John
2009-03-02
In this paper, we report on studies of ion back bombardment in high-average-current dc and rf photoinjectors using a particle-in-cell/Monte Carlo method. Using the H₂ ion as an example, we observed that the ion density and energy deposition on the photocathode in rf guns are an order of magnitude lower than those in a dc gun. A higher rf frequency helps mitigate the ion back bombardment of the cathode in rf guns.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
Numerical thermalization in particle-in-cell simulations with Monte-Carlo collisions
Lai, P. Y.; Lin, T. Y.; Lin-Liu, Y. R.; Chen, S. H.
2014-12-15
Numerical thermalization in collisional one-dimensional (1D) electrostatic (ES) particle-in-cell (PIC) simulations was investigated. Two collision models, the pitch-angle scattering of electrons by the stationary ion background and large-angle collisions between the electrons and the neutral background, were included in the PIC simulation using Monte Carlo methods. The numerical results show that the thermalization times in both models were considerably reduced by the additional Monte Carlo collisions, as demonstrated by comparisons with Turner's previous simulation results based on a head-on collision model [M. M. Turner, Phys. Plasmas 13, 033506 (2006)]. However, the breakdown of Dawson's scaling law in the collisional 1D ES PIC simulation is more complicated than that observed by Turner, and a revised scaling law for the numerical thermalization time as a function of the numerical parameters is derived on the basis of the simulation results obtained in this study.
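Pitch-angle scattering of the kind used in the first collision model can be sketched compactly: each Monte Carlo kick deflects a particle by a fixed polar angle with a uniformly random azimuth, leaving the speed unchanged. In this toy version (not the cited PIC implementation, and with invented kick parameters) an initially collimated beam relaxes toward isotropy, the ensemble-averaged pitch cosine decaying as cos(Δθ)^N after N kicks.

```python
import random, math

def pitch_angle_kick(mu, dtheta, rng):
    # Deflect by polar angle dtheta with random azimuth phi; only the
    # pitch cosine mu (direction relative to z) changes, not the speed.
    phi = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(max(0.0, 1.0 - mu * mu))
    mu_new = mu * math.cos(dtheta) + s * math.sin(dtheta) * math.cos(phi)
    return max(-1.0, min(1.0, mu_new))

def isotropize(n_particles=5000, n_kicks=40, dtheta=0.2, seed=2):
    # A monoenergetic beam along +z scatters step by step toward isotropy;
    # averaging over azimuth gives <mu> = cos(dtheta)**n_kicks exactly.
    rng = random.Random(seed)
    mus = [1.0] * n_particles
    for _ in range(n_kicks):
        mus = [pitch_angle_kick(mu, dtheta, rng) for mu in mus]
    return sum(mus) / n_particles
```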
Bauge, E.
2015-01-15
The “Full model” evaluation process used at CEA DAM DIF to evaluate nuclear data in the continuum region makes extensive use of nuclear models implemented in the TALYS code, varying the parameters of these models until a satisfactory description of the experimental data (both differential and integral) is reached. For the evaluation of the covariance data associated with these evaluated data, the Backward-Forward Monte Carlo (BFMC) method was devised to mirror the “Full model” evaluation process. When coupled with the Total Monte Carlo (TMC) method via the T6 system developed by NRG Petten, the BFMC method makes it possible to use integral experiments to constrain the distribution of model parameters, and hence the distribution of derived observables and their covariance matrix. Together, TALYS, TMC, BFMC, and T6 constitute a powerful integrated tool for nuclear data evaluation that yields evaluated nuclear data and the associated covariance matrix all at once, making good use of all the available experimental information to drive the distribution of the model parameters and the derived observables.
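The essence of weighting model-parameter samples by their agreement with data, as BFMC does, can be sketched with a one-parameter toy model. Below, samples of a slope parameter drawn from a broad prior are weighted by exp(-χ²/2) against synthetic data; the weighted mean and variance stand in for the evaluated observable and its covariance. This is an illustrative importance-weighting scheme with invented data, not the actual BFMC/TALYS machinery.

```python
import random, math

def bfmc_like_weights(data_x, data_y, sigma, n_samples=4000, seed=3):
    # Sample a model parameter (slope of y = theta * x) from a broad flat
    # prior, weight each sample by exp(-chi^2 / 2) against the data, and
    # return the importance-weighted mean and variance of theta.
    rng = random.Random(seed)
    thetas, weights = [], []
    for _ in range(n_samples):
        theta = rng.uniform(0.0, 4.0)
        chi2 = sum(((y - theta * x) / sigma) ** 2
                   for x, y in zip(data_x, data_y))
        thetas.append(theta)
        weights.append(math.exp(-0.5 * chi2))
    wsum = sum(weights)
    mean = sum(w * t for w, t in zip(weights, thetas)) / wsum
    var = sum(w * (t - mean) ** 2 for w, t in zip(weights, thetas)) / wsum
    return mean, var
```

Samples that describe the data poorly get exponentially small weight, so the surviving parameter distribution concentrates around values consistent with experiment, exactly the constraining effect described above.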
Pérez-Andújar, Angélica [Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 (United States)]; Zhang, Rui; Newhauser, Wayne [Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and The University of Texas Graduate School of Biomedical Sciences at Houston, 6767 Bertner Avenue, Houston, Texas 77030 (United States)]
2013-12-15
Purpose: Stray neutron radiation is of concern after radiation therapy, especially in children, because of the high risk it might carry for secondary cancers. Several previous studies predicted the stray neutron exposure from proton therapy, mostly using Monte Carlo simulations. Promising attempts to develop analytical models have also been reported, but these were limited to only a few proton beam energies. The purpose of this study was to develop an analytical model to predict leakage neutron equivalent dose from passively scattered proton beams in the 100–250 MeV interval. Methods: To develop and validate the analytical model, the authors used values of equivalent dose per therapeutic absorbed dose (H/D) predicted with Monte Carlo simulations. The authors also characterized the behavior of the mean neutron radiation-weighting factor, w_R, as a function of depth in a water phantom and distance from the beam central axis. Results: The simulated and analytical predictions agreed well. On average, the percentage difference between the analytical model and the Monte Carlo simulations was 10% for the energies and positions studied. The authors found that w_R was highest at the shallowest depth and decreased with depth until around 10 cm, where it started to increase slowly with depth. This was consistent among all energies. Conclusion: Simple analytical methods are promising alternatives to complex and slow Monte Carlo simulations for predicting H/D values. The authors' results also provide improved understanding of the behavior of w_R, which depends strongly on depth but is nearly independent of lateral distance from the beam central axis.
Monte Carlo Simulation Tool Installation and Operation Guide
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software, and the CRY cosmic ray library. These tools were chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation, and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
Improved version of the PHOBOS Glauber Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Loizides, C.; Nagle, J.; Steinberg, P.
2015-09-01
“Glauber” models are used to calculate geometric quantities in the initial state of heavy ion collisions, such as impact parameter, number of participating nucleons, and initial eccentricity. Experimental heavy-ion collaborations, in particular at RHIC and LHC, use Glauber Model calculations of various geometric observables for determination of the collision centrality. In this document, we describe the assumptions inherent to the approach, and provide an updated implementation (v2) of the Monte Carlo based Glauber Model calculation, which originally was used by the PHOBOS collaboration. The main improvement with respect to the earlier version (v1) (Alver et al., 2008) is the inclusion of Tritium, Helium-3, and Uranium, as well as the treatment of deformed nuclei and Glauber–Gribov fluctuations of the proton in p+A collisions. A users' guide (updated to reflect changes in v2) is provided for running various calculations.
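The core of any Glauber Monte Carlo is simple to sketch: sample nucleon positions in each nucleus, offset the two nuclei by an impact parameter, and count a nucleon as a participant if some nucleon of the other nucleus passes within a transverse distance set by the nucleon-nucleon cross section. The toy below uses a uniform sphere instead of a Woods-Saxon profile and invented parameter values (16 nucleons, σ_NN = 4 fm²); it is not the PHOBOS code.

```python
import random, math

def sample_nucleus(n_nucleons, radius, rng):
    # Rejection-sample positions uniformly inside a sphere -- a crude
    # stand-in for a Woods-Saxon nuclear density profile.
    pts = []
    while len(pts) < n_nucleons:
        x = rng.uniform(-radius, radius)
        y = rng.uniform(-radius, radius)
        z = rng.uniform(-radius, radius)
        if x * x + y * y + z * z <= radius * radius:
            pts.append((x, y, z))
    return pts

def n_participants(b, n=16, radius=3.0, sigma_nn=4.0, rng=None, seed=None):
    # Offset nucleus B by impact parameter b along x; a nucleon participates
    # if any nucleon of the other nucleus lies within transverse distance
    # d = sqrt(sigma_nn / pi) (the black-disk collision criterion).
    rng = rng or random.Random(seed)
    nucl_a = sample_nucleus(n, radius, rng)
    nucl_b = [(x + b, y, z) for x, y, z in sample_nucleus(n, radius, rng)]
    d2 = sigma_nn / math.pi

    def wounded(ns, other):
        return sum(
            1 for xa, ya, _ in ns
            if any((xa - xb) ** 2 + (ya - yb) ** 2 < d2 for xb, yb, _ in other)
        )

    return wounded(nucl_a, nucl_b) + wounded(nucl_b, nucl_a)
```

Averaging the participant count over many sampled events at each impact parameter reproduces the familiar centrality dependence: central collisions wound far more nucleons than peripheral ones.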
Quantum Monte Carlo simulation of spin-polarized H
Markic, L. Vranjes; Boronat, J.; Casulleras, J.
2007-02-01
The ground-state properties of spin-polarized hydrogen H↓ are obtained by means of diffusion Monte Carlo calculations. Using the most accurate ab initio H↓-H↓ interatomic potential available to date, we have studied its gas phase from the very dilute regime up to densities above the freezing point. At very small densities, the equation of state of the gas is very well described in terms of the gas parameter ρa³, with a the s-wave scattering length. The solid phase has also been studied up to high pressures. The gas-solid phase transition occurs at a pressure of 173 bar, a much higher value than suggested by previous approximate descriptions.
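Diffusion Monte Carlo itself is compact enough to sketch. The toy below runs bare DMC (no importance sampling) for a 1-D harmonic oscillator with ħ = m = ω = 1: walkers diffuse, branch with weight exp(-(V - E_ref)Δt), and the reference energy is steered to hold the population steady, so its running average estimates the ground-state energy 1/2. All parameters here are invented for the toy; the H↓ calculation above is of course far more elaborate.

```python
import random, math

def dmc_harmonic(n_walkers=500, dt=0.02, n_steps=1500, seed=5):
    # Bare diffusion Monte Carlo for V(x) = x^2 / 2 (exact E0 = 0.5).
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_walkers)]
    e_ref, sigma, trace = 0.5, math.sqrt(dt), []
    for step in range(n_steps):
        new = []
        for x in walkers:
            xp = x + rng.gauss(0.0, sigma)                  # free diffusion
            w = math.exp(-(0.5 * xp * xp - e_ref) * dt)     # branching weight
            for _ in range(int(w + rng.random())):          # stochastic rounding
                new.append(xp)
        walkers = new or [rng.gauss(0.0, 1.0)]              # extinction guard
        v_mean = sum(0.5 * x * x for x in walkers) / len(walkers)
        # Population control: nudge E_ref up when the population shrinks,
        # down when it grows; its long-time average tracks E0.
        e_ref = v_mean + math.log(n_walkers / len(walkers))
        if step >= n_steps // 2:
            trace.append(e_ref)
    return sum(trace) / len(trace)
```

Production codes add a trial wave function (importance sampling) to tame the branching noise; this bare version works only because the harmonic potential is so benign.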
Hart, S. W. D.; Maldonado, G. Ivan; Celik, Cihangir; Leal, Luiz C
2014-01-01
For many Monte Carlo codes, cross sections are generally created only at a set of predetermined temperatures. This causes an increase in error as one moves further away from these temperatures in the Monte Carlo model. This paper discusses recent progress in the SCALE Monte Carlo module KENO to create problem-dependent, Doppler-broadened cross sections. Currently, only broadening of the 1D cross sections and probability tables is addressed. The approach uses a finite-difference method to calculate the temperature-dependent cross sections for the 1D data, and a simple linear-logarithmic interpolation in the square root of temperature for the probability tables. Work is also ongoing to address broadening of the S(α,β) tables. With the current approach, the temperature-dependent cross sections are Doppler broadened before transport starts, and, for all but a few isotopes, the impact on cross section loading is negligible. Results can be compared with those obtained using multigroup libraries, as KENO currently interpolates the multigroup cross sections to determine temperature-dependent cross sections. Current results compare favorably with these expected results.
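The probability-table interpolation described above is easy to sketch: interpolate log σ linearly in √T between the two bracketing library temperatures. The temperatures and cross-section values below are invented for illustration, not real nuclear data.

```python
import math

def interp_sqrtT(T, temps, xs):
    # Linear interpolation of log(sigma) in sqrt(T) between the two
    # bracketing tabulated temperatures; exact at the table points.
    r = math.sqrt(T)
    rs = [math.sqrt(t) for t in temps]
    for i in range(len(rs) - 1):
        if rs[i] <= r <= rs[i + 1]:
            f = (r - rs[i]) / (rs[i + 1] - rs[i])
            return math.exp((1 - f) * math.log(xs[i]) + f * math.log(xs[i + 1]))
    raise ValueError("temperature outside tabulated range")
```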
Nonequilibrium candidate Monte Carlo: A new tool for efficient equilibrium simulation
Nilmeier, Jerome P.; Crooks, Gavin E.; Minh, David D. L.; Chodera, John D.
2011-11-08
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
Pilati, S.; Giorgini, S.; Sakkos, K.; Boronat, J.; Casulleras, J.
2006-10-15
By using exact path-integral Monte Carlo methods we calculate the equation of state of an interacting Bose gas as a function of temperature both below and above the superfluid transition. The universal character of the equation of state for dilute systems and low temperatures is investigated by modeling the interatomic interactions using different repulsive potentials corresponding to the same s-wave scattering length. The results obtained for the energy and the pressure are compared to the virial expansion for temperatures larger than the critical temperature. At very low temperatures we find agreement with the ground-state energy calculated using the diffusion Monte Carlo method.
Radiation doses in cone-beam breast computed tomography: A Monte Carlo simulation study
Yi, Ying; Lai, Chao-Jen; Han, Tao; Zhong, Yuncheng; Shen, Youtao; Liu, Xinming; Ge, Shuaiping; You, Zhicheng; Wang, Tianpeng; Shaw, Chris C.
2011-02-15
Purpose: In this article, we describe a method to estimate the spatial dose variation, average dose and mean glandular dose (MGD) for a real breast using Monte Carlo simulation based on cone beam breast computed tomography (CBBCT) images. We present and discuss the dose estimation results for 19 mastectomy breast specimens, 4 homogeneous breast models, 6 ellipsoidal phantoms, and 6 cylindrical phantoms. Methods: To validate the Monte Carlo method for dose estimation in CBBCT, we compared the Monte Carlo dose estimates with the thermoluminescent dosimeter measurements at various radial positions in two polycarbonate cylinders (11- and 15-cm in diameter). Cone-beam computed tomography (CBCT) images of 19 mastectomy breast specimens, obtained with a bench-top experimental scanner, were segmented and used to construct 19 structured breast models. Monte Carlo simulation of CBBCT with these models was performed and used to estimate the point doses, average doses, and mean glandular doses for unit open air exposure at the iso-center. Mass based glandularity values were computed and used to investigate their effects on the average doses as well as the mean glandular doses. Average doses for 4 homogeneous breast models were estimated and compared to those of the corresponding structured breast models to investigate the effect of tissue structures. Average doses for ellipsoidal and cylindrical digital phantoms of identical diameter and height were also estimated for various glandularity values and compared with those for the structured breast models. Results: The absorbed dose maps for structured breast models show that doses in the glandular tissue were higher than those in the nearby adipose tissue. Estimated average doses for the homogeneous breast models were almost identical to those for the structured breast models (p=1). 
Normalized average doses estimated for the ellipsoidal phantoms were similar to those for the structured breast models (root mean square (rms) percentage difference=1.7%; p=0.01), whereas those for the cylindrical phantoms were significantly lower (rms percentage difference=7.7%; p<0.01). Normalized MGDs were found to decrease with increasing glandularity. Conclusions: Our results indicate that it is sufficient to use homogeneous breast models derived from CBCT generated structured breast models to estimate the average dose. This investigation also shows that ellipsoidal digital phantoms of similar dimensions (diameter and height) and glandularity to actual breasts may be used to represent a real breast to estimate the average breast dose with Monte Carlo simulation. We have also successfully demonstrated the use of structured breast models to estimate the true MGDs and shown that the normalized MGDs decreased with the glandularity as previously reported by other researchers for CBBCT or mammography.
Study of DCX reaction on medium nuclei with Monte-Carlo Shell Model
Wu, H. C.; Gibbs, W. R.
2010-08-04
In this work, a method is introduced to calculate the DCX reaction in the framework of the Monte-Carlo Shell Model (MCSM). To facilitate the use of the zero-temperature formalism of the MCSM, the Double-Isobaric-Analog State (DIAS) is derived from the ground state by applying the isospin shifting operator. The validity of this method is tested by comparing the MCSM results to those of the SU(3) symmetry case. Application of this method to DCX on ⁵⁶Fe and ⁹³Nb is discussed.
Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo
Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.
2014-10-01
We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble-gas solid in which van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and show that the high-pressure melt curve is well described by Lindemann behavior up to at least 80 GPa, a finding in stark contrast to the DAC data.
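Thermodynamic integration, the method underlying the DFT-to-QMC correction above, can be sketched on a solvable toy: ΔF = ∫₀¹ ⟨U₁ - U₀⟩_λ dλ along a path of mixed potentials U_λ = (1-λ)U₀ + λU₁. Between two harmonic wells U₀ = x²/2 and U₁ = 2x² at β = 1 the exact answer is ln 2, which the Monte Carlo estimate below reproduces; the potentials and sample counts are chosen purely for illustration.

```python
import random, math

def ti_delta_f(n_lambda=21, n_samples=20000, seed=6):
    # Thermodynamic integration between U0 = x^2/2 and U1 = 2 x^2 at beta = 1.
    # The mixed potential is harmonic with spring constant k(lambda), so the
    # Boltzmann distribution at each lambda is an exactly sampleable Gaussian.
    rng = random.Random(seed)
    lambdas = [i / (n_lambda - 1) for i in range(n_lambda)]
    means = []
    for lam in lambdas:
        k = (1.0 - lam) * 1.0 + lam * 4.0
        sd = 1.0 / math.sqrt(k)
        acc = 0.0
        for _ in range(n_samples):
            x = rng.gauss(0.0, sd)
            acc += 1.5 * x * x          # U1 - U0 = (3/2) x^2
        means.append(acc / n_samples)
    # Trapezoidal quadrature of <U1 - U0> over lambda gives Delta F.
    return sum((means[i] + means[i + 1]) * (lambdas[i + 1] - lambdas[i]) / 2.0
               for i in range(n_lambda - 1))
```

In the melting study the two "potentials" are the DFT and QMC energy surfaces and the samples come from full simulations, but the integration structure is the same.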
Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus; Vogel, Thomas; Landau, David P
2015-01-01
We describe the study of the thermodynamics of materials using replica-exchange Wang-Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang-Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first-principles calculations. We demonstrate that our framework leads to a significant speedup without compromising accuracy or precision, and facilitates the study of much larger systems than is possible with its serial counterpart.
High order Chin actions in path integral Monte Carlo
Sakkos, K.; Casulleras, J.; Boronat, J.
2009-05-28
High order actions proposed by Chin have been used for the first time in path integral Monte Carlo simulations. Contrary to the Takahashi-Imada action, which is accurate to the fourth order only for the trace, the Chin action is fully fourth order, with the additional advantage that the leading fourth-order error coefficients are finely tunable. By optimizing two free parameters entering in the new action, we show that the time step error dependence achieved is best fitted with a sixth-order law. The computational effort per bead is increased but the total number of beads is greatly reduced, and the efficiency improvement with respect to the primitive approximation is approximately a factor of 10. The Chin action is tested in a one-dimensional harmonic oscillator, a H₂ drop, and bulk liquid ⁴He. In all cases a sixth-order law is obtained with values of the number of beads that compare well with the pair action approximation in the stringent test of superfluid ⁴He.
Quantum Monte Carlo simulation of a two-dimensional Bose gas
Pilati, S.; Boronat, J.; Casulleras, J.; Giorgini, S.
2005-02-01
The equation of state of a homogeneous two-dimensional Bose gas is calculated using quantum Monte Carlo methods. The low-density universal behavior is investigated using different interatomic model potentials, both finite ranged and strictly repulsive and zero ranged, supporting a bound state. The condensate fraction and the pair distribution function are calculated as a function of the gas parameter, ranging from the dilute to the strongly correlated regime. In the case of the zero-range pseudopotential we discuss the stability of the gaslike state for large values of the two-dimensional scattering length, and we calculate the critical density where the system becomes unstable against cluster formation.
Monte Carlo simulation of PET and SPECT imaging of ⁹⁰Y
Purpose: Yttrium-90 (⁹⁰Y) is traditionally thought of as a pure beta emitter, and is used in targeted radionuclide therapy, with imaging performed using bremsstrahlung single-photon emission computed tomography (SPECT). However, ⁹⁰Y also emits positrons through internal pair production.
Monte-Carlo particle dynamics in a variable specific impulse magnetoplasma rocket
The self-consistent mathematical model in a Variable Specific Impulse Magnetoplasma Rocket (VASIMR) is examined. Of particular importance is the effect of a magnetic nozzle in enhancing the axial momentum of the exhaust.
Monte-Carlo simulation of noise in hard X-ray Transmission Crystal Spectrometers: Identification of contributors to the background noise and shielding optimization
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
MONTE CARLO SIMULATION OF METASTABLE OXYGEN PHOTOCHEMISTRY IN COMETARY ATMOSPHERES
Bisikalo, D. V.; Shematovich, V. I. [Institute of Astronomy of the Russian Academy of Sciences, Moscow (Russian Federation)]; Gérard, J.-C.; Hubert, B. [Laboratory for Planetary and Atmospheric Physics (LPAP), University of Liège, Liège (Belgium)]; Jehin, E.; Decock, A. [Origines Cosmologiques et Astrophysiques (ORCA), University of Liège (Belgium)]; Hutsemékers, D. [Extragalactic Astrophysics and Space Observations (EASO), University of Liège (Belgium)]; Manfroid, J., E-mail: B.Hubert@ulg.ac.be [High Energy Astrophysics Group (GAPHE), University of Liège (Belgium)]
2015-01-01
Cometary atmospheres are produced by the outgassing of material, mainly H₂O, CO, and CO₂, from the nucleus of the comet under the energy input from the Sun. Subsequent photochemical processes lead to the production of other species generally absent from the nucleus, such as OH. Although all comets are different, they all have a highly rarefied atmosphere, which is an ideal environment for nonthermal photochemical processes to take place and influence the detailed state of the atmosphere. We develop a Monte Carlo model of the coma photochemistry. We compute the energy distribution functions (EDFs) of the metastable O(¹D) and O(¹S) species and obtain the red (630 nm) and green (557.7 nm) spectral line shapes of the full coma, consistent with the computed EDFs and the expansion velocity. We show that both species have severely non-Maxwellian EDFs, which result in broad spectral lines in which suprathermal broadening dominates over that due to the expansion motion. We apply our model to the atmospheres of comets C/1996 B2 (Hyakutake) and 103P/Hartley 2. The computed width of the green line, expressed in terms of speed, is lower than that of the red line. This result is comparable to previous theoretical analyses, but in disagreement with observations. We explain that the spectral line shape depends not only on the exothermicity of the photochemical production mechanisms, but also on thermalization by elastic collisions, which reduces the width of the emission line coming from the longer-lived O(¹D) level.
Berg, John M.; Veirs, D. Kirk; Vaughn, Randolph B.; Cisneros, Michael R.; Smith, Coleman A.
2000-06-01
Standard modeling approaches can produce the most likely values of the formation constants of metal-ligand complexes if a particular set of species containing the metal ion is known or assumed to exist in solution equilibrium with complexing ligands. Identifying the most likely set of species when more than one set is plausible is a more difficult problem to address quantitatively. A Monte Carlo method of data analysis is described that measures the relative abilities of different speciation models to fit optical spectra of open-shell actinide ions. The best model(s) can be identified from among a larger group of models initially judged to be plausible. The method is demonstrated by analyzing the absorption spectra of aqueous Pu(IV) titrated with nitrate ion at constant 2 molal ionic strength in aqueous perchloric acid. The best speciation model supported by the data is shown to include three Pu(IV) species with nitrate coordination numbers 0, 1, and 2. Formation constants are β₁ = 3.2 ± 0.5 and β₂ = 11.2 ± 1.2, where the uncertainties are 95% confidence limits estimated by propagating raw-data uncertainties using Monte Carlo methods. Principal component analysis independently indicates three Pu(IV) complexes in equilibrium.
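The Monte Carlo propagation of raw-data uncertainties mentioned above follows a generic recipe: perturb each datum by its assumed error, refit, and read confidence limits off the distribution of refitted parameters. The sketch below applies that recipe to a one-parameter linear fit with invented data; it is not the spectroscopic speciation analysis itself.

```python
import random

def fit_slope(xs, ys):
    # Least-squares slope through the origin: theta = sum(x*y) / sum(x*x).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def mc_confidence(xs, ys, sigma, n_rep=2000, seed=7):
    # Perturb each datum by its assumed Gaussian error, refit, and take the
    # 2.5 / 97.5 percentiles of the replicas as 95% confidence limits.
    rng = random.Random(seed)
    reps = sorted(
        fit_slope(xs, [y + rng.gauss(0.0, sigma) for y in ys])
        for _ in range(n_rep)
    )
    lo = reps[int(0.025 * n_rep)]
    hi = reps[int(0.975 * n_rep)]
    return fit_slope(xs, ys), lo, hi
```

The same idea scales to nonlinear multi-parameter fits: only the refitting step changes, so correlated and non-Gaussian parameter uncertainties come out automatically.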
Massively parallel Monte Carlo for many-particle simulations on GPUs
Anderson, Joshua A.; Jankowski, Eric [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Grubb, Thomas L. [Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Engel, Michael [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Glotzer, Sharon C., E-mail: sglotzer@umich.edu [Department of Chemical Engineering and Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]
2013-12-01
Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
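The checkerboard idea behind this detailed-balance-preserving parallel scheme can be sketched serially: cells of the same parity class do not interact as long as no accepted move carries a disk out of its cell, so each class could be dispatched to concurrent threads. This is a toy sketch with a brute-force periodic overlap check (the GPU implementation uses cell lists); all names and parameters are illustrative, not the paper's code.

```python
import random

def overlaps(p, others, sigma, L):
    """Periodic hard-disk overlap test of center p against a list of centers."""
    for q in others:
        dx = (p[0] - q[0] + L / 2) % L - L / 2
        dy = (p[1] - q[1] + L / 2) % L - L / 2
        if dx * dx + dy * dy < sigma * sigma:
            return True
    return False

def checkerboard_sweep(pos, L, ncell, sigma, delta, rng):
    """One checkerboard sweep for hard disks of diameter sigma in an L x L
    periodic box divided into ncell x ncell cells. The four parity classes
    (even/odd in x and y) are updated one after another; within a class,
    moves that would leave the active cell are rejected, which keeps cells
    of the same class independent and preserves detailed balance."""
    cw = L / ncell
    for px in (0, 1):
        for py in (0, 1):
            for i, (x, y) in enumerate(pos):
                cx, cy = int(x // cw), int(y // cw)
                if cx % 2 != px or cy % 2 != py:
                    continue
                nx = x + rng.uniform(-delta, delta)
                ny = y + rng.uniform(-delta, delta)
                # reject moves leaving the cell (keeps the class independent)
                if int(nx // cw) != cx or int(ny // cw) != cy:
                    continue
                if not overlaps((nx, ny), pos[:i] + pos[i + 1:], sigma, L):
                    pos[i] = (nx, ny)
    return pos
```

In the parallel version each parity class becomes one GPU kernel launch; the serial loop above keeps the same acceptance rule.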
Surface Structures of Cubo-octahedral Pt-Mo Catalyst Nanoparticles from Monte Carlo Simulations
Wang, Guofeng; Van Hove, M.A.; Ross, P.N.; Baskes, M.I.
2005-03-31
The surface structures of cubo-octahedral Pt-Mo nanoparticles have been investigated using the Monte Carlo method and modified embedded atom method potentials that we developed for Pt-Mo alloys. The cubo-octahedral Pt-Mo nanoparticles are constructed with disordered fcc configurations, with sizes from 2.5 to 5.0 nm, and with Pt concentrations from 60 to 90 at. percent. The equilibrium Pt-Mo nanoparticle configurations were generated through Monte Carlo simulations allowing both atomic displacements and element exchanges at 600 K. We predict that the Pt atoms weakly segregate to the surfaces of such nanoparticles. The Pt concentrations in the surface are calculated to be 5 to 14 at. percent higher than the Pt concentrations of the nanoparticles. Moreover, the Pt atoms preferentially segregate to the facet sites of the surface, while the Pt and Mo atoms tend to alternate along the edges and vertices of these nanoparticles. We found that decreasing the size or increasing the Pt concentration leads to higher Pt concentrations but fewer Pt-Mo pairs in the Pt-Mo nanoparticle surfaces.
Monte Carlo analysis of neutron slowing-down-time spectrometer for fast reactor spent fuel assay
Chen, Jianwei; Lineberry, Michael
2007-07-01
Using the neutron slowing-down-time method as a nondestructive assay tool to improve input material accountancy for fast reactor spent fuel reprocessing is under investigation at Idaho State University. Monte Carlo analyses were performed to simulate the neutron slowing down process in different slowing-down spectrometers, namely lead and graphite, and to determine their main parameters. {sup 238}U threshold fission chamber response was simulated in the Monte Carlo model to represent the spent fuel assay signals; the signature (fission/time) signals of {sup 235}U, {sup 239}Pu, and {sup 241}Pu were simulated as a convolution of fission cross sections and neutron flux inside the spent fuel. {sup 238}U detector signals were analyzed using a linear regression model based on the signatures of fissile materials in the spent fuel to determine weight fractions of fissile materials in the Advanced Burner Test Reactor spent fuel. The preliminary results show that even though the lead spectrometer showed better assay performance than graphite, the graphite spectrometer could accurately determine weight fractions of {sup 239}Pu and {sup 241}Pu provided a proper assay energy range is chosen. (authors)
Energy density matrix formalism for interacting quantum systems: a quantum Monte Carlo study
Krogel, Jaron T; Kim, Jeongnim; Reboredo, Fernando A
2014-01-01
We develop an energy density matrix that parallels the one-body reduced density matrix (1RDM) for many-body quantum systems. Just as the density matrix gives access to the number density and occupation numbers, the energy density matrix yields the energy density and orbital occupation energies. The eigenvectors of the matrix provide a natural orbital partitioning of the energy density while the eigenvalues comprise a single particle energy spectrum obeying a total energy sum rule. For mean-field systems the energy density matrix recovers the exact spectrum. When correlation becomes important, the occupation energies resemble quasiparticle energies in some respects. We explore the occupation energy spectrum for the finite 3D homogeneous electron gas in the metallic regime and an isolated oxygen atom with ground state quantum Monte Carlo techniques implemented in the QMCPACK simulation code. The occupation energy spectrum for the homogeneous electron gas can be described by an effective mass below the Fermi level. Above the Fermi level evanescent behavior in the occupation energies is observed in similar fashion to the occupation numbers of the 1RDM. A direct comparison with total energy differences demonstrates a quantitative connection between the occupation energies and electron addition and removal energies for the electron gas. For the oxygen atom, the association between the ground state occupation energies and particle addition and removal energies becomes only qualitative. The energy density matrix provides a new avenue for describing energetics with quantum Monte Carlo methods which have traditionally been limited to total energies.
Billion-atom synchronous parallel kinetic Monte Carlo simulations of critical 3D Ising systems
Martinez, E.; Monasterio, P.R.; Marian, J.
2011-02-20
An extension of the synchronous parallel kinetic Monte Carlo (spkMC) algorithm developed by Martinez et al. [J. Comp. Phys. 227 (2008) 3804] to discrete lattices is presented. The method solves the master equation synchronously by recourse to null events that keep all processors' time clocks current in a global sense. Boundary conflicts are resolved by adopting a chessboard decomposition into non-interacting sublattices. We find that the bias introduced by the spatial correlations attendant to the sublattice decomposition is within the standard deviation of serial calculations, which confirms the statistical validity of our algorithm. We have analyzed the parallel efficiency of spkMC and find that it scales consistently with problem size and sublattice partition. We apply the method to the calculation of scale-dependent critical exponents in billion-atom 3D Ising systems, with very good agreement with state-of-the-art multispin simulations.
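The null-event device that keeps all processors' clocks synchronized can be illustrated on a single walker: every clock tick is drawn from one global rate r_max, and a tick is a real event only with probability (sum of actual rates)/r_max. This is a minimal serial sketch of that idea, not the spkMC algorithm itself; the function name and rate values are illustrative.

```python
import random

def null_event_kmc(rates, r_max, t_end, rng):
    """Null-event kinetic Monte Carlo on a fixed set of event rates.
    Waiting times are drawn from the single global rate r_max >= sum(rates);
    with probability sum(rates)/r_max a real event fires (chosen in
    proportion to its rate), otherwise a 'null' event advances the clock
    but changes nothing. Because every walker would tick at the same rate
    r_max, parallel clocks stay current in a global sense. Returns the
    number of times each event fired up to time t_end."""
    total = sum(rates)
    assert r_max >= total
    t, counts = 0.0, [0] * len(rates)
    while True:
        t += rng.expovariate(r_max)     # global clock tick
        if t > t_end:
            break
        if rng.random() < total / r_max:
            # real event: pick one in proportion to its rate
            u = rng.random() * total
            acc = 0.0
            for i, r in enumerate(rates):
                acc += r
                if u < acc:
                    counts[i] += 1
                    break
    return counts
```

The statistics are identical to ordinary kMC: each event i still fires at rate i, since thinning a Poisson process of rate r_max by probability r_i/r_max yields a Poisson process of rate r_i.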
Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.
2014-10-01
We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and that the high-pressure melt curve is well described by a Lindemann behavior up to at least 80 GPa, a finding in stark contrast to DAC data.
A Proposal for a Standard Interface Between Monte Carlo Tools And One-Loop Programs
Binoth, T.; Boudjema, F.; Dissertori, G.; Lazopoulos, A.; Denner, A.; Dittmaier, S.; Frederix, R.; Greiner, N.; Hoeche, Stefan; Giele, W.; Skands, P.; Winter, J.; Gleisberg, T.; Archibald, J.; Heinrich, G.; Krauss, F.; Maitre, D.; Huber, M.; Huston, J.; Kauer, N.; Maltoni, F.; /Louvain U., CP3 /Milan Bicocca U. /INFN, Turin /Turin U. /Granada U., Theor. Phys. Astrophys. /CERN /NIKHEF, Amsterdam /Heidelberg U. /Oxford U., Theor. Phys.
2011-11-11
Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently been undergoing enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarises the discussion of the next-to-leading order multi-leg (NLM) working group on this issue which has been taking place during the workshop on Physics at TeV Colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs.
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations, and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley-Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
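The MLMC telescoping estimator referred to above can be sketched on a standard toy problem: estimating E[S(T)] for geometric Brownian motion with Euler discretization, where each correction level couples a fine path and a coarse path through shared Brownian increments. This is a generic MLMC illustration under assumed parameters, not the streamline solver of the study.

```python
import math
import random

def gbm_euler(level, rng, T=1.0, s0=1.0, mu=0.05, sig=0.2):
    """One coupled pair of Euler paths for geometric Brownian motion: a fine
    path with 2**level steps and a coarse path with half as many, driven by
    the same Brownian increments. Returns (fine, coarse) terminal values;
    at level 0 the coarse value is None."""
    n = 2 ** level
    h = T / n
    sf = sc = s0
    inc = 0.0
    for k in range(n):
        dw = rng.gauss(0.0, math.sqrt(h))
        sf += mu * sf * h + sig * sf * dw
        inc += dw
        if level > 0 and k % 2 == 1:   # two fine steps = one coarse step
            sc += mu * sc * (2 * h) + sig * sc * inc
            inc = 0.0
    return sf, (sc if level > 0 else None)

def mlmc(levels, samples_per_level, rng):
    """Multilevel Monte Carlo estimate of E[S(T)]: the coarse-level mean
    plus telescoping corrections E[P_l - P_{l-1}] from coupled pairs.
    Because the corrections have small variance, few samples are needed on
    the expensive fine levels."""
    est = 0.0
    for l in range(levels + 1):
        acc = 0.0
        for _ in range(samples_per_level[l]):
            fine, coarse = gbm_euler(l, rng)
            acc += fine if coarse is None else fine - coarse
        est += acc / samples_per_level[l]
    return est
```

The exact answer here is s0*exp(mu*T); the sample counts decrease with level, which is where the MLMC speedup over plain MC comes from.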
Çatlı, Serap; Tanır, Güneş
2013-10-01
The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS) at 6 and 18 MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and ionization chamber measurements. The present findings show that if high-Z material is used in prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased back scattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method for hip prostheses. The findings show that for accurate dose calculation the Monte Carlo-based TPS should be used in patients with hip prostheses.
Monte Carlo Implementation Of Up- Or Down-Scattering Due To Collisions With Material At Finite Temperature
Quaglioni, S.; Beck, B. R.
2011-06-03
Technical Report LLNL-TR-488174, OSTI Identifier 1113914, DOE Contract W-7405-ENG-48.
Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files
Zori 1.0: A Parallel Quantum Monte Carlo Electronic Structure Package
Aspuru-Guzik, Alan; Salomon-Ferrer, Romelia; Austin, Brian; Perusquia-Flores, Raul; Griffin, Mary A.; Oliva, Ricardo A.; Skinner, David; Domin, Dominik; Lester Jr., William A.
No abstract prepared.
Density-functional Monte-Carlo simulation of CuZn order-disorder transition
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Khan, Suffian N.; Eisenbach, Markus
2016-01-25
We perform a Wang-Landau Monte Carlo simulation of a Cu0.5Zn0.5 order-disorder transition using 250 atoms and pairwise atom swaps inside a 5 x 5 x 5 BCC supercell. Each time step uses energies calculated from density functional theory (DFT) via the all-electron Korringa-Kohn-Rostoker method and self-consistent potentials. Here we find CuZn undergoes a transition from a disordered A2 to an ordered B2 structure, as observed in experiment. Our calculated transition temperature is near 870 K, comparing favorably to the known experimental peak at 750 K. We also plot the entropy, temperature, specific heat, and short-range order as a function of internal energy.
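The Wang-Landau flat-histogram walk used above (there with DFT energies) can be sketched on a small Ising model, where the exact density of states is known at the band edges. This is a generic textbook illustration, not the paper's code; the lattice size, flatness criterion, and stopping threshold are illustrative choices.

```python
import math
import random

def wang_landau_ising(n=4, lnf_final=1e-2, flat=0.8, seed=0):
    """Wang-Landau estimate of ln g(E) for an n x n periodic Ising model.
    The walker accepts E -> E' with probability min(1, g(E)/g(E')), which
    drives a flat histogram in energy; the refinement increment ln f is
    halved every time the visit histogram is sufficiently flat."""
    rng = random.Random(seed)
    s = [[1] * n for _ in range(n)]
    e = -2 * n * n                      # energy of the all-up state
    lng, hist = {e: 0.0}, {}
    lnf = 1.0
    while lnf > lnf_final:
        for _ in range(2000 * n * n):
            i, j = rng.randrange(n), rng.randrange(n)
            de = 2 * s[i][j] * (s[(i + 1) % n][j] + s[(i - 1) % n][j]
                                + s[i][(j + 1) % n] + s[i][(j - 1) % n])
            dlg = lng.get(e, 0.0) - lng.get(e + de, 0.0)
            if dlg >= 0 or rng.random() < math.exp(dlg):
                s[i][j] = -s[i][j]
                e += de
            lng[e] = lng.get(e, 0.0) + lnf   # penalize the visited energy
            hist[e] = hist.get(e, 0) + 1
        if min(hist.values()) > flat * (sum(hist.values()) / len(hist)):
            lnf /= 2.0
            hist = {}
    return lng
```

For the 4 x 4 ferromagnet both band edges (E = -32, the two aligned states; E = +32, the two checkerboard states) have degeneracy 2, so their estimated ln g values should agree closely.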
Direct simulation Monte Carlo investigation of the Richtmyer-Meshkov instability.
Gallis, Michail A.; Koehler, Timothy P.; Torczynski, John R.; Plimpton, Steven J.
2015-08-14
The Richtmyer-Meshkov instability (RMI) is investigated using the Direct Simulation Monte Carlo (DSMC) method of molecular gas dynamics. Due to the inherent statistical noise and the significant computational requirements, DSMC is hardly ever applied to hydrodynamic flows. Here, DSMC RMI simulations are performed to quantify the shock-driven growth of a single-mode perturbation on the interface between two atmospheric-pressure monatomic gases prior to re-shocking as a function of the Atwood and Mach numbers. The DSMC results qualitatively reproduce all features of the RMI and are in reasonable quantitative agreement with existing theoretical and empirical models. The DSMC simulations indicate that there is a universal behavior, consistent with previous work in this field that RMI growth follows.
Sunny, E. E.; Martin, W. R. [University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor MI 48109 (United States)
2013-07-01
Current Monte Carlo codes use one of three models to model neutron scattering in the epithermal energy range: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S({alpha},{beta}) model, depending on the neutron energy and the specific Monte Carlo code. The free gas scattering model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that using the free gas scattering model in the vicinity of the resonances in the lower epithermal range can under-predict resonance absorption due to the up-scattering phenomenon. Existing methods all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame. In this paper, we will present a new sampling methodology that (1) accounts for the energy-dependent scattering cross sections in the collision analysis and (2) acts in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on-the-fly during the random walk of the neutron. Results for criticality studies on fuel pin and fuel assembly calculations using these methods showed very close comparison to results using the reference Doppler-broadened rejection correction (DBRC) scheme. (authors)
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
Arampatzis, Georgios [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003]; Katsoulakis, Markos A.
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated, coupled stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach.
We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, as supplementary MATLAB source code.
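The baseline the abstract compares against, Common Random Numbers (CRN), can be illustrated with a finite-difference sensitivity on a toy adsorption-desorption model: sharing one random stream between the perturbed and unperturbed runs makes their noise cancel in the difference. The model, rates, and names below are illustrative, not the paper's examples.

```python
import random

def coverage(theta, rng, n=200, picks=1000):
    """Toy adsorption-desorption lattice: a random site fills with
    probability theta when empty and empties with probability 0.3 when
    occupied. Returns the final fraction of occupied sites."""
    occ = [0] * n
    for _ in range(picks):
        i = rng.randrange(n)
        u = rng.random()
        if occ[i] == 0:
            if u < theta:
                occ[i] = 1
        elif u < 0.3:
            occ[i] = 0
    return sum(occ) / n

def fd_samples(theta, h, m, seed, coupled):
    """m finite-difference samples of d<coverage>/d(theta). With
    coupled=True both runs of each pair share one random stream (Common
    Random Numbers); otherwise the streams are independent, which inflates
    the variance of the difference."""
    out = []
    for k in range(m):
        if coupled:
            a = coverage(theta + h, random.Random(seed + k))
            b = coverage(theta, random.Random(seed + k))
        else:
            a = coverage(theta + h, random.Random(seed + 2 * k))
            b = coverage(theta, random.Random(seed + 2 * k + 1))
        out.append((a - b) / h)
    return out
```

The goal-oriented couplings of the paper go further than CRN by tailoring the coupled-chain rates to the observable, but the variance-reduction mechanism being improved is the one shown here.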
Cluster expansion modeling and Monte Carlo simulation of alnico 5–7 permanent magnets
Nguyen, Manh Cuong; Zhao, Xin; Wang, Cai -Zhuang; Ho, Kai -Ming
2015-03-05
The concerns about the supply and resource of rare earth (RE) metals have generated a lot of interest in searching for high-performance RE-free permanent magnets. Alnico alloys are traditional non-RE permanent magnets and have received much attention recently due to their good performance at high temperature. In this paper, we develop an accurate and efficient cluster expansion energy model for alnico 5–7. Monte Carlo simulations using the cluster expansion method are performed to investigate the structure of alnico 5–7 at atomistic and nano scales. The alnico 5–7 master alloy is found to decompose into FeCo-rich and NiAl-rich phases at low temperature. The boundary between these two phases is quite sharp (~2 nm) for a wide range of temperature. The compositions of the main constituents in these two phases become higher when the temperature gets lower. Both FeCo-rich and NiAl-rich phases are in B2 ordering with Fe and Al on α-site and Ni and Co on β-site. The degree of order of the NiAl-rich phase is much higher than that of the FeCo-rich phase. In addition, a small magnetic moment is also observed in the NiAl-rich phase, but the moment reduces as the temperature is lowered, implying that the magnetic properties of alnico 5–7 could be improved by lowering the annealing temperature to diminish the magnetism in the NiAl-rich phase. Furthermore, the results from our Monte Carlo simulations are consistent with available experimental results.
SciThur AM: YIS - 04: Gold Nanoparticle Enhanced Arc Radiotherapy: A Monte Carlo Feasibility Study
Koger, B; Kirkby, C
2014-08-15
Introduction: The use of gold nanoparticles (GNPs) in radiotherapy has shown promise for therapeutic enhancement. In this study, we explore the feasibility of enhancing radiotherapy with GNPs in an arc-therapy context. We use Monte Carlo simulations to quantify the macroscopic dose-enhancement ratio (DER) and tumour to normal tissue ratio (TNTR) as functions of photon energy over various tumour and body geometries. Methods: GNP-enhanced arc radiotherapy (GEART) was simulated using the PENELOPE Monte Carlo code and penEasy main program. We simulated 360° arc-therapy with monoenergetic photon energies of 50–1000 keV and several clinical spectra used to treat a spherical tumour containing uniformly distributed GNPs in a cylindrical tissue phantom. Various geometries were used to simulate different tumour sizes and depths. Voxel dose was used to calculate DERs and TNTRs. Inhomogeneity effects were examined through skull dose in brain tumour treatment simulations. Results: Below 100 keV, DERs greater than 2.0 were observed. Compared to 6 MV, tumour dose at low energies was more conformal, with lower normal tissue dose and higher TNTRs. Both the DER and TNTR increased with increasing cylinder radius and decreasing tumour radius. The inclusion of bone showed excellent tumour conformality at low energies, though with an increase in skull dose (40% of tumour dose with 100 keV compared to 25% with 6 MV). Conclusions: Even in the presence of inhomogeneities, our results show promise for the treatment of deep-seated tumours with low-energy GEART, with greater tumour dose conformality and lower normal tissue dose than 6 MV.
SU-E-T-578: MCEBRT, A Monte Carlo Code for External Beam Treatment Plan Verifications
Chibani, O; Ma, C; Eldib, A
2014-06-01
Purpose: Present a new Monte Carlo code (MCEBRT) for patient-specific dose calculations in external beam radiotherapy. The code MLC model is benchmarked and real patient plans are re-calculated using MCEBRT and compared with commercial TPS. Methods: MCEBRT is based on the GEPTS system (Med. Phys. 29 (2002) 835-846). Phase space data generated for Varian linac photon beams (6-15 MV) are used as source term. MCEBRT uses a realistic MLC model (tongue and groove, rounded ends). Patient CT and DICOM RT files are used to generate a 3D patient phantom and simulate the treatment configuration (gantry, collimator and couch angles; jaw positions; MLC sequences; MUs). MCEBRT dose distributions and DVHs are compared with those from TPS in an absolute way (Gy). Results: Calculations based on the developed MLC model closely match transmission measurements (pin-point ionization chamber at selected positions and film for lateral dose profile). See Fig.1. Dose calculations for two clinical cases (whole brain irradiation with opposed beams and lung case with eight fields) are carried out and outcomes are compared with the Eclipse AAA algorithm. Good agreement is observed for the brain case (Figs 2-3) except at the surface where MCEBRT dose can be higher by 20%. This is due to better modeling of electron contamination by MCEBRT. For the lung case an overall good agreement (91% gamma index passing rate with 3%/3mm DTA criterion) is observed (Fig.4) but dose in lung can be over-estimated by up to 10% by AAA (Fig.5). CTV and PTV DVHs from TPS and MCEBRT are nevertheless close (Fig.6). Conclusion: A new Monte Carlo code is developed for plan verification. Contrary to phantom-based QA measurements, MCEBRT simulates the exact patient geometry and tissue composition. MCEBRT can be used as an extra verification layer for plans where surface dose and tissue heterogeneity are an issue.
SU-E-T-277: Raystation Electron Monte Carlo Commissioning and Clinical Implementation
Allen, C; Sansourekidou, P; Pavord, D
2014-06-01
Purpose: To evaluate the Raystation v4.0 Electron Monte Carlo algorithm for an Elekta Infinity linear accelerator and commission for clinical use. Methods: A total of 199 tests were performed (75 Export and Documentation, 20 PDD, 30 Profiles, 4 Obliquity, 10 Inhomogeneity, 55 MU Accuracy, and 5 Grid and Particle History). Export and documentation tests were performed with respect to MOSAIQ (Elekta AB) and RadCalc (Lifeline Software Inc). Mechanical jaw parameters and cutout magnifications were verified. PDD and profiles for open cones and cutouts were extracted and compared with water tank measurements. Obliquity and inhomogeneity for bone and air calculations were compared to film dosimetry. MU calculations for open cones and cutouts were performed and compared to both RadCalc and simple hand calculations. Grid size and particle histories were evaluated per energy for statistical uncertainty performance. Acceptability was categorized as follows: performs as expected, negligible impact on workflow, marginal impact, critical impact or safety concern, and catastrophic impact or safety concern. Results: Overall results are: 88.8% perform as expected, 10.2% negligible, 2.0% marginal, 0% critical and 0% catastrophic. Results per test category are as follows: Export and Documentation: 100% perform as expected, PDD: 100% perform as expected, Profiles: 66.7% perform as expected, 33.3% negligible, Obliquity: 100% marginal, Inhomogeneity: 50% perform as expected, 50% negligible, MU Accuracy: 100% perform as expected, Grid and particle histories: 100% negligible. To achieve distributions with satisfactory smoothness level, 5,000,000 particle histories were used. Calculation time was approximately 1 hour. Conclusion: Raystation electron Monte Carlo is acceptable for clinical use. All of the issues encountered have acceptable workarounds. Known issues were reported to RaySearch and will be resolved in upcoming releases.
SU-E-T-239: Monte Carlo Modelling of SMC Proton Nozzles Using TOPAS
Chung, K; Kim, J; Shin, J; Han, Y; Ju, S; Hong, C; Kim, D; Kim, H; Shin, E; Ahn, S; Chung, S; Choi, D
2014-06-01
Purpose: To expedite and cross-check the commissioning of the proton therapy nozzles at Samsung Medical Center using TOPAS. Methods: We have two different types of nozzles at Samsung Medical Center (SMC), a multi-purpose nozzle and a pencil beam scanning dedicated nozzle. Both nozzles have been modelled in Monte Carlo simulation by using TOPAS based on the vendor-provided geometry. The multi-purpose nozzle is mainly composed of wobbling magnets, scatterers, ridge filters and multi-leaf collimators (MLC). Including patient specific apertures and compensators, all the parts of the nozzle have been implemented in TOPAS following the geometry information from the vendor. The dedicated scanning nozzle has a simpler structure than the multi-purpose nozzle, with a vacuum pipe at the downstream end of the nozzle. A simple water tank volume has been implemented to measure the dosimetric characteristics of proton beams from the nozzles. Results: We have simulated the two proton beam nozzles at SMC. Two different ridge filters have been tested for the spread-out Bragg peak (SOBP) generation of wobbling mode in the multi-purpose nozzle. The spot sizes and lateral penumbra in two nozzles have been simulated and analyzed using a double Gaussian model. Using parallel geometry, both the depth dose curve and dose profile have been measured simultaneously. Conclusion: The proton therapy nozzles at SMC have been successfully modelled in Monte Carlo simulation using TOPAS. We will perform a validation with measured base data and then use the MC simulation to interpolate/extrapolate the measured data. We believe it will expedite the commissioning process of the proton therapy nozzles at SMC.
3D Direct Simulation Monte Carlo Code Which Solves for Geometries
Energy Science and Technology Software Center (OSTI)
1998-01-13
Pegasus is a 3D Direct Simulation Monte Carlo code that solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus, as well as a real vacuum pump model. The code includes multiple-species transport.
Use of single scatter electron monte carlo transport for medical radiation sciences
Svatos, Michelle M. (Oakland, CA)
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation
Meyer, Arnd
2010-02-10
A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.
Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem
Du, X.; Liu, T.; Ji, W.; Xu, X. G.; Brown, F. B.
2013-07-01
Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', meaning that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely affects their performance. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER (Accelerated Radiation-transport Computations in Heterogeneous Environments), was used for this study. ARCHER facilitates the development and testing of an MC code based on the vectorized MC algorithm implemented on GPUs using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER-GPU code was designed to solve a neutron eigenvalue problem and was tested on an NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances the warp execution efficiency, the overall simulation speed is ten times slower than the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large number of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed.
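The history-based versus event-based distinction can be made concrete with a toy 1D rod transport problem. Both estimators below solve the same problem, but the second advances the whole surviving batch one collision at a time, which is the loop structure that maps onto GPU warps. The cross sections and geometry are invented illustration values:

```python
import numpy as np

SIGMA_T = 1.0   # total macroscopic cross section (1/cm); invented
THICK = 1.0     # slab thickness (cm); invented
P_SCAT = 0.3    # scattering probability per collision (1D rod model)
N = 100_000

def history_based(seed):
    """Track one particle at a time to completion (the branch-divergent style)."""
    rng = np.random.default_rng(seed)
    transmitted = 0
    for _ in range(N):
        x, mu = 0.0, 1.0
        while True:
            x += mu * rng.exponential(1.0 / SIGMA_T)
            if x >= THICK:
                transmitted += 1
                break
            if x < 0.0:
                break                      # leaked backwards
            if rng.random() < P_SCAT:
                mu = 1.0 if rng.random() < 0.5 else -1.0  # isotropic 1D scatter
            else:
                break                      # absorbed
    return transmitted / N

def event_based(seed):
    """Advance the whole surviving batch one collision per step (vectorized)."""
    rng = np.random.default_rng(seed)
    x, mu = np.zeros(N), np.ones(N)
    transmitted = 0
    while x.size:
        x = x + mu * rng.exponential(1.0 / SIGMA_T, x.size)
        transmitted += np.count_nonzero(x >= THICK)
        alive = (x > 0.0) & (x < THICK) & (rng.random(x.size) < P_SCAT)
        x = x[alive]
        mu = np.where(rng.random(x.size) < 0.5, 1.0, -1.0)
    return transmitted / N

t_hist, t_evt = history_based(1), event_based(2)
print(t_hist, t_evt)
```

On a CPU the two loops give statistically identical answers; the point of the vectorized form is that every lane in a batch executes the same event kernel, which is exactly the divergence-avoidance idea ARCHER tests on GPUs.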
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Tringe, J. W.; Ileri, N.; Levie, H. W.; Stroeve, P.; Ustach, V.; Faller, R.; Renaud, P.
2015-08-01
We use Molecular Dynamics and Monte Carlo simulations to examine molecular transport phenomena in nanochannels, explaining the four orders of magnitude difference in wheat germ agglutinin (WGA) protein diffusion rates observed by fluorescence correlation spectroscopy (FCS) and by direct imaging of fluorescently labeled proteins. We first use the ESPResSo Molecular Dynamics code to estimate the surface transport distance for neutral and charged proteins. We then employ a Monte Carlo model to calculate the paths of protein molecules on surfaces and in the bulk liquid transport medium. Our results show that the transport characteristics depend strongly on the degree of molecular surface coverage. Atomic force microscope characterization of surfaces exposed to WGA proteins for 1000 s shows large protein aggregates consistent with the predicted coverage. These calculations and experiments provide useful insight into the details of molecular motion in confined geometries.
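The surface/bulk Monte Carlo picture can be sketched as a two-state random walk in which a particle alternates between slow surface diffusion and fast bulk diffusion. All rates below are invented illustration values, not WGA parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
D_BULK, D_SURF = 1.0, 1e-2   # toy diffusivities (arbitrary units); invented
P_ADS, P_DES = 0.2, 0.05     # per-step adsorption/desorption probabilities; invented
N, STEPS, DT = 5000, 2000, 1.0

x = np.zeros(N)                       # axial positions of N walkers
on_surface = np.zeros(N, dtype=bool)  # start everyone in the bulk
for _ in range(STEPS):
    D = np.where(on_surface, D_SURF, D_BULK)
    x += rng.normal(0.0, np.sqrt(2 * D * DT))        # diffusive step per state
    flip = rng.random(N) < np.where(on_surface, P_DES, P_ADS)
    on_surface ^= flip                               # adsorb or desorb

# Effective diffusivity from the mean squared displacement.
d_eff = np.mean(x**2) / (2 * STEPS * DT)
print(D_SURF < d_eff < D_BULK)
```

With these toy rates the walkers spend about 80% of their time adsorbed, so the effective diffusivity falls far below the bulk value, the same mechanism by which surface coverage suppresses the apparent transport rate in the study above.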
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
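The weight-window mechanics underlying CADIS-style variance reduction can be sketched generically: particles above the window are split, particles below it play Russian roulette, and both operations preserve the expected weight, which is what lets a weight-window map change variance without biasing the tally. The window bounds below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

def apply_weight_window(w, lo, hi, survival):
    """Split heavy particles, roulette light ones; returns surviving weights."""
    if w > hi:                       # split into n copies inside the window
        n = int(np.ceil(w / hi))
        return [w / n] * n
    if w < lo:                       # roulette: survive with probability w/survival
        return [survival] if rng.random() < w / survival else []
    return [w]

# Expected-weight check on many random particles (toy window [0.5, 2.0]).
total_in, total_out = 0.0, 0.0
for _ in range(100_000):
    w = rng.exponential(1.0)
    total_in += w
    total_out += sum(apply_weight_window(w, 0.5, 2.0, survival=1.0))
print(abs(total_out / total_in - 1.0) < 0.02)
```

A full CADIS implementation sets `lo`/`hi` per space-energy cell from an adjoint deterministic solution; the coarsening algorithm described above simply stores this map on a coarser mesh than the deterministic solve used.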
Fully Differential Monte-Carlo Generator Dedicated to TMDs and Bessel-Weighted Asymmetries
Aghasyan, Mher M.; Avakian, Harut A.
2013-10-01
We present studies of double longitudinal spin asymmetries in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator, which includes quark intrinsic transverse momentum within the generalized parton model based on the fully differential cross section for the process. Additionally, we apply Bessel-weighting to the simulated events to extract transverse momentum dependent parton distribution functions and also discuss possible uncertainties due to kinematic correlation effects.
Posters Monte Carlo Simulation of Longwave Fluxes Through Broken Scattering Cloud Fields
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Posters: Monte Carlo Simulation of Longwave Fluxes Through Broken Scattering Cloud Fields. E. E. Takara and R. G. Ellingson, University of Maryland, College Park, Maryland. To simplify the analysis, we made several assumptions: the clouds were cuboidal; they were all identically sized and shaped; and they had constant optical properties. Results and Discussion: The model was run for a set of cloud fields with clouds of varying optical thickness and scattering albedo. The predicted effective cloud
MO-G-BRF-09: Investigating Magnetic Field Dose Effects in Mice: A Monte Carlo Study
Rubinstein, A; Guindani, M; Followill, D; Melancon, A; Hazle, J; Court, L
2014-06-15
Purpose: In MRI-linac treatments, radiation dose distributions are affected by magnetic fields, especially at high-density/low-density interfaces. The radiobiological consequences of magnetic field dose effects are presently unknown; therefore, preclinical studies are needed to ensure the safe clinical use of MRI-linacs. This study investigates the optimal combination of beam energy and magnetic field strength needed for preclinical murine studies. Methods: The Monte Carlo code MCNP6 was used to simulate the effects of a magnetic field when irradiating a mouse-sized lung phantom with a 1.0 cm × 1.0 cm photon beam. Magnetic field effects were examined using various beam energies (225 kVp, 662 keV [Cs-137], and 1.25 MeV [Co-60]) and magnetic field strengths (0.75 T, 1.5 T, and 3 T). The resulting dose distributions were compared to Monte Carlo results for humans with various field sizes and patient geometries using a 6 MV/1.5 T MRI-linac. Results: In human simulations, the addition of a 1.5 T magnetic field caused an average dose increase of 49% (range: 36% to 60%) to lung at the soft tissue-to-lung interface and an average dose decrease of 30% (range: 25% to 36%) at the lung-to-soft tissue interface. In mouse simulations, the magnetic fields had no effect on the 225 kVp dose distribution. The dose increases for the Cs-137 beam were 12%, 33%, and 49% for 0.75 T, 1.5 T, and 3.0 T magnetic fields, respectively, while the dose decreases were 7%, 23%, and 33%. For the Co-60 beam, the dose increases were 14%, 45%, and 41%, and the dose decreases were 18%, 35%, and 35%. Conclusion: The magnetic field dose effects observed in mouse phantoms using a Co-60 beam with 1.5 T or 3 T fields and a Cs-137 beam with a 3 T field compare well with those seen in simulated human treatments with an MRI-linac. These irradiator/magnet combinations are suitable for preclinical studies investigating potential biological effects of delivering radiation therapy in the presence of a magnetic field. Partially funded by Elekta.
Monte Carlo simulation based study of a proposed multileaf collimator for a telecobalt machine
Sahani, G.; Dash Sharma, P. K.; Hussain, S. A.; Dutt Sharma, Sunil; Sharma, D. N.
2013-02-15
Purpose: The objective of the present work was to propose a design of a secondary multileaf collimator (MLC) for a telecobalt machine and optimize its design features through Monte Carlo simulation. Methods: The proposed MLC design consists of 72 leaves (36 leaf pairs) with additional jaws perpendicular to leaf motion, capable of shaping a maximum square field size of 35 × 35 cm². The projected widths at isocenter of each of the central 34 leaf pairs and the 2 peripheral leaf pairs are 10 and 5 mm, respectively. The ends of the leaves and the x-jaws were optimized to obtain acceptable values of dosimetric and leakage parameters. The Monte Carlo N-Particle code was used for generating beam profiles and depth dose curves and estimating the leakage radiation through the MLC. A water phantom of dimensions 50 × 50 × 40 cm³ with an array of voxels (4 × 0.3 × 0.6 cm = 0.72 cm³ each) was used for the study of the dosimetric and leakage characteristics of the MLC. Output files generated for beam profiles were exported to the PTW radiation field analyzer software through locally developed software in order to evaluate radiation field width, beam flatness, symmetry, and beam penumbra. Results: The optimized version of the MLC can define radiation fields of up to 35 × 35 cm² within the prescribed tolerance value of 2 mm. The flatness and symmetry were found to be well within the acceptable tolerance value of 3%. The penumbra for a 10 × 10 cm² field size is 10.7 mm, which is less than the generally acceptable value of 12 mm for a telecobalt machine. The maximum and average radiation leakage through the MLC were found to be 0.74% and 0.41%, well below the International Electrotechnical Commission recommended tolerance values of 2% and 0.75%, respectively. The maximum leakage through the leaf ends in the closed condition was 8.6%, which is less than the values reported for other MLCs designed for medical linear accelerators. Conclusions: The dosimetric parameters and leakage radiation of the optimized secondary MLC design are well below their recommended tolerance values. The optimized design of the proposed MLC can be integrated into a telecobalt machine by replacing the existing adjustable secondary collimator, enabling conformal radiotherapy treatment of cancer patients.
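The profile metrics evaluated above (field width, flatness, symmetry, penumbra) can be computed from a 1D profile as sketched below, using common textbook definitions that may differ in detail from the protocol of the study; the tanh-edged profile is synthetic:

```python
import numpy as np

def crossings(x, d, level):
    """All x-positions where the profile crosses `level` (linear interpolation)."""
    s = d - level
    idx = np.where(s[:-1] * s[1:] < 0)[0]
    return x[idx] + (x[idx + 1] - x[idx]) * (-s[idx]) / (s[idx + 1] - s[idx])

def profile_metrics(x, d):
    """FWHM field width, flatness, symmetry, and mean 80%-20% penumbra."""
    d = d / d.max()
    x50 = crossings(x, d, 0.5)
    width = x50[-1] - x50[0]                       # FWHM field width
    core = (x > x50[0] + 0.1 * width) & (x < x50[-1] - 0.1 * width)
    flat = 100 * (d[core].max() - d[core].min()) / (d[core].max() + d[core].min())
    sym = 100 * np.max(np.abs(d[core] - d[core][::-1]))  # mirrored-point difference
    x80, x20 = crossings(x, d, 0.8), crossings(x, d, 0.2)
    pen = 0.5 * ((x80[0] - x20[0]) + (x20[-1] - x80[-1]))
    return width, flat, sym, pen

# Synthetic 10 cm profile with ~0.7 cm tanh edges (illustrative, not MLC data).
x = np.linspace(-10.0, 10.0, 2001)
d = 0.5 * (np.tanh((x + 5) / 0.5) - np.tanh((x - 5) / 0.5))
w, f, s, p = profile_metrics(x, d)
print(round(w, 2), round(p, 2))
```

For a tanh edge of scale 0.5 cm the 80%-20% penumbra works out analytically to 2 × 0.5 × atanh(0.6) ≈ 0.69 cm, which the numerical routine reproduces.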
Silva-Rodríguez, Jesús; Aguiar, Pablo; Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela, 15782, Galicia; Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias, Santiago de Compostela, 15706, Galicia; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor; Cortés, Julia; Garrido, Miguel; Pombar, Miguel; Ruibal, Álvaro; Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias, Santiago de Compostela, 15706, Galicia; Fundación Tejerina, 28003, Madrid
2014-05-15
Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) quantification for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI-based method. In addition, the 50 patients with the highest extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate for the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%), with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify the fraction of patients that might be corrected for the paravenous injection effect. Conclusions: The authors propose the use of a manual ROI-based method for estimating the effectively administered FDG dose and then correcting SUV quantification in those patients fulfilling the proposed criterion.
Monte Carlo Solution for Uncertainty Propagation in Particle Transport with a Stochastic Galerkin Method
Office of Scientific and Technical Information (OSTI)
Franke, Brian C.; Prinja, Anil K.
2013-01-01
Conference: International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering (M&C 2013), May 5-9, 2013, Sun Valley, ID. Report Number: SAND2013-0204C. DOE Contract Number: AC04-94AL85000. Research Org: Sandia National Laboratories.
Boscoboinik, A. M.; Manzi, S. J.; Tysoe, W. T.; Pereyra, V. D.; Boscoboinik, J. A.
2015-09-10
The influence of directing agents in the self-assembly of molecular wires to produce two-dimensional electronic nanoarchitectures is studied here using a Monte Carlo approach to simulate the effect of arbitrarily locating nodal points on a surface, from which the growth of self-assembled molecular wires can be nucleated. This is compared to experimental results reported for the self-assembly of molecular wires when 1,4-phenylenediisocyanide (PDI) is adsorbed on Au(111). The latter results in the formation of (Au-PDI)_{n} organometallic chains, which were shown to be conductive when linked between gold nanoparticles on an insulating substrate. The present study analyzes, by means of stochastic methods, the influence of variables that affect the growth and design of self-assembled conductive nanoarchitectures, such as the distance between nodes, coverage of the monomeric units that leads to the formation of the desired architectures, and the interaction between the monomeric units. As a result, this study proposes an approach and sets the stage for the production of complex 2D nanoarchitectures using a bottom-up strategy but including the use of current state-of-the-art top-down technology as an integral part of the self-assembly strategy.
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte Carlo-deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross sections. These models are typically expensive and need to be executed on the order of 10^3 to 10^5 times to properly characterize the few-group cross sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams
Vandervoort, Eric J.; Cygler, Joanna E. (The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5; Department of Physics, Carleton University, Ottawa, Ontario K1S 5B6); Tchistiakova, Ekaterina (Department of Medical Biophysics, University of Toronto, Ontario M5G 2M9; Heart and Stroke Foundation Centre for Stroke Recovery, Sunnybrook Research Institute, University of Toronto, Ontario M4N 3M5); La Russa, Daniel J. (The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5)
2014-02-15
Purpose: In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Methods: Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measurements and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Results: Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 × 5 cm². Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (the minimum percentage of measurements that pass a 2%/2 mm 2D gamma index criterion for any applicator or energy was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a 3-dimensional 3%/2 mm γ-criterion), provided that the steep dose gradient in the depth direction is considered. Conclusions: Clinically acceptable agreement (at the 2%/2 mm level) between the measurements and calculated data is obtained for this dose calculation algorithm. Radiochromic film is a useful tool to evaluate the accuracy of electron MC treatment planning systems in heterogeneous media.
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, called DiffeRential Evolution Adaptive Metropolis (DREAM), runs multiple chains simultaneously for global exploration and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multimodal search problems.
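The core differential-evolution proposal behind DREAM can be sketched in a few lines. This is a deliberately stripped-down DE-MC sampler (no randomized subspaces, no self-adaptive crossover), with a simple Gaussian target standing in for a posterior:

```python
import numpy as np

rng = np.random.default_rng(11)

def log_target(x):
    """Log-density of the target: a standard normal (stand-in for a posterior)."""
    return -0.5 * x**2

n_chains, n_iter = 10, 3000
gamma = 2.38 / np.sqrt(2.0)          # classic DE-MC scale for 1 dimension
X = rng.normal(0.0, 5.0, n_chains)   # overdispersed start
samples = []
for _ in range(n_iter):
    for i in range(n_chains):
        # Proposal: current state plus a scaled difference of two other chains.
        a, b = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
        prop = X[i] + gamma * (X[a] - X[b]) + rng.normal(0.0, 1e-3)
        # Metropolis accept/reject (the difference proposal is symmetric).
        if np.log(rng.random()) < log_target(prop) - log_target(X[i]):
            X[i] = prop
    samples.append(X.copy())
post = np.concatenate(samples[1000:])   # discard burn-in
print(np.mean(post), np.std(post))
```

Because the proposal is built from differences of other chains, its scale and orientation adapt automatically to the current population spread, which is the property DREAM extends with subspace sampling and self-adaptive crossover.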
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.; Morales, Miguel A.
2016-01-19
An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT-based first-principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC-based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP, and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward-Weighted CADIS (FW-CADIS) method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Rota, R.; Casulleras, J.; Mazzanti, F.; Boronat, J.
2015-03-21
We present a method based on the path integral Monte Carlo formalism for the calculation of ground-state time correlation functions in quantum systems. The key point of the method is the consideration of time as a complex variable whose phase δ acts as an adjustable parameter. By using high-order approximations for the quantum propagator, it is possible to obtain Monte Carlo data all the way from purely imaginary time to δ values near the limit of real time. As a consequence, it is possible to infer accurately the spectral functions using simple inversion algorithms. We test this approach in the calculation of the dynamic structure function S(q, ω) of two one-dimensional model systems, harmonic and quartic oscillators, for which S(q, ω) can be exactly calculated. We notice a clear improvement in the calculation of the dynamic response with respect to the common approach based on the inverse Laplace transform of the imaginary-time correlation function.
Jiang, F.-J.; Nyfeler, M.; Kaempfer, F.
2009-07-15
Motivated by the possible mechanism for the pinning of the electronic liquid crystal direction in YBa2Cu3O6.45 as proposed by Pardini et al. [Phys. Rev. B 78, 024439 (2008)], we use the first-principles Monte Carlo method to study the spin-1/2 Heisenberg model with antiferromagnetic couplings J1 and J2 on the square lattice. In particular, the low-energy constants, namely the spin stiffness ρ_s, the staggered magnetization M_s, and the spin wave velocity c, are determined by fitting the Monte Carlo data to the predictions of magnon chiral perturbation theory. Further, the spin stiffnesses ρ_s1 and ρ_s2 as a function of the ratio J2/J1 of the couplings are investigated in detail. Although we find good agreement between our results and those obtained by the series expansion method in the weakly anisotropic regime, for strong anisotropy we observe discrepancies.
Minibeam radiation therapy for the management of osteosarcomas: A Monte Carlo study
Martínez-Rovira, I.; Prezado, Y.
2014-06-15
Purpose: Minibeam radiation therapy (MBRT) exploits the well-established tissue-sparing effect provided by the combination of submillimetric field sizes and a spatial fractionation of the dose. The aim of this work is to evaluate the feasibility and potential therapeutic gain of MBRT, in comparison with conventional radiotherapy, for osteosarcoma treatments. Methods: Monte Carlo simulations (PENELOPE/penEasy code) were used to study the dose distributions resulting from MBRT irradiations of a rat femur phantom and a realistic human femur phantom. As figures of merit, peak and valley doses and peak-to-valley dose ratios (PVDR) were assessed. Conversion of absorbed dose to normalized total dose (NTD) was performed in the human case. Several field sizes and irradiation geometries were evaluated. Results: It is feasible to deliver a uniform dose distribution in the target while the healthy tissue benefits from a spatial fractionation of the dose. Very high PVDR values (≥20) were achieved in the entrance beam path in the rat case. PVDR values ranged from 2 to 9 in the human phantom. An NTD_2.0 of 87 Gy might be reached in the tumor in the human femur, while the healthy tissues might receive valley NTD_2.0 lower than 20 Gy. The doses in the tumor and healthy tissues might thus be significantly higher and lower, respectively, than the ones commonly used in conventional radiotherapy. Conclusions: The obtained dose distributions indicate that a gain in normal tissue sparing might be expected. This would allow the use of higher (and potentially curative) doses in the tumor. Biological experiments are warranted.
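The PVDR figure of merit used above is straightforward to compute from a lateral dose profile; the peak/valley geometry and dose levels below are invented for illustration, not taken from the study:

```python
import numpy as np

# Toy minibeam lateral dose profile: 0.6 mm peaks on a 3.2 mm pitch sitting on
# a smooth scatter-dominated valley background (all numbers invented).
x = np.arange(0.0, 16.0, 0.01)          # lateral position (mm)
pitch, width = 3.2, 0.6                 # beam pitch and minibeam width (mm)
dose = np.full_like(x, 5.0)             # valley dose from scatter (Gy)
in_peak = (x % pitch) < width
dose[in_peak] += 95.0                   # peak dose on top of the valley (Gy)

def pvdr(x, dose, pitch, width):
    """Peak-to-valley dose ratio: mean dose at peak centers / valley centers."""
    peak = dose[(x % pitch) < width].mean()
    valley = dose[np.abs((x % pitch) - (width + pitch) / 2) < 0.1].mean()
    return peak / valley

print(pvdr(x, dose, pitch, width))  # → 20.0
```

Real profiles have sloped peak shoulders and depth-dependent valley dose, so PVDR is reported as a function of depth; the extraction of peak and valley regions is the same idea.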
Structural Stability and Defect Energetics of ZnO from Diffusion Quantum Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Santana Palacio, Juan A; Krogel, Jaron T; Kim, Jeongnim; Kent, Paul R; Reboredo, Fernando A
2015-01-01
We have applied the many-body ab-initio diffusion quantum Monte Carlo (DMC) method to study Zn and ZnO crystals under pressure, and the energetics of the oxygen vacancy, zinc interstitial, and hydrogen impurities in ZnO. We show that DMC is an accurate and practical method that can be used to characterize multiple properties of materials that are challenging for density functional theory (DFT) approximations. DMC agrees with experimental measurements to within 0.3 eV, including the band gap of ZnO, the ionization potentials of O and Zn, and the atomization energies of O2, the ZnO dimer, and wurtzite ZnO. DMC predicts the oxygen vacancy as a deep donor with a formation energy of 5.0(2) eV under O-rich conditions and thermodynamic transition levels located between 1.8 and 2.5 eV above the valence band maximum. Our DMC results indicate that the concentration of zinc interstitial and hydrogen impurities in ZnO should be low under n-type, Zn-rich, and H-rich conditions because these defects have formation energies above 1.4 eV under those conditions. Comparison of DMC and hybrid functionals shows that these DFT approximations can be parameterized to yield a generally correct qualitative description of ZnO. However, the formation energies of defects in ZnO evaluated with DMC and with hybrid functionals can differ by more than 0.5 eV.
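The formation energies and transition levels quoted above follow the standard charged-defect formalism. A minimal sketch with made-up energies (the numbers below are illustrative, not the paper's DMC values):

```python
def formation_energy(E_defect, E_host, n_removed_O, mu_O, q, E_vbm, E_fermi):
    """Standard formalism: E_f = E_def - E_host + n_O * mu_O + q (E_VBM + E_F).
    All energies in eV; n_removed_O atoms are returned to an oxygen reservoir
    at chemical potential mu_O; q is the defect charge state."""
    return E_defect - E_host + n_removed_O * mu_O + q * (E_vbm + E_fermi)

def transition_level(Ef_q1_at0, q1, Ef_q2_at0, q2):
    """Fermi level (eV above the VBM) where charge states q1 and q2 cross:
    eps(q1/q2) = (E_f(q1; E_F=0) - E_f(q2; E_F=0)) / (q2 - q1)."""
    return (Ef_q1_at0 - Ef_q2_at0) / (q2 - q1)
```

A deep donor is one whose thermodynamic transition level, computed this way, lies well below the conduction band; the slope of E_f versus E_F is just the charge state q.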
W/Z + b bbar/Jets at NLO Using the Monte Carlo MCFM
John M. Campbell
2001-05-29
We summarize recent progress in next-to-leading order QCD calculations made using the Monte Carlo program MCFM. In particular, we focus on the calculations of pp̄ → Wbb̄ and Zbb̄ and highlight the significant corrections to background estimates for Higgs searches in the WH and ZH channels at the Tevatron. We also report on the current progress of, and strategies for, the calculation of the process pp̄ → W/Z + 2 jets.
Shafer, J.D.; Shepard, J.R.
1997-04-01
We derive an approximate renormalization group (RG) flow equation for the local effective potential of single-component φ⁴ field theory at finite temperature. Previous zero-temperature RG equations are recovered in the low- and high-temperature limits, in the latter case via the phenomenon of dimensional reduction. We numerically solve our RG equations to obtain local effective potentials at finite temperature. These are found to be in excellent agreement with Monte Carlo results, especially when lattice artifacts are accounted for in the RG treatment. © 1997 The American Physical Society
Simulation of atomic diffusion in the Fcc NiAl system: A kinetic Monte Carlo study
Alfonso, Dominic R.; Tafen, De Nyago
2015-04-28
Atomic diffusion in fcc NiAl binary alloys was studied by kinetic Monte Carlo simulation. The environment-dependent hopping barriers were computed using a pair-interaction model whose parameters were fitted to data derived from electronic-structure calculations. Long-time diffusivities were calculated and the effect of composition change on the tracer diffusion coefficients was analyzed. The results indicate that composition variation has a noticeable impact on the atomic diffusivities: the mobility of both Ni and Al decreases with increasing Al content. The pair interactions between atoms were therefore examined to understand the predicted trends.
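The hop dynamics described above can be sketched with a generic residence-time (BKL) kinetic Monte Carlo step driven by Arrhenius rates. The barrier values and attempt prefactor below are illustrative placeholders, not the fitted pair-interaction barriers of the study:

```python
import math, random

KB = 8.617333e-5  # Boltzmann constant in eV/K

def arrhenius(barrier_eV, T, prefactor=1e13):
    """Hop rate from transition-state theory: nu0 * exp(-Ea / kB T)."""
    return prefactor * math.exp(-barrier_eV / (KB * T))

def kmc_step(rates, rng):
    """One BKL (residence-time) step: choose event i with probability
    r_i / R, then advance the clock by dt = -ln(u) / R, R = sum of rates.
    Returns (chosen event index, time increment)."""
    total = sum(rates)
    r = rng.random() * total
    chosen = len(rates) - 1          # fallback against float round-off
    acc = 0.0
    for i, ri in enumerate(rates):
        acc += ri
        if r < acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total
    return chosen, dt
```

In a full simulation the rate list would be rebuilt after each hop from the local environment of every mobile atom, which is where the fitted pair-interaction model enters.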
Monte Carlo simulations of channeling spectra recorded for samples containing complex defects
Jagielski, Jacek K.; Turos, Andrzej W.; Nowicki, L.; Jozwik, Przemyslaw A.; Shutthanandan, V.; Zhang, Yanwen; Sathish, N.; Thome, Lionel; Stonert, A.; Jozwik Biala, Iwona
2012-02-15
The main aim of the present paper is to describe the current status of the development of McChasy, a Monte Carlo simulation code, and its extension to the analysis of dislocations and dislocation loops in crystals. Factors such as the shape of the bent channel and the geometrical distortions of the crystalline structure in the vicinity of dislocations are discussed. Several examples of the analysis performed at different energies of the analyzing ions are presented. The results obtained demonstrate that the new procedure, applied to spectra recorded on crystals containing dislocations, yields damage profiles that are independent of the energy of the analyzing beam.
Monte Carlo generators for studies of the 3D structure of the nucleon
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Avakian, Harut; D'Alesio, U.; Murgia, F.
2015-01-23
In this study, extraction of transverse-momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires the development of a self-consistent analysis framework, accounting for evolution effects and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte Carlo generators accounting for TMD evolution effects and for spin-orbit and quark-gluon correlations will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.
Quantized vortices in ⁴He droplets: A quantum Monte Carlo study
Sola, E.; Casulleras, J.; Boronat, J.
2007-08-01
We present a diffusion Monte Carlo study of a vortex-line excitation attached to the center of a ⁴He droplet at zero temperature. The vortex energy is estimated for droplets of increasing number of atoms, from N = 70 up to 300, showing a monotonic increase with N. The evolution of the core radius and its associated energy, the core energy, is also studied as a function of N. The core radius is ≈1 Å in the center and increases when approaching the droplet surface; the core energy per unit volume stabilizes at a value of 2.8 K σ⁻³ (σ = 2.556 Å) for N ≥ 200.
SU-E-T-238: Monte Carlo Estimation of Cerenkov Dose for Photo-Dynamic Radiotherapy
Chibani, O; Price, R; Ma, C; Eldib, A; Mora, G
2014-06-01
Purpose: Estimation of the Cerenkov dose from high-energy megavoltage photon and electron beams in tissue and its impact on radiosensitization using protoporphyrin IX (PpIX) for tumor-targeting enhancement in radiotherapy. Methods: The GEPTS Monte Carlo code is used to generate dose distributions from an 18-MV Varian photon beam and generic high-energy (45-MV) photon and (45-MeV) electron beams in a voxel-based tissue-equivalent phantom. In addition to calculating the ionization dose, the code scores the Cerenkov energy released in the 375–425 nm wavelength range, corresponding to the peak of the PpIX absorption spectrum (Fig. 1), using the Frank-Tamm formula. Results: The simulations show that the produced Cerenkov dose suitable for activating PpIX is 4000 to 5500 times lower than the overall radiation dose for all considered beams (18 MV, 45 MV, and 45 MeV). These results contradict the recent experimental studies by Axelsson et al. [Med. Phys. 38 (2011) 4127], where the Cerenkov dose was reported to be only two orders of magnitude lower than the radiation dose. Note that our simulation results can be corroborated by a simple model in which the Frank-Tamm formula is applied to electrons with a 2 MeV/cm stopping power generating Cerenkov photons in the 375–425 nm range, assuming these photons penetrate less than 1 mm in tissue. Conclusion: The Cerenkov dose generated by high-energy photon and electron beams may produce a minimal clinical effect in comparison with the photon fluence (or dose) commonly used for photodynamic therapy. At present, it is unclear whether Cerenkov radiation is a significant contributor to the recently observed tumor regression in patients receiving radiotherapy and PpIX versus patients receiving radiotherapy only. The ongoing study will include animal experimentation and investigation of dose-rate effects on the PpIX response.
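The order-of-magnitude argument in this abstract can be reproduced from the Frank-Tamm photon yield between two wavelengths. A minimal sketch; the refractive index, electron energy, and band edges below are illustrative assumptions, not values taken from the simulation:

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant

def cerenkov_photons_per_cm(beta, n, lam1_nm, lam2_nm):
    """Frank-Tamm photon yield for a unit-charge particle:
    dN/dx = 2 pi alpha (1/lam1 - 1/lam2) (1 - 1/(beta^2 n^2)), lam1 < lam2."""
    if beta * n <= 1.0:
        return 0.0  # below the Cerenkov threshold: no emission
    band = (1.0 / lam1_nm - 1.0 / lam2_nm) * 1e7   # nm^-1 -> cm^-1
    return 2.0 * math.pi * ALPHA * band * (1.0 - 1.0 / (beta ** 2 * n ** 2))

def beta_from_kinetic(T_MeV, m_MeV=0.511):
    """Relativistic beta of a particle of rest mass m with kinetic energy T."""
    gamma = 1.0 + T_MeV / m_MeV
    return math.sqrt(1.0 - 1.0 / gamma ** 2)
```

For a 2 MeV electron in a water-like medium (n ≈ 1.33) this yields on the order of tens of photons per cm in the 375–425 nm band, a minute fraction of the deposited ionization energy, consistent with the factor of several thousand quoted above.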
Tsvetkov, Pavel V.; Ames II, David E.; Alajo, Ayodeji B.; Pritchard, Megan L.
2006-07-01
Partitioning and transmutation of minor actinides are expected to have a positive impact on the future of nuclear technology. Their deployment would lead to incineration of hazardous nuclides and could potentially provide an additional fuel supply. The U.S. DOE NERI Project assesses the possibility, advantages, and limitations of using minor actinides as a fuel component. The analysis considers and compares the capabilities of actinide-fueled VHTRs with pebble-bed and prismatic cores to approach reactor-lifetime-long operation without intermediate refueling. A hybrid Monte Carlo-deterministic methodology has been adopted for coupled neutronics-thermal-hydraulics design studies of VHTRs. Within the computational scheme, the key technical issues are being addressed and resolved by implementing efficient automated modeling procedures and sequences, combining Monte Carlo and deterministic approaches, developing and applying realistic 3D coupled neutronics-thermal-hydraulics models with multi-heterogeneity treatments, developing and performing experimental/computational benchmarks for model verification and validation, and analyzing uncertainty effects and error propagation. This paper introduces the suggested modeling approach, discusses benchmark results, and presents a preliminary analysis of actinide-fueled VHTRs. The up-to-date results presented are in agreement with the available experimental data. Studies of VHTRs with minor actinides suggest promising performance. (authors)
A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code
Energy Science and Technology Software Center (OSTI)
1998-06-12
TART97 is a coupled neutron-photon, 3-dimensional, combinatorial-geometry, time-dependent Monte Carlo transport code. The code can run on any modern computer and is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
MCViNE - An object oriented Monte Carlo neutron ray tracing simulation package
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; Abernathy, Douglas L.; Lumsden, Mark D.; Winn, Barry L.; Aczel, Adam A.; Aivazis, Michael; Fultz, Brent
2015-11-28
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example we used object oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features together in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages which facilitates porting instrument models from those codes. Furthermore it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.
Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.
2015-11-19
Kernel density estimators (KDEs) provide a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations as an alternative to histogram tallies for obtaining global solutions. With KDEs, a single event, either a collision or a particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor-physics problems and fixed-source shielding applications; however, little work has been done on obtaining reaction rates with KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies into the solution; an ad hoc correction for these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
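The contrast between KDE and histogram tallies can be seen in a one-dimensional sketch. This uses an ordinary Gaussian KDE, not the MFP KDE of the paper, and the sample source and bandwidth are arbitrary:

```python
import math, random

def gaussian_kde(samples, x, bandwidth):
    """Kernel density estimate at x: average of Gaussian kernels centered on
    the samples. Every sample contributes to every evaluation point, so the
    estimate (and its variance) does not depend on a bin layout."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)

def histogram_density(samples, x, lo, hi, nbins):
    """Histogram estimate at x: count in x's bin divided by N * bin width.
    Each sample scores in exactly one bin, so resolution and variance trade off."""
    width = (hi - lo) / nbins
    b = min(int((x - lo) / width), nbins - 1)
    count = sum(1 for s in samples if lo + b * width <= s < lo + (b + 1) * width)
    return count / (len(samples) * width)
```

Both estimators converge to the true density; the KDE's per-point variance is set by the bandwidth and sample count rather than by the bin width, which is the property the abstract exploits for global tallies.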
O'Brien, M J; Brantley, P S
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms, i.e., algorithms that continue to perform well as the processor count increases. In this paper, we examine the scalability of (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e., the calculation is already load balanced. We also examine load-imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple adjacent domains to maintain within-workgroup load balance and minimize memory usage.
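Choosing replication levels proportional to per-domain workload can be sketched with a largest-remainder apportionment. This is a hedged illustration of the idea, not the authors' algorithm:

```python
def replication_levels(workloads, total_ranks):
    """Give each domain at least one rank, then apportion the remaining ranks
    in proportion to its particle workload (largest-remainder rule), so the
    per-rank work within each replicated domain is roughly equal."""
    n = len(workloads)
    assert total_ranks >= n, "need at least one rank per domain"
    spare = total_ranks - n
    total_w = float(sum(workloads))
    quotas = [spare * w / total_w for w in workloads]
    levels = [1 + int(q) for q in quotas]
    # hand leftover ranks to the largest fractional remainders
    order = sorted(range(n), key=lambda i: quotas[i] - int(quotas[i]), reverse=True)
    for i in order[: total_ranks - sum(levels)]:
        levels[i] += 1
    return levels
```

Domains with triple the workload then get roughly triple the ranks, which is the condition needed for neighboring workgroups of different sizes to stay load balanced when they exchange streaming particles.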
Ibrahim, Ahmad M; Wilson, P.; Sawan, M.; Mosher, Scott W; Peplow, Douglas E.; Grove, Robert E
2013-01-01
Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight-window coarsening algorithm decouples the weight-window mesh and energy bins from the mesh and energy-group structure of the deterministic calculations, removing the memory constraint that the weight-window map places on the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be solved accurately on a regular computer cluster, eliminating the need for a world-class supercomputer.
TU-F-18A-03: Improving Tissue Segmentation for Monte Carlo Dose Calculation Using DECT Data
Di Salvio, A; Bedwani, S; Carrier, J
2014-06-15
Purpose: To develop a new segmentation technique using dual-energy CT (DECT) to overcome limitations of segmentation from a standard Hounsfield unit (HU) to electron density (ED) calibration curve. Both methods are compared through a Monte Carlo analysis of dose distributions. Methods: DECT allows a direct calculation of both the ED and the effective atomic number (EAN) within a given voxel; the EAN is here defined as a function of the total electron cross-section of a medium. These values can be acquired using a calibrated method from scans at two different energies. A prior stoichiometric calibration on a Gammex RMI phantom provides the parameters to calculate the EAN and ED within a voxel. Scans from a Siemens SOMATOM Definition Flash dual-source system provided the data for our study. A Monte Carlo analysis compares the dose distributions simulated by DOSXYZnrc for a head phantom defined by each segmentation technique. Results: Results from depth-dose and dose-profile calculations show that materials with different atomic compositions but similar EAN present differences of less than 1%. Therefore, it is possible to define a short list of basis materials whose density can be adapted to imitate the interaction behavior of any tissue. Comparison of the dose distributions for the two segmentations shows a difference of 50% in dose in areas surrounding bone at low energy. Conclusion: The presented segmentation technique allows a more accurate medium definition in each voxel, especially in areas of tissue transition. Since the behavior of human tissues is highly sensitive at low energies, this reduces the errors in the calculated dose distribution. This method could be further developed to optimize tissue characterization based on the anatomic site.
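The EAN concept used for segmentation can be illustrated with the classic power-law effective atomic number. The exponent m and the composition below are textbook values for illustration, not the paper's cross-section-based definition or fitted calibration:

```python
def effective_atomic_number(electron_fractions, m=3.3):
    """Power-law EAN (Mayneord-style rule):
    Z_eff = (sum_i f_i * Z_i^m)^(1/m),
    where f_i is the fraction of electrons belonging to element Z_i and the
    exponent m ~ 3 reflects the Z-dependence of photoelectric absorption."""
    return sum(f * z ** m for z, f in electron_fractions.items()) ** (1.0 / m)

# Electron fractions for water: H2O has 10 electrons, 2 from H and 8 from O.
water = {1: 0.2, 8: 0.8}
```

For water this gives Z_eff close to the familiar value of about 7.4-7.5; two materials with matching EAN and ED interact nearly identically, which is why a short basis-material list suffices for the segmentation above.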
Kim, Jeongnim; Reboredo, Fernando A
2014-01-01
The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite-temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases, and that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest-energy eigenstates. Possible applications of this method to calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of ground or excited states with quantum Monte Carlo.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Mayers, Matthew Z.; Berkelbach, Timothy C.; Hybertsen, Mark S.; Reichman, David R.
2015-10-09
Ground-state diffusion Monte Carlo is used to investigate the binding energies and intercarrier radial probability distributions of excitons, trions, and biexcitons in a variety of two-dimensional transition-metal dichalcogenide materials. We compare these results to approximate variational calculations, as well as to analogous Monte Carlo calculations performed with simplified carrier interaction potentials. Our results highlight the successes and failures of approximate approaches as well as the physical features that determine the stability of small carrier complexes in monolayer transition-metal dichalcogenide materials. In conclusion, we discuss points of agreement and disagreement with recent experiments.
Nakano, Y.; Yamazaki, A.; Watanabe, K.; Uritani, A.; Ogawa, K.; Isobe, M.
2014-11-15
Neutron monitoring is important for managing the safety of fusion experiment facilities because neutrons are generated in fusion reactions. Monte Carlo simulations play an important role in evaluating the influence of neutron scattering from various structures and in correcting differences between deuterium plasma experiments and in situ calibration experiments. We evaluated these influences, based on differences between the two types of experiment, at the Large Helical Device using the Monte Carlo simulation code MCNP5. The difference between the two experiments in absolute detection efficiency is estimated to be largest for the fission chamber between the O-ports. We additionally evaluated correction coefficients for several neutron monitors.
Quantum Monte Carlo Study of the Ground-State Properties of a Fermi Gas in the BCS-BEC Crossover
Giorgini, S.; Astrakharchik, G. E.; Boronat, J.; Casulleras, J.
2006-11-07
The ground-state properties of a two-component Fermi gas with attractive short-range interactions are calculated using the fixed-node diffusion Monte Carlo method. The interaction strength is varied over a wide range by tuning the value of the s-wave scattering length of the two-body potential. We calculate the ground-state energy per particle and we characterize the equation of state of the system. Off-diagonal long-range order is investigated through the asymptotic behavior of the two-body density matrix. The condensate fraction of pairs is calculated in the unitary limit and on both sides of the BCS-BEC crossover.
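The diffusion-plus-branching machinery behind DMC can be shown on the simplest possible system, the 1-D harmonic oscillator (exact ground-state energy 1/2 in oscillator units). This bare sketch has no importance sampling and no fixed node, unlike the many-fermion calculation above, and all run parameters are arbitrary:

```python
import math, random

def dmc_harmonic(n_walkers=500, n_steps=2000, dt=0.01, seed=2):
    """Minimal diffusion Monte Carlo for H = -(1/2) d^2/dx^2 + (1/2) x^2.
    Walkers take Gaussian diffusion steps of width sqrt(dt), then branch
    with weight exp(-(V - E_T) dt). The trial energy E_T is nudged to hold
    the population near its target; its average estimates E0 = 0.5."""
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_walkers)]
    e_t = 0.5
    e_sum, n_meas = 0.0, 0
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))        # free diffusion
            w = math.exp(-(0.5 * x * x - e_t) * dt)   # branching weight
            for _ in range(int(w + rng.random())):    # stochastic rounding
                new.append(x)
        walkers = new or [0.0]
        e_t += 0.1 * math.log(n_walkers / len(walkers))  # population control
        if step >= n_steps // 2:                      # accumulate after equilibration
            e_sum += e_t
            n_meas += 1
    return e_sum / n_meas
```

The fixed-node constraint of the abstract enters only for fermions, where walkers would additionally be confined to regions where a trial wave function keeps a definite sign.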
SU-D-19A-03: Monte Carlo Investigation of the Mobetron to Perform Modulated Electron Beam Therapy
Emam, I; Eldib, A; Hosini, M; AlSaeed, E; Ma, C
2014-06-01
Purpose: Modulated electron radiotherapy (MERT) has been proposed as a means of delivering conformal dose to shallow tumors while sparing distal structures and surrounding tissues. In intraoperative radiotherapy (IORT) with the Mobetron, an applicator is placed as closely as possible to the suspected cancerous tissues to be treated. In this study we investigate the characteristics of Mobetron electron beams collimated by an in-house prototype electron multileaf collimator (eMLC) and their feasibility for MERT. Methods: An IntraOp Mobetron dedicated to radiotherapy during surgery was used in the study. It provides several energies (6, 9, and 12 MeV). Dosimetry measurements were performed to obtain percentage depth-dose (PDD) curves and profiles for a 10-cm diameter applicator using the PTW MP3/XS 3D-scanning system and a semiflex ion chamber. The MCBEAM/MCSIM Monte Carlo codes were used for the treatment-head simulation and phantom dose calculation. The design of electron beam collimation by an eMLC attached to the Mobetron head was also investigated using Monte Carlo simulations. Isodose distributions resulting from eMLC-collimated beams were compared to those collimated using cutouts. The design of our Mobetron eMLC is based on our previous experience with eMLCs designed for clinical linear accelerators. For the Mobetron, the eMLC is attached to the end of a spacer-mounted rectangular applicator at 50 cm SSD. Steel will be used as the leaf material because other materials would be toxic and unsuitable for intraoperative applications. Results: Good agreement (within 2%) was achieved between measured and calculated PDD curves and profiles for all available energies. Dose distributions provided by the eMLC showed reasonable agreement (within 3%/1 mm) with those obtained with conventional cutouts. Conclusion: Monte Carlo simulations are capable of modeling Mobetron electron beams with reliable accuracy. An eMLC attached to the Mobetron treatment head will allow better treatment options with these machines.
Faught, A; Davidson, S; Kry, S; Ibbott, G; Followill, D; Fontenot, J; Etzel, C
2014-06-01
Purpose: To commission a multiple-source Monte Carlo model of Elekta linear accelerator beams of nominal energies 6 MV and 10 MV. Methods: A three-source Monte Carlo model of Elekta 6 and 10 MV therapeutic x-ray beams was developed. The energy spectra of two photon sources, corresponding to primary photons created in the target and scattered photons originating in the linear accelerator head, were determined by an optimization process that fit the relative fluence of 0.25-MeV energy bins to the product of Fatigue-Life and Fermi functions so that calculated percent depth dose (PDD) data matched water-tank measurements for a 10×10 cm² field. Off-axis effects were modeled by a third-degree polynomial describing the off-axis half-value layer as a function of off-axis angle, and by fitting the off-axis fluence to a piecewise-linear function to match calculated dose profiles with measured profiles for a 40×40 cm² field. The model was validated by comparing calculated PDDs and dose profiles for field sizes ranging from 3×3 cm² to 30×30 cm² with measurements. A benchmarking study compared calculated data to measurements for IMRT plans delivered to anthropomorphic phantoms. Results: Along the central axis of the beam, 99.6% and 99.7% of all data passed the 2%/2mm gamma criterion for the 6 and 10 MV models, respectively. Dose profiles at depths from dmax through 25 cm agreed with measured data for 99.4% and 99.6% of the data tested for the 6 and 10 MV models, respectively. A comparison of calculated dose to film measurement in a head and neck phantom showed an average of 85.3% and 90.5% of pixels passing a 3%/2mm gamma criterion for the 6 and 10 MV models, respectively.
Conclusion: A Monte Carlo multiple-source model for Elekta 6 and 10MV therapeutic x-ray beams has been developed as a quality assurance tool for clinical trials.
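The pass rates quoted above come from the gamma test. A minimal global 1-D sketch (brute-force search over evaluated points, no interpolation; the tolerance values are the commonly used ones, chosen here for illustration):

```python
import math

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                   dose_tol=0.02, dist_tol_mm=2.0):
    """Global 1-D gamma index: for each reference point, the minimum over
    evaluated points of sqrt((dD / (dose_tol * Dmax))^2 + (dx / dist_tol)^2).
    A point passes when gamma <= 1 (e.g. the 2%/2mm criterion)."""
    d_max = max(ref_dose)  # global normalization
    gammas = []
    for xr, dr in zip(ref_pos, ref_dose):
        g2 = min(((de - dr) / (dose_tol * d_max)) ** 2
                 + ((xe - xr) / dist_tol_mm) ** 2
                 for xe, de in zip(eval_pos, eval_dose))
        gammas.append(math.sqrt(g2))
    return gammas

def pass_rate(gammas):
    """Fraction of reference points with gamma <= 1."""
    return sum(1 for g in gammas if g <= 1.0) / len(gammas)
```

A uniform 1% dose offset, for example, gives gamma = 0.5 everywhere under 2%/2mm and therefore a 100% pass rate; clinical implementations add interpolation of the evaluated distribution and often local normalization.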
SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations
Ono, T; Araki, F
2014-06-01
Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using GMctdospp (IMPS, Germany), based on the EGSnrc user code. The x-ray spectra and bowtie filter for the MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profile in air. The MC dose was calibrated against absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images, and organ doses were evaluated from dose-volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for the brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for the brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for the lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), mean doses for the kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for the kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses differed considerably from the CTDIvol values. Conclusion: MC dose distributions calculated using patient CT images are useful for evaluating the organ doses absorbed by individual patients.
Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-02-15
Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: The beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source-to-surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth-dose curves and dose profiles are compared for four different MLC-shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed by a C-shaped segment. Results: For 1.5 × 3.4, 5 × 5, and 2 × 2 cm² fields, differences between water-phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies.
For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. Conclusions: The results of the dose comparisons suggest that the developed beam model is suitable to accurately reconstruct photon MLC shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.
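The 2%/2 mm acceptance test used above combines a dose-difference (DD) check with a distance-to-agreement (DTA) search. A minimal 1D sketch of such a test follows; the function name and profile data are illustrative, not taken from the paper's software:

```python
def dd_dta_pass(measured, calculated, spacing_mm, dd_frac=0.02, dta_mm=2.0):
    """Per-point pass/fail for a DD-or-DTA comparison of 1D dose profiles.

    A point passes if the local dose difference is within dd_frac of the
    maximum dose, OR if the calculated profile crosses the measured dose
    level within dta_mm of the point (a simple discrete DTA search).
    """
    dmax = max(measured)
    n = len(measured)
    reach = int(dta_mm / spacing_mm)
    flags = []
    for i, m in enumerate(measured):
        ok = abs(calculated[i] - m) <= dd_frac * dmax
        if not ok:
            lo, hi = max(0, i - reach), min(n - 1, i + reach)
            for j in range(lo, hi):
                a, b = calculated[j], calculated[j + 1]
                if min(a, b) <= m <= max(a, b):  # dose level crossed nearby
                    ok = True
                    break
        flags.append(ok)
    return flags
```

Real profile comparisons interpolate both profiles and often report a gamma index instead; this discrete version only illustrates the pass logic.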
Clay, Raymond C.; McMinis, Jeremy; McMahon, Jeffrey M.; Pierleoni, Carlo; Ceperley, David M.; Morales, Miguel A.
2014-05-01
The ab initio phase diagram of dense hydrogen is very sensitive to errors in the treatment of electronic correlation. Recently, it has been shown that the choice of the density functional has a large effect on the predicted location of both the liquid-liquid phase transition and the solid insulator-to-metal transition in dense hydrogen. To identify the most accurate functional for dense hydrogen applications, we systematically benchmark some of the most commonly used functionals using quantum Monte Carlo. By considering several measures of functional accuracy, we conclude that the van der Waals and hybrid functionals significantly outperform local density approximation and Perdew-Burke-Ernzerhof. We support these conclusions by analyzing the impact of functional choice on structural optimization in the molecular solid, and on the location of the liquid-liquid phase transition.
Size and habit evolution of PETN crystals - a lattice Monte Carlo study
Zepeda-Ruiz, L A; Maiti, A; Gee, R; Gilmer, G H; Weeks, B
2006-02-28
Starting from an accurate inter-atomic potential, we develop a simple scheme of generating an ''on-lattice'' molecular potential of short range, which is then incorporated into a lattice Monte Carlo code for simulating size and shape evolution of nanocrystallites. As a specific example, we test such a procedure on the morphological evolution of a molecular crystal of interest to us, Pentaerythritol Tetranitrate (PETN), and obtain realistic faceted structures in excellent agreement with experimental morphologies. We investigate several interesting effects, including the evolution of the initial shape of a ''seed'' to an equilibrium configuration and the variation of growth morphology as a function of the rate of particle addition relative to diffusion.
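A bare-bones version of such a lattice Monte Carlo growth step can be sketched as follows. This is a toy nearest-neighbor bond model on a 2D square lattice, not the on-lattice PETN potential of the paper; all parameters are invented for illustration:

```python
import math
import random

def grow(steps, bond=2.0, temp=1.0, seed=0):
    """Toy lattice Monte Carlo growth from a single-site seed on a 2D
    square lattice.  Empty perimeter sites attach with a Boltzmann-like
    bias toward kink sites (more occupied neighbors), which favors
    compact, facet-like shapes over dendritic ones."""
    rng = random.Random(seed)
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))
    occ = {(0, 0)}                        # the initial "seed"
    for _ in range(steps):
        # empty sites adjacent to the cluster (sorted for reproducibility)
        perimeter = sorted({(x + dx, y + dy)
                            for (x, y) in occ for dx, dy in nbrs} - occ)
        site = rng.choice(perimeter)      # candidate attachment site
        n = sum((site[0] + dx, site[1] + dy) in occ for dx, dy in nbrs)
        # kink/step sites (n >= 2) always attach; flat sites only rarely
        if rng.random() < min(1.0, math.exp(bond * (n - 2) / temp)):
            occ.add(site)
    return occ
```

The effects studied in the paper, such as the particle-addition rate relative to diffusion, would enter as additional event types (detachment, surface hops) with their own rates.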
Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Velizhanin, Kirill A.; Saxena, Avadh
2015-11-11
Among the most striking features of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is a strong Coulomb interaction between charge carriers, resulting in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which can remain stable at near-room temperatures and contribute significantly to the optical properties of such materials. In this work we have used the path integral Monte Carlo methodology to numerically study the properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and benchmarking of theoretical and computational models.
penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE
Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.
2015-01-01
The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.
A bottom collider vertex detector design, Monte-Carlo simulation and analysis package
Lebrun, P.
1990-10-01
A detailed simulation of the BCD vertex detector is underway. Specifications and global design issues are briefly reviewed. The BCD design based on double-sided strip detectors is described in more detail. The GEANT3-based Monte-Carlo program and the analysis package used to estimate detector performance are discussed in detail. The current status of the expected resolution and signal-to-noise ratio for the ''golden'' CP-violating mode B_d → π⁺π⁻ is presented. These calculations have been done at FNAL energy (√s = 2.0 TeV). Emphasis is placed on design issues, analysis techniques, and related software rather than physics potentials.
Neutrinos from WIMP annihilations obtained using a full three-flavor Monte Carlo approach
Blennow, Mattias; Ohlsson, Tommy; Edsjoe, Joakim
2008-01-15
Weakly interacting massive particles (WIMPs) are one of the main candidates for making up the dark matter in the Universe. If these particles make up the dark matter, then they can be captured by the Sun or the Earth, sink to the respective cores, annihilate, and produce neutrinos. Thus, these neutrinos can be a striking dark matter signature at neutrino telescopes looking towards the Sun and/or the Earth. Here, we improve previous analyses on computing the neutrino yields from WIMP annihilations in several respects. We include neutrino oscillations in a full three-flavor framework as well as all effects from neutrino interactions on the way through the Sun (absorption, energy loss, and regeneration from tau decays). In addition, we study the effects of non-zero values of the mixing angle θ₁₃ as well as the normal and inverted neutrino mass hierarchies. Our study is performed in an event-based setting which makes these results very useful both for theoretical analyses and for building a neutrino telescope Monte Carlo code. All our results for the neutrino yields, as well as our Monte Carlo code, are publicly available. We find that the yield of muon-type neutrinos from WIMP annihilations in the Sun is enhanced or suppressed, depending on the dominant WIMP annihilation channel. This effect is due to an effective flavor mixing caused by neutrino oscillations. For WIMP annihilations inside the Earth, the distance from source to detector is too small to allow for any significant amount of oscillations at the neutrino energies relevant for neutrino telescopes.
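The full three-flavor, matter-effect treatment used here is beyond a snippet, but the standard two-flavor vacuum formula already shows why the Earth baseline is too short for significant oscillation at telescope energies. The mixing angle and mass splitting below are illustrative round numbers, not the paper's inputs:

```python
import math

def p_osc(theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor vacuum oscillation probability in the usual units:
    P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])."""
    return (math.sin(2 * theta) ** 2
            * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2)

# Earth-core baseline (~6371 km) for a 10 GeV neutrino: the phase
# 1.27 * dm2 * L / E is only ~0.06 rad, so P stays tiny.
p_earth = p_osc(0.58, 7.5e-5, 6371.0, 10.0)
```

For the Sun-to-Earth baseline the same phase is of order 10³ rad, so the oscillatory term averages out and flavor mixing becomes an O(1) effect, in line with the enhancement/suppression described above.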
Zink, K.; Czarnecki, D.; Voigts-Rhetz, P. von; Looe, H. K.; Harder, D.
2014-11-01
Purpose: The electron fluence inside a parallel-plate ionization chamber positioned in a water phantom and exposed to a clinical electron beam deviates from the unperturbed fluence in water in the absence of the chamber. One reason for the fluence perturbation is the well-known inscattering effect, whose physical cause is the lack of electron scattering in the gas-filled cavity. Correction factors determined to correct for this effect have long been recommended. However, more recent Monte Carlo calculations have led to some doubt about the range of validity of these corrections. Therefore, the aim of the present study is to reanalyze the development of the fluence perturbation with depth and to review the function of the guard rings. Methods: Spatially resolved Monte Carlo simulations of the dose profiles within gas-filled cavities with various radii in clinical electron beams have been performed in order to determine the radial variation of the fluence perturbation in a coin-shaped cavity, to study the influences of the radius of the collecting electrode and of the width of the guard ring upon the indicated value of the ionization chamber formed by the cavity, and to investigate the development of the perturbation as a function of the depth in an electron-irradiated phantom. The simulations were performed for a primary electron energy of 6 MeV. Results: The Monte Carlo simulations clearly demonstrated a surprisingly large in- and outward electron transport across the lateral cavity boundary. This results in a strong influence of the depth-dependent development of the electron field in the surrounding medium upon the chamber reading. In the buildup region of the depth-dose curve, the in-out balance of the electron fluence is positive and shows the well-known dose oscillation near the cavity/water boundary. At the depth of the dose maximum the in-out balance is equilibrated, and in the falling part of the depth-dose curve it is negative, as shown here for the first time.
The influences of both the collecting electrode radius and the width of the guard ring reflect the deep radial penetration of the electron transport processes into the gas-filled cavities and the need for appropriate corrections of the chamber reading. New values for these corrections have been established in two forms, one converting the indicated value into the absorbed dose to water in the front plane of the chamber, the other converting it into the absorbed dose to water at the depth of the effective point of measurement of the chamber. In the Appendix, the in-out imbalance of electron transport across the lateral cavity boundary is demonstrated in the approximation of classical small-angle multiple scattering theory. Conclusions: The in-out electron transport imbalance at the lateral boundaries of parallel-plate chambers in electron beams has been studied with Monte Carlo simulation over a range of depths in water, and new correction factors, covering all depths and implementing the effective point of measurement concept, have been developed.
Leon, Stephanie M.; Wagner, Louis K.; Brateman, Libby F.
2014-11-01
Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28 to 0.51, with absolute differences averaging ∼0.01.
The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the physical measurements published previously and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of a scatter-correction software program can be simplified by ignoring the variations in density among breast tissues.
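The SF/MRE parametrization above corresponds to a radially symmetric single-exponential point-spread function. A sketch follows; the normalization shown (kernel integrating to the scatter fraction over the plane) is my assumption for illustration, not necessarily the published PSF equation:

```python
import math

def scatter_psf(r, sf, mre):
    """Single-exponential scatter kernel k(r) = C * exp(-r / MRE),
    with C = SF / (2*pi*MRE**2) chosen so that the integral of k
    over the whole plane equals the scatter fraction SF."""
    return sf / (2 * math.pi * mre ** 2) * math.exp(-r / mre)

def planar_integral(sf, mre, rmax, n=100000):
    """Midpoint-rule check of the 2D (polar) integral of the kernel."""
    dr = rmax / n
    return sum(scatter_psf((i + 0.5) * dr, sf, mre)
               * 2 * math.pi * ((i + 0.5) * dr) * dr
               for i in range(n))
```

The analytic integral of C·exp(−r/MRE) over the plane is C·2π·MRE², which is why the normalization above recovers SF exactly; the biexponential suggested for the nongrid data would simply sum two such terms.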
Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419
Hulett, David T.; Nosbisch, Michael R.
2012-07-01
This discussion of the recommended practice (RP) 57R-09 of AACE International defines the integrated analysis of schedule and cost risk to estimate the appropriate level of cost and schedule contingency reserve on projects. The main contribution of this RP is to include the impact of schedule risk on cost risk and hence on the need for cost contingency reserves. Additional benefits include the prioritizing of the risks to cost, some of which are risks to schedule, so that risk mitigation may be conducted in a cost-effective way, scatter diagrams of time-cost pairs for developing joint targets of time and cost, and probabilistic cash flow which shows cash flow at different levels of certainty. Integrating cost and schedule risk into one analysis based on the project schedule loaded with costed resources from the cost estimate provides: (1) more accurate cost estimates than if the schedule risk were ignored or incorporated only partially, and (2) an illustration of the importance of schedule risk to cost risk when the durations of activities using labor-type (time-dependent) resources are risky. Many activities such as detailed engineering, construction or software development are mainly conducted by people who need to be paid even if their work takes longer than scheduled. Level-of-effort resources, such as the project management team, are extreme examples of time-dependent resources, since if the project duration exceeds its planned duration the cost of these resources will increase over their budgeted amount. The integrated cost-schedule risk analysis is based on: - A high quality CPM schedule with logic tight enough so that it will provide the correct dates and critical paths during simulation automatically without manual intervention. - A contingency-free estimate of project costs that is loaded on the activities of the schedule. - A resolution of inconsistencies between the cost estimate and schedule that often creep into those documents as project execution proceeds. 
- Good-quality risk data that are usually collected in risk interviews of the project team, management and others knowledgeable in the risk of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method. The risk driver method is based on the fundamental principle that identifiable risks drive overall cost and schedule risk. - A Monte Carlo simulation software program that can simulate schedule risk, burn-rate risk and time-independent resource risk. The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates including contingency reserves are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, deciding which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and of time, calculated at a level that represents an acceptable degree of certainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserves of time and cost that are the main results of this analysis apply if that plan is to be followed. 
Of course project managers have the option of re-planning and re-scheduling in the face of new facts.
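The integrated simulation described above can be illustrated with a deliberately tiny model: a hypothetical three-activity network with triangular duration distributions, in which labor-type costs scale with the sampled durations so that schedule risk propagates into cost risk. All activity names, durations, and rates are invented for illustration:

```python
import random

def simulate(n=20000, day_rate=1000.0, seed=42):
    """Monte Carlo over a toy CPM network A -> (B | C) -> finish.
    Returns the P-80 finish (days) and P-80 cost, i.e. values the
    project would meet or beat in 80% of iterations."""
    rng = random.Random(seed)
    finishes, costs = [], []
    for _ in range(n):
        # random.triangular(low, high, mode) -- durations in days
        a = rng.triangular(8, 15, 10)     # e.g. detailed engineering
        b = rng.triangular(20, 40, 25)    # e.g. construction
        c = rng.triangular(18, 45, 30)    # e.g. software development
        finishes.append(a + max(b, c))    # longer branch is critical
        costs.append(day_rate * (a + b + c))  # time-dependent resources
    finishes.sort()
    costs.sort()
    k = int(0.8 * n)
    return finishes[k], costs[k]
```

A risk driver method would additionally multiply durations by sampled risk factors shared across activities, correlating them; collecting the (finish, cost) pairs per iteration instead of sorting them separately yields the 'football chart' scatter described above.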
Statistical Exploration of Electronic Structure of Molecules from Quantum Monte-Carlo Simulations
Prabhat; Zubarev, Dmitry; Lester, William A., Jr.
2010-12-22
In this report, we present results from analysis of Quantum Monte Carlo (QMC) simulation data with the goal of determining internal structure of a 3N-dimensional phase space of an N-electron molecule. We are interested in mining the simulation data for patterns that might be indicative of the bond rearrangement as molecules change electronic states. We examined simulation output that tracks the positions of two coupled electrons in the singlet and triplet states of an H2 molecule. The electrons trace out a trajectory, which was analyzed with a number of statistical techniques. This project was intended to address the following scientific questions: (1) Do high-dimensional phase spaces characterizing electronic structure of molecules tend to cluster in any natural way? Do we see a change in clustering patterns as we explore different electronic states of the same molecule? (2) Since it is hard to understand the high-dimensional space of trajectories, can we project these trajectories to a lower dimensional subspace to gain a better understanding of patterns? (3) Do trajectories inherently lie in a lower-dimensional manifold? Can we recover that manifold? After extensive statistical analysis, we are now in a better position to respond to these questions. (1) We definitely see clustering patterns, and differences between the H2 and H2tri datasets. These are revealed by the pamk method in a fairly reliable manner and can potentially be used to distinguish bonded and non-bonded systems and get insight into the nature of bonding. (2) Projecting to a lower dimensional subspace (≈4-5) using PCA or Kernel PCA reveals interesting patterns in the distribution of scalar values, which can be related to the existing descriptors of electronic structure of molecules. 
Also, these results can be immediately used to develop robust tools for analysis of noisy data obtained during QMC simulations. (3) All dimensionality reduction and estimation techniques that we tried seem to indicate that one needs 4 or 5 components to account for most of the variance in the data, hence this 5D dataset does not necessarily lie on a well-defined, low dimensional manifold. In terms of specific clustering techniques, K-means was generally useful in exploring the dataset. The partition around medoids (pam) technique produced the most definitive results for our data, showing distinctive patterns for both a sample of the complete data and time-series. The gap statistic with the Tibshirani criterion did not provide any distinction across the two datasets. The gap statistic with the DandF criteria, model-based clustering, and hierarchical modeling simply failed to run on our datasets. Thankfully, the vanilla PCA technique was successful in handling our entire dataset. PCA revealed some interesting patterns for the scalar value distribution. Kernel PCA techniques (vanilladot, RBF, Polynomial) and MDS failed to run on the entire dataset, or even a significant fraction of the dataset, and we resorted to creating an explicit feature map followed by conventional PCA. Clustering using K-means and PAM in the new basis set seems to produce promising results. Understanding the new basis set in the scientific context of the problem is challenging, and we are currently working to further examine and interpret the results.
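As a stdlib-only illustration of the clustering step that proved most useful here, the following is a plain Lloyd's-algorithm k-means on 2D points (initialization by the first k points, which is fine for a sketch but not robust in general):

```python
def kmeans(points, k, iters=50):
    """Plain k-means (Lloyd's algorithm) on a list of (x, y) tuples.
    Returns the final centers and the grouping of points."""
    centers = list(points[:k])            # naive deterministic init
    for _ in range(iters):
        # assign each point to its nearest center
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2 +
                                  (p[1] - centers[i][1]) ** 2)
            groups[j].append(p)
        # move each center to the mean of its group
        centers = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups
```

pam (k-medoids) differs only in restricting centers to actual data points, which makes it more robust to the outliers common in noisy QMC walks.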
Monte Carlo calculations of electron beam quality conversion factors for several ion chamber types
Muir, B. R.; Rogers, D. W. O.
2014-11-01
Purpose: To provide a comprehensive investigation of electron beam reference dosimetry using Monte Carlo simulations of the response of 10 plane-parallel and 18 cylindrical ion chamber types. Specific emphasis is placed on the determination of the optimal shift of the chamber's effective point of measurement (EPOM) and beam quality conversion factors. Methods: The EGSnrc system is used for calculations of the absorbed dose to gas in ion chamber models and the absorbed dose to water as a function of depth in a water phantom on which cobalt-60 and several electron beam source models are incident. The optimal EPOM shifts of the ion chambers are determined by comparing calculations of R₅₀ converted from I₅₀ (calculated using ion chamber simulations in phantom) to R₅₀ calculated using simulations of the absorbed dose to water vs depth in water. Beam quality conversion factors are determined as the calculated ratio of the absorbed dose to water to the absorbed dose to air in the ion chamber at the reference depth in a cobalt-60 beam to that in electron beams. Results: For most plane-parallel chambers, the optimal EPOM shift is inside the active cavity but different from the shift determined with water-equivalent scaling of the front window of the chamber. These optimal shifts for plane-parallel chambers also reduce the scatter of beam quality conversion factors, k_Q, as a function of R₅₀. The optimal shift of cylindrical chambers is found to be less than the 0.5 r_cav recommended by current dosimetry protocols. In most cases, the values of the optimal shift are close to 0.3 r_cav. Values of k_ecal are calculated and compared to those from the TG-51 protocol, and differences are explained using accurate individual correction factors for a subset of ion chambers investigated. High-precision fits to beam quality conversion factors normalized to unity in a beam with R₅₀ = 7.5 cm (k′_Q) are provided. 
These factors avoid the use of gradient correction factors as used in the TG-51 protocol, although a chamber-dependent optimal shift in the EPOM is required when using plane-parallel chambers, while no shift is needed with cylindrical chambers. The sensitivity of these results to parameters used to model the ion chambers is discussed and the uncertainty related to the practical use of these results is evaluated. Conclusions: These results will prove useful as electron beam reference dosimetry protocols are being updated. The analysis of this work indicates that cylindrical ion chambers may be appropriate for use in low-energy electron beams, but measurements are required to characterize their use in these beams.
Cascade annealing simulations of bcc iron using object kinetic Monte Carlo
Xu, Haixuan; Osetskiy, Yury N; Stoller, Roger E
2012-01-01
Simulations of displacement cascade annealing were carried out using object kinetic Monte Carlo (OKMC) based on an extensive MD database including various primary knock-on atom energies and directions. The sensitivity of the results to a broad range of material and model parameters was examined. The diffusion mechanism of interstitial clusters has been identified to have the most significant impact on the fraction of stable interstitials that escape the cascade region. The maximum level of recombination was observed for the limiting case in which all interstitial clusters exhibit 3D random walk diffusion. The OKMC model was parameterized using two alternative sets of defect migration and binding energies, one from ab initio calculations and the second from an empirical potential. The two sets of data predict essentially the same fraction of surviving defects but different times associated with the defect escape processes. This study provides a comprehensive picture of the first phase of long-term defect evolution in bcc iron and generates information that can be used as input data for mean field rate theory (MFRT) to predict the microstructure evolution of materials under irradiation. In addition, the limitations of the current OKMC model are discussed and a potential way to overcome these limitations is outlined.
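The event-selection core of an object kinetic Monte Carlo code is the residence-time (BKL) algorithm: pick an event in proportion to its rate, then advance the clock stochastically. A minimal sketch with made-up rates (not the migration energies of the paper):

```python
import math
import random

def kmc_step(rates, rng):
    """One residence-time (BKL) step: choose event i with probability
    rate_i / R_total, then advance the clock by dt = -ln(u) / R_total."""
    total = sum(rates)
    x = rng.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if x < acc:
            break
    return i, -math.log(rng.random()) / total

# Two illustrative event types: a fast interstitial-cluster hop and a
# slow vacancy hop (rates are arbitrary values, in 1/s).
rng = random.Random(1)
picks, t = [0, 0], 0.0
for _ in range(10000):
    i, dt = kmc_step([9.0, 1.0], rng)
    picks[i] += 1
    t += dt
```

In a real OKMC code each mobile defect object contributes rates computed from its migration energy (Arrhenius factors), and the 1D vs 3D cluster diffusion mechanisms discussed above change which events exist, not the selection algorithm.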
MONTE CARLO SIMULATIONS OF PERIODIC PULSED REACTOR WITH MOVING GEOMETRY PARTS
Cao, Yan; Gohar, Yousry
2015-11-01
In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical to produce periodic power pulses. Such a periodic state change is accomplished by a periodic movement of specific reactor parts, such as control rods or reflector sections. The analysis of such a reactor is difficult to perform with current reactor physics computer programs. Based on past experience, the point kinetics approximation gives considerable errors in predicting the magnitude and the shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the transfer function TRCL/TR of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during a neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the reactor dynamics with movable geometry parts.
von Wittenau, A; Aufderheide, M B; Henderson, G L
2010-05-07
Given the cost and lead-times involved in high-energy proton radiography, it is prudent to model proposed radiographic experiments to see if the images predicted would return useful information. We recently modified our raytracing transmission radiography modeling code HADES to perform simplified Monte Carlo simulations of the transport of protons in a proton radiography beamline. Beamline objects include the initial diffuser, vacuum magnetic fields, windows, angle-selecting collimators, and objects described as distorted 2D (planar or cylindrical) meshes or as distorted 3D hexahedral meshes. We present an overview of the algorithms used for the modeling and code timings for simulations through typical 2D and 3D meshes. We next calculate expected changes in image blur as scattering materials are placed upstream and downstream of a resolution test object (a 3 mm thick sheet of tantalum, into which 0.4 mm wide slits have been cut), and as the current supplied to the focusing magnets is varied. We compare and contrast the resulting simulations with the results of measurements obtained at the 800 MeV Los Alamos LANSCE Line-C proton radiography facility.
Electrolyte pore/solution partitioning by expanded grand canonical ensemble Monte Carlo simulation
Moucka, Filip; Bratko, Dusan; Luzar, Alenka
2015-03-28
Using a newly developed grand canonical Monte Carlo approach based on fractional exchanges of dissolved ions and water molecules, we studied equilibrium partitioning of both components between laterally extended apolar confinements and surrounding electrolyte solution. Accurate calculations of the Hamiltonian and tensorial pressure components at anisotropic conditions in the pore required the development of a novel algorithm for a self-consistent correction of nonelectrostatic cut-off effects. At pore widths above the kinetic threshold to capillary evaporation, the molality of the salt inside the confinement grows in parallel with that of the bulk phase, but presents a nonuniform width-dependence, being depleted at some and elevated at other separations. The presence of the salt enhances the layered structure in the slit and lengthens the range of inter-wall pressure exerted by the metastable liquid. Solvation pressure becomes increasingly repulsive with growing salt molality in the surrounding bath. Depending on the sign of the excess molality in the pore, the wetting free energy of pore walls is either increased or decreased by the presence of the salt. Because of simultaneous rise in the solution surface tension, which increases the free-energy cost of vapor nucleation, the rise in the apparent hydrophobicity of the walls has not been shown to enhance the volatility of the metastable liquid in the pores.
Krueger, Rachel A.; Haibach, Frederick G.; Fry, Dana L.; Gomez, Maria A.
2015-04-21
A centrality measure based on the time of first returns rather than the number of steps is developed and applied to finding proton traps and access points to proton highways in the doped perovskite oxides AZr0.875D0.125O3, where A is Ba or Sr and the dopant D is Y or Al. The high centrality region near the dopant is wider in the SrZrO3 systems than the BaZrO3 systems. In the aluminum-doped systems, a region of intermediate centrality (secondary region) is found in a plane away from the dopant. Kinetic Monte Carlo (kMC) trajectories show that this secondary region is an entry to fast conduction planes in the aluminum-doped systems, in contrast to the highest centrality area near the dopant trap. The yttrium-doped systems do not show this secondary region because the fast conduction routes are in the same plane as the dopant and hence already in the high centrality trapped area. This centrality measure complements kMC by highlighting key areas in trajectories. The limiting activation barriers found via kMC are in very good agreement with experiments and related to the barriers to escape dopant traps.
Uribe, R. M.; Salvat, F.; Cleland, M. R.; Berejka, A.
2009-03-10
The Monte Carlo code PENELOPE was used to simulate the irradiation of alanine coated film dosimeters with electron beams of energies from 1 to 5 MeV produced by a high-current industrial electron accelerator. This code includes a geometry package that defines complex quadratic geometries, such as those of the irradiation of products in an irradiation processing facility. In the present case the energy deposited in a water film at the surface of a wood parallelepiped was calculated using the program PENMAIN, which is a generic main program included in the PENELOPE distribution package. The results from the simulation were then compared with measurements performed by irradiating alanine film dosimeters with electrons using a 150 kW Dynamitron electron accelerator. The alanine films were placed on top of a set of wooden planks using the same geometrical arrangement as the one used for the simulation. The way the results from the simulation can be correlated with the actual measurements, taking into account the irradiation parameters, is described. An estimate of the percentage difference between measurements and calculations is also presented.
The hydrophobic effect in a simple isotropic water-like model: Monte Carlo study
Huš, Matej; Urbic, Tomaz
2014-04-14
Using Monte Carlo computer simulations, we show that a simple isotropic water-like model with two characteristic lengths can reproduce the hydrophobic effect and the solvation properties of small and large non-polar solutes. Influence of temperature, pressure, and solute size on the thermodynamic properties of apolar solute solvation in a water model was systematically studied, showing two different solvation regimes. Small particles can fit into the cavities around the solvent particles, inducing additional order in the system and lowering the overall entropy. Large particles force the solvent to disrupt their network, increasing the entropy of the system. At low temperatures, the ordering effect of small solutes is very pronounced. Above the cross-over temperature, which strongly depends on the solute size, the entropy change becomes strictly positive. Pressure dependence was also investigated, showing a “cross-over pressure” where the entropy and enthalpy of solvation are the lowest. These results suggest two fundamentally different solvation mechanisms, as observed experimentally in water and computationally in various water-like models.
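The Metropolis acceptance rule underlying Monte Carlo studies of this kind can be sketched in a few lines. The toy sampler below works in one dimension with an arbitrary potential u(x); it is a minimal illustration of the method, not the paper's two-length-scale water-like model.

```python
import math
import random

def metropolis_samples(u, n_steps=20000, step=1.0, beta=1.0, seed=2):
    """Minimal 1D Metropolis sampler of the Boltzmann weight
    exp(-beta * u(x)): propose a uniform displacement, accept with
    probability min(1, exp(-beta * dU))."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n_steps):
        xp = x + rng.uniform(-step, step)       # trial move
        dU = beta * (u(xp) - u(x))
        if dU <= 0.0 or rng.random() < math.exp(-dU):
            x = xp                              # accept; else keep old x
        xs.append(x)
    return xs
```

For a harmonic potential u(x) = x²/2 at β = 1 the sampled ⟨x²⟩ should approach 1; in a real study the move set acts on particle coordinates and the potential carries the two characteristic lengths.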
Da, B.; Li, Z. Y.; Chang, H. C.; Ding, Z. J.; Mao, S. F.
2014-09-28
It has been found experimentally that carbon surface contamination strongly influences the spectral signals in reflection electron energy loss spectroscopy (REELS), especially at low primary electron energy. However, there is still little theoretical work dealing with the carbon contamination effect in REELS. Such work is required to predict REELS spectra for layered samples, providing an understanding of the experimental phenomena observed. In this study, we present a numerical calculation of the spatially varying differential inelastic mean free path for a sample made of a carbon contamination layer of varied thickness on a SrTiO{sub 3} substrate. A Monte Carlo simulation model for electron interaction with a layered sample is built by combining this inelastic scattering cross-section with Mott's cross-section for electron elastic scattering. The simulation results clearly show that the contribution of electron energy loss from carbon surface contamination increases with decreasing primary energy, due to the increased number of individual scattering processes along the parts of the trajectory inside the contamination layer. Comparison of the simulated spectra for different contamination-layer thicknesses and primary electron energies with experimental spectra clearly identifies that the carbon contamination in the measured sample was in the form of discontinuous islands rather than a uniform film.
Saha, Krishnendu; Straus, Kenneth J.; Glick, Stephen J.; Chen, Yu.
2014-08-28
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
Monte Carlo modeling of neutron and gamma-ray imaging systems
Hall, J.
1996-04-01
Detailed numerical prototypes are essential to the design of efficient and cost-effective neutron and gamma-ray imaging systems. We have exploited the unique capabilities of an LLNL-developed radiation transport code (COG) to develop code modules capable of simulating the performance of neutron and gamma-ray imaging systems over a wide range of source energies. COG allows us to simulate complex energy-, angle-, and time-dependent radiation sources, model 3-dimensional system geometries with "real world" complexity, specify detailed elemental and isotopic distributions, and predict the responses of various types of imaging detectors with full Monte Carlo accuracy. COG references detailed, evaluated nuclear interaction databases, allowing users to account for multiple scattering, energy straggling, and secondary particle production phenomena which may significantly affect the performance of an imaging system but may be difficult or even impossible to estimate using simple analytical models. This work presents examples illustrating the use of these routines in the analysis of industrial radiographic systems for thick target inspection, nonintrusive luggage and cargo scanning systems, and international treaty verification.
Collapse transitions in thermosensitive multi-block copolymers: A Monte Carlo study
Rissanou, Anastassia N.; Tzeli, Despoina S.; Anastasiadis, Spiros H.; Bitsanis, Ioannis A.
2014-05-28
Monte Carlo simulations are performed on a simple cubic lattice to investigate the behavior of a single linear multiblock copolymer chain of various lengths N. The chain of type (A{sub n}B{sub n}){sub m} consists of alternating A and B blocks, where A are solvophilic and B are solvophobic and N = 2nm. The conformations are classified in five cases of globule formation by the solvophobic blocks of the chain. The dependence of globule characteristics on the molecular weight and on the number of blocks participating in their formation is examined. The focus is on relatively high molecular weight chains (i.e., N in the range of 500-5000 units) and very different energetic conditions for the two blocks (a very good, almost athermal, solvent for A and a bad solvent for B). A rich phase behavior is observed as a result of the alternating architecture of the multiblock copolymer chain. We trust that thermodynamic equilibrium has been reached for chains of N up to 2000 units; for longer chains, however, kinetic entrapments are observed. The comparison among equivalent globules consisting of different numbers of B-blocks shows that the more solvophobic blocks constitute the globule, the larger its radius of gyration and the looser its structure. Comparisons between globules formed by the solvophobic blocks of the multiblock copolymer chain and their homopolymer analogs highlight the important role of the solvophilic A-blocks.
Byun, H. S.; Pirbadian, S.; Nakano, Aiichiro; Shi, Liang; El-Naggar, Mohamed Y.
2014-09-05
Microorganisms overcome the considerable hurdle of respiring extracellular solid substrates by deploying large multiheme cytochrome complexes that form 20 nanometer conduits to traffic electrons through the periplasm and across the cellular outer membrane. Here we report the first kinetic Monte Carlo simulations and single-molecule scanning tunneling microscopy (STM) measurements of the Shewanella oneidensis MR-1 outer membrane decaheme cytochrome MtrF, which can perform the final electron transfer step from cells to minerals and microbial fuel cell anodes. We find that the calculated electron transport rate through MtrF is consistent with previously reported in vitro measurements of the Shewanella Mtr complex, as well as in vivo respiration rates on electrode surfaces assuming a reasonable (experimentally verified) coverage of cytochromes on the cell surface. The simulations also reveal a rich phase diagram in the overall electron occupation density of the hemes as a function of electron injection and ejection rates. Single molecule tunneling spectroscopy confirms MtrF's ability to mediate electron transport between an STM tip and an underlying Au(111) surface, but at rates higher than expected from previously calculated heme-heme electron transfer rates for solvated molecules.
Monte Carlo modeling of transport in PbSe nanocrystal films
Carbone, I. Carter, S. A.; Zimanyi, G. T.
2013-11-21
A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance and that site energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using a nearly identical set of parameter values as in the electron simulations, hole mobility simulations reproduce measured mobilities that increase monotonically with particle size over two orders of magnitude.
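The thermally activated hopping events described above can be sketched with a kinetic Monte Carlo loop. The Miller-Abrahams-style rate and the one-dimensional chain below are illustrative assumptions, not the paper's size-dependent nanocrystal model; the function names are invented.

```python
import math
import random

def ma_rate(dE, nu0=1.0, kT=0.025):
    """Miller-Abrahams-style hop rate (assumed form): uphill hops are
    penalized by a Boltzmann factor, downhill hops occur at nu0."""
    return nu0 * math.exp(-dE / kT) if dE > 0 else nu0

def kmc_walk(energies, n_hops=1000, kT=0.025, seed=3):
    """Kinetic Monte Carlo walk of one carrier on a 1D chain of site
    energies with periodic boundaries; returns (final_site, elapsed_time).
    Waiting times are exponential with the total escape rate."""
    rng = random.Random(seed)
    n, site, t = len(energies), 0, 0.0
    for _ in range(n_hops):
        nbrs = [(site - 1) % n, (site + 1) % n]
        rates = [ma_rate(energies[j] - energies[site], kT=kT) for j in nbrs]
        R = sum(rates)
        t += -math.log(rng.random()) / R              # exponential waiting time
        site = nbrs[0] if rng.random() * R < rates[0] else nbrs[1]
    return site, t
```

With disordered site energies the elapsed time per hop grows, which is the mechanism by which energetic disorder suppresses mobility in the films described above.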
A Monte Carlo Analysis of Gas Centrifuge Enrichment Plant Process Load Cell Data
Garner, James R; Whitaker, J Michael
2013-01-01
As uranium enrichment plants increase in number, capacity, and types of separative technology deployed (e.g., gas centrifuge, laser, etc.), more automated safeguards measures are needed to enable the IAEA to maintain safeguards effectiveness in a fiscally constrained environment. Monitoring load cell data can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards objective of confirming operations as declared (i.e., no undeclared activities), but care must be taken to fully protect the operator's proprietary and classified information related to operations. Staff at ORNL, LANL, JRC/ISPRA, and the University of Glasgow are investigating monitoring of the process load cells at feed and withdrawal (F/W) stations to improve international safeguards at enrichment plants. A key question that must be resolved is the necessary frequency of recording data from the process F/W stations. Several studies have analyzed data collected at a fixed frequency. This paper contributes to load cell process monitoring research by presenting an analysis of Monte Carlo simulations to determine the expected errors caused by low-frequency sampling and their impact on material balance calculations.
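The effect of sampling frequency on a material balance can be illustrated with a toy simulation. The constant flow, noise level, and rectangle-rule integration below are invented for illustration and are not plant data or the paper's actual model.

```python
import random

def balance_error(sample_every, n_seconds=3600, flow=1.0, noise=0.05, seed=7):
    """Toy Monte Carlo of a feed-station load cell: integrate a noisy,
    constant mass flow from sparse samples and return the relative error
    against the true transferred mass."""
    rng = random.Random(seed)
    true_mass = flow * n_seconds
    est = 0.0
    for t in range(0, n_seconds, sample_every):
        reading = flow + rng.gauss(0.0, noise)    # noisy instantaneous flow
        est += reading * sample_every             # rectangle-rule integral
    return abs(est - true_mass) / true_mass
```

Averaged over many runs, sparse sampling (e.g., every 600 s) yields a larger balance error than dense sampling (every 1 s), which is the kind of trade-off the study quantifies.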
Prasad, Manish; Conforti, Patrick F.; Garrison, Barbara J.
2007-08-28
The coarse grained chemical reaction model is enhanced to build a molecular dynamics (MD) simulation framework with an embedded Monte Carlo (MC) based reaction scheme. The MC scheme utilizes predetermined reaction chemistry, energetics, and rate kinetics of materials to incorporate chemical reactions occurring in a substrate into the MD simulation. The kinetics information is utilized to set the probabilities for the types of reactions to perform based on radical survival times and reaction rates. Implementing a reaction involves changing the reactant species' types, which alters their interaction potentials and thus produces the required energy change. We discuss the application of this method to study the initiation of ultraviolet laser ablation in poly(methyl methacrylate). The use of this scheme enables the modeling of all possible photoexcitation pathways in the polymer. It also permits a direct study of the role of thermal, mechanical, and chemical processes that can set off ablation. We demonstrate that the role of laser induced heating, thermomechanical stresses, pressure wave formation and relaxation, and thermochemical decomposition of the polymer substrate can be investigated directly by suitably choosing the potential energy and chemical reaction energy landscape. The results highlight the usefulness of such a modeling approach by showing that various processes in polymer ablation are intricately linked, leading to the transformation of the substrate and its ejection. The method, in principle, can be utilized to study systems where chemical reactions are expected to play a dominant role or interact strongly with other physical processes.
Chibani, Omar; Ma, Charlie C-M
2014-05-15
Purpose: To present a new accelerated Monte Carlo code for CT-based dose calculations in high dose rate (HDR) brachytherapy. The new code (HDRMC) accounts for both tissue and nontissue heterogeneities (applicator and contrast medium). Methods: HDRMC uses a fast ray-tracing technique and detailed physics algorithms to transport photons through a 3D mesh of voxels representing the patient anatomy with the applicator and contrast medium included. A precalculated phase space file for the {sup 192}Ir source is used as the source term. HDRMC is calibrated to calculate absolute dose for real plans. A postprocessing technique is used to include the exact density and composition of nontissue heterogeneities in the 3D phantom. Dwell positions and angular orientations of the source are reconstructed using data from the treatment planning system (TPS). Structure contours are also imported from the TPS to recalculate dose-volume histograms. Results: HDRMC was first benchmarked against the MCNP5 code for a single source in homogeneous water and for a loaded gynecologic applicator in water. The accuracy of the voxel-based applicator model used in HDRMC was also verified by comparing 3D dose distributions and dose-volume parameters obtained using 1-mm{sup 3} versus 2-mm{sup 3} phantom resolutions. HDRMC can calculate the 3D dose distribution for a typical HDR cervix case with 2-mm resolution in 5 min on a single CPU. Examples of heterogeneity effects for two clinical cases (cervix and esophagus) were demonstrated using HDRMC. Neglecting tissue heterogeneity for the esophageal case leads to overestimates of the CTV D90, CTV D100, and spinal cord maximum dose by 3.2%, 3.9%, and 3.6%, respectively. Conclusions: A fast Monte Carlo code for CT-based dose calculations which does not require a prebuilt applicator model has been developed for HDR brachytherapy treatments that use CT-compatible applicators.
Tissue and nontissue heterogeneities should be taken into account in modern HDR brachytherapy planning.
BENCHMARK TESTS FOR MARKOV CHAIN MONTE CARLO FITTING OF EXOPLANET ECLIPSE OBSERVATIONS
Rogers, Justin; Lopez-Morales, Mercedes; Apai, Daniel; Adams, Elisabeth
2013-04-10
Ground-based observations of exoplanet eclipses provide important clues to the planets' atmospheric physics, yet systematics in light curve analyses are not fully understood. It is unknown whether measurements suggesting near-infrared flux densities brighter than models predict are real, or artifacts of the analysis processes. We created a large suite of model light curves, using both synthetic and real noise, and tested the common process of light curve modeling and parameter optimization with a Markov Chain Monte Carlo algorithm. With synthetic white noise models, we find that input eclipse signals are generally recovered within 10% accuracy for eclipse depths greater than the noise amplitude, and to smaller depths for higher sampling rates and longer baselines. Red noise models show greater discrepancies between input and measured eclipse signals, often biased in one direction. In real data, we find that systematic biases result even with a complex model to account for trends, and significant false eclipse signals may appear in a non-Gaussian distribution. To quantify the bias and validate an eclipse measurement, we compare both the planet-hosting star and several of its neighbors to a separately chosen control sample of field stars. Re-examining the Rogers et al. Ks-band measurement of CoRoT-1b finds an eclipse 3190{sup +370}{sub -440} ppm deep centered at {phi}{sub me} = 0.50418{sup +0.00197}{sub -0.00203}. Finally, we provide and recommend the use of selected data sets we generated as a benchmark test for eclipse modeling and analysis routines, and propose criteria to verify eclipse detections.
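The core of such a light-curve analysis is a Markov Chain Monte Carlo sampler over the eclipse parameters. The single-parameter, box-shaped eclipse model below is a deliberately minimal sketch with white Gaussian noise, not the paper's full treatment of baselines and red-noise systematics; all names and values are illustrative.

```python
import math
import random

def mcmc_eclipse_depth(flux, sigma, in_eclipse, n_steps=5000, seed=4):
    """Metropolis sampling of a single eclipse depth d for a box model:
    flux = 1 - d inside eclipse, 1 outside, with white noise sigma."""
    rng = random.Random(seed)
    def loglike(d):
        return -0.5 * sum(((f - (1.0 - d if ine else 1.0)) / sigma) ** 2
                          for f, ine in zip(flux, in_eclipse))
    d = 0.0
    ll = loglike(d)
    chain = []
    for _ in range(n_steps):
        dp = d + rng.gauss(0.0, 0.001)            # random-walk proposal
        llp = loglike(dp)
        if math.log(rng.random() + 1e-300) < llp - ll:
            d, ll = dp, llp                       # accept
        chain.append(d)
    return chain
```

For a synthetic light curve with a 3000 ppm deep eclipse and 2000 ppm noise, the post-burn-in chain mean should recover the input depth to within the expected statistical error, which is the kind of recovery test the benchmark suite formalizes.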
Spadea, Maria Francesca; Verburg, Joost Mathias; Seco, Joao; Baroni, Guido
2014-01-15
Purpose: The aim of the study was to evaluate the dosimetric impact of low-Z and high-Z metallic implants on IMRT plans. Methods: Computed tomography (CT) scans of three patients were analyzed to study effects due to the presence of titanium (low-Z), and platinum and gold (high-Z) inserts. To eliminate artifacts in CT images, a sinogram-based metal artifact reduction algorithm was applied. IMRT dose calculations were performed on both the uncorrected and corrected images using a commercial planning system (convolution/superposition algorithm) and an in-house Monte Carlo platform. Dose differences between uncorrected and corrected datasets were computed and analyzed using the gamma index passing rate (P{sub γ<1}), setting 2 mm and 2% as the distance-to-agreement and dose difference criteria, respectively. Beam-specific depth dose profiles across the metal were also examined. Results: Dose discrepancies between corrected and uncorrected datasets were not significant for the low-Z material. High-Z materials caused underdosage of 20%-25% in the region surrounding the metal and overdosage of 10%-15% downstream of the hardware. The gamma index test yielded P{sub γ<1} > 99% for all low-Z cases, while for high-Z cases it returned 91% < P{sub γ<1} < 99%. Analysis of the depth dose curve of a single beam for the low-Z cases revealed that, although the dose attenuation is altered inside the metal, it does not differ downstream of the insert. However, for high-Z metal implants the dose is increased by up to 10%-12% around the insert. In addition, the Monte Carlo method was more sensitive to the presence of metal inserts than the superposition/convolution algorithm. Conclusions: The reduction of metal artifacts in CT images is relevant for high-Z implants. In this case, dose distributions should be calculated using Monte Carlo algorithms, given their superior accuracy in dose modeling in and around the metal.
In addition, knowledge of the composition of the metal inserts significantly improves the accuracy of the Monte Carlo dose calculation.
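The 2 mm / 2% gamma test quoted above can be written down compactly in one dimension. This bare-bones version uses a global, fractional dose-difference criterion and an exhaustive search over evaluation points; clinical implementations work in 3D and interpolate between grid points.

```python
import math

def gamma_index_1d(ref, evl, dx, dta=2.0, dd=0.02):
    """1D gamma index: for each reference point, minimize the combined
    distance-to-agreement (dta, same units as dx) and dose-difference
    (dd, fractional) metric over all evaluated points. gamma < 1 passes."""
    gammas = []
    for i, dref in enumerate(ref):
        best = float("inf")
        for j, dev in enumerate(evl):
            dist = (i - j) * dx                          # spatial offset
            g2 = (dist / dta) ** 2 + ((dev - dref) / dd) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas
```

A passing rate such as P{sub γ<1} is then simply the fraction of points with gamma below 1; a flat 2% dose offset sits exactly at gamma = 1.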
Fang, Yuan; Karim, Karim S.; Badano, Aldo
2014-01-15
Purpose: The authors describe the modification to a previously developed Monte Carlo model of a semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package, ARTEMIS, developed by Fang et al. ["Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se," Med. Phys. 39(1), 308-319 (2012)]. The burst model generates a cloud of electron-hole pairs based on electron velocity, energy deposition, and material parameters distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). A simple first-hit (FH) and a more detailed but computationally expensive nearest-neighbor (NN) recombination algorithm are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with the Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for the generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/μm, while the recombination fraction decreased for the SSA compared to the SUV model at 30 V/μm. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated against the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature.
For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation of many electron-hole pairs. The SSA model is more sensitive to the effect of electric field than the SUV model, and the NN and FH recombination algorithms did not significantly affect simulation results.
Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)
2012-07-01
The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to the GPU is usually very straightforward due to the 'embarrassingly parallel' nature of MC codes. However, the situation is different for eigenvalue calculations, which are performed on a generation-by-generation basis, so that thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of {approx}2 on a dual GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)
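The generation-by-generation structure mentioned above can be sketched with a zero-dimensional toy model in which k has the analytic value ν·p{sub fission}. Real codes track neutrons in space and energy; the function below keeps only the generation bookkeeping and is entirely illustrative.

```python
import random

def k_eigen_mc(p_fission, nu, n_per_gen=5000, n_gen=30, n_skip=10, seed=5):
    """Generation-based MC estimate of k for a zero-dimensional one-group
    toy: each neutron is absorbed, causes fission with probability
    p_fission, and then yields an integer number of neutrons with mean nu.
    The population is renormalized to n_per_gen each generation."""
    rng = random.Random(seed)
    ks = []
    for _ in range(n_gen):
        births = 0
        for _ in range(n_per_gen):
            if rng.random() < p_fission:          # absorption causing fission
                # integer yield with mean nu (e.g., 2 or 3 for nu = 2.5)
                births += int(nu) + (1 if rng.random() < nu - int(nu) else 0)
        ks.append(births / n_per_gen)             # k estimate this generation
    return sum(ks[n_skip:]) / len(ks[n_skip:])    # average active generations
```

With p_fission = 0.4 and ν = 2.5 the system is exactly critical (k = 1); on a GPU each neutron history is an independent thread, but the generation boundary forces the synchronization the paper discusses.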
Su, L.; Du, X.; Liu, T.; Xu, X. G.
2013-07-01
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energy considered ranges from a few hundred keV to 30 MeV. Moller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the Continuously Slowing Down Approximation (CSDA). Photon transport was dealt with using the delta tracking method. Photoelectric effect, Compton scattering, and pair production were modeled. Voxelised geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6x10{sup 6} electron histories were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a single CPU core. (authors)
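The delta (Woodcock) tracking named above for photon transport can be sketched in a few lines: flight distances are sampled with a constant majorant cross-section, and a tentative collision is accepted as real with probability σ(x)/σ{sub maj}, so material boundaries never need to be crossed explicitly. The one-dimensional setting below is an illustrative simplification.

```python
import math
import random

def delta_track_free_path(sigma_of_x, sigma_maj, rng, x0=0.0):
    """Woodcock (delta) tracking in 1D: sample the collision point of a
    photon in a medium with spatially varying cross-section sigma_of_x,
    using a constant majorant sigma_maj >= sigma_of_x everywhere.
    Rejected ('virtual') collisions simply continue the flight."""
    x = x0
    while True:
        # tentative exponential flight with the majorant cross-section
        x += -math.log(1.0 - rng.random()) / sigma_maj
        if rng.random() < sigma_of_x(x) / sigma_maj:  # real collision?
            return x
```

For a homogeneous medium with σ = 0.5 and majorant 1.0, the sampled free paths still average 1/σ = 2, showing that the virtual collisions do not bias the result.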
Lopez-Pino, N.; Padilla-Cabal, F.; Garcia-Alvarez, J. A.; Vazquez, L.; D'Alessandro, K.; Correa-Alfonso, C. M.; Godoy, W.; Maidana, N. L.; Vanin, V. R.
2013-05-06
A detailed characterization of an X-ray Si(Li) detector was performed to obtain the energy dependence of its efficiency in the photon energy range of 6.4-59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters for the detector were used in the simulation. A complete computerized tomography (CT) scan of the detector made it possible to find the correct crystal dimensions and position inside the capsule. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.
Talamo, A.; Gohar, Y. (Nuclear Engineering Division)
2011-05-12
This study investigates the performance of the YALINA Booster subcritical assembly, located in Belarus, during operation with high (90%), medium (36%), and low (21%) enriched uranium fuels in the assembly's fast zone. The YALINA Booster is a zero-power, subcritical assembly driven by a conventional neutron generator. It was constructed for the purpose of investigating the static and dynamic neutronics properties of accelerator driven subcritical systems, and to serve as a fast neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor-actinides. The first part of this study analyzes the assembly's performance with several fuel types. The MCNPX and MONK Monte Carlo codes were used to determine effective and source neutron multiplication factors, effective delayed neutron fraction, prompt neutron lifetime, neutron flux profiles and spectra, and neutron reaction rates produced from the use of three neutron sources: californium, deuterium-deuterium, and deuterium-tritium. In the latter two cases, the external neutron source operates in pulsed mode. The results discussed in the first part of this report show that the use of low enriched fuel in the fast zone of the assembly diminishes neutron multiplication. Therefore, the discussion in the second part of the report focuses on finding alternative fuel loading configurations that enhance neutron multiplication while using low enriched uranium fuel. It was found that arranging the interface absorber between the fast and the thermal zones in a circular rather than a square array is an effective method of operating the YALINA Booster subcritical assembly without downgrading neutron multiplication relative to the original value obtained with the use of the high enriched uranium fuels in the fast zone.
Dong, Han; Sharma, Diksha; Badano, Aldo
2014-12-15
Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of the point response, pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics as a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. 
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
Hardiansyah, D.; Haryanto, F.; Male, S.
2014-09-30
Prism is a non-commercial Radiotherapy Treatment Planning System (RTPS) developed by Ira J. Kalet of Washington University. An inhomogeneity factor is included in the Prism TPS dose calculation. The aim of this study is to investigate the sensitivity of the dose calculation in Prism using Monte Carlo simulation. A phase space source from the head of the linear accelerator (LINAC) is used as input to the Monte Carlo simulation. To achieve this aim, the Prism dose calculation is compared with an EGSnrc Monte Carlo simulation, and the percentage depth dose (PDD) and R50 from both calculations are examined. BEAMnrc simulates electron transport in the LINAC head and produces a phase space file, which is then used as DOSXYZnrc input to simulate electron transport in the phantom. The study starts with a commissioning process in a water phantom, in which the Monte Carlo simulation is adjusted to match the Prism RTPS; the commissioning result is then used for the study of the inhomogeneity phantom. The physical parameters of the inhomogeneity phantom varied in this study are the density, location, and thickness of the tissue. Commissioning showed that the optimum energy of the Monte Carlo simulation for the 6 MeV electron beam is 6.8 MeV, using R50 and PDD with practical range (R{sub p}) as references. In the inhomogeneity study, the average deviation for all cases in the region of interest is below 5%. Based on ICRU recommendations, Prism shows good ability to calculate the radiation dose in inhomogeneous tissue.
Cranmer-Sargison, G.; Weston, S.; Evans, J. A.; Sidhu, N. P.; Thwaites, D. I.
2011-12-15
Purpose: The goal of this work was to implement a recently proposed small field dosimetry formalism [Alfonso et al., Med. Phys. 35(12), 5179-5186 (2008)] for a comprehensive set of diode detectors and provide the required Monte Carlo generated factors to correct measurement. Methods: Jaw collimated square small field sizes of side 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, and 3.0 cm normalized to a reference field of 5.0 cm x 5.0 cm were used throughout this study. Initial linac modeling was performed with electron source parameters at 6.0, 6.1, and 6.2 MeV with the Gaussian FWHM decreased in steps of 0.010 cm from 0.150 to 0.100 cm. DOSRZnrc was used to develop models of the IBA stereotactic field diode (SFD) as well as the PTW T60008, T60012, T60016, and T60017 field diodes. Simulations were run and isocentric, detector specific, output ratios (OR{sub det}) calculated at depths of 1.5, 5.0, and 10.0 cm. This was performed using the following source parameter subset: 6.1 and 6.2 MeV with a FWHM = 0.100, 0.110, and 0.120 cm. The source parameters were finalized by comparing experimental detector specific output ratios with simulation. Simulations were then run with the active volume and surrounding materials set to water and the replacement correction factors calculated according to the newly proposed formalism. Results: In all cases, the experimental field size widths (at the 50% level) were found to be smaller than the nominal, and therefore, the simulated field sizes were adjusted accordingly. At a FWHM = 0.150 cm simulation produced penumbral widths that were too broad. The fit improved as the FWHM was decreased, yet for all but the smallest field size worsened again at a FWHM = 0.100 cm. The simulated OR{sub det} were found to be greater than, equivalent to and less than experiment for spot size FWHM = 0.100, 0.110, and 0.120 cm, respectively. This is due to the change in source occlusion as a function of FWHM and field size. 
The corrections required for the 0.5 cm field size were 0.95 (±1.0%) for the SFD, T60012, and T60017 diodes and 0.90 (±1.0%) for the T60008 and T60016 diodes, indicating measured output ratios to be 5% and 10% high, respectively. Our results also revealed the correction factors to be the same, within statistical variation, at all depths considered. Conclusions: A number of general conclusions are evident: (1) small field OR_det are very sensitive to the simulated source parameters, and therefore rigorous Monte Carlo linac model commissioning, with respect to measurement, must be pursued prior to use; (2) backscattered dose to the monitor chamber should be included in simulated OR_det calculations; (3) the corrections required for diode detectors are design dependent, and therefore detailed detector modeling is required; and (4) the reported detector-specific correction factors may be applied to experimental small field OR_det consistent with those presented here.
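The arithmetic of applying such a detector-specific correction is simple; a minimal sketch (the function and variable names are ours, and the measured value is purely illustrative — only the 0.95 factor comes from the abstract):

```python
# Hypothetical illustration of applying a detector-specific correction
# factor k_det to a measured small-field output ratio, in the spirit of
# the Alfonso et al. formalism. k = 0.95 (SFD, 0.5 cm field) is the
# abstract's value; the measured OR below is a made-up example.

def corrected_output_ratio(measured_or, k_det):
    """Corrected OR = measured OR x detector-specific correction factor."""
    return measured_or * k_det

measured = 0.700                                   # hypothetical measured OR
corrected = corrected_output_ratio(measured, 0.95) # removes the ~5% overshoot
```

A factor of 0.95 thus brings an output ratio that reads ~5% high back in line.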
Qiang, J.
2009-10-17
In this paper, we report on a study of ion back bombardment in a high average current radio-frequency (RF) photo-gun using a particle-in-cell/Monte Carlo simulation method. Using this method, we systematically studied the effects of gas pressure, RF frequency, RF initial phase, electric field profile, magnetic field, laser repetition rate, and ion species on the ion particle line density distribution, kinetic energy spectrum, and ion power line density distribution back-bombarding the photocathode. The simulation results suggest that the effects of ion back bombardment increase linearly with background gas pressure and laser repetition rate. The RF frequency significantly affects the ion motion inside the gun, so that the ion power deposition on the photocathode in an RF gun can be several orders of magnitude lower than that in a DC gun. Ion back bombardment can be minimized by appropriately choosing the electric field profile and the initial phase.
TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma
Sisniega, A; Zbijewski, W; Stayman, J; Yorkston, J; Aygun, N; Koliatsos, V; Siewerdsen, J
2014-06-15
Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with a baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding a 3.5-minute run time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate a uniformity improvement of 18 HU and a contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to an "oracle" constant-fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution.
The method is undergoing implementation in a novel CBCT dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
Mohammadyari, P; Faghihi, R; Mosleh-Shirazi, M; Lotfi, M; Meigooni, A
2014-06-01
Purpose: AccuBoost is the most modern method of breast brachytherapy; it delivers a boost to tissue compressed by a mammography unit. The dose distribution in uncompressed tissue, as well as in compressed tissue, is important and should be characterized. Methods: In this study, the mechanical behavior of the breast under mammography loading, the displacement of breast tissue, and the dose distributions in compressed and uncompressed tissue are investigated. Dosimetry was performed with two methods: Monte Carlo simulations using the MCNP5 code and thermoluminescence dosimeters. For the Monte Carlo simulations, the dose values in a cubical lattice were calculated using tally F6. The displacement of the breast elements was simulated by a finite element model and calculated using ABAQUS software, from which the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from MR images of 6 volunteers. Experimental dosimetry was performed by placing the thermoluminescence dosimeters into a polyvinyl alcohol breast-equivalent phantom and on the proximal edge of the compression plates toward the chest. Results: The results indicate that the cone applicators would deliver more than 95% of the dose to depths of 5 to 17 mm, while the round applicator will increase the skin dose. Nodal displacement, in the presence of gravity and a 60 N force, i.e., under mammography compression, showed 43% contraction in the loading direction and 37% expansion in the orthogonal orientation. Finally, the thermoluminescence dosimeter results are consistent with MCNP5 in the breast phantom and on the chest skin, with average percentage differences of 13.7±5.7 and 7.7±2.3, respectively. Conclusion: The major advantage of this kind of dosimetry is the ability to calculate 3D dose distributions by FE modeling.
Finally, polyvinyl alcohol is a reliable material as a breast tissue equivalent dosimetric phantom that provides the ability of TLD dosimetry for validation.
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
Xu, Y; Bai, T; Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X; Zhou, L
2014-06-15
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to the other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate the MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We studied the impact of photon histories and volume down-sampling factors on the accuracy of scatter estimation. A Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time was 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds.
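Steps 3-5 of the pipeline above (sparse-angle MC scatter, angular interpolation, subtraction from the raw projections) can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and all function names, array shapes, and the use of simple linear interpolation are our assumptions:

```python
import numpy as np

# Sketch: scatter is MC-estimated at a sparse set of gantry angles,
# linearly interpolated pixel-wise to all projection angles, and then
# subtracted from the raw projections before the final FDK pass.

def correct_projections(raw, sparse_angles, sparse_scatter, all_angles):
    """raw: (n_angles, ny, nx) projections; sparse_scatter: (n_sparse, ny, nx)
    MC scatter estimates at sparse_angles. Returns corrected projections."""
    _, ny, nx = raw.shape
    corrected = np.empty_like(raw)
    for iy in range(ny):
        for ix in range(nx):
            # interpolate the scatter signal across gantry angle, pixel-wise
            s = np.interp(all_angles, sparse_angles, sparse_scatter[:, iy, ix])
            corrected[:, iy, ix] = raw[:, iy, ix] - s
    return corrected
```

In practice the sparse estimates would also be denoised (the paper's step for low photon counts) before interpolation.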
This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003)
Souris, K; Lee, J; Sterpin, E
2014-06-15
Purpose: Recent studies have demonstrated the capability of graphics processing units (GPUs) to compute dose distributions using Monte Carlo (MC) methods within clinical time constraints. However, GPUs have a rigid vectorial architecture that favors the implementation of simplified particle transport algorithms adapted to specific tasks. Our new, fast, and multipurpose MC code, named MCsquare, runs on Intel Xeon Phi coprocessors. This technology offers 60 independent cores, and therefore more flexibility to implement fast yet generic MC functionalities, such as prompt gamma simulations. Methods: MCsquare implements several models and hence allows users to make their own tradeoff between speed and accuracy. A 200 MeV proton beam is simulated in a heterogeneous phantom using Geant4 and two configurations of MCsquare. The first is the most conservative and accurate: the method of fictitious interactions handles the interfaces, and secondary charged particles emitted in nuclear interactions are fully simulated. The second, faster configuration simplifies interface crossings and simulates only secondary protons after nuclear interaction events. Integral depth-dose and transversal profiles are compared to those of Geant4. Moreover, the production profile of prompt gammas is compared to PENH results. Results: Integral depth-dose and transversal profiles computed by MCsquare and Geant4 agree within 3%. The production of secondaries from nuclear interactions is slightly inaccurate at interfaces for the fastest configuration of MCsquare, but this is unlikely to have any clinical impact. The computation time ranges from 90 seconds with the most conservative settings down to 59 seconds in the fastest configuration. Finally, prompt gamma profiles are also in very good agreement with PENH results.
Conclusion: Our new, fast, and multipurpose Monte Carlo code simulates prompt gammas and calculates dose distributions in less than a minute, which complies with clinical time constraints. It has been successfully validated against Geant4. This work has been financially supported by InVivoIGT, a public/private partnership between UCL and IBA.
Li, Wenfang; Du, Jinjin; Wen, Ruijuan; Yang, Pengfei; Li, Gang; Zhang, Tiancai; Liang, Junjun
2014-03-17
We investigate the transmission of single-atom transits based on a strongly coupled cavity quantum electrodynamics system. By superposing the transit transmissions of a considerable number of atoms, we obtain the absorption spectra of the cavity induced by single atoms and determine the temperature of the cold atoms. The number of atoms passing through the microcavity on each release is also counted, and this number changes exponentially with the atom temperature. Monte Carlo simulations agree closely with the experimental results, and the initial temperature of the cold atoms is determined. Compared with the conventional time-of-flight (TOF) method, this approach avoids some uncertainties of the standard TOF and sheds new light on determining the temperature of cold atoms by counting atoms individually in a confined space.
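The qualitative link between cloud temperature and transit counts can be illustrated with a minimal Monte Carlo sketch (this is not the authors' simulation; the choice of cesium, the fall time, and the mode waist are illustrative assumptions): a hotter cloud spreads more in free fall, so fewer atoms cross the narrow cavity mode.

```python
import numpy as np

# Toy sketch: atoms released from a point-like cold cloud acquire thermal
# transverse velocities; after a fixed fall time, count how many land
# within the cavity mode waist. Colder clouds yield more transits.

KB = 1.380649e-23      # Boltzmann constant (J/K)
M_CS = 2.207e-25       # mass of a cesium atom (kg) -- assumed species

def transit_count(temp_k, n_atoms=20000, fall_time=0.03, waist=50e-6, seed=1):
    rng = np.random.default_rng(seed)
    sigma_v = np.sqrt(KB * temp_k / M_CS)   # 1D thermal velocity spread
    vx = rng.normal(0.0, sigma_v, n_atoms)  # transverse velocities
    x = vx * fall_time                      # transverse drift at the cavity
    return int(np.sum(np.abs(x) < waist))   # atoms inside the mode waist
```

Running this at, say, 10 uK versus 100 uK shows the sharp drop in transit number with temperature that the abstract exploits.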
MO-G-BRF-05: Determining Response to Anti-Angiogenic Therapies with Monte Carlo Tumor Modeling
Valentinuzzi, D; Simoncic, U; Jeraj, R; Titz, B
2014-06-15
Purpose: Patient response to anti-angiogenic therapies with vascular endothelial growth factor receptor tyrosine kinase inhibitors (VEGFR TKIs) is heterogeneous. This study investigates key biological characteristics that drive differences in patient response via Monte Carlo computational modeling capable of simulating tumor response to therapy with a VEGFR TKI. Methods: VEGFR TKIs potently block the receptors responsible for promoting angiogenesis in tumors. The model incorporates drug pharmacokinetic and pharmacodynamic properties, as well as patient-specific data on cellular proliferation derived from [18F]FLT-PET. The sensitivity of tumor response was assessed for multiple parameters, including initial partial oxygen tension (pO2), cell cycle time, daily vascular growth fraction, and daily vascular regression fraction. Results were benchmarked to clinical data (patients 2 weeks on the VEGFR TKI, followed by a 1-week drug holiday). The tumor pO2 was assumed to be uniform. Results: Among the investigated parameters, the simulated proliferation was most sensitive to the initial tumor pO2. An initial change of 5 mmHg can already result in significantly different levels of proliferation. The model reveals that hypoxic tumors (pO2 ≤ 20 mmHg) show the largest decrease in proliferation, with the mean FLT standardized uptake value (SUVmean) decreasing by at least 50% at the end of the clinical trial (day 21). Oxygenated tumors (pO2 > 20 mmHg) show a transient SUV decrease (30-50%) at the end of the treatment with the VEGFR TKI (day 14) but experience a rapid SUV rebound to close to the pre-treatment levels (70-110%) during the drug holiday (days 14-21), a phenomenon known as a proliferative flare.
Conclusion: The model's high sensitivity to the initial pO2 clearly emphasizes the need for experimental assessment of the pretreatment tumor hypoxia status, as it might be predictive of the response to anti-angiogenic therapies and of the occurrence of a proliferative flare. Experimental assessment of the other model parameters would further improve understanding of patient response.
Wang, Z; Gao, M
2014-06-01
Purpose: Monte Carlo (MC) simulation plays an important role in the proton pencil beam scanning (PBS) technique. However, MC simulation demands high computing power and is limited to the few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing for MC simulation of PBS beams. Methods: A GATE/GEANT4-based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bit, Amazon EC2). Single-spot integral depth dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the StarCluster software developed at MIT, a Linux cluster with 2-100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud, where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirements, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and the worker nodes as spot instances. The hourly cost for the 40-node cluster was $0.63, and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot the PDD and profile, with each job containing 500k events. The simulation completed within 1 hour, and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform for running proton PBS MC simulations. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate PBS MC studies, especially for newly established proton centers or individual researchers.
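The job split and cost figures quoted above follow from simple arithmetic; a short sketch (the helper names are ours; the event counts and hourly rate are the abstract's numbers):

```python
import math

# Back-of-the-envelope accounting for the cloud MC run described above.

def n_jobs(total_events, events_per_job):
    """Number of independent MC jobs needed to cover all events."""
    return math.ceil(total_events / events_per_job)

def run_cost(hourly_rate_usd, hours):
    """Cluster cost for a run of the given wall-clock duration."""
    return hourly_rate_usd * hours

jobs = n_jobs(10_000_000, 500_000)  # 10M events in 500k-event jobs -> 20 jobs
cost = run_cost(0.63, 1.0)          # 40-node cluster, ~1 hour -> ~$0.63
```

The 20 jobs spread across 40 spot-instance workers comfortably finish within the reported hour.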
SU-E-T-585: Commissioning of Electron Monte Carlo in Eclipse Treatment Planning System for TrueBeam
Yang, X; Lasio, G; Zhou, J; Lin, M; Yi, B; Guerrero, M
2014-06-01
Purpose: To commission the electron Monte Carlo (eMC) algorithm in the Eclipse Treatment Planning System (TPS) for TrueBeam linacs, including the evaluation of dose calculation accuracy for small fields and oblique beams and comparison with the existing eMC model for Clinacs. Methods: Electron beam percent depth doses (PDDs) and profiles with and without applicators, as well as output factors, were measured on two Varian TrueBeam machines. Measured data were compared against the Varian TrueBeam Representative Beam Data (VTBRBD). The selected data set was transferred into Eclipse for beam configuration. The dose calculation accuracy of eMC was evaluated for open fields, small cut-out fields, and oblique beams at different incident angles. The TrueBeam data were compared to the existing Clinac data and eMC model to evaluate the differences between linac types. Results: Our measured data indicated that electron beam PDDs from our TrueBeam machines are well matched to those from our Varian Clinac machines, but in-air profiles, cone factors, and open-field output factors are significantly different. The data from our two TrueBeam machines were well represented by the VTBRBD. Variations of TrueBeam PDDs and profiles were within the 2%/2 mm criteria for all energies, and the output factors for fields with and without applicators all agree within 2%. Obliquity factors for two clinically relevant applicator sizes (10×10 and 15×15 cm²) and three oblique angles (15, 30, and 45 degrees) were measured at the nominal R100, R90, and R80 of each electron beam energy. Comparisons of eMC calculations of obliquity factors and cut-out factors versus measurements will be presented. Conclusion: The eMC algorithm in the Eclipse TPS can be configured using the VTBRBD. Significant differences between TrueBeam and Clinacs were found in in-air profiles and open-field output factors. The accuracy of the eMC algorithm was evaluated for a wide range of cut-out factors and oblique incidence.
Muir, B. R.; Rogers, D. W. O.
2013-12-15
Purpose: To investigate recommendations for reference dosimetry of electron beams and gradient effects for the NE2571 chamber, and to provide beam quality conversion factors using Monte Carlo simulations of the PTW Roos and NE2571 ion chambers. Methods: The EGSnrc code system is used to calculate the absorbed dose to water and the dose to the gas in fully modeled ion chambers as a function of depth in water. Electron beams are modeled using realistic accelerator simulations as well as beams modeled as collimated point sources from realistic electron beam spectra or monoenergetic electrons. Beam quality conversion factors are calculated from ratios of the doses to water and to the air in the ion chamber in electron beams and a cobalt-60 reference field. The overall ion chamber correction factor is studied using calculations of water-to-air stopping power ratios. Results: The use of an effective point of measurement shift of 1.55 mm from the front face of the PTW Roos chamber, which places the point of measurement inside the chamber cavity, minimizes the difference between R50, the beam quality specifier, calculated from chamber simulations and that obtained using depth-dose calculations in water. A similar shift minimizes the variation of the overall ion chamber correction factor with depth up to the practical range and reduces the root-mean-square deviation of a fit to calculated beam quality conversion factors at the reference depth as a function of R50. Similarly, an upstream shift of 0.34 r_cav allows a more accurate determination of R50 from NE2571 chamber calculations and reduces the variation of the overall ion chamber correction factor with depth. Determining the gradient correction with a shift of 0.22 r_cav optimizes the root-mean-square deviation of a fit to calculated beam quality conversion factors if all beams investigated are considered.
However, if only clinical beams are considered, a good fit to the beam quality conversion factors is obtained without explicitly correcting for gradient effects. The inadequacy of R50 to uniquely specify beam quality for the accurate selection of kQ factors is discussed. Systematic uncertainties in beam quality conversion factors are analyzed for the NE2571 chamber and amount to between 0.4% and 1.2%, depending on the assumptions used. Conclusions: The calculated beam quality conversion factors for the PTW Roos chamber obtained here are in good agreement with literature data. These results characterize the use of an NE2571 ion chamber for reference dosimetry of electron beams, even in low-energy beams.
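The beam quality conversion factor discussed above is formed from ratios of MC-calculated doses in the beam of quality Q and in the cobalt-60 reference field; a hedged sketch of the bookkeeping (the notation follows the usual kQ definition; the numbers in the test are placeholders, not the paper's results):

```python
# Sketch of how a beam quality conversion factor is assembled from
# MC-calculated dose to water (D_w) and dose to the chamber gas (D_gas):
#
#   k_Q = [D_w / D_gas]_Q / [D_w / D_gas]_Co60

def k_q(dw_q, dgas_q, dw_co, dgas_co):
    """Beam quality conversion factor from water and cavity-gas doses."""
    return (dw_q / dgas_q) / (dw_co / dgas_co)
```

Identical water-to-gas dose ratios in both beams give kQ = 1 by construction.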
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
Pecchia, M.; D'Auria, F.; Mazzantini, O.
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes to perform realistic neutronic simulations. Core models of the Atucha-2 PHWR were therefore developed using MCNP5. In this work, a methodology was set up to collect the flux on the hexagonal mesh by which the Atucha-2 core is represented. The aim of this activity is to evaluate the effect of an obliquely inserted control rod on the neutron flux, in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetics coupled thermal-hydraulics model applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)
Looking for Auger signatures in III-nitride light emitters: A full-band Monte Carlo perspective
Bertazzi, Francesco Goano, Michele; Zhou, Xiangyu; Calciati, Marco; Ghione, Giovanni; Matsubara, Masahiko; Bellotti, Enrico
2015-02-09
Recent electron emission spectroscopy (EES) experiments on III-nitride light-emitting diodes (LEDs) have shown a correlation between droop onset and hot electron emission at the cesiated surface of the LED p-cap. The observed hot electrons have been interpreted as a direct signature of Auger recombination in the LED active region, as highly energetic Auger-excited electrons would be collected in long-lived satellite valleys of the conduction band and thus would not decay on their journey to the surface across the highly doped p-contact layer. We discuss this interpretation using a full-band Monte Carlo model based on first-principles electronic structure and lattice dynamics calculations. The results of our analysis suggest that Auger-excited electrons cannot be unambiguously detected in the LED structures used in the EES experiments. Additional experimental and simulation work is necessary to unravel the complex physics of GaN cesiated surfaces.
Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C.; Lim, T.-S.; Fann, Wunshain
2009-01-05
The number of negatively charged nitrogen-vacancy centers, (N-V)⁻, in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single-particle level. By taking into account the random dipole orientation of the multiple (N-V)⁻ fluorophores and simulating the probability distribution of their effective numbers (N_e), we found that the actual number (N_a) of the fluorophores is in linear correlation with N_e, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized light, respectively. We determined N_a = 8±1 for 28 nm FND particles prepared by 3 MeV proton irradiation.
Sarrut, David; Université Lyon 1; Centre Léon Bérard; Bardiès, Manuel; Marcatili, Sara; Mauxion, Thibault; Boussion, Nicolas; Freud, Nicolas; Létang, Jean-Michel; Jan, Sébastien; Maigne, Lydia; Perrot, Yann; Pietrzyk, Uwe; Robert, Charlotte; and others
2014-06-15
In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform, based on the GEANT4 toolkit, for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described, including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. A very practical feature of GATE, making it easy to model both a treatment and an imaging acquisition within the same framework, is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities.
Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal
2014-05-12
The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo mouse stroke model. The computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for the different tissue models. Although adding random noise to the DCS data resulted in αD_B variations, the mean errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not depend on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with the simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
Mayorga, P. A.; Departamento de Física Atómica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada; Brualla, L.; Sauerwein, W.; Lallena, A. M.
2014-01-15
Purpose: Retinoblastoma is the most common intraocular malignancy in early childhood. Patients treated with external beam radiotherapy respond very well to the treatment. However, owing to the genotype of children suffering from hereditary retinoblastoma, the risk of secondary radio-induced malignancies is high. The University Hospital of Essen has successfully treated these patients on a daily basis for nearly 30 years using a dedicated D-shaped collimator. The use of this collimator, which delivers a highly conformal small radiation field, gives very good results in controlling the primary tumor as well as in preserving visual function, while avoiding the devastating side effect of deformation of the midface bones. The purpose of the present paper is to propose a modified version of the D-shaped collimator that reduces the irradiation field even further, with the aim of also reducing the risk of radio-induced secondary malignancies. Concurrently, the new dedicated D-shaped collimator must be easier to build and at the same time produce dose distributions that differ only in field size from those obtained with the collimator currently in use. The aim of the former requirement is to facilitate the adoption of the authors' irradiation technique both at their own and at other hospitals. The fulfillment of the latter allows the authors to continue using the clinical experience gained over more than 30 years. Methods: The Monte Carlo code PENELOPE was used to study the effect that the different structural elements of the dedicated D-shaped collimator have on the absorbed dose distribution. To perform this study, the radiation transport through a Varian Clinac 2100 C/D operating at 6 MV was simulated in order to tally phase-space files, which were then used as radiation sources to simulate the considered collimators and the resulting dose distributions.
With the knowledge gained in that study, a new, simpler, D-shaped collimator is proposed. Results: The proposed collimator delivers a dose distribution which is 2.4 cm wide along the inferior-superior direction of the eyeball. This width is 0.3 cm narrower than that of the dose distribution obtained with the collimator currently in clinical use. The other relevant characteristics of the dose distribution obtained with the new collimator, namely, depth doses at clinically relevant positions, penumbrae width, and shape of the lateral profiles, are statistically compatible with the results obtained for the collimator currently in use. Conclusions: The smaller field size delivered by the proposed collimator still fully covers the planning target volume with at least 95% of the maximum dose at a depth of 2 cm and provides a safety margin of 0.2 cm, so ensuring an adequate treatment while reducing the irradiated volume.
Lin, J. Y. Y. [California Institute of Technology, Pasadena]; Aczel, Adam A. [ORNL]; Abernathy, Douglas L. [ORNL]; Nagler, Stephen E. [ORNL]; Buyers, W. J. L. [National Research Council of Canada]; Granroth, Garrett E. [ORNL]
2014-01-01
Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A. A. Aczel et al., Nature Communications 3, 1124 (2012)]. These modes are well described by 3D isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for the nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states (PDOS), and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximately Q-independent background to the spectrum at the oscillator mode positions. Temperature-dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T-dependence of the scattering from these modes is strongly influenced by the uranium lattice.
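The equally spaced modes invoked above follow directly from the 3D isotropic QHO spectrum; a short sketch (energies expressed in units of ħω, a textbook result rather than anything specific to the paper):

```python
# 3D isotropic quantum harmonic oscillator: levels E_n = (n + 3/2) hbar*omega
# (equal spacing hbar*omega) with degeneracy g_n = (n+1)(n+2)/2.

def qho3d_energy(n, hbar_omega=1.0):
    """Energy of level n in units of hbar*omega."""
    return (n + 1.5) * hbar_omega

def qho3d_degeneracy(n):
    """Number of states (nx, ny, nz) with nx+ny+nz = n."""
    return (n + 1) * (n + 2) // 2

# The observed spectrum's hallmark: constant level spacing.
spacings = [qho3d_energy(n + 1) - qho3d_energy(n) for n in range(5)]
```

The constant spacing is what shows up as the "extended series of equally spaced vibrational modes" in the neutron data.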
G. S. Chang; R. C. Pederson
2005-07-01
Mixed oxide (MOX) test capsules prepared with weapons-derived plutonium have been irradiated to a burnup of 50 GWd/t. The MOX fuel was fabricated at Los Alamos National Laboratory by a master-mix process and has been irradiated in the Advanced Test Reactor (ATR) at the Idaho National Laboratory (INL). Previous withdrawals of the same fuel occurred at 9, 21, 30, and 40 GWd/t. Oak Ridge National Laboratory (ORNL) manages this test series for the Department of Energy's Fissile Materials Disposition Program (FMDP). The fuel burnup analyses presented in this study were performed using MCWO, a well-developed tool that couples the Monte Carlo transport code MCNP with the isotope depletion and buildup code ORIGEN-2. MCWO analysis yields time-dependent and neutron-spectrum-dependent minor actinide and Pu concentrations for the ATR small I-irradiation test position. The purpose of this report is to validate both the weapons-grade mixed oxide (WG-MOX) test assembly model and the new fuel burnup analysis methodology by comparing the computed results against the neutron monitor measurements.
Many-body ab-initio diffusion quantum Monte Carlo applied to the strongly correlated oxide NiO
Mitra, Chandrima; Krogel, Jaron T.; Santana, Juan A.; Reboredo, Fernando A.
2015-10-28
We present a many-body diffusion quantum Monte Carlo (DMC) study of the bulk and defect properties of NiO. We find excellent agreement with experimental values, within 0.3%, 0.6%, and 3.5% for the lattice constant, cohesive energy, and bulk modulus, respectively. The quasiparticle bandgap was also computed, and the DMC result of 4.72 (0.17) eV compares well with the experimental value of 4.3 eV. Furthermore, DMC calculations of excited states at the L, Z, and Γ points of the Brillouin zone reveal a flat upper valence band for NiO, in good agreement with Angle Resolved Photoemission Spectroscopy results. To study defect properties, we evaluated the formation energies of the neutral and charged vacancies of oxygen and nickel in NiO. A formation energy of 7.2 (0.15) eV was found for the oxygen vacancy under oxygen-rich conditions. For the Ni vacancy, we obtained a formation energy of 3.2 (0.15) eV under Ni-rich conditions. These results confirm that NiO occurs as a p-type material with the dominant intrinsic vacancy defect being the Ni vacancy.
Reverse Monte Carlo simulation of Se{sub 80}Te{sub 20} and Se{sub 80}Te{sub 15}Sb{sub 5} glasses
Abdel-Baset, A. M.; Rashad, M.; Moharram, A. H.
2013-12-16
Two-dimensional Monte Carlo determination of the total pair distribution functions g(r) is carried out for Se{sub 80}Te{sub 20} and Se{sub 80}Te{sub 15}Sb{sub 5} alloys, and these are then used to assemble the three-dimensional atomic configurations using the reverse Monte Carlo simulation. The partial pair distribution functions g{sub ij}(r) indicate that the basic structural unit in the Se{sub 80}Te{sub 15}Sb{sub 5} glass is di-antimony tri-selenide units connected together through Se-Se and Se-Te chains. The structure of the Se{sub 80}Te{sub 20} alloy is a chain of Se-Te and Se-Se in addition to some rings of Se atoms.
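A reverse Monte Carlo refinement, as used above, accepts random atomic moves when they improve the match between a model pair histogram and the experimental one, and otherwise rejects them with a Metropolis-like probability. A minimal sketch (a toy pair-distance histogram stands in for a properly normalized g(r)):

```python
import math, random

def pair_hist(coords, bins, dr):
    """Crude pair-distance histogram, standing in for the model g(r)."""
    h = [0] * bins
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            k = int(math.dist(coords[i], coords[j]) / dr)
            if k < bins:
                h[k] += 1
    return h

def chi2(model, target):
    # goodness of fit between model and "experimental" histograms
    return sum((m - t) ** 2 for m, t in zip(model, target))

def rmc_step(coords, target, bins, dr, box, sigma=1.0, max_move=0.3):
    """One reverse Monte Carlo move: displace a random atom, keep the move
    if the fit improves, otherwise keep it only with Metropolis probability."""
    old = chi2(pair_hist(coords, bins, dr), target)
    i = random.randrange(len(coords))
    saved = coords[i]
    coords[i] = tuple(min(box, max(0.0, c + random.uniform(-max_move, max_move)))
                      for c in saved)
    new = chi2(pair_hist(coords, bins, dr), target)
    if new > old and random.random() >= math.exp((old - new) / (2 * sigma ** 2)):
        coords[i] = saved          # reject: revert the move
        return old
    return new

random.seed(3)
coords = [(0.1, 0.2, 0.3), (1.0, 1.1, 0.9), (2.0, 0.5, 1.5), (0.7, 2.0, 2.2)]
target = pair_hist(coords, 8, 0.5)     # pretend this is the experimental data
fit = rmc_step(coords, target, 8, 0.5, box=3.0)
```

Iterating such steps drives the configuration toward agreement with the measured distribution, which is how the three-dimensional configurations above are assembled.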
Schach Von Wittenau, Alexis E. (Livermore, CA)
2003-01-01
A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
Sheu, R; Tseng, T; Powers, A; Lo, Y
2014-06-01
Purpose: To provide commissioning and acceptance test data of the Varian Eclipse electron Monte Carlo model (eMC v.11) for the TrueBeam linac. We also investigated the uncertainties in beam model parameters and dose calculation results for different geometric configurations. Methods: For beam commissioning, a PTW CC13 thimble chamber and IBA Blue Phantom2 were used to collect PDDs and dose profiles in air. Cone factors were measured with a parallel plate chamber (PTW N23342) in solid water. GafChromic EBT3 films were used for dose calculation verification, compared with parallel plate chamber results in the following test geometries: oblique incidence, extended distance, small cutouts, elongated cutouts, irregular surface, and heterogeneous layers. Results: Four electron energies (6e, 9e, 12e, and 15e) and five cones (6×6, 10×10, 15×15, 20×20, and 25×25 cm²) with standard cutouts were calculated for different grid sizes (1, 1.5, 2, and 2.5 mm) and compared with chamber measurements. The results showed that calculations performed with a coarse grid size underestimated the absolute dose. The underestimation decreased as energy increased. For 6e, the underestimation (max 3.3%) was greater than the statistical uncertainty level (3%) and was systematically observed for all cone sizes. By using a 1 mm grid size, all the calculation results agreed with measurements within 5% for all test configurations. The calculations took 21 s and 46 s for 6e and 15e (2.5 mm grid size), respectively, distributed on 4 calculation servers. Conclusion: In general, commissioning the eMC dose calculation model on TrueBeam is straightforward, and the dose calculation is in good agreement with measurements for all test cases. Monte Carlo dose calculation provides more accurate results, which improves treatment planning quality. However, the normally accepted grid size (2.5 mm) would cause systematic underestimation in absolute dose calculation for lower energies, such as 6e. Users need to be cautious in this situation.
Les Houches Guidebook to Monte Carlo generators for hadron collider physics
Dobbs, M.A.
2004-08-24
Recently the collider physics community has seen significant advances in the formalisms and implementations of event generators. This review is a primer of the methods commonly used for the simulation of high energy physics events at particle colliders. We provide brief descriptions, references, and links to the specific computer codes which implement the methods. The aim is to provide an overview of the available tools, allowing the reader to ascertain which tool is best for a particular application, but also making clear the limitations of each tool.
Chen Zhaoquan [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian, Liaoning 116024 (China); State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Ye Qiubo [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); Communications Research Centre, 3701 Carling Ave., Ottawa K2H 8S2 (Canada); Xia Guangqing [State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian, Liaoning 116024 (China); Hong Lingli; Hu Yelin; Zheng Xiaoliang; Li Ping [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); Zhou Qiyan [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Hu Xiwei; Liu Minghai [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China)
2013-03-15
Although surface-wave plasma (SWP) sources have many industrial applications, the ionization process for SWP discharges is not yet well understood. The resonant excitation of surface plasmon polaritons (SPPs) has recently been proposed to produce SWP efficiently, and this work presents a numerical study of the mechanism. Specifically, SWP resonantly excited by SPPs at low pressure (0.25 Torr) are modeled using a particle-in-cell code, two-dimensional in the working space and three-dimensional in the velocity space, with the Monte Carlo collision method. Simulation results are sampled at different time steps, from which detailed information about the distribution of electrons and electromagnetic fields is obtained. Results show that the mode conversion between surface waves of SPPs and electron plasma waves (EPWs) occurs efficiently at the location where the plasma density is higher than 3.57 × 10{sup 17} m{sup -3}. Due to the locally enhanced electric field of the SPPs, the mode conversion between the surface waves of SPPs and EPWs is very strong, which plays a significant role in efficiently heating the SWP to the overdense state.
Fan, Yu; Zou, Ying; Sun, Jizhong; Wang, Dezhen [Key Laboratory of Materials Modification by Laser, Ion and Electron Beams (Ministry of Education), School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China)]; Stirner, Thomas [Department of Electronic Engineering, University of Applied Sciences Deggendorf, Edlmairstr. 6-8, D-94469 Deggendorf (Germany)]
2013-10-15
The influence of an applied magnetic field on plasma-related devices has a wide range of applications. Its effects on a plasma have been studied for years; however, there are still many issues that are not well understood. This paper reports a detailed kinetic study, with the two-dimension-in-space and three-dimension-in-velocity particle-in-cell plus Monte Carlo collision method, of the role of the E×B drift in a capacitive argon discharge, similar to the experiment of You et al. [Thin Solid Films 519, 6981 (2011)]. The parameters chosen in the present study for the external magnetic field are in a range common to many applications. Two basic configurations of the magnetic field are analyzed in detail: the magnetic field direction parallel to the electrode, with or without a gradient. With an extensive parametric study, we give the detailed influence of the drift on the collective behaviors of the plasma over a two-dimensional domain, which cannot be represented by a model with 1 spatial and 3 velocity dimensions. By analyzing the results of the simulations, the collisionless heating mechanism that occurs is explained.
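The E×B drift probed in such simulations can be illustrated with a single-particle Boris push in normalized units (qm = E = B = 1, an assumption for the sketch, not the paper's PIC-MCC setup); the time-averaged velocity approaches E×B/B²:

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(v, E, B, qm, dt):
    """Standard Boris velocity update: half electric kick, magnetic rotation,
    half electric kick."""
    vm = [v[i] + qm * E[i] * dt / 2 for i in range(3)]
    t = [qm * B[i] * dt / 2 for i in range(3)]
    t2 = sum(x * x for x in t)
    c1 = cross(vm, t)
    vp = [vm[i] + c1[i] for i in range(3)]
    s = [2 * x / (1 + t2) for x in t]
    c2 = cross(vp, s)
    vplus = [vm[i] + c2[i] for i in range(3)]
    return [vplus[i] + qm * E[i] * dt / 2 for i in range(3)]

# Normalized fields: E along x, B along z -> ExB drift along -y, speed E/B = 1.
E, B = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]
dt = 2 * math.pi / 100             # 100 steps per gyroperiod
v, vs = [0.0, 0.0, 0.0], []
for _ in range(1000):              # ten gyroperiods
    v = boris_push(v, E, B, 1.0, dt)
    vs.append(v)
avg_vy = sum(u[1] for u in vs) / len(vs)   # time-averaged drift velocity
```

The gyration averages out while the guiding-center drift survives, which is the single-particle picture behind the collective drift effects studied in the paper.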
Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but better predictive capability; however, it is well known that molecular simulation is very CPU expensive compared to equation-of-state approaches. We have recently introduced an efficient, thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation-of-state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second-derivative properties (e.g., heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. The system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochores, isotherms and paths of changing temperature and density were extrapolated from the original simulated points. Finally, an optimized set of Lennard-Jones parameters (σ, ε) for single-site models was proposed for methane, nitrogen and carbon monoxide.
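The reweighting idea behind regenerating chains at neighboring conditions is, in its simplest canonical-ensemble form, Boltzmann reweighting of stored (observable, energy) samples. A minimal sketch (not the authors' full MCMC-reconstruction scheme):

```python
import math

def reweight(samples, beta_old, beta_new):
    """Estimate <A> at beta_new from (A_i, U_i) pairs sampled at beta_old,
    using Boltzmann weights w_i = exp(-(beta_new - beta_old) * U_i)."""
    dbeta = beta_new - beta_old
    u_min = min(u for _, u in samples)        # shift for numerical stability
    w = [math.exp(-dbeta * (u - u_min)) for _, u in samples]
    return sum(a * wi for (a, _), wi in zip(samples, w)) / sum(w)
```

For a two-state toy system sampled uniformly (β = 0), this reproduces the exact Boltzmann average at any β′; in practice the estimate degrades as β′ moves away from the sampled ensemble, which is why the technique targets neighboring conditions.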
Interpretation of 3D void measurements with Tripoli4.6/JEFF3.1.1 Monte Carlo code
Blaise, P.; Colomba, A.
2012-07-01
The present work details the first analysis of the 3D void phase conducted during the EPICURE/UM17x17/7% mixed UOX/MOX configuration. This configuration is composed of a homogeneous central 17x17 MOX-7% assembly, surrounded by portions of 17x17 UO2 assemblies with guide tubes. The void bubble is modelled by a small waterproof 5x5-fuel-pin parallelepiped box of 11 cm height, placed in the centre of the MOX assembly. This bubble, initially placed at the core mid-plane, is then moved to different axial positions to study the evolution of the axial perturbation in the core. Then, to simulate the growth of this bubble and understand the effects of increased void fraction along the fuel pin, 3 and 5 bubbles have been stacked axially from the core mid-plane. The C/E comparisons obtained with the Monte Carlo code Tripoli4 for both radial and axial fission rate distributions are very satisfactory, in particular the reproduction of the very important flux gradients at the void/water interfaces, which change as the bubble is displaced along the z-axis. This demonstrates both the capability of the code and its library to reproduce this kind of situation, and the very good quality of the experimental results, confirming the UM-17x17 as an excellent experimental benchmark for 3D code validation. This work has been performed within the frame of the V and V program for APOLLO3, the future deterministic code of CEA starting in 2012, and its V and V benchmarking database. (authors)
Wang, J.; Biasca, R.; Liewer, P.C.
1996-01-01
Although the existence of the critical ionization velocity (CIV) is known from laboratory experiments, no agreement has been reached as to whether CIV exists in the natural space environment. In this paper the authors move towards more realistic models of CIV and present the first fully three-dimensional, electromagnetic particle-in-cell Monte Carlo collision (PIC-MCC) simulations of typical space-based CIV experiments. In their model, the released neutral gas is taken to be a spherical cloud traveling across a magnetized ambient plasma. Simulations are performed for neutral clouds with various sizes and densities. The effects of the cloud parameters on ionization yield, wave energy growth, electron heating, momentum coupling, and the three-dimensional structure of the newly ionized plasma are discussed. The simulations suggest that the quantitative characteristics of momentum transfer among the ion beam, neutral cloud, and plasma waves are the key indicator of whether CIV can occur in space. The missing factor in space-based CIV experiments may be the conditions necessary for a continuous enhancement of the beam ion momentum. For a typical shaped-charge release experiment, favorable CIV conditions may exist only in a very narrow, intermediate spatial region some distance from the release point, due to the effects of the cloud density and size. When CIV does occur, the newly ionized plasma from the cloud forms a very complex structure due to the combined forces from the geomagnetic field, the motion-induced emf, and the polarization. Hence the detection of CIV also critically depends on the sensor location. 32 refs., 8 figs., 2 tabs.
Monte Carlo modeling of electron density in hypersonic rarefied gas flows
Fan, Jin; Zhang, Yuhuai; Jiang, Jianzheng
2014-12-09
The electron density distribution around a vehicle employed in the RAM-C II flight test is calculated with the DSMC method. To resolve the mole fraction of electrons, which is several orders of magnitude lower than those of the primary species in the free stream, an algorithm named trace species separation (TSS) is utilized. The TSS algorithm solves the primary and trace species separately, which is similar to the DSMC overlay techniques; however, it generates new simulated molecules of trace species, such as ions and electrons, in each cell based directly on the ionization and recombination rates, which differs from the DSMC overlay techniques based on probabilistic models. The electron density distributions computed by TSS agree well with the flight data measured in the RAM-C II test along a descent trajectory at three altitudes: 81 km, 76 km, and 71 km.
SU-E-T-584: Commissioning of the MC2 Monte Carlo Dose Computation Engine
Titt, U; Mirkovic, D; Liu, A; Ciangaru, G; Mohan, R; Anand, A; Perles, L
2014-06-01
Purpose: An automated system, MC2, was developed to convert DICOM proton therapy treatment plans into a sequence of MCNPX input files and submit these to a computing cluster. MC2 converts the results into DICOM format, and any treatment planning system can import the data for comparison vs. conventional dose predictions. This work describes the data and the efforts made to validate the MC2 system against measured dose profiles, and how the system was calibrated to predict the correct number of monitor units (MUs) to deliver the prescribed dose. Methods: A set of simulated lateral and longitudinal profiles was compared to data measured for commissioning purposes and during annual quality assurance efforts. Acceptance criteria were relative dose differences smaller than 3% and differences in range (in water) of less than 2 mm. For two of the three double-scattering beam lines, validation results were already published; spot checks were performed to assure proper performance. For the small snout, all available measurements were used for validation vs. simulated data. To calibrate the dose per MU, the energy deposition per source proton at the center of the spread-out Bragg peaks (SOBPs) was recorded for a set of SOBPs from each option. These were then scaled to the results of dose-per-MU determination based on published methods. The simulations of the doses in the magnetically scanned beam line were also validated vs. measured longitudinal and lateral profiles. The source parameters were fine-tuned to achieve maximum agreement with measured data. The dosimetric calibration was performed by scoring energy deposition per proton and scaling the results to a standard dose measurement of a 10 x 10 x 10 cm3 volume irradiation using 100 MU. Results: All simulated data passed the acceptance criteria. Conclusion: MC2 is fully validated and ready for clinical application.
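The dose-per-MU calibration described above amounts to converting a per-source-proton Monte Carlo dose into absolute dose via a reference measurement. A schematic sketch with hypothetical numbers (2 Gy per 100 MU and the per-proton dose are illustrative values, not the paper's data):

```python
def protons_per_mu(measured_dose_per_100mu, sim_dose_per_proton):
    """Calibration factor: source protons per monitor unit, from a reference
    measurement (dose for 100 MU) and the simulated energy deposition per
    source proton at the same point."""
    return measured_dose_per_100mu / (100.0 * sim_dose_per_proton)

def absolute_dose(sim_dose_per_proton, mu, k):
    # Scale a per-proton Monte Carlo dose to absolute dose for `mu` MU.
    return sim_dose_per_proton * k * mu

k = protons_per_mu(2.0, 1.0e-10)       # hypothetical: 2 Gy measured per 100 MU
d = absolute_dose(1.0e-10, 100.0, k)   # recovers the reference dose
```

Once k is fixed by the reference geometry, the same factor converts any simulated plan from dose-per-proton to absolute dose.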
Chrissanthopoulos, A.; Jovari, P.; Kaban, I.; Gruner, S.; Kavetskyy, T.; Borc, J.; Wang, W.; Ren, J.; Chen, G.; Yannopoulos, S.N.
2012-08-15
We report an investigation of the structure and vibrational modes of Ge-In-S-AgI bulk glasses using X-ray diffraction, EXAFS spectroscopy, reverse Monte Carlo (RMC) modelling, Raman spectroscopy, and density functional theory (DFT) calculations. The combination of these techniques made it possible to elucidate the short- and medium-range structural order of these glasses. Data interpretation revealed that the AgI-free glass structure is composed of a network where GeS{sub 4/2} tetrahedra are linked with trigonal InS{sub 3/2} units; S{sub 3/2}Ge-GeS{sub 3/2} ethane-like species linked with InS{sub 4/2}{sup -} tetrahedra form sub-structures which are dispersed in the network structure. The addition of AgI into the Ge-In-S glassy matrix causes appreciable structural changes, enriching the indium species with iodine terminal atoms. The existence of trigonal InS{sub 2/2}I species and tetrahedral InS{sub 3/2}I{sup -} and InS{sub 2/2}I{sub 2}{sup -} units is compatible with the EXAFS and RMC analysis. Their vibrational properties (harmonic frequencies and Raman activities) calculated by DFT are in very good agreement with the experimental values determined by Raman spectroscopy.
EMAM, M; Eldib, A; Lin, M; Li, J; Chibani, O; Ma, C
2014-06-01
Purpose: An in-house Monte Carlo based treatment planning system (MC TPS) has been developed for modulated electron radiation therapy (MERT). Our preliminary MERT planning experience called for a more user-friendly graphical user interface. The current work aimed to design graphical windows and tools to facilitate the contouring and planning process. Methods: Our in-house GUI MC TPS is built on a set of EGS4 user codes, namely MCPLAN and MCBEAM, in addition to an in-house optimization code named MCOPTIM. The patient virtual phantom is constructed using the tomographic images in DICOM format exported from clinical treatment planning systems (TPS). Treatment target volumes and critical structures are usually contoured on the clinical TPS and then sent as a structure set file. In our GUI program we developed a visualization tool to allow the planner to visualize the DICOM images and delineate the various structures. We implemented an option in our code for automatic contouring of the patient body and lungs. We also created an interface window displaying a three-dimensional representation of the target and a graphical representation of the treatment beams. Results: The new GUI features helped streamline the planning process. The implemented contouring option eliminated the need for performing this step on the clinical TPS. The auto-detection option for contouring the outer patient body and lungs was tested on patient CTs and shown to be accurate compared to that of the clinical TPS. The three-dimensional representation of the target and the beams allows better selection of the gantry, collimator and couch angles. Conclusion: An in-house GUI program has been developed for more efficient MERT planning. The aiding tools implemented in the program are time saving and give better control of the planning process.
Vazquez Quino, L; Calvo, O; Huerta, C; DeWeese, M
2014-06-01
Purpose: To study the perturbation due to the use of a novel reference ion chamber designed to measure small-field dosimetry (KermaX Plus C by IBA). Methods: Phase-space files for TrueBeam photon beams, made available by Varian in IAEA-compliant format for 6 and 15 MV, were used. Monte Carlo simulations were performed using BEAMnrc and DOSXYZnrc to investigate the perturbation introduced by a reference chamber into the PDDs and profiles measured in a water tank. Field sizes of 1×1, 2×2, 3×3, and 5×5 cm² were simulated for both energies, with and without a 0.5 mm aluminum foil equivalent to the attenuation specified for the reference chamber, in a water phantom of 30×30×30 cm³ with a pixel resolution of 2 mm. PDDs, profiles, and gamma analyses of the simulations were performed, as well as an energy spectrum analysis of the phase-space files generated during the simulation. Results: The energy spectrum analysis showed a very small increase in the build-up region, but no difference is appreciable after dmax. The PDD, profile and gamma analyses showed very good agreement between the simulations with and without the Al foil; a gamma analysis with a 2%/2 mm criterion resulted in 99.9% of the points passing. Conclusion: This work indicates the potential benefits of using the KermaX Plus C as a reference chamber in the measurement of PDDs and profiles for small fields, since the perturbation due to the presence of the chamber is minimal and the chamber can be considered transparent to the photon beam.
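The 2%/2 mm gamma analysis used above combines a dose-difference and a distance-to-agreement criterion. A simplified 1D global-gamma sketch (brute-force search, illustrative only, not the clinical implementation):

```python
import math

def gamma_index(ref, ev, dx, dose_tol=0.02, dist_tol=2.0):
    """1D global gamma: ref and ev are dose arrays on a grid of spacing dx (mm).
    dose_tol is the dose criterion as a fraction of the max reference dose;
    dist_tol is the distance-to-agreement criterion in mm."""
    d_max = max(ref)
    out = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(ev):      # brute-force search over positions
            dd = (de - dr) / (dose_tol * d_max)
            ds = (j - i) * dx / dist_tol
            best = min(best, math.hypot(dd, ds))
        out.append(best)
    return out

# pass rate = fraction of points with gamma <= 1
```

A point passes when its gamma value is at most 1, i.e. some nearby evaluated point agrees within the combined dose/distance tolerance.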
Farah, J; Bonfrate, A; Donadille, L; Dubourg, N; Lacoste, V; Martinetti, F; Sayah, R; Trompier, F; Clairand, I [IRSN - Institute for Radiological Protection and Nuclear Safety, Fontenay-aux-roses (France); Caresana, M [Politecnico di Milano, Milano (Italy); Delacroix, S; Nauraye, C [Institut Curie - Centre de Protontherapie d Orsay, Orsay (France); Herault, J [Centre Antoine Lacassagne, Nice (France); Piau, S; Vabre, I [Institut de Physique Nucleaire d Orsay, Orsay (France)
2014-06-01
Purpose: Measure stray radiation inside a passive scattering proton therapy facility, compare values to Monte Carlo (MC) simulations, and identify the actual needs and challenges. Methods: Measurements and MC simulations were considered to acknowledge the neutron exposure associated with 75 MeV ocular or 180 MeV intracranial passively scattered proton treatments. First, using a specifically designed high-sensitivity Bonner sphere system, neutron spectra were measured at different positions inside the treatment rooms. Next, measurement-based mapping of the neutron ambient dose equivalent was fulfilled using several TEPCs and rem-meters. Finally, photon and neutron organ doses were measured using TLDs, RPLs and PADCs set inside anthropomorphic phantoms (Rando, 1- and 5-year-old CIRS). All measurements were also simulated with MCNPX to investigate the efficiency of MC models in predicting stray neutrons considering different nuclear cross sections and models. Results: Knowledge of the neutron fluence and energy distribution inside a proton therapy room is critical for stray radiation dosimetry. However, as spectrometry unfolding is initiated using an MC guess spectrum and suffers from algorithmic limits, a 20% spectrometry uncertainty is expected. H*(10) mapping with TEPCs and rem-meters showed a good agreement between the detectors. Differences within measurement uncertainty (10-15%) were observed and are inherent to the energy, fluence and directional response of each detector. For a typical ocular and intracranial treatment, respectively, neutron doses outside the clinical target volume of 0.4 and 11 mGy were measured inside the Rando phantom. Photon doses were 2-10 times lower depending on organ position. High uncertainties (40%) are inherent to TLD and PADC measurements due to the need for neutron spectra at the detector position.
Finally, the prediction of stray neutrons with MC simulations proved to be extremely dependent on the proton beam energy and the nuclear models and cross sections used. Conclusion: This work highlights measurement and simulation limits for ion therapy radiation protection applications.
Dupuy, Nicolas; Bouaouli, Samira; Mauri, Francesco; Casula, Michele; Sorella, Sandro
2015-06-07
We study the ionization energy, electron affinity, and the π → π{sup *} ({sup 1}L{sub a}) excitation energy of the anthracene molecule by means of variational quantum Monte Carlo (QMC) methods based on a Jastrow correlated antisymmetrized geminal power (JAGP) wave function, developed on molecular orbitals (MOs). The MO-based JAGP ansatz allows one to rigorously treat electron transitions, such as the HOMO → LUMO one, which underlies the {sup 1}L{sub a} excited state. We present a QMC optimization scheme able to preserve the rank of the antisymmetrized geminal power matrix, thanks to a constrained minimization with projectors built upon symmetry-selected MOs. We show that this approach leads to stable energy minimization and geometry relaxation of both ground and excited states, performed consistently within the correlated QMC framework. Geometry optimization of excited states is needed to make a reliable and direct comparison with experimental adiabatic excitation energies. This is particularly important in π-conjugated and polycyclic aromatic hydrocarbons, where there is a strong interplay between low-lying energy excitations and structural modifications, playing a functional role in many photochemical processes. Anthracene is an ideal benchmark to test these effects. Its geometry relaxation energies upon electron excitation are up to 0.3 eV in the neutral {sup 1}L{sub a} excited state, while they are of the order of 0.1 eV in electron addition and removal processes. Significant modifications of the ground-state bond length alternation are revealed in the QMC excited-state geometry optimizations. Our QMC study yields benchmark results for both geometries and energies, with values below chemical accuracy compared to experiments once zero-point energy effects are taken into account.
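Variational QMC of the kind used here evaluates the energy by Metropolis sampling of |ψ|² and averaging the local energy E_L = (Hψ)/ψ. A toy single-electron example with a hydrogenic trial function ψ = e^(−αr) (far simpler than the JAGP ansatz of the paper; at α = 1 the local energy is constant at −0.5 hartree, the zero-variance limit):

```python
import math, random

def local_energy(r, alpha):
    # Local energy for psi = exp(-alpha * r) in hartree units:
    # E_L = -0.5 * (alpha**2 - 2*alpha/r) - 1/r
    return -0.5 * alpha * (alpha - 2.0 / r) - 1.0 / r

def vmc(alpha, n_steps=20000, step=0.5, seed=1):
    """Metropolis sampling of |psi|^2, returning the average local energy."""
    rng = random.Random(seed)
    pos = [0.5, 0.5, 0.5]
    e_sum = 0.0
    for _ in range(n_steps):
        trial = [c + rng.uniform(-step, step) for c in pos]
        r_old = math.sqrt(sum(c * c for c in pos))
        r_new = math.sqrt(sum(c * c for c in trial))
        # accept with probability min(1, |psi(new)|^2 / |psi(old)|^2)
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r_old)):
            pos = trial
        e_sum += local_energy(math.sqrt(sum(c * c for c in pos)), alpha)
    return e_sum / n_steps
```

For α = 1 the estimate is −0.5 hartree to numerical precision; for other α it lies above, illustrating the variational principle that underlies the wave function optimization in the paper.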
Liu, T; Du, X; Su, L; Gao, Y; Ji, W; Xu, X; Zhang, D; Shi, J; Liu, B; Kalra, M
2014-06-15
Purpose: To compare the CT doses derived from experiments and GPU-based Monte Carlo (MC) simulations, using a human cadaver and the ATOM phantom. Methods: The cadaver of an 88-year-old male and the ATOM phantom were scanned by a GE LightSpeed Pro 16 MDCT. For the cadaver study, thimble chambers (Model 105?0.6CT and 106?0.6CT) were used to measure the absorbed dose in different deep and superficial organs. Whole-body scans were first performed to construct a complete image database for MC simulations. Abdomen/pelvis helical scans were then conducted using 120/100 kVp, 300 mAs and a pitch factor of 1.375:1. For the ATOM phantom study, OSL dosimeters were used and helical scans were performed using 120 kVp and x, y, z tube current modulation (TCM). For the MC simulations, sufficient particles were run in both cases such that the statistical errors of the ARCHER-CT results were limited to 1%. Results: For the human cadaver scan, the doses to the stomach, liver, colon, left kidney, pancreas and urinary bladder were compared. The difference between experiments and simulations was within 19% for 120 kVp and 25% for 100 kVp. For the ATOM phantom scan, the doses to the lung, thyroid, esophagus, heart, stomach, liver, spleen, kidneys and thymus were compared. The difference was 39.2% for the esophagus, and within 16% for all other organs. Conclusion: In this study the experimental and simulated CT doses were compared. Their difference is primarily attributed to the systematic errors of the MC simulations, including the accuracy of the bowtie filter modeling and the algorithm to generate the voxelized phantom from DICOM images. The experimental error is considered small and may arise from the dosimeters. Supported by R01 grant (R01EB015478) from the National Institute of Biomedical Imaging and Bioengineering.
Besemer, A; Bednarz, B; Titz, B; Grudzinski, J; Weichert, J; Hall, L
2014-06-01
Purpose: Combination targeted radionuclide therapy (TRT) is appealing because it can potentially exploit different mechanisms of action from multiple radionuclides as well as the variable dose rates due to the different radionuclide half-lives. This work describes the development of a multi-objective optimization algorithm to calculate the optimal ratio of radionuclide injection activities for delivery of combination TRT. Methods: The diapeutic (diagnostic and therapeutic) agent, CLR1404, was used as a proof-of-principle compound in this work. Isosteric iodine substitution in CLR1404 creates a molecular imaging agent when labeled with I-124 or a targeted radiotherapeutic agent when labeled with I-125 or I-131. PET/CT images of high grade glioma patients were acquired at 4.5, 24, and 48 hours post injection of 124I-CLR1404. The therapeutic 131I-CLR1404 and 125I-CLR1404 absorbed dose (AD) and biological effective dose (BED) were calculated for each patient using a patient-specific Monte Carlo dosimetry platform. The optimal ratio of injection activities for each radionuclide was calculated with a multi-objective optimization algorithm using the weighted sum method. Objective functions such as the tumor dose heterogeneity and the ratio of the normal tissue to tumor doses were minimized, and the relative importance weights of each optimization function were varied. Results: For each optimization function, the program outputs a Pareto surface map representing all possible combinations of radionuclide injection activities so that values that minimize the objective function can be visualized. A Pareto surface map of the weighted sum given a set of user-specified importance weights is also displayed. Additionally, the ratio of optimal injection activities as a function of all possible importance weights is generated so that the user can select the optimal ratio based on the desired weights.
Conclusion: Multi-objective optimization of radionuclide injection activities can provide an invaluable tool for maximizing the dosimetric benefits in multi-radionuclide combination TRT. BT, JG, and JW are affiliated with Cellectar Biosciences which owns the licensing rights to CLR1404 and related compounds.
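The weighted-sum scalarization described in this record can be sketched in a few lines. Below is a minimal illustration, assuming two made-up objective functions (not the actual CLR1404 dose models) of a single variable r, standing for the fraction of activity assigned to one radionuclide:

```python
import numpy as np

def weighted_sum_scan(objectives, weights, grid):
    """Scan a grid of candidate injection-activity ratios and return the
    ratio minimizing the weighted sum of the objective values."""
    best_ratio, best_score = None, float("inf")
    for r in grid:
        # weighted sum of objective functions evaluated at ratio r
        score = sum(w * f(r) for w, f in zip(weights, objectives))
        if score < best_score:
            best_ratio, best_score = r, score
    return best_ratio, best_score

# Hypothetical stand-in objectives: tumor-dose heterogeneity and
# normal-tissue-to-tumor dose ratio as functions of the activity fraction r
heterogeneity = lambda r: (r - 0.7) ** 2   # minimized near r = 0.7
nt_ratio      = lambda r: 0.2 + 0.1 * r    # grows with r

grid = np.linspace(0.0, 1.0, 101)
ratio, score = weighted_sum_scan([heterogeneity, nt_ratio], [0.5, 0.5], grid)
print(round(ratio, 2))  # optimum sits below 0.7 because nt_ratio penalizes large r
```

Sweeping the weight pair (here fixed at 0.5/0.5) over all admissible values and recording the winning ratio each time yields the Pareto-style map of optimal injection activities described in the abstract.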
Teymurazyan, A.; Rowlands, J. A.; Thunder Bay Regional Research Institute, Thunder Bay P7A 7T1; Department of Radiation Oncology, University of Toronto, Toronto M5S 3E2; Pang, G.
2014-04-15
Purpose: Electronic Portal Imaging Devices (EPIDs) have been widely used in radiation therapy and are still needed on linear accelerators (Linacs) equipped with kilovoltage cone beam CT (kV-CBCT) or MRI systems. Our aim is to develop a new high quantum efficiency (QE) Čerenkov Portal Imaging Device (CPID) that is quantum noise limited at dose levels corresponding to a single Linac pulse. Methods: Recently a new concept of CPID for MV x-ray imaging in radiation therapy was introduced. It relies on the Čerenkov effect for x-ray detection. The proposed design consisted of a matrix of optical fibers aligned with the incident x-rays and coupled to an active matrix flat panel imager (AMFPI) for image readout. A weakness of such a design is that too few Čerenkov light photons reach the AMFPI for each incident x-ray, and an AMFPI with avalanche gain is required in order to overcome the readout noise for portal imaging applications. In this work the authors propose to replace the optical fibers in the CPID with light guides without a cladding layer that are suspended in air. The air between the light guides takes on the role of the cladding layer found in a regular optical fiber. Since air has a significantly lower refractive index (≈1 versus 1.38 in a typical cladding layer), a much superior light collection efficiency is achieved. Results: A Monte Carlo simulation of the new design has been conducted to investigate its feasibility. Detector quantities such as quantum efficiency (QE), spatial resolution (MTF), and frequency-dependent detective quantum efficiency (DQE) have been evaluated. The detector signal and the quantum noise have been compared to the readout noise. Conclusions: Our studies show that the modified new CPID has a QE and DQE more than an order of magnitude greater than those of current clinical systems and yet a spatial resolution similar to that of current low-QE flat-panel based EPIDs. 
Furthermore, it was demonstrated that the new CPID does not require avalanche gain in the AMFPI and is quantum noise limited at dose levels corresponding to a single Linac pulse.
Forbang, R Teboh
2014-06-01
Purpose: MultiPlan, the treatment planning system for the CyberKnife robotic radiosurgery system, offers two approaches to dose computation: Ray-Tracing (RT), the default technique, and Monte Carlo (MC), an option. RT is deterministic but accounts for primary heterogeneity only. MC, on the other hand, has an uncertainty associated with its results; the advantage is that it additionally accounts for heterogeneity effects on the scattered dose. Not all sites will benefit from MC. The goal of this work was to focus on central nervous system (CNS) tumors and compare dosimetrically treatment plans computed with RT versus MC. Methods: Treatment plans were computed using both RT and MC for sites covering (a) the brain, (b) C-spine, (c) upper T-spine, (d) lower T-spine, (e) L-spine, and (f) sacrum. RT was first used to compute clinically valid treatment plans. Then the same treatment parameters (monitor units, beam weights, etc.) were used in the MC algorithm to compute the dose distribution. The plans were then compared for tumor coverage to illustrate the difference, if any. All MC calculations were performed at a 1% uncertainty. Results: Using the RT technique, the tumor coverage for the brain, C-spine (C3–C7), upper T-spine (T4–T6), lower T-spine (T10), L-spine (L2), and sacrum was 96.8%, 93.1%, 97.2%, 87.3%, 91.1%, and 95.3%, respectively. The corresponding tumor coverage based on the MC approach was 98.2%, 95.3%, 87.55%, 88.2%, 92.5%, and 95.3%. It should be noted that the acceptable planning target coverage for our clinical practice is >95%. The coverage can be compromised for spine tumors to spare normal tissues such as the spinal cord. Conclusion: For treatment planning involving the CNS, RT and MC appear to be similar for most sites except the T-spine area, where most of the beams traverse lung tissue. In this case, MC is highly recommended.
Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.
2000-03-01
The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.
Thfoin, I.; Reverdin, C.; Duval, A.; Leboeuf, X.; Lecherbourg, L.; Ross, B.; Hulin, S.; Batani, D.; Santos, J. J.; Vaisseau, X.; Fourment, C.; Giuffrida, L.; Szabo, C. I.; Bastiani-Ceccotti, S.; Brambrink, E.; Koenig, M.; Nakatsutsumi, M.; Morace, A.
2014-11-15
Transmission crystal spectrometers (TCS) are used on many laser facilities to record hard X-ray spectra. During experiments, the signal recorded on imaging plates is often degraded by background noise. Monte Carlo simulations made with the code GEANT4 show that this background noise is mainly generated by diffusion of MeV electrons and very hard X-rays. An experiment carried out at LULI2000 confirmed that the use of magnets in front of the diagnostic, which bend the electron trajectories, significantly reduces this background. The new spectrometer SPECTIX (Spectromètre PETAL Cristal en TransmIssion X), built for the LMJ/PETAL facility, will include this optimized shielding.
Ryabtsev, I. I.; Tretyakov, D. B.; Beterov, I. I.; Entin, V. M.; Yakshina, E. A.
2010-11-15
Results of numerical Monte Carlo simulations for the Stark-tuned Förster resonance and dipole blockade between two to five cold rubidium Rydberg atoms in various spatial configurations are presented. The effects of the atoms' spatial uncertainties on the resonance amplitude and spectra are investigated. The feasibility of observing coherent Rabi-like population oscillations at a Förster resonance between two cold Rydberg atoms is analyzed. Spectra and the fidelity of the Rydberg dipole blockade are calculated for various experimental conditions, including nonzero detuning from the Förster resonance and finite laser linewidth. The results are discussed in the context of quantum-information processing with Rydberg atoms.
Cai, Zhongli; Chattopadhyay, Niladri; Kwon, Yongkyu Luke; Pignol, Jean-Philippe; Lechtman, Eli; Reilly, Raymond M.; Department of Medical Imaging, University of Toronto, Toronto, Ontario M5S 3E2; Toronto General Research Institute, University Health Network, Toronto, Ontario M5G 2C4
2013-11-15
Purpose: The authors' aims were to model how various factors influence radiation dose enhancement by gold nanoparticles (AuNPs) and to propose a new modeling approach to the dose enhancement factor (DEF). Methods: The authors used the Monte Carlo N-Particle (MCNP 5) computer code to simulate photon and electron transport in cells. The authors modeled human breast cancer cells as a single cell, a monolayer, or a cluster of cells. Different numbers of 5, 30, or 50 nm AuNPs were placed in the extracellular space, on the cell surface, in the cytoplasm, or in the nucleus. Photon sources examined in the simulation included nine monoenergetic x-rays (10–100 keV), an x-ray beam (100 kVp), and I-125 and Pd-103 brachytherapy seeds. Both nuclear and cellular dose enhancement factors (NDEFs, CDEFs) were calculated. The ability of these metrics to predict the experimental DEF based on the clonogenic survival of MDA-MB-361 human breast cancer cells exposed to AuNPs and x-rays was compared. Results: NDEFs show a strong dependence on photon energy, with peaks at 15, 30/40, and 90 keV. The cell model and subcellular location of AuNPs influence the peak position and value of NDEF. NDEFs decrease in the order of AuNPs in the nucleus, cytoplasm, cell membrane, and extracellular space. NDEFs also decrease in the order of AuNPs in a cell cluster, monolayer, and single cell if the photon energy is larger than 20 keV. NDEFs depend linearly on the number of AuNPs per cell. Similar trends were observed for CDEFs. NDEFs using the monolayer cell model were more predictive than either the single cell or cluster cell models of the DEFs experimentally derived from the clonogenic survival of cells cultured as a monolayer. The amount of AuNPs required to double the prescribed dose in terms of mg Au/g tissue decreases as the size of AuNPs increases, especially when AuNPs are in the nucleus and the cytoplasm. 
For 40 keV x-rays and a cluster of cells, doubling the prescribed x-ray dose (NDEF = 2) using 30 nm AuNPs would require 5.1 ± 0.2, 9 ± 1, 10 ± 1, and 10 ± 1 mg Au/g tissue in the nucleus, in the cytoplasm, on the cell surface, or in the extracellular space, respectively. Using 50 nm AuNPs, the required amount decreases to 3.1 ± 0.3, 8 ± 1, 9 ± 1, and 9 ± 1 mg Au/g tissue, respectively. Conclusions: NDEF is a new metric that can predict the radiation enhancement of AuNPs for various experimental conditions. The cell model, the subcellular location and size of AuNPs, the number of AuNPs per cell, and the x-ray photon energy all have effects on NDEFs. Larger AuNPs in the nucleus of cluster cells exposed to x-rays of 15 or 40 keV maximize NDEFs.
A Geant4 Implementation of a Novel Single-Event Monte Carlo Method...
Office of Scientific and Technical Information (OSTI)
Conference paper presented at the 2013 American Nuclear Society Winter Meeting, November 10-14, 2013, Washington, DC. Contract Number: AC04-94AL85000.
Cox, Stephen J.; Michaelides, Angelos; Department of Chemistry, University College London, 20 Gordon Street, London WC1H 0AJ; Towler, Michael D.; Theory of Condensed Matter Group, Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge CB3 0HE; Alfè, Dario; Department of Earth Sciences, University College London, Gower Street, London WC1E 6BT
2014-05-07
High-quality reference data from diffusion Monte Carlo calculations are presented for bulk sI methane hydrate, a complex crystal exhibiting both hydrogen-bond and dispersion dominated interactions. The performance of some commonly used exchange-correlation functionals and all-atom point charge force fields is evaluated. Our results show that none of the exchange-correlation functionals tested are sufficient to describe both the energetics and the structure of methane hydrate accurately, while the point charge force fields perform badly in their description of the cohesive energy but fare well for the dissociation energetics. By comparing to ice Ih, we show that a good prediction of the volume and cohesive energies for the hydrate relies primarily on an accurate description of the hydrogen bonded water framework, but that to correctly predict the stability of the hydrate with respect to dissociation to ice Ih and methane gas, accuracy in the water-methane interaction is also required. Our results highlight the difficulty that density functional theory faces in describing both the hydrogen bonded water framework and the dispersion bound methane.
Kyriakou, Ioanna; Emfietzoglou, Dimitris; Nojeh, Alireza; Moscovitch, Marko
2013-02-28
A systematic study of electron-beam penetration and backscattering in multi-walled carbon nanotube (MWCNT) materials for beam energies of ~0.3 to 30 keV is presented, based on event-by-event Monte Carlo simulation of electron trajectories using state-of-the-art scattering cross sections. The importance of different analytic approximations for computing the elastic and inelastic electron-scattering cross sections for MWCNTs is emphasized. We offer a simple parameterization for the total and differential elastic-scattering Mott cross section, using appropriate modifications to the Browning formula and the Thomas-Fermi screening parameter. A discrete-energy-loss approach to inelastic scattering based on dielectric theory is adopted using different descriptions of the differential cross section. The sensitivity of electron penetration and backscattering parameters to the underlying scattering models is examined. Our simulations confirm the recent experimental backscattering data on MWCNT forests and, in particular, the steep increase of the backscattering yield at sub-keV energies as well as the sidewall escape effect at high beam energies.
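As a rough illustration of the event-by-event transport idea (deliberately not the MWCNT cross-section models used in this study), a toy simulation with an exponential free-flight length, isotropic scattering, and a fixed single-scattering probability already yields a well-defined backscattering yield:

```python
import math
import random

def backscatter_yield(n_electrons, mfp, w_scatter, seed=1):
    """Toy event-by-event transport in a semi-infinite slab (z >= 0):
    each electron takes exponentially distributed free flights, scatters
    isotropically with probability w_scatter (otherwise it is absorbed),
    and counts as backscattered when it re-crosses the surface z = 0.
    mfp and w_scatter are illustrative placeholders, not real MWCNT data."""
    rng = random.Random(seed)
    backscattered = 0
    for _ in range(n_electrons):
        z, mu = 0.0, 1.0  # start at the surface, moving inward
        while True:
            # sample a free-flight path length from an exponential law
            z += mu * (-mfp * math.log(1.0 - rng.random()))
            if z < 0:                       # escaped through the surface
                backscattered += 1
                break
            if rng.random() > w_scatter:    # absorbed in the bulk
                break
            mu = 2.0 * rng.random() - 1.0   # isotropic scattering angle
    return backscattered / n_electrons

eta = backscatter_yield(20000, mfp=1.0, w_scatter=0.9)
print(0.0 < eta < 1.0)  # prints True
```

Raising w_scatter (the analogue of a larger elastic-to-total cross-section ratio) increases the yield, qualitatively mirroring the sensitivity to the scattering model that the abstract emphasizes.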
McGrath, Matthew; Kuo, I-F W.; Ngouana, Brice F.; Ghogomu, Julius N.; Mundy, Christopher J.; Marenich, Aleksandr; Cramer, Christopher J.; Truhlar, Donald G.; Siepmann, Joern I.
2013-08-28
The free energy of solvation and dissociation of hydrogen chloride in water is calculated through a combined molecular simulation/quantum chemical approach at four temperatures between T = 300 and 450 K. The free energy is first decomposed into the sum of two components: the Gibbs free energy of transfer of molecular HCl from the vapor to the aqueous liquid phase and the standard-state free energy of acid dissociation of HCl in aqueous solution. The former quantity is calculated using Gibbs ensemble Monte Carlo simulations with either Kohn-Sham density functional theory or a molecular mechanics force field to determine the system's potential energy. The latter free energy contribution is computed using a continuum solvation model utilizing either experimental reference data or micro-solvated clusters. The predicted combined solvation and dissociation free energies agree very well with available experimental data. CJM was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.
Choi, Myunghee; Chan, Vincent S.
2014-02-28
This final report describes the work performed under U.S. Department of Energy Cooperative Agreement DE-FC02-08ER54954 for the period April 1, 2011 through March 31, 2013. The goal of this project was to perform iterated finite-orbit Monte Carlo simulations with full-wave fields for modeling tokamak ICRF wave heating experiments. In year 1, the finite-orbit Monte Carlo code ORBIT-RF and its iteration algorithms with the full-wave code AORSA were improved to enable systematic study of the factors responsible for the discrepancy between the simulated and the measured fast-ion FIDA signals in the DIII-D and NSTX ICRF fast-wave (FW) experiments. In year 2, ORBIT-RF was coupled to the TORIC full-wave code for a comparative study of ORBIT-RF/TORIC and ORBIT-RF/AORSA results in FW experiments.
Park, Su-Jung; /Bonn U.
2004-02-01
The measurement of the tt̄ production cross section at √s = 1.96 TeV using the final state with an electron and jets is studied with Monte Carlo event samples. All methods used in the real data analysis to measure efficiencies and to estimate the background contributions are examined. The studies focus on measuring the electron reconstruction efficiencies as well as on improving the electron identification and background suppression. With a generated input cross section of 7 pb, the following result is obtained: σ(tt̄) = (7 ± 1.63 (stat) +0.94/−1.14 (syst)) pb.
Mei, Donghai; Neurock, Matthew; Smith, C Michael
2009-10-22
The kinetics of the selective hydrogenation of acetylene-ethylene mixtures over model Pd(111) and bimetallic Pd-Ag alloy surfaces were examined using first-principles-based kinetic Monte Carlo (KMC) simulations to elucidate the effects of alloying as well as process conditions (temperature and hydrogen partial pressure). The mechanisms that control the selective and unselective routes, which included hydrogenation, dehydrogenation, and C–C bond breaking pathways, were analyzed using first-principles density functional theory (DFT) calculations. The results were used to construct an intrinsic kinetic database that was used in a variable time step kinetic Monte Carlo simulation to follow the kinetics and the molecular transformations in the selective hydrogenation of acetylene-ethylene feeds over Pd and Pd-Ag surfaces. The lateral interactions between coadsorbates that occur through-surface and through-space were estimated using DFT-parameterized bond order conservation and van der Waals interaction models, respectively. The simulation results show that the rate of acetylene hydrogenation as well as the ethylene selectivity increase with temperature over both the Pd(111) and the Pd-Ag/Pd(111) alloy surfaces. The selective hydrogenation of acetylene to ethylene proceeds via the formation of a vinyl intermediate. The unselective formation of ethane is the result of the over-hydrogenation of ethylene as well as over-hydrogenation of vinyl to form ethylidene. Ethylidene further hydrogenates to form ethane and dehydrogenates to form ethylidyne. While ethylidyne is not reactive, it can block adsorption sites, which limits the availability of hydrogen on the surface and thus acts to enhance the selectivity. 
Alloying Ag into the Pd surface decreases the overall rate but increases the ethylene selectivity significantly by promoting the selective hydrogenation of vinyl to ethylene and concomitantly suppressing the unselective path involving the hydrogenation of vinyl to ethylidene and the dehydrogenation of ethylidene to ethylidyne. This is consistent with experimental results which suggest that only the predominant hydrogenation path, involving the sequential addition of hydrogen to form vinyl and ethylene, exists over the Pd-Ag alloys. Ag enhances the desorption of ethylene and hydrogen from the surface, thus limiting their ability to undergo subsequent reactions. The simulated apparent activation barriers were calculated to be 32-44 kJ/mol on Pd(111) and 26-31 kJ/mol on Pd-Ag/Pd(111), respectively. The reaction was found to be essentially first order in hydrogen over the Pd(111) and Pd-Ag/Pd(111) surfaces. The results reveal that increases in the hydrogen partial pressure increase the activity but decrease the ethylene selectivity over both Pd(111) and Pd-Ag/Pd(111) surfaces. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.
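The variable-time-step (rejection-free) kinetic Monte Carlo scheme this record relies on can be shown in a minimal sketch; the two rate constants and event names below are invented for illustration and are not the DFT-derived barriers of the study:

```python
import math
import random

def kmc_run(rates, t_end, seed=7):
    """Rejection-free (variable time step) kinetic Monte Carlo: at each
    step an event is chosen with probability proportional to its rate,
    and the clock advances by dt = -ln(u)/k_total, so fast events dominate
    the early kinetics while slow ones still fire on long time scales."""
    rng = random.Random(seed)
    names, ks = zip(*rates.items())
    k_tot = sum(ks)
    t, counts = 0.0, {n: 0 for n in names}
    while t < t_end:
        r = rng.random() * k_tot  # pick an event from the cumulative rates
        acc = 0.0
        for name, k in zip(names, ks):
            acc += k
            if r < acc:
                counts[name] += 1
                break
        t += -math.log(1.0 - rng.random()) / k_tot  # variable time step
    return counts

# Hypothetical rate constants (s^-1) loosely echoing the selective
# (vinyl -> ethylene) vs unselective (vinyl -> ethylidene) branching above
counts = kmc_run({"vinyl_to_ethylene": 50.0,
                  "vinyl_to_ethylidene": 5.0}, t_end=2.0)
print(counts["vinyl_to_ethylene"] > counts["vinyl_to_ethylidene"])  # prints True
```

A full lattice KMC additionally updates the rate list after every event (since neighbor configurations change), but the event-selection and time-advance loop is the same.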
Sharma, Diksha; Badano, Aldo
2013-03-15
Purpose: hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry, based on a hybrid concept that maximizes the utilization of the available CPU and graphics processing unit processors in a workstation. Methods: The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. Results: The comparison suggests that hybridMANTIS better matches the experimental data as compared to MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS, with speed-ups of up to 52–60. Conclusions: hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
Barrera, C A; Moran, M J
2007-08-21
The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are: EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. 
The simulations show that all LOS configurations have a resolution of 7 microns or better. The 28 m LOS with a 7 x 7 array of 100-micron mini-penumbral apertures or 50-micron square pinholes meets the design requirements and is a very good design alternative.
Tesfamicael, B; Gueye, P; Lyons, D; Mahesh, M; Avery, S
2014-06-01
Purpose: To construct a dose monitoring system based on an endorectal balloon coupled to thin scintillating fibers to study the dose delivered to the rectum during prostate cancer proton therapy. Methods: The Geant4 Monte Carlo toolkit version 9.6p02 was used to simulate prostate cancer proton therapy treatments with an endorectal balloon (for immobilization of a 2.9 cm diameter prostate gland) and a set of 34 scintillating fibers symmetrically placed around the balloon and perpendicular to the proton beam direction (for dosimetry measurements). Results: A linear response of the fibers to the dose delivered was observed within <2%, a property that makes them good candidates for real-time dosimetry. Results obtained show that the closest fiber recorded about 1/3 of the dose to the target, with a 1/r² decrease in the dose distribution as one goes toward the frontal and distal top fibers. Very low dose was recorded by the bottom fibers (about 4–5 times lower), which is a clear indication that the overall volume of the rectal wall exposed to a higher dose is relatively minimized. Further analysis indicated a simple scaling relationship between the dose to the prostate and the dose to the top fibers (a linear fit gave a slope of about 0.07 ± 0.07 MeV per treatment Gy). Conclusion: Thin (1 mm × 1 mm × 100 cm) scintillating fibers were found to be ideal for real-time in-vivo dose measurement to the rectum during prostate cancer proton therapy. The linear response of the fibers to the dose delivered makes them good candidates as dosimeters. With thorough calibration and the ability to define a good correlation between the dose to the target and the dose to the fibers, such dosimeters can be used for real-time dose verification to the target.
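The linear-response claim above is the kind of thing one checks with an ordinary least-squares fit of fiber signal against prescribed dose. The calibration table below is hypothetical (a real one would come from the Geant4 scoring described in the record), but it shows the slope and maximum-deviation bookkeeping:

```python
import numpy as np

# Hypothetical calibration data: energy deposited in the closest fiber (MeV)
# versus prescribed target dose (Gy). These numbers are invented to mimic a
# roughly 0.07 MeV/Gy response; they are not the study's simulated values.
dose_gy   = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
fiber_mev = np.array([0.071, 0.139, 0.282, 0.421, 0.559])

# Least-squares straight line: fiber signal ~ slope * dose + intercept
slope, intercept = np.polyfit(dose_gy, fiber_mev, 1)
pred = slope * dose_gy + intercept
max_dev = np.max(np.abs(pred - fiber_mev) / fiber_mev)  # worst relative deviation
print(f"slope = {slope:.3f} MeV/Gy, max deviation = {100 * max_dev:.1f}%")
```

With a calibrated slope in hand, inverting the line (dose = (signal − intercept)/slope) is what turns the fiber readout into the real-time dose-verification tool the conclusion describes.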
Chen Huixiao; Lohr, Frank; Fritz, Peter; Wenz, Frederik; Dobler, Barbara; Lorenz, Friedlieb; Muehlnickel, Werner
2010-11-01
Purpose: Dose calculation based on pencil beam (PB) algorithms has its shortcomings in predicting dose in tissue heterogeneities. The aim of this study was to compare dose distributions of clinically applied non-intensity-modulated 15-MV radiotherapy plans for stereotactic body radiotherapy between voxel Monte Carlo (XVMC) calculation and PB calculation for lung lesions. Methods and Materials: To validate XVMC, one treatment plan was verified in an inhomogeneous thorax phantom with EDR2 film (Eastman Kodak, Rochester, NY). Both measured and calculated (PB and XVMC) dose distributions were compared regarding profiles and isodoses. Then, 35 lung plans originally created for clinical treatment by PB calculation with the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) were recalculated by XVMC (investigational implementation in PrecisePLAN [Elekta AB, Stockholm, Sweden]). Clinically relevant dose-volume parameters for target and lung tissue were compared and analyzed statistically. Results: The XVMC calculation agreed well with film measurements (<1% difference in lateral profile), whereas the deviation between PB calculation and film measurements was up to +15%. On analysis of 35 clinical cases, the mean dose, minimal dose, and coverage dose value for 95% volume of gross tumor volume were 1.14 ± 1.72 Gy, 1.68 ± 1.47 Gy, and 1.24 ± 1.04 Gy lower by XVMC compared with PB, respectively (prescription dose, 30 Gy). The volume covered by the 9 Gy isodose of lung was 2.73% ± 3.12% higher when calculated by XVMC compared with PB. The largest differences were observed for small lesions circumferentially encompassed by lung tissue. Conclusions: Pencil beam dose calculation overestimates dose to the tumor and underestimates lung volumes exposed to a given dose consistently for 15-MV photons. The degree of difference between XVMC and PB is tumor size and location dependent. Therefore XVMC calculation is helpful to further optimize treatment planning.
Cao, M; Tenn, S; Lee, C; Yang, Y; Lamb, J; Agazaryan, N; Lee, P; Low, D
2014-06-01
Purpose: To evaluate the performance of three commercially available treatment planning systems for stereotactic body radiation therapy (SBRT) of lung cancer using the following algorithms: a Boltzmann transport equation based algorithm (Acuros XB, AXB), a convolution-based algorithm (Anisotropic Analytic Algorithm, AAA), and a Monte Carlo based algorithm (XVMC). Methods: A total of 10 patients with early-stage non-small cell peripheral lung cancer were included. The initial clinical plans were generated using the XVMC-based treatment planning system with a prescription of 54 Gy in 3 fractions following the RTOG 0613 protocol. The plans were recalculated with the same beam parameters and monitor units using the AAA and AXB algorithms. A calculation grid size of 2 mm was used for all algorithms. The dose distribution, conformity, and dosimetric parameters for the targets and organs at risk (OAR) are compared between the algorithms. Results: The average PTV volume was 19.6 mL (range 4.2–47.2 mL). The volume of PTV covered by the prescribed dose (PTV-V100) was 93.97 ± 2.00%, 95.07 ± 2.07%, and 95.10 ± 2.97% for the XVMC, AXB, and AAA algorithms, respectively. There was no significant difference in the high dose conformity index; however, XVMC predicted slightly higher values (p=0.04) for the ratio of the 50% prescription isodose volume to the PTV (R50%). The percentage volume of total lung receiving dose >20 Gy (Lung V20Gy) was 4.03 ± 2.26%, 3.86 ± 2.22%, and 3.85 ± 2.21% for the XVMC, AXB, and AAA algorithms. Examination of dose volume histograms (DVH) revealed small differences in targets and OARs for most patients. However, the AAA algorithm was found to predict considerably higher PTV coverage compared with the AXB and XVMC algorithms in two cases. The dose difference was found to be primarily located in the periphery region of the target. Conclusion: For clinical SBRT lung treatment planning, the dosimetric differences between the three commercially available algorithms are generally small except at the target periphery. 
XVMC and AXB algorithms are recommended for accurate dose estimation at tissue boundaries.
Fubiani, G.; Boeuf, J. P.; Université de Toulouse, UPS, INPT, LAPLACE (Laboratoire Plasma et Conversion d'Energie), 118 route de Narbonne, F-31062 Toulouse cedex 9 (France); CNRS, LAPLACE, F-31062 Toulouse (France)
2013-11-15
Results from a 3D self-consistent Particle-In-Cell Monte Carlo Collisions (PIC MCC) model of a high-power fusion-type negative ion source are presented for the first time. The model is used to calculate the plasma characteristics of the ITER prototype BATMAN ion source developed in Garching. Special emphasis is put on the production of negative ions on the plasma grid surface. The question of the relative roles of the impact of neutral hydrogen atoms and positive ions on the cesiated grid surface has attracted much attention recently, and the 3D PIC MCC model is used to address this question. The results show that the production of negative ions by positive ion impact on the plasma grid is small with respect to the production by atomic hydrogen or deuterium bombardment (less than 10%).
Evaluation of the Monte Carlo Independent Column Approximation (McICA) implementation in the GEOS-5 Single Column Model
Oreopoulos, Lazaros (JCET/UMBC and NASA/GSFC); Bacmeister, Julio (GEST/UMBC and NASA/GSFC); Cahalan, Robert (NASA/Goddard Space Flight Ctr/913); Barker, Howard (Meteorological Service of Canada)
The McICA method (Barker et al., 2002; Pincus et al., 2003) has recently been implemented in the GEOS-5 Single Column Model.
Georgescu, Ionuț; Mandelshtam, Vladimir A.; Jitomirskaya, Svetlana
2013-11-28
Given a quantum many-body system, the Self-Consistent Phonons (SCP) method provides an optimal harmonic approximation by minimizing the free energy. In particular, the SCP estimate for the vibrational ground state (zero temperature) appears to be surprisingly accurate. We explore the possibility of going beyond the SCP approximation by considering the system Hamiltonian evaluated in the harmonic eigenbasis of the SCP Hamiltonian. It appears that the SCP ground state is already uncoupled from all singly- and doubly-excited basis functions. So, in order to improve the SCP result, at least triply-excited states must be included, which then reduces the error in the ground state estimate substantially. For a multidimensional system two numerical challenges arise: evaluation of the potential energy matrix elements in the harmonic basis, and handling and diagonalizing the resulting Hamiltonian matrix, whose size grows rapidly with the dimensionality of the system. Using the example of the water hexamer we demonstrate that such a calculation is feasible, i.e., constructing and diagonalizing the Hamiltonian matrix in a triply-excited SCP basis, without any additional assumptions or approximations. Our results indicate, in particular, that the ground state energy differences between different isomers (e.g., cage and prism) of the water hexamer are already quite accurate within the SCP approximation.
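The "evaluate the Hamiltonian in a harmonic eigenbasis and diagonalize" step can be illustrated on a one-dimensional toy problem. The quartic-perturbed oscillator below is a deliberately simple stand-in, with made-up parameters, for the SCP machinery, not the water-hexamer calculation:

```python
import numpy as np

# Toy analogue of diagonalizing H in the eigenbasis of its harmonic part:
# H = p^2/2 + x^2/2 + lam*x^4, expanded in harmonic-oscillator states |n>.
N, lam = 40, 0.1                     # basis size and quartic strength (illustrative)
n = np.arange(N)

# Position operator: <n|x|n+1> = sqrt((n+1)/2) in oscillator units
X = np.zeros((N, N))
X[n[:-1], n[:-1] + 1] = np.sqrt(n[1:] / 2.0)
X = X + X.T                          # x is Hermitian

# Harmonic part is diagonal (n + 1/2); the x^4 coupling fills in off-diagonals
H = np.diag(n + 0.5) + lam * np.linalg.matrix_power(X, 4)

E0 = np.linalg.eigvalsh(H)[0]        # variational ground-state energy
print(round(E0, 4))                  # above the harmonic value 0.5, as expected
```

In the multidimensional SCP case the same two bottlenecks appear that the abstract names: the matrix elements of the anharmonic potential (here trivially X⁴) and the size of the basis, which here is just N but grows combinatorially with dimension.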
An advanced deterministic method for spent fuel criticality safety analysis
DeHart, M.D.
1998-01-01
Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.
Distributed Monte Carlo production for D0
Snow, Joel; /Langston U.
2010-01-01
The D0 collaboration uses a variety of resources on four continents to pursue a strategy of flexibility and automation in the generation of simulation data. This strategy provides a resilient and opportunistic system which ensures an adequate and timely supply of simulation data to support D0's physics analyses. A mixture of facilities, dedicated and opportunistic, specialized and generic, large and small, grid-job enabled and not, is used to provide a production system that has adapted to newly developing technologies. This strategy has increased the event production rate by a factor of seven and the data production rate by a factor of ten in the last three years despite diminishing manpower. Common to all production facilities is the SAM (Sequential Access to Metadata) data grid. Job submission to the grid uses SAMGrid middleware, which may forward jobs to the OSG, the WLCG, or native SAMGrid sites. The distributed computing and data handling system used by D0 will be described and the results of MC production since the deployment of grid technologies will be presented.
Linac Coherent Light Source Monte Carlo Simulation
Energy Science and Technology Software Center (OSTI)
2006-03-15
This suite consists of codes to generate an initial x-ray photon distribution and to propagate the photons through various objects. The suite is designed specifically for simulating the Linac Coherent Light Source, an x-ray free-electron laser (XFEL) being built at the Stanford Linear Accelerator Center. The purpose is to provide sufficiently detailed characteristics of the laser to engineers who are designing the laser diagnostics.
Quantum Process Matrix Computation by Monte Carlo
Energy Science and Technology Software Center (OSTI)
2012-09-11
The software package processMC is a Python script that allows for the rapid modeling of small, noisy quantum systems and the computation of the averaged quantum evolution map.
Statistical assessment of Monte Carlo distributional tallies
Kiedrowski, Brian C; Solomon, Clell J
2010-12-09
Four tests are developed to assess the statistical reliability of distributional or mesh tallies. To this end, the relative variance density function is developed and its moments are studied using simplified, non-transport models. The statistical tests are performed upon the results of MCNP calculations of three different transport test problems and appear to show that the tests are appropriate indicators of global statistical quality.
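The per-bin bookkeeping underlying such checks can be sketched generically. This is only an illustrative sketch, not the four tests developed in the paper: the function name and the 10% acceptance threshold are assumptions.

```python
import math

def mesh_tally_relative_errors(batch_scores, threshold=0.10):
    """Per-bin relative errors of a mesh tally from independent batches.

    batch_scores: list of rows, one per statistically independent batch,
    each row holding the score in every mesh bin.
    Returns (relative_errors, pass_flags) per bin.
    """
    n = len(batch_scores)
    n_bins = len(batch_scores[0])
    rel_err, ok = [], []
    for b in range(n_bins):
        column = [row[b] for row in batch_scores]
        mean = sum(column) / n
        # Sample variance of the batch scores, then variance of the mean.
        var = sum((x - mean) ** 2 for x in column) / (n - 1)
        var_of_mean = var / n
        # A bin with zero mean carries no usable statistics.
        re = math.sqrt(var_of_mean) / abs(mean) if mean != 0.0 else math.inf
        rel_err.append(re)
        ok.append(re < threshold)
    return rel_err, ok
```

A global check in this spirit would then look at the distribution of `rel_err` over all bins rather than at any single bin.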
An advanced deterministic method for spent-fuel criticality safety analysis
DeHart, M.D.
1998-09-01
Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, nonorthogonal configurations of fissile materials, typical of real-world problems. In the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for nonorthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitation of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built on the ESC formalism, is being developed as part of the SCALE code system. This paper demonstrates the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.
Sandia Energy - Carlos Micheln
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Carlos Michelen, Engineering Sciences R&D Department: Water Power Technologies. Carlos Michelen joined the Water Power Technologies...
Simulation of atomic diffusion in the Fcc NiAl system: A kinetic Monte Carlo study
Office of Scientific and Technical Information (OSTI)
The atomic diffusion in fcc NiAl binary alloys was studied by kinetic Monte Carlo simulation. The environment-dependent hopping barriers were computed using a pair interaction model whose parameters were fitted to relevant data derived
Simulation of radiation damping in rings, using stepwise ray-tracing methods
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Meot, F.
2015-06-26
The ray-tracing code Zgoubi computes particle trajectories in arbitrary magnetic and/or electric field maps or analytical field models. It includes a built-in fitting procedure, spin tracking, and many Monte Carlo processes. The accuracy of the integration method makes it an efficient tool for multi-turn tracking in periodic machines. Energy loss by synchrotron radiation, based on Monte Carlo techniques, had been introduced in Zgoubi in the early 2000s for studies regarding the linear collider beam delivery system. However, only recently has this Monte Carlo tool been used for systematic beam dynamics and spin diffusion studies in rings, including the eRHIC electron-ion collider project at the Brookhaven National Laboratory. Some beam dynamics aspects of this recent use of Zgoubi capabilities, including considerations of accuracy as well as further benchmarking in the presence of synchrotron radiation in rings, are reported here.
Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator...
Office of Scientific and Technical Information (OSTI)
Presently the code is maintained on Linux. The validity of the physical models implemented ... in particle accelerators, radiation protection and dosimetry, including the specific ...
Fast Monte Carlo for radiation therapy: the PEREGRINE Project...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
A Fast Monte Carlo Simulation for the International Linear Collider...
Office of Scientific and Technical Information (OSTI)
with the full simulation by sacrificing what is in many cases inappropriate attention to detail for valuable gains in the time required for results. Authors: Furse, D. ...
Uncertainty Quantification with Monte Carlo Hauser-Feshbach Calculatio...
Office of Scientific and Technical Information (OSTI)
LANL Country of Publication: United States Language: English Subject: Atomic and Nuclear Physics; Nuclear Fuel Cycle & Fuel Materials(11); Nuclear Physics & Radiation Physics(73)...
Monte-Carlo particle dynamics in a variable specific impulse...
Office of Scientific and Technical Information (OSTI)
Authors: Ilin, A.V. 1 ; Diaz, F.R.C. ; Squire, J.P. 2 ; Carter, M.D. 3 + Show Author Affiliations Lockheed Martin Space Mission Systems and Services, Houston, TX (United ...
Monte Carlo Hauser-Feshbach Calculations of Prompt Fission Neutrons...
Office of Scientific and Technical Information (OSTI)
Org: DOELANL Country of Publication: United States Language: English Subject: Atomic and Nuclear Physics; Nuclear Fuel Cycle & Fuel Materials(11); Nuclear Physics & Radiation...
Quantum Monte Carlo Calculations of Light Nuclei Using Chiral...
Office of Scientific and Technical Information (OSTI)
GrantContract Number: AC02-05CH11231 Type: Publisher's Accepted Manuscript Journal Name: Physical Review Letters Additional Journal Information: Journal Volume: 113; Journal ...
Tests of Monte Carlo Independent Column Approximation With a...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Järvenoja Heikki Järvinen Räisänen Finnish Meteorological Institute Figure 1. Root-mean-square sampling errors in local instantaneous total (LW+SW) net flux at the surface...
Monte Carlo Hauser-Feshbach Calculations of Prompt Fission Neutrons...
Office of Scientific and Technical Information (OSTI)
DOELANL Country of Publication: United States Language: English Subject: Atomic and Nuclear Physics; Nuclear Fuel Cycle & Fuel Materials(11); Nuclear Physics & Radiation...
Monte Carlo Simulation of Light Transport in Tissue, Beta Version
Energy Science and Technology Software Center (OSTI)
2003-12-09
Understanding light-tissue interaction is fundamental in the field of Biomedical Optics. It has important implications for both therapeutic and diagnostic technologies. In this program, light transport in scattering tissue is modeled by absorption and scattering events as each photon travels through the tissue. The path of each photon is determined statistically by calculating probabilities of scattering and absorption. Other measured quantities are total reflected light, total transmitted light, and total heat absorbed.
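The statistical photon walk just described can be illustrated with a minimal one-dimensional slab model. This is a sketch under simplifying assumptions, not this software: isotropic rescattering is assumed, and the function and parameter names are invented for illustration.

```python
import math
import random

def simulate_photons(n_photons, mu_a, mu_s, thickness, seed=0):
    """Toy 1-D slab Monte Carlo: each photon takes exponentially
    distributed steps; at each interaction it is absorbed with
    probability mu_a / (mu_a + mu_s), otherwise it rescatters
    isotropically.  Returns (reflected, transmitted, absorbed) counts."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    reflected = transmitted = absorbed = 0
    for _ in range(n_photons):
        z, cos_theta = 0.0, 1.0  # launch at the surface, heading inward
        while True:
            step = -math.log(1.0 - rng.random()) / mu_t  # free path length
            z += step * cos_theta
            if z < 0.0:
                reflected += 1
                break
            if z > thickness:
                transmitted += 1
                break
            if rng.random() < mu_a / mu_t:  # interaction is an absorption
                absorbed += 1
                break
            cos_theta = 2.0 * rng.random() - 1.0  # isotropic rescatter
    return reflected, transmitted, absorbed
```

Real tissue codes replace the isotropic rescatter with an anisotropic phase function and track 3-D positions, but the absorb-or-scatter bookkeeping is the same.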
A Monte Carlo Approach To Generator Portfolio Planning And Carbon...
solar thermal, and rooftop photovoltaics, as well as hydroelectric, geothermal, and natural gas plants. The portfolios produced by the model take advantage of the aggregation of...
The Monte Carlo Independent Column Approximation Model Intercomparison...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Service of Canada Räisänen, Petri Finnish Meteorological Institute Pincus, Robert NOAA-CIRES Climate Diagnostics Center Morcrette, Jean-Jacques European Centre for...
Diagnostic Mass-Consistent Wind Field Monte Carlo Dispersion Model
Energy Science and Technology Software Center (OSTI)
1991-01-01
MATHEW generates a diagnostic mass-consistent, three-dimensional wind field based on point measurements of wind speed and direction. It accounts for changes in topography within its calculational domain. The modeled wind field is used by the Lagrangian ADPIC dispersion model. This code is designed to predict the atmospheric boundary layer transport and diffusion of neutrally buoyant, non-reactive species as well as first-order chemical reactions and radioactive decay (including daughter products).
Monte Carlo Simulations for Homeland Security Using Anthropomorphic Phantoms
Burns, Kimberly A.
2008-01-01
A radiological dispersion device (RDD) is a device which deliberately releases radioactive material for the purpose of causing terror or harm. In the event that a dirty bomb is detonated, there may be airborne radioactive material that can be inhaled as well as settle on individuals, leading to external contamination.
Effects of self-seeding and crystal post-selection on the quality of Monte Carlo-integrated SFX data
Office of Scientific and Technical Information (OSTI)
Abstract is not provided. Authors: Barends, Thomas; White, Thomas A.; Barty, Anton; Foucar, Lutz; Messerschmidt, Marc; Alonso-Mori, Roberto; Botha, Sabine;
A Hybrid Variance Reduction Method Based on Gaussian Process for Core Simulation
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Zeyun Wu, Qiong Zhang and Hany S. Abdel-Khalik, Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 {zwu3, qzhang7, abdelkhalik}@ncsu.edu INTRODUCTION Variance reduction techniques are usually employed to accelerate the convergence of Monte Carlo (MC) simulation. Hybrid deterministic-MC methods [1, 2, 3] have been recently developed to achieve the goal of global variance reduction.
Hwang, Seho; Shin, Jehyun; Kim, Jongman; Won, Byeongho
2015-03-10
Density logging is widely applied in a variety of fields such as petroleum exploration, mineral exploration, and geotechnical surveying. Density logs are normally run in open holes, but cased boreholes are frequently encountered. The calibration curves supplied by slim-hole logging manufacturers normally account only for variation in borehole diameter. In this study, we have corrected for steel-casing effects using numerical and experimental methods: numerical modeling with the Monte Carlo N-Particle (MCNP) code, which is based on the Monte Carlo method, and field experiments comparing open- and cased-hole logs. We used the FDGS (Formation Density Gamma Sonde) for slim boreholes with a 100 mCi 137Cs source, a three-inch borehole, and steel casing. The casing effects obtained from the numerical and experimental methods agree well.
Apparatus and method for tracking a molecule or particle in three dimensions
Werner, James H. (Los Alamos, NM); Goodwin, Peter M. (Los Alamos, NM); Lessard, Guillaume (Santa Fe, NM)
2009-03-03
An apparatus and method were used to track the movement of fluorescent particles in three dimensions. Control software was used with the apparatus to implement a tracking algorithm for tracking the motion of the individual particles in glycerol/water mixtures. Monte Carlo simulations suggest that the tracking algorithms in combination with the apparatus may be used for tracking the motion of single fluorescent or fluorescently labeled biomolecules in three dimensions.
CASL-U-2015-0157-000 Stabilization Methods
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Stabilization Methods for CMFD Acceleration, M. Jarrett, B. Kelley, B. Kochunas, T. Downar, E. Larsen, University of Michigan, April 19, 2015. CASL-U-2015-0157-000. ANS MC2015 - Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method, Nashville, TN, April 19-23, 2015, on CD-ROM, American Nuclear Society, LaGrange Park, IL (2015). STABILIZATION METHODS FOR CMFD ACCELERATION M. Jarrett, B. Kelley, B.
Office of Environmental Management (EM)
Carlos Valdez, Chair Department of Energy Washington, DC 20585 December 10, 2012 Northern New Mexico Citizens' Advisory Board 94 Cities of Gold Road Santa Fe, New Mexico 87506 Dear Mr. Valdez: Thank you and the Northern New Mexico Citizens' Advisory Board (NNMCAB) for recommendation No. 2012-02, "Expand the Mission of the NNMCAB to Include Advice and Recommendations on the Evaluation and Use of Waste Isolation Pilot Plant (WIPP) as it pertains to the Disposal of Legacy Non-TRU Radioactive
Federal University of Sao Carlos | Open Energy Information
Name: Federal University of Sao Carlos Place: Sao Carlos, Sao Paulo, Brazil Zip: 13565-905 Product: Federal university of Sao Carlos....
Eolica Montes de Cierzo | Open Energy Information
Name: Eolica Montes de Cierzo Place: Navarra, Spain Sector: Wind energy Product: Spanish wind farm developer in the region of Navarra....
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil; Abhyankar, S.; Ghosh, Donetta L.; Smith, Barry; Huang, Zhenyu; Tartakovsky, Alexandre M.
2015-09-22
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
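The Monte Carlo side of such a comparison can be sketched for a toy generator model. The linear drift and the Ornstein-Uhlenbeck input below are illustrative assumptions, not the paper's equations.

```python
import math
import random

def monte_carlo_sode(n_paths=200, n_steps=100, dt=0.01,
                     theta=1.0, sigma=0.5, seed=0):
    """Euler-Maruyama Monte Carlo for dx/dt = -x + p(t), where the power
    input p(t) is an Ornstein-Uhlenbeck process, i.e. time-correlated
    noise rather than the delta-correlated (white) noise most SODE
    methods assume.  Returns the sampled end-of-horizon states."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        x = p = 0.0
        for _ in range(n_steps):
            # OU update: mean-reverting drift plus a Gaussian kick.
            p += -theta * p * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            # Generator state relaxes toward the fluctuating input.
            x += (-x + p) * dt
        finals.append(x)
    return finals
```

A histogram of `finals` approximates the joint density that the PDF method instead obtains by solving one deterministic PDE, which is the comparison the abstract describes.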
A flexible importance sampling method for integrating subgrid processes
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Raut, E. K.; Larson, V. E.
2015-10-22
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is integration. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). The resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
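The variance-reduction idea, drawing more sample points where the integrand matters and reweighting, can be shown in one dimension. This is a generic sketch, not SILHS: the integrand and proposal density are chosen purely for illustration.

```python
import random

def importance_sample(n, seed=1):
    """Estimate I = integral of x^2 over [0, 1] (exactly 1/3) by sampling
    from the proposal density q(x) = 2x (inverse CDF: x = sqrt(u)) instead
    of uniformly.  Each draw of the integrand f(x) = x^2 is weighted by
    1/q(x); the weighted integrand simplifies to f(x)/q(x) = x/2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.random() ** 0.5  # x ~ q(x) = 2x on [0, 1]
        total += x / 2.0         # weighted integrand f(x)/q(x)
    return total / n
```

Because q concentrates samples where x^2 is large, the weighted estimator has lower variance than uniform sampling at the same n, which is the same economy SILHS seeks by oversampling categories such as the rain-evaporation region.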
San Carlos Apache Tribe Solar Feasibility Study
San Carlos Apache Tribe Solar Feasibility Study San Carlos Apache Tribe And Reservation * 90 miles east of Phoenix * Membership: 15,000 * 1.83 million acres. * 2nd highest rated level of solar resource potential * Main employers: Tribe / IHS /BIA / Schools / Casino / Telecom * Utilities: Telecom, MTSS, Utility Authority * Revenue: Casino, farming, water leasing, saw mill, hunting/fishing, sand & gravel/telecom San Carlos Apache Reservation San Carlos Apache Mission Statement The Apache
Mont Vista Capital LLC | Open Energy Information
Name: Mont Vista Capital LLC Place: New York, New York Zip: 10167 Sector: Services Product: Mont Vista Capital is a leading global...
Method for measuring changes in light absorption of highly scattering media
Bigio, Irving J. (Los Alamos, NM); Johnson, Tamara M. (Los Alamos, NM); Mourant, Judith R. (Los Alamos, NM)
2002-01-01
The noninvasive measurement of variations in absorption that are due to changes in concentrations of biochemically relevant compounds in tissue is important in many clinical settings. One problem with such measurements is that the pathlength traveled by the collected light through the tissue depends on the scattering properties of the tissue. It is demonstrated, using both Monte Carlo simulations and experimental measurements, that for an appropriate separation between light-delivery and light-collection fibers, the pathlength of the collected photons is insensitive to scattering parameters for the range of parameters typically found in tissue. This is important for developing rapid, noninvasive, inexpensive, and accurate methods for measuring absorption changes in tissue.
Monte-Carlo simulation of noise in hard X-ray Transmission Crystal...
Office of Scientific and Technical Information (OSTI)
UPMC, 91128 Palaiseau (France) University of Milano, via Celoria 16, 20133 Milano (Italy) Publication Date: 2014-11-15 OSTI Identifier: 22308598 Resource Type: Journal Article ...
Final report for LDRD13-0130 : exponentially convergent Monte Carlo for electron transport.
Franke, Brian Claude
2013-09-01
This is the final report on the LDRD, though the interested reader is referred to the ANS Transactions paper which more thoroughly documents the technical work of this project.
Monte Carlo Implementation Of Up- Or Down-Scattering Due To Collisions...
Office of Scientific and Technical Information (OSTI)
Subject: 71 CLASSICAL AND QUANTUM MECHANICS, GENERAL PHYSICS; 71 CLASSICAL AND QUANTUM MECHANICS, GENERAL PHYSICS; 22 GENERAL STUDIES OF NUCLEAR REACTORS; 73 NUCLEAR PHYSICS AND ...
Zori 1.0: A Parallel Quantum Monte Carlo Electronic StructurePackage...
Office of Scientific and Technical Information (OSTI)
Authors: Aspuru-Guzik, Alan ; Salomon-Ferrer, Romelia ; Austin, Brian ; Perusquia-Flores, Raul ; Griffin, Mary A. ; Oliva, Ricardo A. ; Skinner,David ; Dominik,Domin ; Lester Jr., ...
Monte Carlo Implementation Of Up- Or Down-Scattering Due To Collisions...
Office of Scientific and Technical Information (OSTI)
PHYSICS; 71 CLASSICAL AND QUANTUM MECHANICS, GENERAL PHYSICS; 22 GENERAL STUDIES OF NUCLEAR REACTORS; 73 NUCLEAR PHYSICS AND RADIATION PHYSICS Word Cloud More Like This Full...
Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations (Conference)
Office of Scientific and Technical Information (OSTI)
Patrick ; Prinja, Anil K. Publication Date: 2013-09-01 OSTI Identifier: 1110389 Report Number(s): SAND2013-7609C 473868
Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations (Conference)
Office of Scientific and Technical Information (OSTI)
P. ; Prinja, Anil K. Publication Date: 2013-10-01 OSTI Identifier: 1114635 Report Number(s): SAND2013-8831C 477016
Structure of Cu64.5Zr35.5 Metallic glass by reverse Monte Carlo...
Office of Scientific and Technical Information (OSTI)
2 + Show Author Affiliations Ames Laboratory University of Science and Technology of China Publication Date: 2014-02-07 OSTI Identifier: 1134611 Report Number(s): IS-J 8231...
Random-Walk Monte Carlo Simulation of Intergranular Gas Bubble Nucleation in UO2 Fuel
Yongfeng Zhang; Michael R. Tonks; S. B. Biner; D.A. Andersson
2012-11-01
Using a random-walk particle algorithm, we investigate the clustering of fission gas atoms on grain boundaries in oxide fuels. The computational algorithm implemented in this work considers a planar surface representing a grain boundary on which particles appear at a rate dictated by the Booth flux, migrate two dimensionally according to their grain boundary diffusivity, and coalesce by random encounters. Specifically, the intergranular bubble nucleation density is the key variable we investigate using a parametric study in which the temperature, grain boundary gas diffusivity, and grain boundary segregation energy are varied. The results reveal that the grain boundary bubble nucleation density can vary widely due to these three parameters, which may be an important factor in the observed variability in intergranular bubble percolation among grain boundaries in oxide fuel during fission gas release.
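The core loop of such a particle algorithm, diffusion plus coalescence on a periodic plane, can be sketched as follows. This is a toy version: the step size, capture radius, and the absence of a Booth-flux source term are simplifying assumptions, not the authors' implementation.

```python
import random

def diffuse_and_coalesce(n_particles=50, n_steps=20, box=10.0,
                         capture=0.5, seed=0):
    """2-D random walk with coalescence on a periodic square: particles
    take Gaussian steps and merge whenever two come within `capture`
    distance.  Returns the surviving cluster count, a stand-in for the
    intergranular bubble nucleation density."""
    rng = random.Random(seed)
    pts = [[rng.uniform(0, box), rng.uniform(0, box)]
           for _ in range(n_particles)]
    for _ in range(n_steps):
        for p in pts:  # diffusion step with periodic wrap-around
            p[0] = (p[0] + rng.gauss(0.0, 0.1)) % box
            p[1] = (p[1] + rng.gauss(0.0, 0.1)) % box
        survivors = []
        for p in pts:  # coalescence: drop p if it sits near a survivor
            for q in survivors:
                # Minimum-image distances on the periodic box.
                dx = min(abs(p[0] - q[0]), box - abs(p[0] - q[0]))
                dy = min(abs(p[1] - q[1]), box - abs(p[1] - q[1]))
                if dx * dx + dy * dy < capture * capture:
                    break
            else:
                survivors.append(p)
        pts = survivors
    return len(pts)
```

A parametric study like the paper's would sweep temperature-dependent diffusivity (here, the Gaussian step width) and record how the surviving cluster count responds.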
Monte Carlo Fundamentals, F. B. Brown and T. M. Sutton
Office of Scientific and Technical Information (OSTI)
constitute or imply its endorsement, recommendation, or favoring by the United States ... independent calculations mi result from job j (all J jobs are identical except for ...
CASL-U-2015-0247-000 The OpenMC Monte Carlo Particle Transport...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Ellis, Nich Horelik, Benoit Forget, Kord Smith Massachusetts Institute of Technology ... Paul Romano 3 , Benoit Forget, 1 and Kord Smith 1 1 Massachusetts Institute of Technology, ...
CASL-U-2015-0170-000-a SHIFT: A New Monte Carlo Package Seth...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
CASL-U-2015-0170-000-a 2 Shift: MC RT Code What makes Shift an HPC transport code? * Modern framework - Rapid development with C++11, Python, CMake, CTest, ... - Integration...
Ding, D.; Chen, X.; Minnich, A. J.
2014-04-07
Recently, a pump beam size dependence of thermal conductivity was observed in Si at cryogenic temperatures using time-domain thermoreflectance (TDTR). These observations were attributed to quasiballistic phonon transport, but the interpretation of the measurements has been semi-empirical. Here, we present a numerical study of the heat conduction that occurs in the full 3D geometry of a TDTR experiment, including an interface, using the Boltzmann transport equation. We identify the radial suppression function that describes the suppression in heat flux, compared to Fourier's law, that occurs due to quasiballistic transport and demonstrate good agreement with experimental data. We also discuss unresolved discrepancies that are important topics for future study.
Bayesian methods for characterizing unknown parameters of material models
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.
2016-02-04
A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
San Carlos Apache Tribe- 2012 Project
Broader source: Energy.gov [DOE]
Under this project, the San Carlos Apache Tribe will study the feasibility of solar energy projects within the reservation with the potential to generate a minimum of 1 megawatt (MW).
ARM - Carlos Sousa Interview (English Version)
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
A New Equivalence Theory Method for Treating Doubly Heterogeneous Fuel - II. Verifications
Choi, Sooyoung; Kong, Chidong; Lee, Deokjung; Williams, Mark L.
2015-03-09
A new methodology has been developed recently to treat resonance self-shielding in systems for which the fuel compact region of a reactor lattice consists of small fuel grains dispersed in a graphite matrix. The theoretical development adopts equivalence theory in both micro- and macro-level heterogeneities to provide approximate analytical expressions for the shielded cross sections, which may be interpolated from a table of resonance integrals or Bondarenko factors using a modified background cross section as the interpolation parameter. This paper describes the first implementation of the theoretical equations in a reactor analysis code. In order to reduce discrepancies caused by use of the rational approximation for collision probabilities in the original derivation, a new formulation for a doubly heterogeneous Bell factor is developed in this paper to improve the accuracy of doubly heterogeneous expressions. This methodology is applied to a wide range of pin cell and assembly test problems with varying geometry parameters, material compositions, and temperatures, and the results are compared with continuous-energy Monte Carlo simulations to establish the accuracy and range of applicability of the new approach. It is shown that the new doubly heterogeneous self-shielding method including the Bell factor correction gives good agreement with reference Monte Carlo results.
Loyalka, Sudarshan
2015-04-09
The purpose of this project was to develop methods and tools that will aid in the safety evaluation of nuclear fuels and the licensing of nuclear reactors relating to accidents. The objectives were to develop more detailed and faster computations of fission product transport and aerosol evolution as they generally relate to nuclear fuel and/or nuclear reactor accidents. The two tasks in the project related to molecular transport in nuclear fuel and aerosol transport in the reactor vessel and containment. For both tasks, explorations of coupling Direct Simulation Monte Carlo with Navier-Stokes solvers or the Sectional method were not successful. However, mesh-free methods for the Direct Simulation Monte Carlo method were successfully explored. These explorations permit applications to porous and fractured media, and arbitrary geometries. The computations were carried out in Mathematica and are fully parallelized. The project has resulted in new computational tools (algorithms and programs) that will improve the fidelity of computations to the actual physics, chemistry, and transport of fission products in the nuclear fuel and aerosol in reactor primary and secondary containments.
Toni Smithl; Lyudmila V. Slipchenko; Mark S. Gordon
2008-02-27
This study compares the results of the general effective fragment potential (EFP2) method to the results of a previous combined coupled cluster with single, double, and perturbative triple excitations [CCSD(T)] and symmetry-adapted perturbation theory (SAPT) study [Sinnokrot and Sherrill, J. Am. Chem. Soc., 2004, 126, 7690] on substituent effects in {pi}-{pi} interactions. EFP2 is found to accurately model the binding energies of the benzene-benzene, benzene-phenol, benzene-toluene, benzene-fluorobenzene, and benzene-benzonitrile dimers, as compared with high-level methods [Sinnokrot and Sherrill, J. Am. Chem. Soc., 2004, 126, 7690], but at a fraction of the computational cost of CCSD(T). In addition, an EFP-based Monte Carlo/simulated annealing study was undertaken to examine the potential energy surface of the substituted dimers.
Numerical studies of the flux-to-current ratio method in the KIPT neutron source facility
Cao, Y.; Gohar, Y.; Zhong, Z.
2013-07-01
The reactivity of a subcritical assembly has to be monitored continuously in order to assure its safe operation. In this paper, the flux-to-current ratio method has been studied as an approach to provide the on-line reactivity measurement of the subcritical system. Monte Carlo numerical simulations have been performed using the KIPT neutron source facility model. It is found that the reactivity obtained from the flux-to-current ratio method is sensitive to the detector position in the subcritical assembly. However, if multiple detectors are located about 12 cm above the graphite reflector and 54 cm radially, the technique is shown to be very accurate in determining the k{sub eff} of this facility in the range of 0.75 to 0.975. (authors)
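The idea behind the flux-to-current ratio can be sketched with the source multiplication relation: in a source-driven subcritical assembly the detector response scales as S/(1 - k_eff), so the ratio of beam current (proportional to source intensity) to detector flux tracks reactivity. The calibration constant C below is a hypothetical stand-in for the detector-position-dependent proportionality the paper discusses:

```python
def keff_from_flux_to_current(flux, current, C):
    """Infer k_eff from detector flux and source (beam) current.

    Assumed forward model: flux = C * current / (1 - k_eff),
    so k_eff = 1 - C * current / flux.  C is an illustrative,
    position-dependent calibration constant.
    """
    return 1.0 - C * current / flux

def flux_from_keff(keff, current, C):
    """Forward model, used here only as a consistency check."""
    return C * current / (1.0 - keff)
```

Round-tripping a known k_eff through the forward model and the inverse recovers the input, which is the self-consistency the method relies on.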
Dingkang Zhang; Farzad Rahnema; Abderrafi M. Ougouag
2013-09-01
A local incident flux response expansion transport method is developed to generate transport solutions for coupling to diffusion theory codes regardless of their solution method (e.g., fine mesh, nodal, response based, finite element, etc.) for reactor core calculations in both two-dimensional (2-D) and three-dimensional (3-D) cylindrical geometries. In this approach, a Monte Carlo method is first used to precompute the local transport solution (i.e., response function library) for each unique transport coarse node, in which diffusion theory is not valid due to strong transport effects. The response function library is then used to iteratively determine the albedo coefficients on the diffusion-transport interfaces, which are then used as the coupling parameters within the diffusion code. This interface coupling technique allows a seamless integration of the transport and diffusion methods. The new method retains the detailed heterogeneity of the transport nodes and naturally constructs any local solution within them by a simple superposition of local responses to all incoming fluxes from the contiguous coarse nodes. A new technique is also developed for coupling to fine-mesh diffusion methods/codes. The local transport method/module is tested in 2-D and 3-D pebble-bed reactor benchmark problems consisting of an inner reflector, an annular fuel region, and a controlled outer reflector. It is found that the results predicted by the transport module agree very well with the reference fluxes calculated directly by MCNP in both benchmark problems.
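The interface iteration described above can be illustrated with a toy two-surface node: a precomputed response matrix R maps incoming partial currents to outgoing ones, and the iteration repeatedly applies R together with albedo boundary conditions until the currents converge. The matrix, albedos, and source below are arbitrary illustrative numbers, not a real response function library:

```python
# Hypothetical 2-surface transport node: outgoing = R @ incoming.
R = [[0.3, 0.5],
     [0.5, 0.3]]
albedo = [0.8, 0.8]       # fraction returned by surrounding diffusion regions
source = [1.0, 0.0]       # fixed incoming current from an external source

def iterate_currents(n_iter=200):
    """Fixed-point iteration on the node's surface partial currents."""
    incoming = list(source)
    for _ in range(n_iter):
        # Node response: superpose the local responses to all incoming fluxes.
        outgoing = [sum(R[i][j] * incoming[j] for j in range(2)) for i in range(2)]
        # Diffusion side returns an albedo fraction plus the fixed source.
        incoming = [source[i] + albedo[i] * outgoing[i] for i in range(2)]
    return incoming

currents = iterate_currents()
```

The converged currents satisfy the coupled balance exactly, mirroring how the response library and albedo coefficients close the diffusion-transport interface.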
Tanguay, Jesse; Kim, Ho Kyung; Cunningham, Ian A.
2012-01-15
Purpose: X-ray digital subtraction angiography (DSA) is widely used for vascular imaging. However, the need to subtract a mask image can result in motion artifacts and compromised image quality. The current interest in energy-resolving photon-counting (EPC) detectors offers the promise of eliminating motion artifacts and other advanced applications using a single exposure. The authors describe a method of assessing the iodine signal-to-noise ratio (SNR) that may be achieved with energy-resolved angiography (ERA) to enable a direct comparison with other approaches including DSA and dual-energy angiography for the same patient exposure. Methods: A linearized noise-propagation approach, combined with linear expressions of dual-energy and energy-resolved imaging, is used to describe the iodine SNR. The results were validated by a Monte Carlo calculation for all three approaches and compared visually for dual-energy and DSA imaging using a simple angiographic phantom with a CsI-based flat-panel detector. Results: The linearized SNR calculations show excellent agreement with Monte Carlo results. While dual-energy methods require an increased tube heat load of 2x to 4x compared to DSA, and photon-counting detectors are not yet ready for angiographic imaging, the available iodine SNR for both methods as tested is within 10% of that of conventional DSA for the same patient exposure over a wide range of patient thicknesses and iodine concentrations. Conclusions: While the energy-based methods are not necessarily optimized and further improvements are likely, the linearized noise-propagation analysis provides the theoretical framework of a level playing field for optimization studies and comparison with conventional DSA. It is concluded that both dual-energy and photon-counting approaches have the potential to provide similar angiographic image quality to DSA.
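The agreement between linearized noise propagation and Monte Carlo can be illustrated on a toy weighted subtraction of two Poisson-noisy detector signals; the weights and count levels below are arbitrary stand-ins, not the paper's imaging parameters:

```python
import math, random

random.seed(5)
w1, w2 = 1.0, 0.6          # hypothetical subtraction weights
mu1, mu2 = 5000.0, 4000.0  # hypothetical mean counts in the two channels

# Linearized propagation: for S = w1*N1 - w2*N2 with independent Poisson
# counts, var(S) = w1^2*mu1 + w2^2*mu2 (Poisson variance equals the mean).
signal = w1 * mu1 - w2 * mu2
sigma_lin = math.sqrt(w1 ** 2 * mu1 + w2 ** 2 * mu2)
snr_lin = signal / sigma_lin

def noisy_counts(mu):
    # Gaussian approximation to Poisson noise, valid at these count levels.
    return random.gauss(mu, math.sqrt(mu))

# Direct Monte Carlo estimate of the same SNR.
trials = [w1 * noisy_counts(mu1) - w2 * noisy_counts(mu2) for _ in range(20000)]
m = sum(trials) / len(trials)
v = sum((t - m) ** 2 for t in trials) / (len(trials) - 1)
snr_mc = m / math.sqrt(v)
```

The two SNR estimates agree closely, which is the kind of validation the paper reports for its linearized iodine-SNR expressions.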
The iterative thermal emission method: A more implicit modification of IMC
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Long, A. R.; Gentile, N. A.; Palmer, T. S.
2014-08-19
For over 40 years, the Implicit Monte Carlo (IMC) method has been used to solve challenging problems in thermal radiative transfer. These problems typically contain regions that are optically thick and diffusive, as a consequence of the high degree of “pseudo-scattering” introduced to model the absorption and reemission of photons from a tightly-coupled, radiating material. IMC has several well-known features that could be improved: a) it can be prohibitively computationally expensive, b) it introduces statistical noise into the material and radiation temperatures, which may be problematic in multiphysics simulations, and c) under certain conditions, solutions can be nonphysical, in that they violate a maximum principle, where IMC-calculated temperatures can be greater than the maximum temperature used to drive the problem.
Khledi, Navid; Sardari, Dariush; Arbabi, Azim; Ameri, Ahmad; Mohammadi, Mohammad
2015-02-24
Depending on the location and depth of the tumor, electron or photon beams may be used for treatment. Electron beams have some advantages over photon beams for the treatment of shallow tumors, sparing the normal tissues beyond the tumor; photon beams, on the other hand, are used to treat deep targets. Both beam types have limitations, for example the depth dependence of the penumbra and the lack of lateral equilibrium for small electron fields. First, we simulated the conventional head configuration of the Varian 2300 for 16 MeV electrons, and the results were validated by benchmarking the Percent Depth Dose (PDD) and profile of the simulation against measurement. In the next step, a perforated lead (Pb) sheet of 1 mm thickness was placed at the top of the applicator holder tray. This layer produces bremsstrahlung x-rays while part of the electrons pass through the holes, yielding a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10x10 and 4x4 cm2 field sizes. This study showed the advantages of mixing electron and photon beams: the depth dependence of the pure electron penumbra is reduced, especially for small fields, as are the dramatic changes of the PDD curve with irradiation field size.
Praveen, E. Satyanarayana, S. V. M.
2014-04-24
Traditional definition of phase transition involves an infinitely large system in the thermodynamic limit. Finite systems such as biological proteins exhibit cooperative behavior similar to phase transitions. We employ a recently discovered analysis of inflection points of the microcanonical entropy to estimate the transition temperature of the phase transition in the q-state Potts model on a finite two-dimensional square lattice for q=3 (second order) and q=8 (first order). The difference of the logarithm of the density of states (DOS), Δ ln g(E) = ln g(E+ΔE) − ln g(E), exhibits a point of inflection at a value corresponding to the inverse transition temperature. This feature is common to systems exhibiting both first- as well as second-order transitions. While the difference of the DOS registers a monotonic variation around the point of inflection for systems exhibiting a second-order transition, it has an S-shape with a minimum and maximum around the point of inflection for the case of a first-order transition.
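The inflection-point analysis can be sketched on a synthetic entropy rather than the Potts density of states: take ln g(E) = β_c·E − c·(E − E0)^4 (an illustrative form, not the paper's data), whose first difference has its inflection at E0 with slope β_c, the inverse transition temperature:

```python
# Synthetic microcanonical entropy with a known inflection structure
# (illustrative stand-in for a measured Potts-model density of states).
BETA_C, C, E0 = 1.4, 0.01, 0.0

def ln_g(E):
    return BETA_C * E - C * (E - E0) ** 4

dE = 0.05
energies = [-3.0 + i * dE for i in range(121)]
# First difference of the entropy, Delta ln g(E) = ln g(E+dE) - ln g(E).
diff = [ln_g(E + dE) - ln_g(E) for E in energies]

# Locate the inflection of the first difference: the zero crossing of
# its discrete second derivative.
curv = [diff[i + 1] - 2.0 * diff[i] + diff[i - 1] for i in range(1, len(diff) - 1)]
k = min(range(len(curv)), key=lambda i: abs(curv[i])) + 1

beta_estimate = diff[k] / dE   # estimate of the inverse transition temperature
```

On this synthetic entropy the recovered β matches the built-in β_c, illustrating why the value of Δ ln g at the inflection point encodes the transition temperature.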
MaGe - a GEANT4-based Monte Carlo Application Framework for Low-background Germanium Experiments
Boswell, M.; Chan, Yuen-Dat; Detwiler, Jason A.; Finnerty, P.; Henning, R.; Gehman, Victor; Johnson, Robert A.; Jordan, David V.; Kazkaz, Kareem; Knapp, Markus; Kroninger, Kevin; Lenz, Daniel; Leviner, L.; Liu, Jing; Liu, Xiang; MacMullin, S.; Marino, Michael G.; Mokhtarani, A.; Pandola, Luciano; Schubert, Alexis G.; Schubert, J.; Tomei, Claudia; Volynets, Oleksandr
2011-06-13
We describe a physics simulation software framework, MaGe, that is based on the GEANT4 simulation toolkit. MaGe is used to simulate the response of ultra-low radioactive background radiation detectors to ionizing radiation, specifically for the MAJORANA and GERDA neutrinoless double-beta decay experiments. MAJORANA and GERDA use high-purity germanium technology to search for the neutrinoless double-beta decay of the {sup 76}Ge isotope, and MaGe is jointly developed between these two collaborations. The MaGe framework contains simulated geometries of common objects, prototypes, test stands, and the actual experiments. It also implements customized event generators, GEANT4 physics lists, and output formats. All of these features are available as class libraries that are typically compiled into a single executable. The user selects the particular experimental setup implementation at run-time via macros. The combination of all these common classes into one framework reduces duplication of efforts, eases comparison between simulated data and experiment, and simplifies the addition of new detectors to be simulated. This paper focuses on the software framework, custom event generators, and physics lists.
Burke, Timothy Patrick; Kiedrowski, Brian; Martin, William R.; Brown, Forrest B.
2015-08-27
KDEs show potential for reducing variance in global solutions (flux, reaction rates) when compared to histogram solutions.
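The histogram-versus-KDE comparison can be illustrated on a toy density estimation problem; the Gaussian "flux" shape, bandwidths, and sample count are arbitrary stand-ins for the transport tallies discussed above:

```python
import random, math

random.seed(2)
# Toy stand-in for tally samples: draws from a standard normal "flux" shape.
samples = [random.gauss(0.0, 1.0) for _ in range(20000)]

def histogram_density(x, data, width=0.2):
    """Histogram tally: count samples in the bin centered at x."""
    lo = x - width / 2.0
    return sum(lo <= s < lo + width for s in data) / (len(data) * width)

def kde_density(x, data, h=0.2):
    """Gaussian-kernel density estimate at x with bandwidth h."""
    norm = 1.0 / (len(data) * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in data)

true_density = 1.0 / math.sqrt(2.0 * math.pi)   # standard normal at x = 0
```

Both estimators converge to the true density; the KDE's smooth kernel spreads each sample's contribution over neighboring points, which is the mechanism behind the variance reduction noted in the abstract.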
Nonlocal exchange correlation in screened-exchange densityfunctional methods
Lee, Byounghak; Wang, Lin-Wang; Spataru, Catalin D.; Louie,Steven G.
2007-04-22
We present a systematic study on the exchange-correlation effects in the screened-exchange local density functional method. To investigate the effects of the screened-exchange potential in the band gap correction, we have compared the exchange-correlation potential term in the sX-LDA formalism with the self-energy term in the GW approximation. It is found that the band gap correction of the sX-LDA method primarily comes from the downshift of valence band states, resulting from the enhancement of bonding and the increase of ionization energy. The band gap correction in the GW method, on the contrary, comes in large part from the increase of the conduction band energies. We also studied the effects of the screened-exchange potential in the total energy by investigating the exchange-correlation hole in comparison with quantum Monte Carlo calculations. When the Thomas-Fermi screening is used, the sX-LDA method overestimates (underestimates) the exchange-correlation hole in short (long) range. From the exchange-correlation energy analysis we found that the LDA method yields better absolute total energy than the sX-LDA method.
San Carlos Apache Tribe - Energy Organizational Analysis
Rapp, James; Albert, Steve
2012-04-01
The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late-2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: the analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"); start-up staffing and other costs associated with the Phase 1 SCAT energy organization; an intern program; staff training; and tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.
Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.
2015-01-01
The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.
Gehin, J.C.; Worley, B.A.; Renier, J.P.; Wemple, C.A.; Jahshan, S.N.; Ryskammp, J.M.
1995-08-01
This report summarizes the neutronics analysis performed during 1991 and 1992 in support of characterization of the conceptual design of the Advanced Neutron Source (ANS). The methods used in the analysis, parametric studies, and key results supporting the design and safety evaluations of the conceptual design are presented. The analysis approach used during the conceptual design phase followed the same approach used in early ANS evaluations: (1) a strong reliance on Monte Carlo theory for beginning-of-cycle reactor performance calculations and (2) a reliance on few-group diffusion theory for reactor fuel cycle analysis and for evaluation of reactor performance at specific time steps over the fuel cycle. The Monte Carlo analysis was carried out using the MCNP continuous-energy code, and the few-group diffusion theory calculations were performed using the VENTURE and PDQ code systems. The MCNP code was used primarily for its capability to model the reflector components in realistic geometries as well as the inherent circumvention of cross-section processing requirements and use of energy-collapsed cross sections. The MCNP code was used for evaluations of reflector component reactivity effects and of heat loads in these components. The code was also used as a benchmark comparison against the diffusion-theory estimates of key reactor parameters such as region fluxes, control rod worths, reactivity coefficients, and material worths. The VENTURE and PDQ codes were used to provide independent evaluations of burnup effects, power distributions, and small perturbation worths. The performance and safety calculations performed over the subject time period are summarized, and key results are provided. The key results include flux and power distributions over the fuel cycle, silicon production rates, fuel burnup rates, component reactivities, control rod worths, component heat loads, shutdown reactivity margins, reactivity coefficients, and isotope production rates.
Aaltonen, T.; Alvarez Gonzalez, B.; Amerio, S.; Amidei, D.; Anastassov, A.; Annovi, A.; Antos, J.; Apollinari, G.; Appel, J.A.; Apresyan, A.; Arisawa, T.; /Waseda U. /Dubna, JINR
2010-10-01
A precision measurement of the top quark mass m{sub t} is obtained using a sample of t{bar t} events from p{bar p} collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of m{sub t} and a parameter {Delta}{sub JES} used to calibrate the jet energy scale in situ. Using a total of 1087 events, a value of m{sub t} = 173.0 {+-} 1.2 GeV/c{sup 2} is measured.
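The quasi-Monte Carlo integration used in the likelihood calculation above can be illustrated with a low-discrepancy Halton sequence in place of pseudo-random points; the smooth integrand below is an arbitrary stand-in, not the matrix element:

```python
import math

def van_der_corput(n, base):
    """n-th element of the van der Corput sequence in the given base."""
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

def halton_2d(n_points):
    """2-D Halton points built from coprime bases 2 and 3."""
    return [(van_der_corput(i, 2), van_der_corput(i, 3))
            for i in range(1, n_points + 1)]

def f(x, y):
    # Illustrative smooth integrand; its integral over [0,1]^2 is 1/pi.
    return math.sin(math.pi * x) * y

# Quasi-Monte Carlo estimate: average the integrand over Halton points.
qmc = sum(f(x, y) for x, y in halton_2d(4096)) / 4096
```

Because low-discrepancy points fill the unit square more evenly than random draws, the error typically decays faster than the 1/sqrt(N) of plain Monte Carlo for smooth integrands, which is why such rules suit repeated per-event likelihood integrations.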
Paglieroni, David W. (Pleasanton, CA); Manay, Siddharth (Livermore, CA)
2011-12-20
A stochastic method and system for detecting polygon structures in images, by detecting a set of best matching corners of predetermined acuteness .alpha. of a polygon model from a set of similarity scores based on GDM features of corners, and tracking polygon boundaries as particle tracks using a sequential Monte Carlo approach. The tracking involves initializing polygon boundary tracking by selecting pairs of corners from the set of best matching corners to define a first side of a corresponding polygon boundary; tracking all intermediate sides of the polygon boundaries using a particle filter, and terminating polygon boundary tracking by determining the last side of the tracked polygon boundaries to close the polygon boundaries. The particle tracks are then blended to determine polygon matches, which may be made available, such as to a user, for ranking and inspection.
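The sequential Monte Carlo machinery behind the boundary tracking above is a particle filter; a minimal bootstrap version on a 1-D random-walk state is sketched below (the state model and Gaussian likelihood are illustrative, not the patent's GDM corner-feature scores):

```python
import random
from math import exp

random.seed(7)

def particle_filter(observations, n_particles=500, obs_sigma=1.0):
    """Bootstrap particle filter: predict, weight, resample, estimate."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Predict: propagate each particle with process noise.
        particles = [x + random.gauss(0.0, 0.5) for x in particles]
        # Update: weight each particle by the observation likelihood.
        weights = [exp(-0.5 * ((z - x) / obs_sigma) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample to concentrate particles on likely states.
        particles = random.choices(particles, weights=weights, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates

track = particle_filter([0.0, 0.5, 1.0, 1.5, 2.0])
```

In the patented method the "observations" would be corner-similarity scores along candidate polygon sides, and the particle tracks are blended into polygon matches rather than averaged into a point estimate.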
Project Reports for San Carlos Apache Tribe- 2012 Project
Broader source: Energy.gov [DOE]
Under this project, the San Carlos Apache Tribe will study the feasibility of solar energy projects within the reservation with the potential to generate a minimum of 1 megawatt (MW).
Parameterizing deep convection using the assumed probability density function method
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Storer, R. L.; Griffin, B. M.; Höft, J.; Weber, J. K.; Raut, E.; Larson, V. E.; Wang, M.; Rasch, P. J.
2015-01-06
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
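The Monte Carlo coupling of an assumed subgrid PDF to a nonlinear microphysics rate can be sketched as follows; the Gaussian PDF and threshold "autoconversion" rate are illustrative stand-ins, not the scheme's actual closures:

```python
import random

random.seed(1)

def autoconversion(q, q_crit=1.0):
    # Toy threshold-nonlinear process rate (hypothetical form): zero below
    # a critical water content, quadratic above it.
    return max(q - q_crit, 0.0) ** 2

# Assumed subgrid PDF of a water-content-like variable (illustrative).
mean_q, sigma_q, n = 0.9, 0.4, 50000
samples = [random.gauss(mean_q, sigma_q) for _ in range(n)]

# Grid-mean tendency: average the rate over PDF samples.
grid_mean_rate = sum(autoconversion(q) for q in samples) / n

# Evaluating the rate at the grid-mean state misses the subgrid tail
# entirely, since the mean sits below the threshold.
naive_rate = autoconversion(mean_q)
```

The sampled average is nonzero while the grid-mean evaluation is exactly zero, which is precisely the subgrid-variability effect the Monte Carlo interface to the microphysics is designed to capture.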
Zhang, D.; Rahnema, F.
2013-07-01
The coarse mesh transport method (COMET) is a highly accurate and efficient computational tool which predicts whole-core neutronics behaviors for heterogeneous reactor cores via a pre-computed eigenvalue-dependent response coefficient (function) library. Recently, a high order perturbation method was developed to significantly improve the efficiency of the library generation method. In that work, the method's accuracy and efficiency were tested in a small PWR benchmark problem. This paper extends the application of the perturbation method to include problems typical of the other water reactor cores such as BWR and CANDU bundles. It is found that the response coefficients predicted by the perturbation method for typical BWR bundles agree very well with those directly computed by the Monte Carlo method. The average and maximum relative errors in the surface-to-surface response coefficients are 0.02%-0.05% and 0.06%-0.25%, respectively. For CANDU bundles, the corresponding quantities are 0.01%-0.05% and 0.04%-0.15%. It is concluded that the perturbation method is highly accurate and efficient with a wide range of applicability. (authors)
Methods for Bayesian power spectrum inference with galaxy surveys
Jasche, Jens; Wandelt, Benjamin D.
2013-12-10
We derive and implement a full Bayesian large scale structure inference method aiming at precision recovery of the cosmological power spectrum from galaxy redshift surveys. Our approach improves upon previous Bayesian methods by performing a joint inference of the three-dimensional density field, the cosmological power spectrum, luminosity dependent galaxy biases, and corresponding normalizations. We account for all joint and correlated uncertainties between all inferred quantities. Classes of galaxies with different biases are treated as separate subsamples. This method therefore also allows the combined analysis of more than one galaxy survey. In particular, it solves the problem of inferring the power spectrum from galaxy surveys with non-trivial survey geometries by exploring the joint posterior distribution with efficient implementations of multiple block Markov chain and Hybrid Monte Carlo methods. Our Markov sampler achieves high statistical efficiency in low signal-to-noise regimes by using a deterministic reversible jump algorithm. This approach reduces the correlation length of the sampler by several orders of magnitude, turning the otherwise numerically unfeasible problem of joint parameter exploration into a numerically manageable task. We test our method on an artificial mock galaxy survey, emulating characteristic features of the Sloan Digital Sky Survey data release 7, such as its survey geometry and luminosity-dependent biases. These tests demonstrate the numerical feasibility of our large scale Bayesian inference framework when the parameter space has millions of dimensions. This method reveals and correctly treats the anti-correlation between bias amplitudes and power spectrum, which is not taken into account in current approaches to power spectrum estimation, a 20% effect across large ranges in k space. In addition, this method results in constrained realizations of density fields obtained without assuming the power spectrum or bias parameters in advance.
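The block Markov chain and Hybrid Monte Carlo machinery above generalizes the basic Metropolis-Hastings sampler; a minimal one-parameter analogue, inferring a variance ("power") from Gaussian data, is sketched below with arbitrary illustrative settings:

```python
import random, math

random.seed(3)
# Toy data: 200 Gaussian draws whose variance plays the role of the power.
data = [random.gauss(0.0, 2.0) for _ in range(200)]
n, ss = len(data), sum(d * d for d in data)

def log_post(var):
    """Log posterior of the variance under a flat prior (illustrative)."""
    if var <= 0.0:
        return -math.inf
    return -0.5 * n * math.log(var) - 0.5 * ss / var

# Random-walk Metropolis-Hastings chain.
chain, cur = [], 1.0
cur_lp = log_post(cur)
for _ in range(20000):
    prop = cur + random.gauss(0.0, 0.5)
    lp = log_post(prop)
    if math.log(random.random()) < lp - cur_lp:   # accept/reject step
        cur, cur_lp = prop, lp
    chain.append(cur)

# Posterior mean after discarding burn-in.
posterior_mean_var = sum(chain[5000:]) / len(chain[5000:])
```

The chain concentrates near the data variance (about 4 here); the paper's contribution is making this kind of exploration tractable when the "parameter" is a million-dimensional density field plus power spectrum and biases.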
Duo at Santa Fe's Monte del Sol Charter
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Duo at Santa Fe's Monte del Sol Charter School takes top award in 25th New Mexico Supercomputing Challenge April 21, 2015 Using nanotechnology robots to kill cancer cells LOS ALAMOS, N.M., April 21, 2015 - Meghan Hill and Katelynn James of Santa Fe's Monte del Sol Charter School took the top prize in the 25th New Mexico Supercomputing Challenge Tuesday at Los Alamos National Laboratory for their research project, "Using Concentrated Heat Systems to Shock the P53 Protein to Direct Cancer into
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua
2014-01-01
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect, and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum number of samples needed to compute the first-order and total sensitivity indices correctly is 1050.
These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
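The Morris One-At-a-Time (MOAT) screening idea, perturbing one parameter per step along a random trajectory and averaging the absolute elementary effects, can be sketched as follows. The toy model and all numbers are illustrative assumptions, not the SAC-SMA setup from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Toy model: parameter 0 dominant, parameter 2 moderate, parameter 4 weak.
    return 10.0 * x[0] + 2.0 * x[2] ** 2 + 0.1 * x[4]

k, r, delta = 5, 20, 0.25          # number of parameters, trajectories, step size
mu_star = np.zeros(k)              # mean absolute elementary effect per parameter
for _ in range(r):
    x = rng.uniform(0.0, 1.0 - delta, k)   # random trajectory start
    f0 = model(x)
    for i in rng.permutation(k):   # one-at-a-time moves in random order
        x[i] += delta
        f1 = model(x)
        mu_star[i] += abs(f1 - f0) / delta
        f0 = f1
mu_star /= r

ranking = np.argsort(mu_star)[::-1]   # parameters sorted by importance
```

Each trajectory costs only k + 1 model runs, which is why MOAT screens sensitive parameters with far fewer samples than quantitative variance-based methods.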
A Hybrid Method for Accelerated Simulation of Coulomb Collisions in a Plasma
Caflisch, R; Wang, C; Dimarco, G; Cohen, B; Dimits, A
2007-10-09
If the collisional time scale for Coulomb collisions is comparable to the characteristic time scales for a plasma, then simulation of Coulomb collisions may be important for computation of kinetic plasma dynamics. This can be a computational bottleneck because of the large number of simulated particles and collisions (or phase-space resolution requirements in continuum algorithms), as well as the wide range of collision rates over the velocity distribution function. This paper considers Monte Carlo simulation of Coulomb collisions using the binary collision models of Takizuka & Abe and Nanbu. It presents a hybrid method for accelerating the computation of Coulomb collisions. The hybrid method represents the velocity distribution function as a combination of a thermal component (a Maxwellian distribution) and a kinetic component (a set of discrete particles). Collisions between particles from the thermal component preserve the Maxwellian; collisions between particles from the kinetic component are performed using the method of Takizuka & Abe or Nanbu. Collisions between the kinetic and thermal components are performed by sampling a particle from the thermal component and selecting a particle from the kinetic component. Particles are also transferred between the two components according to thermalization and dethermalization probabilities, which are functions of phase space.
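The thermalization bookkeeping of such a hybrid scheme can be sketched with a made-up transfer probability: particles already moving at near-thermal speeds are folded into the Maxwellian component and no longer simulated individually. The functional form of `p_thermalize` and all parameters are assumptions for illustration, not the probabilities used by the authors.

```python
import numpy as np

rng = np.random.default_rng(4)

# One velocity component for 10,000 simulated particles (arbitrary units).
n = 10_000
v = rng.normal(0.0, 3.0, n)
kinetic = np.ones(n, dtype=bool)   # all particles start in the kinetic component

T = 1.0                            # temperature of the thermal (Maxwellian) component

def p_thermalize(v, T):
    # Hypothetical thermalization probability: slow particles near the
    # thermal bulk are more likely to be absorbed into the Maxwellian.
    return np.exp(-v**2 / (8.0 * T))

absorb = kinetic & (rng.random(n) < p_thermalize(v, T))
kinetic[absorb] = False            # transferred to the thermal component

# The thermal component is then tracked only through its density and
# temperature; collisions within it preserve the Maxwellian by construction.
frac_kinetic = kinetic.mean()
```

The payoff is that only the (shrinking) kinetic fraction pays the per-particle collision cost.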
San Carlos Apache Tribe Set to Break Ground on New Solar Project |
Department of Energy, March 13, 2014 - 1:05pm. The San Carlos Apache Tribe is making use of its extensive solar resources to power tribal facilities, including this 10-kilowatt (kW) solar PV system, which generates energy to run the tribal radio tower. Photo from San Carlos Apache Tribe, NREL 29202.
Carlos Duarte Priya Gandhi Antony Kim Jared Landsman
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Carlos Duarte, Priya Gandhi, Antony Kim, Jared Landsman, Luis Santos, Sara Tepfer, Taoning Wang (Team Negawatt). Selected site: Los Angeles, CA (Koreatown District); built in 1916 and designated a Historical Monument in 1998; 3450 ft² single-family dwelling. Project site: climate zone 9 (CZ9 weather station), adjacent to climate zones 8 and 6. Goals: increase urban density; rehab an existing building; maintain historical preservation status; Zero Net Energy (ZNE).
Solar Feasibility Study May 2013 - San Carlos Apache Tribe
Rapp, Jim; Duncan, Ken; Albert, Steve
2013-05-01
The San Carlos Apache Tribe (Tribe) in the interests of strengthening tribal sovereignty, becoming more energy self-sufficient, and providing improved services and economic opportunities to tribal members and San Carlos Apache Reservation (Reservation) residents and businesses, has explored a variety of options for renewable energy development. The development of renewable energy technologies and generation is consistent with the Tribe’s 2011 Strategic Plan. This Study assessed the possibilities for both commercial-scale and community-scale solar development within the southwestern portions of the Reservation around the communities of San Carlos, Peridot, and Cutter, and in the southeastern Reservation around the community of Bylas. Based on the lack of any commercial-scale electric power transmission between the Reservation and the regional transmission grid, Phase 2 of this Study greatly expanded consideration of community-scale options. Three smaller sites (Point of Pines, Dudleyville/Winkleman, and Seneca Lake) were also evaluated for community-scale solar potential. Three building complexes were identified within the Reservation where the development of site-specific facility-scale solar power would be the most beneficial and cost-effective: Apache Gold Casino/Resort, Tribal College/Skill Center, and the Dudleyville (Winkleman) Casino.
Alaia, Alessandro; Puppo, Gabriella
2011-06-20
In this work we present a hybrid particle-grid Monte Carlo method for the Boltzmann equation, which is characterized by a significant reduction of the stochastic noise in the kinetic regime. The hybrid method is based on a first order splitting in time to separate the transport from the relaxation step. The transport step is solved by a deterministic scheme, while a hybrid DSMC-based method is used to solve the collision step. Such a hybrid scheme is based on splitting the solution in a collisional and a non-collisional part at the beginning of the collision step, and the DSMC method is used to solve the relaxation step for the collisional part of the solution only. This is accomplished by sampling only the fraction of particles candidate for collisions from the collisional part of the solution, performing collisions as in a standard DSMC method, and then projecting the particles back onto a velocity grid to compute a piecewise constant reconstruction for the collisional part of the solution. The latter is added to a piecewise constant reconstruction of the non-collisional part of the solution, which in fact remains unchanged during the relaxation step. Numerical results show that the stochastic noise is significantly reduced at large Knudsen numbers with respect to the standard DSMC method. Indeed in this algorithm, the particle scheme is applied only on the collisional part of the solution, so only this fraction of the solution is affected by stochastic fluctuations. But since the collisional part of the solution reduces as the Knudsen number increases, stochastic noise reduces as well at large Knudsen numbers.
Barradas, N. P.; Alves, E.; Siketic, Z.; Radovic, I. Bogdanovic
2009-03-10
The accuracy of ion beam analysis experiments depends critically on the stopping power values available. While for H and He ions accuracies normally better than 5% are achieved by usual interpolative schemes such as SRIM, for heavier ions the accuracy is worse. One of the main reasons is that the experimental databases are very sparse, even for important materials such as Si. New measurements are therefore needed. Measurement of stopping power is often made with transmission in thin films, with the usual problems of film thickness homogeneity. We have previously developed an alternative method based on measuring bulk spectra, and fitting the yield by treating the stopping power as a fit parameter in a Bayesian inference Markov chain Monte Carlo procedure included in the standard IBA code NDF. We report on improvements of the method and on its application to the determination of the stopping power of ⁷Li in Si. To validate the method, we also apply it to the stopping of ⁴He in Si, which is known with 2% accuracy.
Rising, M. E.; Prinja, A. K.
2012-07-01
A critical neutron transport problem with random material properties is introduced. The total cross section and the average neutron multiplicity are assumed to be uncertain, characterized by the mean and variance with a log-normal distribution. The average neutron multiplicity and the total cross section are assumed to be uncorrelated, and the material properties for differing materials are also assumed to be uncorrelated. The principal component analysis method is used to decompose the covariance matrix into eigenvalues and eigenvectors, and then 'realizations' of the material properties can be computed. A simple Monte Carlo brute-force sampling of the decomposed covariance matrix is employed to obtain a benchmark result for each test problem. In order to save computational time and to characterize the moments and probability density function of the multiplication factor, the polynomial chaos expansion method is employed along with the stochastic collocation method. A Gauss-Hermite quadrature set is convolved into a multidimensional tensor product quadrature set and is successfully used to compute the polynomial chaos expansion coefficients of the multiplication factor. Finally, for a particular critical fuel pin assembly the appropriate number of random variables and polynomial expansion order are investigated. (authors)
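Generating 'realizations' from a decomposed covariance matrix, as described above, can be sketched with NumPy. The means and variances below are placeholders, not data from the paper; the pattern is eigendecompose the covariance of the underlying Gaussian (log) variables, draw correlated Gaussians, then exponentiate to obtain log-normal properties.

```python
import numpy as np

rng = np.random.default_rng(2)

# Underlying Gaussian (log) variables for two log-normal material properties.
mu = np.array([np.log(2.0), np.log(0.5)])        # placeholder log-means
cov = np.array([[0.04, 0.00],
                [0.00, 0.01]])                   # uncorrelated, as assumed above

# Principal component analysis: eigendecomposition of the covariance matrix.
w, V = np.linalg.eigh(cov)
L = V @ np.diag(np.sqrt(w))                      # so that cov = L @ L.T

# Realizations: correlated Gaussian draws, exponentiated to log-normal samples.
z = rng.standard_normal((200_000, 2))
props = np.exp(mu + z @ L.T)

# Sanity check: the sample covariance of the logs recovers the target matrix.
sample_cov = np.cov(np.log(props), rowvar=False)
```

These realizations are exactly what a brute-force Monte Carlo benchmark would feed, one at a time, into the transport solve.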
Eersel, H. van, E-mail: h.v.eersel@tue.nl; Coehoorn, R. [Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Philips Research Laboratories, High Tech Campus 4, 5656 AE Eindhoven (Netherlands); Bobbert, P. A.; Janssen, R. A. J. [Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands)
2014-10-06
We present an advanced molecular-scale organic light-emitting diode (OLED) model, integrating both electronic and excitonic processes. Using this model, we can reproduce the measured efficiency roll-off for prototypical phosphorescent OLED stacks based on the green dye tris[2-phenylpyridine]iridium (Ir(ppy)₃) and the red dye octaethylporphine platinum (PtOEP) and study the cause of the roll-off as a function of the current density. Both the voltage versus current density characteristics and the roll-off agree well with experimental data. Surprisingly, the results of the simulations lead us to conclude that, contrary to what is often assumed, not triplet-triplet annihilation but triplet-polaron quenching is the dominant mechanism causing the roll-off under realistic operating conditions. Simulations for devices with an optimized recombination profile, achieved by carefully tuning the dye trap depth, show that it will be possible to fabricate OLEDs with a drastically reduced roll-off. It is envisaged that J₉₀, the current density at which the efficiency is reduced to 90%, can be increased by almost one order of magnitude as compared to the experimental state of the art.
Non-destructive in-situ method and apparatus for determining radionuclide depth in media
Xu, X. George (Clifton Park, NY); Naessens, Edward P. (West Point, NY)
2003-01-01
A non-destructive method and apparatus based on in-situ gamma spectroscopy is used to determine the depth of radiological contamination in media such as concrete. An algorithm, the Gamma Penetration Depth Unfolding Algorithm (GPDUA), uses point kernel techniques to predict the depth of contamination based on the results of uncollided peak information from the in-situ gamma spectroscopy. The invention is better, faster, safer, and cheaper than current practices in decontamination and decommissioning of facilities, which are slow, rough, and unsafe. The invention uses a priori knowledge of the contaminant source distribution. The applicable radiological contaminants of interest are any isotopes that emit two or more gamma rays per disintegration, or isotopes that emit a single gamma ray but have gamma-emitting progeny in secular equilibrium with the parent (e.g., ⁶⁰Co, ²³⁵U, and ¹³⁷Cs, to name a few). The predicted depths from the GPDUA algorithm, using Monte Carlo N-Particle Transport Code (MCNP) simulations and laboratory experiments using ⁶⁰Co, have consistently been within 20% of the actual or known depth.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; Leggett, Richard Wayne; Sherbini, Sami; Saba, Mohammad S.; Eckerman, Keith F.
2015-09-01
The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (¹³¹I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of ¹³¹I in the patient and attenuation of emitted photons by the patient's tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from ¹³¹I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an ¹³¹I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.
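The point-source method that the study benchmarks against reduces to an inverse-square formula. The sketch below uses a placeholder dose-rate constant (GAMMA is an assumption, not a recommended dosimetric value) to show the scaling: close in, where the patient is far from point-like and self-attenuation matters, this formula overestimates; at larger separations it behaves acceptably.

```python
# Point-source sketch: unshielded dose rate falls off as 1/r^2.
# GAMMA is a hypothetical dose-rate constant, mSv·m²/(MBq·h), for illustration.
GAMMA = 5.2e-5

def point_source_dose_rate(activity_mbq, distance_m):
    """Dose rate (mSv/h) at distance_m from an unshielded point source."""
    return GAMMA * activity_mbq / distance_m ** 2

near = point_source_dose_rate(5000.0, 0.3)   # 30 cm from the patient
far = point_source_dose_rate(5000.0, 3.0)    # 300 cm from the patient
```

A tenfold increase in distance reduces the predicted dose rate by a factor of one hundred, which is why the simple model and the phantom-based Monte Carlo estimates converge at the larger separations reported above.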
United States Environmental Protection Agency Environmental Monitoring
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Environmental Protection Agency, Environmental Monitoring Systems Laboratory, P.O. Box 93478, Las Vegas, NV 89193-5478. EPA/600/4-88-021; DOE/DP/00539-060. June 1988. Research and Development: Off-Site Environmental Monitoring Report - Radiation Monitoring Around United States Nuclear Test Areas, 1987. Prepared for the United States Department of Energy under Interagency Agreement Number DE-AI08-86NV10622.
Research on stochastic power-flow study methods. Final report
Heydt, G. T.
1981-01-01
A general algorithm to determine the effects of uncertainty in bus load and generation on the output of conventional power flow analysis is presented. The use of statistical moments is presented and developed as a means for representing the stochastic process. Statistical moments are used to describe the uncertainties, and facilitate the calculation of single and multivariate probability density functions of input and output variables. The transformation of the uncertainty through the power flow equations is made by the expansion of the node equations in a multivariate Taylor series about an expected operating point. The series is truncated after the second-order terms. Since the power flow equations are nonlinear, the expected values of output quantities are in general not the solution to the conventional load flow problem using expected values of input quantities. The second-order transformation offers a correction vector and allows the consideration of larger uncertainties, which have caused significant error in current linear transformation algorithms. Voltage-controlled busses are included with consideration of upper and lower limits. The finite reactive power available at generation sites and fixed ranges of transformer tap movement may have a significant effect on voltage and line power flow statistics. A method is given which considers limitation constraints in the evaluation of all output quantities. The bus voltages, line power flows, transformer taps, and generator reactive power requirements are described by their statistical moments. Their values are expressed in terms of the probability that they are above or below specified limits, and their expected values given that they do fall outside the limits. Thus the algorithm supplies information about severity of overload as well as probability of occurrence. An example is given for an eleven-bus system, evaluating each quantity separately. The results are compared with Monte Carlo simulation.
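The key point above, that for a nonlinear mapping the expected output is not the output at the expected input, and that a second-order Taylor term corrects most of the bias, can be shown on a one-dimensional stand-in. The choice f(x) = exp(x) is an assumption for illustration, not a power-flow equation; it is convenient because the exact mean is known.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in nonlinear mapping: f(x) = exp(x), input X ~ N(mu, sigma^2).
# Exact mean is exp(mu + sigma^2 / 2).
mu, sigma = 1.0, 0.5

linear = np.exp(mu)                              # first-order: just f(mu), biased
taylor2 = np.exp(mu) * (1.0 + 0.5 * sigma**2)    # adds 0.5 * f''(mu) * sigma^2

# Monte Carlo benchmark, as in the report's comparison.
mc = np.exp(rng.normal(mu, sigma, 1_000_000)).mean()
```

The second-order estimate lands far closer to the Monte Carlo benchmark than the linear one, mirroring the role of the correction vector in the algorithm.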
Yang, W.; Wu, H.; Cao, L.
2012-07-01
MOX fuel use has grown worldwide over the past several decades. Compared with UO₂ fuel, MOX fuel has some new features: the neutron spectrum is harder, and more resonance interference effects arise within the resonance energy range because the fuel contains more resonant nuclides. In this paper, the wavelet scaling function expansion method is applied to study the resonance behavior of plutonium isotopes within MOX fuel. The wavelet scaling function expansion continuous-energy self-shielding method was developed recently; it has been validated and verified by comparison to Monte Carlo calculations. In this method, continuous-energy cross sections are utilized within the resonance range, which means it is capable of solving problems with serious resonance interference effects without iteration calculations. This method is therefore naturally suited to the MOX fuel resonance calculation problem. Furthermore, plutonium isotopes have fierce oscillations of the total cross section within the thermal energy range, especially ²⁴⁰Pu and ²⁴²Pu. To take the thermal resonance effect of plutonium isotopes into consideration, the wavelet scaling function expansion continuous-energy resonance calculation code WAVERESON is enhanced by applying the free gas scattering kernel to obtain the continuous-energy scattering source within the thermal energy range (2.1 eV to 4.0 eV), in contrast to the resonance energy range, in which the elastic scattering kernel is utilized. Finally, all of the calculation results of WAVERESON are compared with MCNP calculations. (authors)
Report on International Collaboration Involving the FE Heater and HG-A Tests at Mont Terri
Houseworth, Jim; Rutqvist, Jonny; Asahina, Daisuke; Chen, Fei; Vilarrasa, Victor; Liu, Hui-Hai; Birkholzer, Jens
2013-11-06
Nuclear waste programs outside of the US have focused on different host rock types for geological disposal of high-level radioactive waste. Several countries, including France, Switzerland, Belgium, and Japan, are exploring the possibility of waste disposal in shale and other clay-rich rock that fall within the general classification of argillaceous rock. This rock type is also of interest for the US program because the US has extensive sedimentary basins containing large deposits of argillaceous rock. LBNL, as part of the DOE-NE Used Fuel Disposition Campaign, is collaborating on some of the underground research laboratory (URL) activities at the Mont Terri URL near Saint-Ursanne, Switzerland. The Mont Terri project, which began in 1995, has developed a URL at a depth of about 300 m in a stiff clay formation called the Opalinus Clay. Our current collaboration efforts include two test modeling activities for the FE heater test and the HG-A leak-off test. This report documents results concerning our current modeling of these field tests. The overall objectives of these activities include an improved understanding of and advanced relevant modeling capabilities for EDZ evolution in clay repositories and the associated coupled processes, and to develop a technical basis for the maximum allowable temperature for a clay repository. The R&D activities documented in this report are part of the work package of natural system evaluation and tool development that directly supports the following Used Fuel Disposition Campaign (UFDC) objectives: (1) Develop a fundamental understanding of disposal-system performance in a range of environments for potential wastes that could arise from future nuclear-fuel-cycle alternatives through theory, simulation, testing, and experimentation.
(2) Develop a computational modeling capability for the performance of storage and disposal options for a range of fuel-cycle alternatives, evolving from generic models to more robust models of performance assessment. For the purpose of validating modeling capabilities for thermal-hydro-mechanical (THM) processes, we developed a suite of simulation models for the planned full-scale FE Experiment to be conducted in the Mont Terri URL, including a full three-dimensional model that will be used for direct comparison to experimental data once available. We performed for the first time a THM analysis involving the Barcelona Basic Model (BBM) in a full three-dimensional field setting for modeling the geomechanical behavior of the buffer material and its interaction with the argillaceous host rock. We have simulated a well-defined benchmark that will be used for code-to-code verification against modeling results from other international modeling teams. The analysis highlights the complex coupled geomechanical behavior in the buffer and its interaction with the surrounding rock, and the importance of a well-characterized buffer material in terms of THM properties. A new geomechanical fracture-damage model, TOUGH-RBSN, was applied to investigate damage behavior in the ongoing HG-A test at the Mont Terri URL. Two model modifications have been implemented so that the Rigid-Body-Spring-Network (RBSN) model can be used for analysis of fracturing around the HG-A microtunnel. These modifications are (1) a methodology to compute fracture generation under compressive stress conditions and (2) a method to represent anisotropic elastic and strength properties. The method for computing fracture generation under compressive load produces results that roughly follow trends expected for homogeneous and layered systems. Anisotropic properties for the bulk rock were represented in the RBSN model using layered heterogeneity and gave bulk material responses in line with expectations.
These model improvements were implemented for an initial model of fracture damage at the HG-A test. While the HG-A test model results show some similarities with the test observations, differences between the model results and observations remain.
Willingham, David G.; Naes, Benjamin E.; Fahey, Albert J.
2015-01-01
A combination of secondary ion mass spectrometry, optical profilometry and a statistically-driven algorithm was used to develop a non-contact volume analysis method to validate the useful yields of nuclear materials. The volume analysis methodology was applied to ion sputter craters created in silicon and uranium substrates sputtered by 18.5 keV O- and 6.0 keV Ar+ ions. Sputter yield measurements were determined from the volume calculations and were shown to be comparable to Monte Carlo calculations and previously reported experimental observations. Additionally, the volume calculations were used to determine the useful yields of Si+, SiO+ and SiO2+ ions from the silicon substrate and U+, UO+ and UO2+ ions from the uranium substrate under 18.5 keV O- and 6.0 keV Ar+ ion bombardment. This work represents the first steps toward validating the interlaboratory and cross-platform performance of mass spectrometry for the analysis of nuclear materials.
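The core of the volume analysis, integrating a profilometry depth map over the scanned area to obtain a crater volume, can be sketched on synthetic data. The Gaussian crater shape, dimensions, and units below are illustrative assumptions, not measured SIMS data.

```python
import numpy as np

# Synthetic optical-profilometry depth map of a sputter crater (Gaussian shape).
x = np.linspace(-50.0, 50.0, 201)                       # lateral coordinate, um
X, Y = np.meshgrid(x, x)
amp, sig = 2.0, 15.0                                    # peak depth (um), width (um)
depth = amp * np.exp(-(X**2 + Y**2) / (2.0 * sig**2))   # depth map, um

# Crater volume = integral of depth over the scanned area (grid sum).
dx = x[1] - x[0]
volume = depth.sum() * dx * dx                          # um^3

# For a 2D Gaussian the exact volume is 2*pi*amplitude*sigma^2.
analytic = 2.0 * np.pi * amp * sig**2
```

Dividing such a volume by the ion dose gives the sputter yield, the quantity compared against Monte Carlo predictions in the abstract.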
Hong, X; Gao, H
2014-06-15
Purpose: The Linear Boltzmann Transport Equation (LBTE), solved through the statistical Monte Carlo (MC) method, provides accurate dose calculation in radiotherapy. This work investigates an alternative way of accurately solving the LBTE using a deterministic numerical method, due to its possible advantage in computational speed over MC. Methods: Instead of using traditional spherical harmonics to approximate the angular scattering kernel, our deterministic numerical method directly computes angular scattering weights, based on a new angular discretization method that utilizes the linear finite element method on a local triangulation of the unit angular sphere. As a result, our angular discretization method has the unique advantage of positivity, i.e., it maintains all scattering weights nonnegative at all times, which is physically correct. Moreover, our method is local in angular space and therefore handles anisotropic scattering well, such as forward-peaked scattering. To be compatible with image-guided radiotherapy, the spatial variables are discretized on a structured grid with the standard diamond scheme. After discretization, an improved source-iteration method is utilized for solving the linear system without saving the linear system to memory. The accuracy of our 3D solver is validated using analytic solutions and benchmarked with Geant4, a popular MC solver. Results: The differences between Geant4 solutions and our solutions were less than 1.5% for various testing cases that mimic practical cases. More details are available in the supporting document. Conclusion: We have developed a 3D LBTE solver based on a new angular discretization method that guarantees the positivity of scattering weights for physical correctness, and it has been benchmarked against Geant4 for photon dose calculation.
Calculating infinite-medium α-eigenvalue spectra with a transition rate matrix method
Betzler, B. R.; Kiedrowski, B. C.; Brown, F. B.; Martin, W. R.
2013-07-01
The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α-eigenvalues and eigenvectors in an infinite medium. For this, a research Monte Carlo code called TORTE was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum. (authors)
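Once a transition rate matrix A has been estimated (by Monte Carlo tallies in TORTE), the α-eigenvalues are its eigenvalues, since the group densities obey dn/dt = A n. The two-group matrix below is invented for illustration; it is not TORTE output.

```python
import numpy as np

# Hypothetical 2-group transition rate matrix A (units 1/s): dn/dt = A @ n.
# Diagonals are net removal rates; off-diagonals are group-transfer plus
# fission-production rates. All values are illustrative assumptions.
A = np.array([[-3.0,  1.2],
              [ 2.5, -1.5]])

alphas, modes = np.linalg.eig(A)
order = np.argsort(alphas.real)[::-1]
alpha0 = alphas[order[0]].real     # fundamental-mode time eigenvalue

# Long-time behavior: n(t) ~ exp(alpha0 * t) times the fundamental eigenvector.
# Here alpha0 < 0, so this (subcritical) medium decays.
```

An eigenfunction expansion of an initial spectrum in `modes` then gives the transient behavior, with each mode decaying (or growing) at its own α.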
Precision measurement of the top quark mass in the lepton + jets channel using a matrix element method with Quasi-Monte Carlo integration
Office of Scientific and Technical Information (OSTI)
Thesis/Dissertation. This thesis presents a measurement of the top quark mass in the lepton + jets channel using a matrix element method with Quasi-Monte Carlo integration.
A New On-the-Fly Sampling Method for Incoherent Inelastic Thermal Neutron Scattering Data in MCNP6
Pavlou, Andrew Theodore; Brown, Forrest B.; Ji, Wei
2014-09-02
At thermal energies, the scattering of neutrons in a system is complicated by the comparable velocities of the neutron and target, resulting in competing upscattering and downscattering events. The neutron wavelength is also similar in size to the target's interatomic spacing, making the scattering process a quantum mechanical problem. Because of the complicated nature of scattering at low energies, the thermal data files in ACE format used in continuous-energy Monte Carlo codes are quite large, on the order of megabytes for a single temperature and material. In this paper, a new storage and sampling method is introduced that is orders of magnitude smaller in size and is used to sample scattering parameters at any temperature on-the-fly. In addition to the reduction in storage, the need to pre-generate thermal scattering data tables at fine temperature intervals has been eliminated. This is advantageous for multiphysics simulations, which may involve temperatures not known in advance. A new module was written for MCNP6 that bypasses the current S(α,β) table lookup in favor of the new format. The new on-the-fly sampling method was tested for graphite in two benchmark problems at ten temperatures: 1) an eigenvalue test with a fuel compact of uranium oxycarbide fuel homogenized into a graphite matrix, and 2) a surface current test with a "broomstick" problem with a monoenergetic point source. The largest eigenvalue difference was 152 pcm, for T = 1200 K. For the temperatures and incident energies chosen for the broomstick problem, the secondary neutron spectrum showed good agreement with the traditional S(α,β) sampling method. These preliminary results show that sampling thermal scattering data on-the-fly is a viable option to eliminate both the storage burden of keeping thermal data at discrete temperatures and the need to know temperatures before simulation runtime.
SU-E-T-127: Dosimetric Evaluation of a Microbeam Treatment Method Using Monoenergetic Photon µ-Beams
Tsiamas, P; Marcus, K; Lewis, J
2014-06-01
Purpose: One of the external radiotherapy techniques with the potential to greatly enhance the therapeutic ratio is Microbeam Radiotherapy (MRT). A recent approach to MRT delivers discrete, finely spaced µ-beams. The technique is based on two principles: a] there is almost no dose between the dose peaks of two neighboring µ-beams, and b] the whole target need not be irradiated to achieve tumor control. Preliminary results have shown the ability to increase tumor control without increasing normal tissue complication probability with this technique. The purpose of this study was to dosimetrically evaluate the clinical feasibility of the above concept, taking into consideration factors such as beam energy, size, separation, and tumor depth. Methods: A Monte Carlo (MC) model was used to simulate different configurations of µ-beams. A total of 420 different µ-beams were evaluated for beam sizes, energies, depths, and distances between the beams. The MC beam penetration results were compared with simulations conducted at the existing small animal irradiator (SARRP) facility in our department. Results: Separation between the peak doses of the µ-beams was well maintained in all simulations. This shows that scatter can be ignored even for cases where the distance between the centers of the µ-beams was set to 200 µm (E = 100 keV) and for depths as great as 5 cm. Monoenergetic 100 keV µ-beams were ~25% more penetrating than a 220 kVp SARRP beam at a depth of 8 cm. The effect becomes more pronounced as the energy increases. Conclusion: Dosimetric evaluation of this MRT method showed it could feasibly be used to treat tumors at clinically relevant depths. MC results showed that the scatter between the beams remains minimal even for depths of 5 cm and beam separations of 200 µm.
Lee, C; Jung, J; Pelletier, C; Kim, J; Lee, C
2014-06-01
Purpose: Patient cohorts in second-cancer studies often involve radiotherapy patients for whom no radiological images are available. We developed methods to construct a realistic surrogate anatomy using computational human phantoms. We tested these phantom images in both a commercial treatment planning system (Eclipse) and a custom Monte Carlo (MC) transport code. Methods: We used a reference adult male phantom defined by the International Commission on Radiological Protection (ICRP). The hybrid phantom, originally developed in Non-Uniform Rational B-Spline (NURBS) and polygon mesh format, was converted into a more common medical imaging format. Electron density was calculated from the material composition of the organs and tissues and then converted into DICOM format. The DICOM images were imported into the Eclipse system for treatment planning, and the resulting DICOM-RT files were then imported into the MC code for MC-based dose calculation. Normal tissue doses were calculated in Eclipse and the MC code for an illustrative prostate treatment case and compared to each other. Results: DICOM images were generated from the adult male reference phantom. Densities and volumes of selected organs in the original phantom and as represented within Eclipse showed good agreement, within 0.6%. Mean doses from Eclipse and the MC code agreed within 7%, whereas maximum and minimum doses differed by up to 45%. Conclusion: The methods established in this study will be useful for the reconstruction of organ dose to support epidemiological studies of second cancer in cancer survivors treated by radiotherapy. We are also working on implementing body size-dependent computational phantoms to better represent patient anatomy when the height and weight of patients are available.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
computer code team with Feynman Innovation Prize, July 22, 2015. 'Monte Carlo' methods transformative for modern computational methods. LOS ALAMOS, N.M., July 22, 2015. This year's honorees for the Richard P. Feynman Innovation Prize at Los Alamos National Laboratory are the Monte Carlo Radiation Transport Team members, current and past, who have successfully brought an idea to the marketplace through a partnership, resulting in a measurable return on investment in the Laboratory's mission
Energy Science and Technology Software Center (OSTI)
002763MLTPL00 Quantum Process Matrix Computation by Monte Carlo https://development.sandia.gov/-kyoung/
Adelman, Jahred A.; Arguin, J.F.; Bellettini, G.; Brubaker, E.; Budagov, J.; Chlachidze, G.; Demortier, L.; Gibson, A.; Kim, S.; Kim, Y.K.; Maruyama, T.; Sato, K.; Shochet, M.; Sinervo, P.; Tomura, T.; Velev, G.; Xie, S.; Yang, U.K.; /Chicago U. /Toronto U. /INFN, Pisa /Dubna, JINR /Rockefeller U. /LBL, Berkeley /Tsukuba U. /Fermilab
2006-05-01
We report an updated measurement of the top quark mass in the lepton plus jets channel of tt̄ events from pp̄ collisions at √s = 1.96 TeV. This measurement uses a dataset with an integrated luminosity of 680 pb⁻¹, containing 360 tt̄ candidates separated into four subsamples. A top quark mass is reconstructed for each event by using energy and momentum constraints on the top quark pair decay products. We also employ the reconstructed mass of hadronic W boson decays W → jj to constrain in situ the largest systematic uncertainty of the top quark mass measurement: the jet energy scale. Monte Carlo templates of the reconstructed top quark and W boson masses are produced as a function of the true top quark mass and the jet energy scale. The distributions of reconstructed top quark and W boson masses in the data are compared to the Monte Carlo templates using a likelihood fit to obtain: M_top = 173.4 ± 2.8 GeV/c².
Balcomb, J.D.
1981-01-01
Correlation methods have been developed to provide a quick and relatively simple technique for estimating the performance of passive solar systems. The correlations are done with respect to data generated from simulation models. The techniques and accuracies are described. Both the Solar Load Ratio and Un-Utilizability methods are described. The advantages and limitations of correlation methods as design tools are discussed.
Palmer, A. L.; Di Pietro, P.; Alobaidli, S.; Issa, F.; Doran, S.; Bradley, D.; Nisbet, A.
2013-06-15
Purpose: Dose distribution measurement in clinical high dose rate (HDR) brachytherapy is challenging, because of the high dose gradients, large dose variations, and small scale, but it is essential to verify accurate treatment planning and treatment equipment performance. The authors compare and evaluate three dosimetry systems for potential use in brachytherapy dose distribution measurement: Ge-doped optical fibers, EBT3 Gafchromic film with multichannel analysis, and the radiochromic material PRESAGE® with optical-CT readout. Methods: Ge-doped SiO₂ fibers with a 6 µm active core and 5.0 mm length were sensitivity-batched and their thermoluminescent properties used via conventional heating and annealing cycles. EBT3 Gafchromic film of 30 µm active thickness was calibrated in three color channels using a nominal 6 MV linear accelerator. A 48-bit transmission scanner and an advanced multichannel analysis method were utilized to derive dose measurements. Samples of the solid radiochromic polymer PRESAGE®, 60 mm diameter and 100 mm height, were analyzed with a parallel beam optical CT scanner. Each dosimetry system was used to measure the dose as a function of radial distance from a Co-60 HDR source, with results compared to Monte Carlo TG-43 model data. Each system was then used to measure the dose distribution along one or more lines through typical clinical dose distributions for cervix brachytherapy, with results compared to treatment planning system (TPS) calculations. Purpose-designed test objects constructed of Solid Water and held within a full-scatter water tank were utilized. Results: All three dosimetry systems reproduced the general shape of the isolated source radial dose function and the TPS dose distribution. However, the dynamic range of EBT3 exceeded those of doped optical fibers and PRESAGE®, and the latter two suffered from unacceptable noise and artifact.
For the experimental conditions used in this study, the useful range from an isolated HDR source was 5-40 mm for fibers, 3-50 mm for EBT3, and 4-21 mm for PRESAGE®. Fibers demonstrated some over-response at very low dose levels, suffered from volume averaging effects in the dose distribution measurement, and exhibited up to 9% repeatability variation over three repeated measurements. EBT3 demonstrated excellent agreement with Monte Carlo and TPS dose distributions, with up to 3% repeatability over three measurements. PRESAGE® gave promising results, being the only true 3D dosimeter, but artifacts and noise were apparent. Conclusions: The comparative response of three emerging dosimetry systems for clinical brachytherapy dose distribution measurement has been investigated. Ge-doped optical fibers have excellent spatial resolution for single-direction measurement but are currently too large for complex dose distribution assessment. The use of PRESAGE® with optical-CT readout gave promising results in the measurement of true 3D dose distributions, but further development work is required to reduce noise and improve dynamic range for brachytherapy dose distribution measurements. EBT3 Gafchromic film with multichannel analysis demonstrated accurate and reproducible measurement of dose distributions in HDR brachytherapy. Calibrated dose measurements were possible with agreement within 1.5% of TPS dose calculations. The suitability of EBT3 as a dosimeter for 2D quality control or commissioning work has been demonstrated.
Weyand, J.D.
1986-11-18
A method is disclosed for making a region exhibiting a range of compositions, comprising plasma spraying various compositions on top of one another onto a base. 2 figs.
Silica separation from reinjection brines at Monte Amiata geothermal plants, Italy
Vitolo, S.; Cialdella, M.L. (Dipartimento di Ingegneria Chimica)
1994-06-01
A process for the separation of silica from geothermal reinjection brines is reported, involving coagulation, sedimentation, and filtration of the silica. The effectiveness of lime and calcium chloride as coagulating agents has been investigated and the separation operations have been defined. Attention has been focused on the Monte Amiata reinjection geothermal brines, whose scaling effect causes serious problems in the operation and maintenance of reinjection facilities. The study was conducted using different amounts of added coagulants and different temperatures, to determine optimal operating conditions. Though calcium chloride proved effective as a coagulant of the polymeric silica fraction, lime at high dosages also proved capable of removing monomeric dissolved silica. Investigation of the behavior of the coagulated brine has demonstrated the feasibility of separating the coagulated silica by sedimentation and filtration.
Kwon, Kyung; Fan, Liang-Shih; Zhou, Qiang; Yang, Hui
2014-09-30
A new and efficient direct numerical method with second-order convergence accuracy was developed for fully resolved simulations of incompressible viscous flows laden with rigid particles. The method combines the state-of-the-art immersed boundary method (IBM), the multi-direct forcing method, and the lattice Boltzmann method (LBM). First, the multi-direct forcing method is adopted in the improved IBM to better approximate the no-slip/no-penetration (ns/np) condition on the surface of particles. Second, a slight retraction of the Lagrangian grid from the surface towards the interior of particles, by a fraction of the Eulerian grid spacing, helps increase the convergence accuracy of the method. An over-relaxation technique in the multi-direct forcing procedure and the classical fourth-order Runge-Kutta scheme for the coupled fluid-particle interaction were applied. The use of the classical fourth-order Runge-Kutta scheme helps the overall IB-LBM achieve second-order accuracy and provides more accurate predictions of the translational and rotational motion of particles. The pre-existing code, with a first-order convergence rate, was updated so that it can resolve the translational and rotational motion of particles with a second-order convergence rate. The updated code has been validated with several benchmark applications. The efficiency of the IBM, and thus of the IB-LBM, was improved by reducing the number of Lagrangian markers on particles, using a new formula for the number of Lagrangian markers on particle surfaces. The immersed boundary-lattice Boltzmann method (IB-LBM) has been shown to predict the angular velocity of a particle correctly. Prior to examining the drag force exerted on a cluster of particles, the updated IB-LBM code, along with the new formula for the number of Lagrangian markers, was further validated by solving several theoretical problems.
Moreover, the unsteadiness of the drag force is examined when a fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. The simulation results agree well with the theories for the short- and long-time behavior of the drag force. Flows through non-rotating and rotating spheres in simple cubic arrays and random arrays are simulated over the entire range of packing fractions, and at both low and moderate particle Reynolds numbers, to compare the simulated results with the literature and to develop a new drag force formula, a new lift force formula, and a new torque formula. Random arrays of solid particles in fluids are generated with a Monte Carlo procedure and Zinchenko's method to avoid crystallization of solid particles at high solid volume fractions. A new drag force formula was developed from extensive simulation results to be closely applicable to real processes over the entire range of packing fractions and at both low and moderate particle Reynolds numbers. The simulation results indicate that the drag force is barely affected by rotational Reynolds numbers, and is essentially unchanged as the angle of the rotation axis varies.
Probabilistic methods in a study of trip setpoints
Kaulitz, D. E.
2012-07-01
Most early vintage Boiling Water Reactors have a high head and high capacity High Pressure Coolant Injection (HPCI) pump to keep the core covered following a loss of coolant accident (LOCA). However, the protection afforded by the HPCI pump for mitigating a LOCA introduces the potential that a spurious start of the HPCI pump could oversupply the reactor vessel and lead to an automatic trip of the main turbine due to high water level. A turbine trip and associated increase in moderator density could challenge the bases of fuel integrity operating limits. To prevent turbine trip during spurious operation of the HPCI pump, the reactor protection system includes instrumentation and logic to sense high water level and automatically trip the HPCI pump prior to reaching the turbine trip setpoint. This paper describes an analysis that was performed to determine if existing reactor vessel water level trip instrumentation, logic and setpoints result in a high probability that the HPCI pump will trip prior to actuation of the turbine trip. Using nominal values for the initial water level and for the HPCI pump and turbine trip setpoints, and using the probability distribution functions for measurement uncertainty in these setpoints, a Monte Carlo simulation was employed to determine probabilities of successfully tripping the HPCI pump prior to tripping of the turbine. The results of the analysis established that the existing setpoints, instrumentation and logic would be expected to reliably prevent a trip of the main turbine. (authors)
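The Monte Carlo treatment of setpoint uncertainty described above can be sketched as follows. All numerical values are invented stand-ins, not plant data: each trial draws both setpoints from Gaussian measurement-uncertainty distributions and counts how often the HPCI trip setpoint lies below the turbine trip setpoint, i.e., how often the rising water level trips the HPCI pump first.

```python
import random
random.seed(1)

# Illustrative sketch (all numbers invented): the HPCI high-level trip is
# nominally below the turbine trip, and each actual setpoint is its nominal
# value plus Gaussian measurement uncertainty.  Monte Carlo sampling
# estimates the probability that the HPCI pump trips first.

HPCI_NOMINAL, HPCI_SIGMA = 55.0, 1.5      # water level units (made up)
TURB_NOMINAL, TURB_SIGMA = 60.0, 1.5

def p_hpci_trips_first(n_samples=200_000):
    success = 0
    for _ in range(n_samples):
        hpci = random.gauss(HPCI_NOMINAL, HPCI_SIGMA)
        turbine = random.gauss(TURB_NOMINAL, TURB_SIGMA)
        if hpci < turbine:        # rising level reaches the HPCI setpoint first
            success += 1
    return success / n_samples

p = p_hpci_trips_first()
```

With these invented numbers the analytic answer is Φ(5 / √(1.5² + 1.5²)) ≈ 0.991, which the sampled estimate reproduces to Monte Carlo precision.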
Lin, YuPo J.; Hestekin, Jamie; Arora, Michelle; St. Martin, Edward J.
2004-09-28
An electrodeionization method for continuously producing, separating, and/or concentrating ionizable organics present in dilute concentrations in an ionic solution while controlling the pH to within one to one-half pH unit.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Application of the Projection-Based Embedding Method. Taylor Barnes, NERSC Annual Meeting, Feb. 24, 2015. Outline: Application: Investigation of the Oxidative Decomposition of ...
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28
Based on the project's scope, the purpose of the estimate, and the availability of estimating resources, the estimator can choose one or a combination of techniques when estimating an activity or project. Estimating methods, estimating indirect and direct costs, and other estimating considerations are discussed in this chapter.
Bayesian Calibration of the Community Land Model using Surrogates...
Office of Scientific and Technical Information (OSTI)
CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. ...
Walls, Claudia A.; Kirby, Glen H.; Janney, Mark A.; Omatete, Ogbemi O.; Nunn, Stephen D.; McMillan, April D.
2000-01-01
A method of gelcasting includes the steps of providing a solution of at least hydroxymethylacrylamide (HMAM) and water. At least one inorganic powder is added to the mixture. At least one initiator system is provided to polymerize the HMAM. The initiator polymerizes the HMAM in the water to form a firm hydrogel that contains the inorganic powder. One or more comonomers can be polymerized with the HMAM monomer to alter the final properties of the gelcast material. Additionally, one or more additives can be included in the polymerization mixture to alter the properties of the gelcast material.
Marsden, Kenneth C.; Meyer, Mitchell K.; Grover, Blair K.; Fielding, Randall S.; Wolfensberger, Billy W.
2012-12-18
A casting device includes a covered crucible having a top opening and a bottom orifice, a lid covering the top opening, a stopper rod sealing the bottom orifice, and a reusable mold having at least one chamber, a top end of the chamber being open to and positioned below the bottom orifice and a vacuum tap into the chamber being below the top end of the chamber. A casting method includes charging a crucible with a solid material and covering the crucible, heating the crucible, melting the material, evacuating a chamber of a mold to less than 1 atm absolute through a vacuum tap into the chamber, draining the melted material into the evacuated chamber, solidifying the material in the chamber, and removing the solidified material from the chamber without damaging the chamber.
Grover, Blair K.; Hubbell, Joel M.; Sisson, James B.; Casper, William L.
2005-12-20
A method for collecting data regarding a matric potential of a media includes providing a tensiometer having a stainless steel tensiometer casing, the stainless steel tensiometer casing comprising a tip portion which includes a wetted porous stainless steel membrane through which a matric potential of a media is sensed; driving the tensiometer into the media using an insertion tube comprising a plurality of probe casings which are selectively coupled to form the insertion tube as the tensiometer is progressively driven deeper into the media, wherein the wetted porous stainless steel membrane is in contact with the media; and sensing the matric potential that the media exerts on the wetted porous stainless steel membrane by a pressure sensor in fluid hydraulic connection with the porous stainless steel membrane. A tensiometer includes a stainless steel casing.
Beyond the Tonks-Girardeau Gas: Strongly Correlated Regime in Quasi-One-Dimensional Bose Gases
Astrakharchik, G.E.; Boronat, J.; Casulleras, J.; Giorgini, S.
2005-11-04
We consider a homogeneous 1D Bose gas with contact interactions and a large attractive coupling constant. This system can be realized in tight waveguides by exploiting a confinement induced resonance of the effective 1D scattering amplitude. By using the diffusion Monte Carlo method we show that, for small densities, the gaslike state is well described by a gas of hard rods. The critical density for cluster formation is estimated using the variational Monte Carlo method. The behavior of the correlation functions and of the frequency of the lowest breathing mode for harmonically trapped systems shows that the gas is more strongly correlated than in the Tonks-Girardeau regime.
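As a minimal illustration of the variational Monte Carlo machinery used here, in its simplest single-particle form rather than the many-body hard-rod calculation, the sketch below samples |ψ|² for a 1D harmonic oscillator trial wavefunction exp(-a x²/2) with Metropolis moves and averages the local energy; with a = 0.8 the exact variational energy is 0.5125 (in oscillator units).

```python
import random, math
random.seed(9)

# Minimal variational Monte Carlo sketch on a solvable stand-in problem:
# 1D harmonic oscillator, trial wavefunction psi = exp(-a x^2 / 2), local
# energy E_L = a/2 + x^2 (1 - a^2)/2.  The same Metropolis-plus-averaging
# machinery is used (in many-body form) for the Bose gas above.

ALPHA = 0.8

def local_energy(x):
    return ALPHA / 2 + x * x * (1 - ALPHA ** 2) / 2

def vmc_energy(n_steps=200_000, step=1.5):
    x, e_sum, n_kept = 0.0, 0.0, 0
    for i in range(n_steps):
        xp = x + random.uniform(-step, step)
        # Metropolis acceptance test on |psi|^2 = exp(-ALPHA * x^2)
        if random.random() < math.exp(ALPHA * (x * x - xp * xp)):
            x = xp
        if i >= 10_000:                  # discard equilibration steps
            e_sum += local_energy(x)
            n_kept += 1
    return e_sum / n_kept

energy = vmc_energy()
```

The estimate sits above the exact ground-state energy 0.5, as the variational principle requires, and approaches 0.5125 for this trial parameter.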
The Role of Scale and Model Bias in ADAPT's Photospheric Estimation
Godinez Vazquez, Humberto C.; Hickmann, Kyle Scott; Arge, Charles Nicholas; Henney, Carl
2015-05-20
The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model is a magnetic flux propagation model based on the Worden-Harvey (WH) model. ADAPT provides global maps of the solar photospheric magnetic flux. A data assimilation method based on the Ensemble Kalman Filter (EnKF), a Monte Carlo approximation tied to Kalman filtering, is used in calculating the ADAPT models.
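A scalar, perturbed-observation EnKF update can be sketched as below. This is a toy stand-in (one state variable, invented numbers), not the ADAPT implementation, but it shows the Monte Carlo character of the filter: the forecast covariance is estimated from the ensemble itself.

```python
import random, statistics
random.seed(7)

# Minimal perturbed-observation Ensemble Kalman Filter update for a
# scalar state (illustrative only; ADAPT assimilates full photospheric
# flux maps with a much larger state vector).

def enkf_update(ensemble, y_obs, obs_var):
    """Shift each member toward the observation by the Kalman gain."""
    p = statistics.variance(ensemble)          # sample forecast variance
    k = p / (p + obs_var)                      # Kalman gain
    # Each member sees the observation plus an independent perturbation,
    # which keeps the analysis ensemble spread statistically consistent.
    return [x + k * (y_obs + random.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

prior = [random.gauss(0.0, 2.0) for _ in range(2000)]   # forecast ensemble
posterior = enkf_update(prior, y_obs=3.0, obs_var=1.0)
```

With forecast variance ≈ 4 and observation variance 1, the gain is ≈ 0.8, so the analysis mean lands near 2.4 and the analysis variance near (1-K)P ≈ 0.8.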
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
VPSC Implementation in BISON-CASL Code for Modeling Large Deformation Problems. Wenfeng Liu (ANATECH Corporation), Robert Montgomery (Pacific Northwest National Laboratory), Carlos Tomé and Chris Stanek (Los Alamos National Laboratory), Jason Hales (Idaho National Laboratory). April 19, 2015. CASL-U-2015-0175-000. ANS MC2015 - Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method, Nashville, TN, April 2015.
Phase diagram of Rydberg atoms with repulsive van der Waals interaction
Osychenko, O. N.; Astrakharchik, G. E.; Boronat, J.; Lutsyshyn, Y.; Lozovik, Yu. E.
2011-12-15
We report a quantum Monte Carlo calculation of the phase diagram of bosons interacting with a repulsive inverse sixth power pair potential, a model for assemblies of Rydberg atoms in the local van der Waals blockade regime. The model can be parametrized in terms of just two parameters, the reduced density and temperature. Solidification happens to the fcc phase. At zero temperature, the transition density is found with the diffusion Monte Carlo method at density ρ = 3.9 (ℏ²/mC₆)^(3/4), where C₆ is the strength of the interaction. The solidification curve at nonzero temperature is studied with the path-integral Monte Carlo approach and is compared with transitions in corresponding harmonic and classical crystals. Relaxation mechanisms are considered in relation to present experiments.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
UR-85-4451; DE86 004750. Title: Improved Monte Carlo Renormalization Group Method. Authors: Rajan Gupta (T-DO); K. G. Wilson (Cornell University); C. Umrigar (Cornell University). Submitted as part of the proceedings of the Quantum Monte Carlo Conference, September 1985, Los Alamos, NM.
Running Out Of and Into Oil. Analyzing Global Oil Depletion and Transition Through 2050
Greene, David L.; Hopson, Janet L.; Li, Jia
2003-10-01
This report presents a risk analysis of world conventional oil resource production, depletion, expansion, and a possible transition to unconventional oil resources such as oil sands, heavy oil and shale oil over the period 2000 to 2050. Risk analysis uses Monte Carlo simulation methods to produce a probability distribution of outcomes rather than a single value.
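The risk-analysis approach, Monte Carlo sampling of uncertain inputs to build a probability distribution of outcomes, can be sketched as below. The depletion model and all numbers are invented stand-ins for the report's far more detailed model; the outcome collected for each trial is the year in which cumulative production reaches half the sampled resource, a common peaking heuristic.

```python
import random
random.seed(42)

# Hedged sketch of Monte Carlo risk analysis: sample uncertain inputs,
# run a simple depletion model, and collect a distribution of outcomes.
# All quantities here are invented for illustration.

def peak_year(urr_gb, growth):
    """Grow production from 25 Gb/yr in 2000; return year cum > urr/2."""
    prod, cum = 25.0, 0.0
    for year in range(2000, 2101):
        cum += prod
        if cum >= urr_gb / 2:
            return year
        prod *= 1 + growth
    return 2100

years = sorted(
    peak_year(random.triangular(2000, 4000, 3000),  # resource base, Gb
              random.uniform(0.01, 0.03))           # demand growth rate
    for _ in range(10_000))
median_year = years[len(years) // 2]                # any percentile is available
```

The sorted list of outcomes is the Monte Carlo approximation to the outcome distribution; percentiles can be read off directly rather than reporting a single deterministic date.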
Stochastic Inversion of Seismic Amplitude-Versus-Angle Data (Stinv-AVA)
Energy Science and Technology Software Center (OSTI)
2008-04-03
The software was developed to invert seismic amplitude-versus-angle (AVA) data within a Bayesian framework. The posterior probability distribution function is sampled by efficient Markov chain Monte Carlo (MCMC) methods. The software provides not only estimates of the unknown variables but also a variety of information about their uncertainty, such as the mean, mode, median, variance, and even the probability density of each unknown.
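The MCMC sampling at the heart of such a tool can be illustrated with a minimal Metropolis sampler. The target here is a toy 1D Gaussian posterior (mean 2, standard deviation 0.5), not a seismic AVA posterior, so the chain's statistics can be checked against known values.

```python
import random, math
random.seed(3)

# Minimal Metropolis MCMC sketch: random-walk proposals, accepted with
# probability min(1, posterior ratio).  The toy log-posterior below is a
# Gaussian with mean 2 and standard deviation 0.5.

def log_post(x):
    return -0.5 * ((x - 2.0) / 0.5) ** 2

def metropolis(n_steps=50_000, step=0.5, x0=0.0):
    x, chain = x0, []
    for _ in range(n_steps):
        prop = x + random.uniform(-step, step)
        if math.log(random.random()) < log_post(prop) - log_post(x):
            x = prop                      # accept the proposal
        chain.append(x)                   # rejected moves repeat the state
    return chain[5_000:]                  # discard burn-in

samples = metropolis()
mean = sum(samples) / len(samples)
```

From the retained chain one can estimate the mean, variance, percentiles, or a histogram of the posterior density, exactly the uncertainty summaries the abstract mentions.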
Radiation transport. Progress report, October 1, 1982-March 31, 1983
O'Dell, R.D.
1984-05-01
Research and development progress in radiation transport by the Los Alamos National Laboratory's Group X-6 for the first half of FY 83 is reported. Included are tasks in the areas of Fission Reactor Neutronics, Deterministic Transport Methods, and Monte Carlo Radiation Transport.
Muon simulations for Super-Kamiokande, KamLAND, and CHOOZ
Tang, Alfred; Horton-Smith, Glenn; Kudryavtsev, Vitaly A.; Tonazzo, Alessandra
2006-09-01
Muon backgrounds at Super-Kamiokande, KamLAND, and CHOOZ are calculated using MUSIC. A modified version of the Gaisser sea-level muon distribution and a well-tested Monte Carlo integration method are introduced. Average muon energy, flux, and rate are tabulated. Plots of average energy and angular distributions are given. Implications for muon tracker design in future experiments are discussed.
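A hedged sketch of Monte Carlo integration over a muon-like spectrum: energies are drawn from a pure power law dN/dE ∝ E^-2.7 between 10 and 1000 GeV by inverse-transform sampling, and the mean energy is estimated from the samples. The power law is a stand-in with a known analytic answer (≈ 23.3 GeV), not the modified Gaisser distribution used in the paper.

```python
import random
random.seed(5)

# Inverse-transform sampling from a truncated power law E^-GAMMA on [A, B],
# followed by a Monte Carlo estimate of the mean energy.  The spectrum is
# an invented stand-in for the real sea-level muon distribution.

A, B, GAMMA = 10.0, 1000.0, 2.7

def sample_energy():
    """Invert the CDF of E^-GAMMA on [A, B]."""
    g1 = 1.0 - GAMMA                       # exponent after integration (-1.7)
    u = random.random()
    return ((1 - u) * A ** g1 + u * B ** g1) ** (1.0 / g1)

n = 200_000
mean_e = sum(sample_energy() for _ in range(n)) / n
```

The analytic mean of this truncated power law is about 23.33 GeV, and the Monte Carlo estimate converges to it at the usual 1/√n rate.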
Simulating variable source problems via post processing of individual particle tallies
Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.; Vujic, J.
2000-10-20
Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
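The tally post-processing idea above can be sketched as follows. Everything here is a toy stand-in for a real transport code: one "simulation" records each particle's source energy and tally contribution, and any alternative source spectrum is then evaluated by reweighting the stored records instead of re-running the transport.

```python
import random
random.seed(11)

# Sketch of source reweighting: run ONE broad-source simulation, record
# (source variable, tally contribution) per particle, then re-estimate the
# tally under any other source spectrum via importance ratios.

# "Simulation": particles born uniformly in energy on [0, 10); the toy
# tally contribution grows with energy (stands in for dose per particle).
records = []
for _ in range(100_000):
    e = random.uniform(0.0, 10.0)
    tally = e ** 2 * random.expovariate(1.0)   # fake per-particle tally
    records.append((e, tally))

def retarget(records, new_pdf, orig_pdf=lambda e: 0.1):
    """Re-estimate the mean tally under a new source spectrum."""
    wsum, tsum = 0.0, 0.0
    for e, t in records:
        w = new_pdf(e) / orig_pdf(e)           # importance ratio
        wsum += w
        tsum += w * t
    return tsum / wsum

# New source: triangular spectrum peaked at low energy, pdf = (10 - e)/50
low_peaked = retarget(records, lambda e: (10.0 - e) / 50.0)
```

Each call to `retarget` takes a fraction of a second, so the effect of sweeping many interdependent source parameters can be explored from a single stored particle file, which is the point of the abstract.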
Methods for pretreating biomass
Balan, Venkatesh; Dale, Bruce E; Chundawat, Shishir; Sousa, Leonardo
2015-03-03
A method of alkaline pretreatment of biomass, in particular, pretreating biomass with gaseous ammonia.
Stanislav Boldyrev, Jean Carlos ...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
suggest that breakdown of kinetic-magnetic equipartition may, in fact, be a fundamental property of Alfvénic turbulence (rather than an artifact or some nonuniversal plasma...
Geophysical Methods | Open Energy Information
Methods Magnetic Methods Gravity Methods Radiometric Methods Seismic methods dominates oil and gas exploration, and probably accounts for over 80% of exploration dollars spent...
Uncertainty estimates for derivatives and intercepts
Clark, E.L.
1994-09-01
Straight-line least-squares fits of experimental data are widely used in the analysis of test results to provide derivatives and intercepts. A method for evaluating the uncertainty in these parameters is described. The method utilizes conventional least-squares results and is applicable to experiments where the independent variable is controlled, but not necessarily free of error. A Monte Carlo verification of the method is given.
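The combination described above, conventional least-squares standard errors checked by Monte Carlo, can be sketched as follows. This is a generic textbook formulation, not the report's specific method; the noise level, grid, and trial count are illustrative assumptions:

```python
import numpy as np

def fit_line_with_uncertainty(x, y):
    """Least-squares fit y = a + b*x, returning (a, b, se_a, se_b).

    Standard errors use the conventional formulas, with the residual
    variance estimated from the fit itself (n - 2 degrees of freedom).
    """
    n = len(x)
    xbar = x.mean()
    sxx = np.sum((x - xbar) ** 2)
    b = np.sum((x - xbar) * (y - y.mean())) / sxx      # derivative (slope)
    a = y.mean() - b * xbar                            # intercept
    resid = y - (a + b * x)
    s2 = np.sum(resid ** 2) / (n - 2)                  # residual variance
    se_b = np.sqrt(s2 / sxx)                           # slope uncertainty
    se_a = np.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))   # intercept uncertainty
    return a, b, se_a, se_b

# Monte Carlo verification: the scatter of fitted slopes over many synthetic
# data sets should match the analytic standard error.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 21)
sigma = 0.5                                            # known noise level
slopes = [fit_line_with_uncertainty(x, 1.0 + 2.0 * x
          + rng.normal(0.0, sigma, x.size))[1] for _ in range(2000)]
mc_se = np.std(slopes)
analytic_se = sigma / np.sqrt(np.sum((x - x.mean()) ** 2))
print(mc_se, analytic_se)
```

The two printed numbers should agree to within the Monte Carlo sampling error of the standard-deviation estimate itself (a few percent for 2000 trials).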
CASL-U-2015-0162-000 Performance Model Development and Analysis for the 3-D Method of Characteristics
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Brendan Kochunas and Thomas Downar, University of Michigan. ANS MC2015 - Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method, Nashville, TN, April 19-23, 2015, on CD-ROM, American Nuclear Society, LaGrange Park, IL (2015).
Broader source: Energy.gov [DOE]
This meeting is open to the public, and the board will discuss the Oak Ridge Environmental Management program's FY 2016 budget and prioritization.
Adams, David P; McDonald, Joel Patrick; Jared, Bradley Howell; Hodges, V. Carter; Hirschfeld, Deidre; Blair, Dianna S
2014-04-01
A method of pulsed laser intrinsic marking can provide a unique identifier to detect tampering or counterfeiting.
Advanced Methods for Manufacturing
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Scientists Computational Resources and Multi- Physics Modeling & Simulation Knowledge & ... Manufacturing Methods R&D Test Bed ... loops, process development...
Geobacteraceae strains and methods
Lovley, Derek R.; Nevin, Kelly P.; Yi, Hana
2015-07-07
Embodiments of the present invention provide a method of producing genetically modified strains of electricigenic microbes that are specifically adapted for the production of electrical current in microbial fuel cells, as well as strains produced by such methods and fuel cells using such strains. In preferred embodiments, the present invention provides genetically modified strains of Geobacter sulfurreducens and methods of using such strains.
The Uniform Methods Project: Methods for Determining Energy Efficiency...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures The Uniform Methods Project: Methods for Determining Energy Efficiency Savings...
The Uniform Methods Project: Methods for Determining Energy Efficiency...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures (April 2013) The Uniform Methods Project: Methods for Determining Energy...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Material Point Methods and Multiphysics for Fracture and Multiphase Problems Joseph Teran, UCLA and Alice Koniges, LBL Contact: jteran@math.ucla.edu Material point methods (MPM) provide an intriguing new path for the design of algorithms that are poised to scale to billions of cores [4]. These methods are particularly important for simulating various phases in the presence of extreme deformation and topological change. This brings about the possibility of new simulations enabled at the exascale
Method of degrading trinitrotoluene
Tyndall, R.L.; Vass, A.
1996-01-16
A method is disclosed of eluting trinitrotoluene (TNT) from soil using a dispersant from bacterial intra-amoebic isolate 1s, ATCC 75229.
Method for making tetraorganooxysilanes
Schattenmann, Florian Johannes (Ballston Lake, NY); Lewis, Larry Neil (Scotia, NY)
2001-01-01
A method for the preparation of tetraorganooxysilanes is provided which comprises reaction of a natural silicon dioxide source with an organo carbonate.
Method of degrading trinitrotoluene
Tyndall, Richard L. (Clinton, TN); Vass, Arpad (Oak Ridge, TN)
1996-01-01
A method of eluting trinitrotoluene (TNT) from soil using a dispersant from bacterial intra-amoebic isolate 1s, ATCC 75229.
Martin, F.S.; Silver, G.L.
1991-04-30
A method is described for reducing the concentration of any undesirable metals dissolved in contaminated water, such as waste water. The method involves uniformly reacting the contaminated water with an excess amount of solid particulate calcium sulfite to insolubilize the undesirable metal ions, followed by removal thereof and of the unreacted calcium sulfite.
Decker, David L; Lyles, Brad F; Purcell, Richard G; Hershey, Ronald Lee
2014-05-20
An apparatus and method for supporting a tubing bundle during installation or removal. The apparatus includes a clamp for securing the tubing bundle to an external wireline. The method includes deploying the tubing bundle and wireline together. The tubing bundle is periodically secured to the wireline using a clamp.
Martin, Frank S. (Farmersville, OH); Silver, Gary L. (Centerville, OH)
1991-04-30
A method for reducing the concentration of any undesirable metals dissolved in contaminated water, such as waste water. The method involves uniformly reacting the contaminated water with an excess amount of solid particulate calcium sulfite to insolubilize the undesirable metal ions, followed by removal thereof and of the unreacted calcium sulfite.
Method of forming nanodielectrics
Tuncer, Enis [Knoxville, TN]; Polyzos, Georgios [Oak Ridge, TN]
2014-01-07
A method of making a nanoparticle-filled dielectric material. The method includes mixing nanoparticle precursors with a polymer material and reacting them to form nanoparticles dispersed within the polymer, producing a dielectric composite.
Chainer, Timothy J.; Dang, Hien P.; Parida, Pritish R.; Schultz, Mark D.; Sharma, Arun
2015-08-11
A method aspect for removing heat from a data center may use liquid coolant cooled without vapor compression refrigeration on a liquid cooled information technology equipment rack. The method may also include regulating liquid coolant flow to the data center through a range of liquid coolant flow values with a controller-apparatus based upon information technology equipment temperature threshold of the data center.
Methods for data classification
Garrity, George (Okemos, MI); Lilburn, Timothy G. (Front Royal, VA)
2011-10-11
The present invention provides methods for classifying data and uncovering and correcting annotation errors. In particular, the present invention provides a self-organizing, self-correcting algorithm for use in classifying data. Additionally, the present invention provides a method for classifying biological taxa.
Quantum simulations of strongly coupled quark-gluon plasma
Filinov, V. S.; Ivanov, Yu. B.; Bonitz, M.; Levashov, P. R.; Fortov, V. E.
2012-06-15
A strongly coupled quark-gluon plasma (QGP) of heavy constituent quasi-particles is studied by a path-integral Monte-Carlo method. This approach is a quantum generalization of the classical molecular dynamics by Gelman, Shuryak, and Zahed. It is shown that this method is able to reproduce the QCD lattice equation of state. The results indicate that the QGP reveals liquid-like rather than gas-like properties. Quantum effects turned out to be of prime importance in these simulations.
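The path-integral Monte Carlo technique named above can be illustrated on the simplest possible system, a single 1D harmonic oscillator, rather than the interacting quark-gluon plasma of the paper. This is a minimal sketch under illustrative assumptions (primitive discretized action, local Metropolis moves, units with ħ = m = ω = 1):

```python
import math
import random

# Inverse temperature beta discretized into M imaginary-time slices.
BETA, M = 8.0, 32
TAU = BETA / M          # imaginary-time step
STEP = 0.5              # Metropolis proposal half-width

def local_action(path, j, xj):
    """Primitive-approximation action terms that involve bead j only."""
    left, right = path[(j - 1) % M], path[(j + 1) % M]
    kinetic = ((xj - left) ** 2 + (right - xj) ** 2) / (2.0 * TAU)
    potential = TAU * 0.5 * xj * xj          # harmonic potential V = x^2 / 2
    return kinetic + potential

random.seed(1)
path = [0.0] * M                             # closed imaginary-time path
x2_sum, n_meas = 0.0, 0
for sweep in range(9000):
    for j in range(M):                       # one Metropolis pass over beads
        old, new = path[j], path[j] + random.uniform(-STEP, STEP)
        dS = local_action(path, j, new) - local_action(path, j, old)
        if dS < 0 or random.random() < math.exp(-dS):
            path[j] = new
    if sweep >= 3000:                        # discard equilibration sweeps
        x2_sum += sum(x * x for x in path) / M
        n_meas += 1

mean_x2 = x2_sum / n_meas
print(mean_x2)   # continuum value is <x^2> = coth(beta/2)/2, about 0.5 here
```

The measured ⟨x²⟩ should land near 0.5 up to discretization and statistical errors, which is the same kind of consistency check (against a known equation of state) that the paper performs against lattice QCD.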
Loading relativistic Maxwell distributions in particle simulations
Zenitani, Seiji
2015-04-15
Numerical algorithms to load relativistic Maxwell distributions in particle-in-cell (PIC) and Monte-Carlo simulations are presented. For the stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain a relativistic shifted Maxwellian, two rejection methods are proposed in a physically transparent manner. Their acceptance efficiencies are ≈50% for generic cases and 100% for symmetric distributions. They can be combined with arbitrary base algorithms.
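The Sobol algorithm mentioned above, for the stationary Maxwell-Jüttner distribution f(u) ∝ u² exp(−γ/θ) with u = γβ and θ = kT/mc², can be sketched as follows. This is a minimal illustration under stated assumptions (the function name and parameter choice are ours, and the method is efficient only at relativistic temperatures θ ≳ 1):

```python
import math
import random

def sample_juttner(theta, rng=random):
    """Draw |u| = gamma*beta from the Maxwell-Juttner distribution
    f(u) ~ u^2 exp(-gamma/theta) via the Sobol rejection algorithm."""
    while True:
        x1, x2, x3, x4 = (rng.random() for _ in range(4))
        u = -theta * math.log(x1 * x2 * x3)    # Gamma(3, theta) proposal
        eta = u - theta * math.log(x4)         # adds an Exp(theta) tail
        # The geometric condition eta^2 - u^2 > 1 accepts u with
        # probability exp(-(gamma - u)/theta), gamma = sqrt(1 + u^2),
        # which converts the proposal density into the Juttner form.
        if eta * eta - u * u > 1.0:
            return u

random.seed(2)
theta = 1.0                                    # relativistic temperature
samples = [sample_juttner(theta) for _ in range(50000)]
print(sum(samples) / len(samples))             # mean 4-velocity magnitude
```

A full loader would then assign an isotropic direction to each |u|; boosting to a shifted Maxwellian requires one of the rejection steps the paper proposes on top of this base algorithm.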
Stochastic Optimization of Complex Systems
Office of Scientific and Technical Information (OSTI)
This project focused on methodologies for the solution of stochastic optimization problems based on relaxation and penalty methods, Monte Carlo simulation, parallel processing, and inverse optimization. The main results of the project were the development of a convergent method for the solution of models that include expectation constraints as in...
Random-matrix approach to the statistical compound nuclear reaction at low energies using the Monte-Carlo technique
Office of Scientific and Technical Information (OSTI)
Kawano, Toshihiko (Los Alamos National Laboratory)
2015-11-10
Electrolyte pore/solution partitioning by expanded grand canonical ensemble Monte Carlo simulation
Office of Scientific and Technical Information (OSTI)
Using a newly developed grand canonical Monte Carlo approach based on fractional exchanges of dissolved ions and water molecules, we studied equilibrium partitioning of both components between laterally extended apolar confinements and surrounding electrolyte solution. Accurate calculations of the Hamiltonian and tensorial...
Method for inducing hypothermia
Becker, Lance B. (Chicago, IL); Hoek, Terry Vanden (Chicago, IL); Kasza, Kenneth E. (Palos Park, IL)
2008-09-09
Systems for phase-change particulate slurry cooling equipment and methods to induce hypothermia in a patient through internal and external cooling are provided. Subcutaneous, intravascular, intraperitoneal, gastrointestinal, and lung methods of cooling are carried out using saline ice slurries or other phase-change slurries compatible with human tissue. Perfluorocarbon slurries or other slurry types compatible with human tissue are used for pulmonary cooling. Traditional external cooling methods are also improved by utilizing phase-change slurry materials in cooling caps and torso blankets.
Method for inducing hypothermia
Becker, Lance B. (Chicago, IL); Hoek, Terry Vanden (Chicago, IL); Kasza, Kenneth E. (Palos Park, IL)
2003-04-15
Systems for phase-change particulate slurry cooling equipment and methods to induce hypothermia in a patient through internal and external cooling are provided. Subcutaneous, intravascular, intraperitoneal, gastrointestinal, and lung methods of cooling are carried out using saline ice slurries or other phase-change slurries compatible with human tissue. Perfluorocarbon slurries or other slurry types compatible with human tissue are used for pulmonary cooling. Traditional external cooling methods are also improved by utilizing phase-change slurry materials in cooling caps and torso blankets.
Method for inducing hypothermia
Becker, Lance B.; Hoek, Terry Vanden; Kasza, Kenneth E.
2005-11-08
Systems for phase-change particulate slurry cooling equipment and methods to induce hypothermia in a patient through internal and external cooling are provided. Subcutaneous, intravascular, intraperitoneal, gastrointestinal, and lung methods of cooling are carried out using saline ice slurries or other phase-change slurries compatible with human tissue. Perfluorocarbon slurries or other slurry types compatible with human tissue are used for pulmonary cooling. Traditional external cooling methods are also improved by utilizing phase-change slurry materials in cooling caps and torso blankets.
Miner, Nadine E.; Caudell, Thomas P.
2004-06-08
A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.
Jackson, D.D.; Hollen, R.M.
1981-02-27
A method of very thoroughly and quickly cleaning a gauze electrode used in chemical analyses is given, as well as an automatic cleaning apparatus which makes use of the method. The method generates very little waste solution, and this is very important in analyzing radioactive materials, especially in aqueous solutions. The cleaning apparatus can be used in a larger, fully automated controlled potential coulometric apparatus. About 99.98% of a 5 mg plutonium sample was removed in less than 3 minutes, using only about 60 ml of rinse solution and two main rinse steps.
Mourant, Judith R. (Los Alamos, NM); Anderson, Gerhard D. (Velarde, NM); Bigio, Irving J. (Los Alamos, NM); Johnson, Tamara M. (Los Alamos, NM)
1996-01-01
Method for fusing bone. The present invention is a method for joining hard tissue which includes chemically removing the mineral matrix from a thin layer of the surfaces to be joined, placing the two bones together, and heating the joint using electromagnetic radiation. The goal of the method is not to produce a full-strength weld of, for example, a cortical bone of the tibia, but rather to produce a weld of sufficient strength to hold the bone halves in registration while either external fixative devices are applied to stabilize the bone segments, or normal healing processes restore full strength to the tibia.
Tadd, Andrew R; Schwank, Johannes
2013-05-14
A catalytic reforming method is disclosed herein. The method includes sequentially supplying a plurality of feedstocks of variable compositions to a reformer. The method further includes adding a respective predetermined co-reactant to each of the plurality of feedstocks to obtain a substantially constant output from the reformer for the plurality of feedstocks. The respective predetermined co-reactant is based on a C/H/O atomic composition for a respective one of the plurality of feedstocks and a predetermined C/H/O atomic composition for the substantially constant output.
Method for making organyltriorganooxysilanes
Schattenmann, Florian Johannes (Ballston Lake, NY)
2002-01-01
A method for the preparation of organyltriorganooxysilanes containing at least one silicon-carbon bond is provided comprising reacting at least one tetraorganooxysilane with an activated carbon and at least one base.
Method for synthesizing boracites
Wolf, Gary A [Kennewick, WA]
1982-01-01
A method for producing boracites is disclosed in which a solution of divalent metal acetate, boric acid, and halogen acid is evaporated to dryness and the resulting solid is heated in an inert atmosphere under pressure.
Method for making organooxysilanes
Schattenmann, Florian Johannes
2003-12-23
A method for the preparation of organooxysilanes containing at least one silicon-carbon bond is provided which comprises reacting at least one tetraorganooxysilane with at least one transition metal organo compound.
Concrete compositions and methods
Chen, Irvin; Lee, Patricia Tung; Patterson, Joshua
2015-06-23
Provided herein are compositions, methods, and systems for cementitious compositions containing calcium carbonate compositions and aggregate. The compositions find use in a variety of applications, including use in a variety of building materials and building applications.
Plasma isotope separation methods
Grossman, M.W.; Shepp, T.A.
1991-12-01
Isotope separation has many important industrial, medical, and research applications. Large-scale processes have typically utilized complex cascade systems; for example, the gas centrifuge. Alternatively, high single-stage enrichment processes (as in the case of the calutron) are very energy intensive. Plasma-based methods being developed for the past 15 to 20 years have attempted to overcome these two drawbacks. In this review, six major types of isotope separation methods which involve plasma phenomena are discussed. These methods are: plasma centrifuge, AVLIS (atomic vapor laser isotope separation), ion wave, ICR (ion-cyclotron resonance), calutron, and gas discharge. The emphasis of this paper is to describe the plasma phenomena in these major categories. An attempt was made to include enough references so that more detailed study or evaluation of a particular method could readily be pursued. A brief discussion of isotope separation using mass balance concepts is also carried out.
Method of saccharifying cellulose
Johnson, E.A.; Demain, A.L.; Madia, A.
1983-05-13
A method is disclosed of saccharifying cellulose by incubation with the cellulase of Clostridium thermocellum in a broth containing an efficacious amount of thiol reducing agent. Other incubation parameters which may be advantageously controlled to stimulate saccharification include the concentration of alkaline earth salts, pH, temperature, and duration. By the method of the invention, even native crystalline cellulose such as that found in cotton may be completely saccharified.
Pushing schedule derivation method
Henriquez, B.
1996-12-31
The development of a Pushing Schedule Derivation Method has allowed the company to sustain the maximum production rate at CSH's Coke Oven Battery, in spite of having single-set oven machinery with a high failure index as well as a heat top tendency. The stated method provides for scheduled downtime of up to two hours for machinery maintenance purposes, periods of empty ovens for decarbonization, and production-loss recovery capability, while observing lower limits and uniformity of coking time.
Method of saccharifying cellulose
Johnson, Eric A. (Brookline, MA); Demain, Arnold L. (Wellesley, MA); Madia, Ashwin (Decatur, IL)
1985-09-10
A method of saccharifying cellulose by incubation with the cellulase of Clostridium thermocellum in a broth containing an efficacious amount of a reducing agent. Other incubation parameters which may be advantageously controlled to stimulate saccharification include the concentration of alkaline earth salts, pH, temperature, and duration. By the method of the invention, even native crystalline cellulose such as that found in cotton may be completely saccharified.
Henn, Fritz
2012-01-24
Methods for treatment of depression-related mood disorders in mammals, particularly humans are disclosed. The methods of the invention include administration of compounds capable of enhancing glutamate transporter activity in the brain of mammals suffering from depression. ATP-sensitive K.sup.+ channel openers and .beta.-lactam antibiotics are used to enhance glutamate transport and to treat depression-related mood disorders and depressive symptoms.
Henn, Fritz
2013-04-09
Methods for treatment of depression-related mood disorders in mammals, particularly humans are disclosed. The methods of the invention include administration of compounds capable of enhancing glutamate transporter activity in the brain of mammals suffering from depression. ATP-sensitive K.sup.+ channel openers and .beta.-lactam antibiotics are used to enhance glutamate transport and to treat depression-related mood disorders and depressive symptoms.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; et al
2015-12-14
Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
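A toy version of the benchmarking idea above: the smallest Hubbard problem (two sites, two electrons at half filling) can be diagonalized exactly and checked against its closed-form ground-state energy. This is a minimal sketch for illustration only; the basis ordering and sign conventions below are our assumptions, not taken from the paper:

```python
import numpy as np

def two_site_hubbard_ground_energy(t, U):
    """Exact diagonalization of the two-site Hubbard model at half filling.

    Basis (Sz = 0 sector): |ud,0>, |0,ud>, |u,d>, |d,u>, where u/d are
    up/down electrons and fermionic signs are fixed by this ordering.
    """
    H = np.array([
        [U,   0.0, -t,   t  ],   # doubly occupied site 1 (energy U)
        [0.0, U,   -t,   t  ],   # doubly occupied site 2 (energy U)
        [-t,  -t,  0.0,  0.0],   # one electron per site
        [t,   t,   0.0,  0.0],
    ])
    return np.linalg.eigvalsh(H).min()

t, U = 1.0, 4.0
e_num = two_site_hubbard_ground_energy(t, U)
e_exact = (U - np.sqrt(U * U + 16.0 * t * t)) / 2.0   # closed-form result
print(e_num, e_exact)   # both ≈ -0.8284 for t = 1, U = 4
```

Agreement of an exact-diagonalization value with an independent closed-form (or, in the paper, a cross-method consensus) is precisely what promotes a number to a benchmark.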
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
kinetic equation in the sheath and presheath region. The materials modeling is based on molecular dynamics, accelerated molecular dynamics, and kinetic Monte Carlo simulations....
Torus through Integrated Data Analysis Mark Nornberg Matthew...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
emissivity predicts bremsstrahlung and recombination emission * Parameter search using Markov Chain Monte Carlo - Allows many model parameters with reasonable processing time -...
Consortium for Advanced Simulation of Light Water Reactors (CASL...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
and Monte Carlo transport applications. Exnihilo is based on a package architecture model such that each package provides well-defined capabilities. Exnihilo currently...
Neutron Production by Muon Spallation I: Theory (Technical Report...
Office of Scientific and Technical Information (OSTI)
Monte Carlo package MCNPX. We calculate simulated energy spectra, multiplicities, and angular distributions of direct neutrons and pions from muon spallation. Authors: Luu, T ;...
Search for: All records | SciTech Connect
Office of Scientific and Technical Information (OSTI)
... Peelle's pertinent puzzle using the Monte Carlo technique Kawano, Toshihiko ; Talou, Patrick ; Burr, Thomas ; Pan, Feng We try to understand the long-standing problem of the ...
Search for: All records | SciTech Connect
Office of Scientific and Technical Information (OSTI)
... Kinetic Monte Carlo simulations and density functional theory calculations support ... magnetic islands, tunable with , is of interest for nanomagnetism applications. ...
Microsoft Word - NRAP-TRS-III-002-2012_Modeling the Performance...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
as a function of time; it illustrates the considerable variability in results stemming from the uncertainty in model inputs. Figure 5: Monte-Carlo simulation results; time...
Government Performance Result Act (GPRA) / Portfolio Decision...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
sizing and control strategy tuning New Powertrain Configurations Monte-Carlo Risk Analysis Detailed models required to represent future technologies 18 Summary GPRAPDS...
Generalized Subtraction Schemes for the Difference Formulation...
Office of Scientific and Technical Information (OSTI)
the cancellation between thermal emission and absorption that is responsible for noise in the Monte Carlo solution of thick systems, but introduces time and space derivative...
Dose distribution from x-ray microbeam arrays applied to radiation therapy: An EGS4 Monte Carlo study
Office of Scientific and Technical Information (OSTI)
We present EGS4 Monte Carlo calculations of the spatial distribution of the dose deposited by a single x-ray pencil beam, a planar microbeam, and an array of parallel planar microbeams as...
Search for: All records | SciTech Connect
Office of Scientific and Technical Information (OSTI)
The distribution of galaxies mapped by the Sloan Digital Sky Survey shows that this region ... 'proton-dominated' GRBs in the internal shock scenario through Monte Carlo simulations, ...
ARM - Publications: Science Team Meeting Documents
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
shapes such as spheroids, cylinders, and even hexagonal columns with aspect ratios near unity, but not for general particle shapes. A Monte Carlo approach to single scattering...
High Island Densities and Long Range Repulsive Interactions:...
Office of Scientific and Technical Information (OSTI)
long range repulsive interactions. Kinetic Monte Carlo simulations and density functional theory calculations support this conclusion. In addition to answering an outstanding...
Search for: All records | SciTech Connect
Office of Scientific and Technical Information (OSTI)
a heterogeneous unsaturated fractured rock by its homogeneous equivalent, Monte Carlo ... transport in saturated fractured rock Dai, Zhenxue ; Wolfsberg, Andrew ; Lu, ...
Numerical evaluation of effective unsaturated hydraulic properties...
Office of Scientific and Technical Information (OSTI)
To represent a heterogeneous unsaturated fractured rock by its homogeneous equivalent, Monte Carlo simulations are used to obtain upscaled (effective) flow properties. In this ...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
input model Tom Evans Fausto Franceschini Andrew Godfrey Steve Hamilton Wayne Joubert John Turner Results * Some of the largest Monte Carlo calculations ever performed (1...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
3 Constraints on Excess Absorption: Computations by a Broadband Monte Carlo Model A. M. Vogelmann, I. A. Podgorny, and V. Ramanathan Center for Atmospheric Sciences & Center for...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
constraint) Recognition of cluster (fragment) formation (R- 2.4 fm) Simulation of a large number of events (Monte Carlo approach) 1 M. Papa, A. Bonasera et...
Methods for Neutron Spectrometry
DOE R&D Accomplishments [OSTI]
Brockhouse, Bertram N.
1961-01-09
The appropriate theories and the general philosophy of methods of measurement and treatment of data in neutron spectrometry are discussed. Methods of analysis of results for liquids using the Van Hove formulation, and for crystals using the Born-von Karman theory, are reviewed. The most useful of the available methods of measurement are considered to be the crystal spectrometer methods and the pulsed monoenergetic beam/time-of-flight method. Pulsed-beam spectrometers have the advantage of higher counting rates than crystal spectrometers, especially in view of the fact that simultaneous measurements in several counters at different angles of scattering are possible in pulsed-beam spectrometers. The crystal spectrometer permits several valuable new types of specialized experiments to be performed, especially energy distribution measurements at constant momentum transfer. The Chalk River triple-axis crystal spectrometer is discussed, with reference to its use in making the specialized experiments. The Chalk River rotating crystal (pulsed-beam) spectrometer is described, and a comparison of this type of instrument with other pulsed-beam spectrometers is made. A partial outline of the theory of operation of rotating-crystal spectrometers is presented. The use of quartz-crystal filters for fast neutron elimination and for order elimination is discussed. (auth)
Branagan, Daniel J. (Iona, ID); Burch, Joseph V. (Shelley, ID)
2001-01-01
In one aspect, the invention encompasses a method of forming a steel. A metallic glass is formed and at least a portion of the glass is converted to a crystalline steel material having a nanocrystalline scale grain size. In another aspect, the invention encompasses another method of forming a steel. A molten alloy is formed and cooled at a rate which forms a metallic glass. The metallic glass is devitrified to convert the glass to a crystalline steel material having a nanocrystalline scale grain size. In yet another aspect, the invention encompasses another method of forming a steel. A first metallic glass steel substrate is provided, and a molten alloy is formed over the first metallic glass steel substrate to heat and devitrify at least some of the underlying metallic glass of the substrate.
Peterman, Dean R. [Idaho Falls, ID]; Klaehn, John R. [Idaho Falls, ID]; Harrup, Mason K. [Idaho Falls, ID]; Tillotson, Richard D. [Moore, ID]; Law, Jack D. [Pocatello, ID]
2010-09-21
Methods of separating actinides from lanthanides are disclosed. A regio-specific/stereo-specific dithiophosphinic acid having organic moieties is provided in an organic solvent that is then contacted with an acidic medium containing an actinide and a lanthanide. The method can extend to separating actinides from one another. Actinides are extracted as a complex with the dithiophosphinic acid. Separation compositions include an aqueous phase, an organic phase, dithiophosphinic acid, and at least one actinide. The compositions may include additional actinides and/or lanthanides. A method of producing a dithiophosphinic acid comprising at least two organic moieties selected from aromatics and alkyls, each moiety having at least one functional group is also disclosed. A source of sulfur is reacted with a halophosphine. An ammonium salt of the dithiophosphinic acid product is precipitated out of the reaction mixture. The precipitated salt is dissolved in ether. The ether is removed to yield the dithiophosphinic acid.
Barnette, Daniel W.
2002-01-01
The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.