Marcus, Ryan C. [Los Alamos National Laboratory]
2012-07-25
MCMini is a proof of concept that demonstrates the feasibility of Monte Carlo neutron transport using OpenCL, with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Yeo, Sang Chul
Ammonia (NH₃) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers ...
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angularly biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Monte Carlo Simulations
Bachmann, Michael
generally called "thermal fluctuations") or "lose" energy by friction effects (dissipation). The total ... Reweighting methods: Single-histogram reweighting ... -ensemble Monte Carlo methods: Replica-exchange Monte Carlo method (parallel tempering) ...
Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator...
Office of Scientific and Technical Information (OSTI)
Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator Physics Citation Details In-Document Search Title: Applications of FLUKA Monte Carlo Code for Nuclear and...
THE BEGINNING of the MONTE CARLO METHOD
For a whole host of reasons, he had become seriously interested in the thermonuclear ... a preliminary computational model of a thermonuclear reaction for the ENIAC. He felt he could convince ...
Monte Carlo simulation in systems biology
Schellenberger, Jan
2010-01-01
2 The history of Monte Carlo Sampling in Systems Biology ... simulation tools: the Systems Biology Workbench and BioSPICE ... Cellular and Molecular Biology, ASM Press, Washington.
Multiple quadrature by Monte Carlo techniques
Voss, John Dietrich
1966-01-01
of a multiple integral ordinarily hopeless to attempt by classical methods." In this paper the Monte Carlo method of numerical quadrature is used to integrate some functions that are extremely difficult and tedious to integrate by any other known ... and the table of known values can be extended. The method developed here may also be used to evaluate the distribution at any desired values of the parameters. CHAPTER II: THEORETICAL CONSIDERATIONS. Hammersley has said: "Every Monte Carlo computation ...
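The idea in this abstract, averaging the integrand at uniformly random points, is easy to sketch (a generic illustration, not Voss's original program; the function name is ours):

```python
import random

def mc_integrate(f, dim, n=100_000, seed=1):
    """Estimate the integral of f over the unit hypercube [0,1]^dim
    as the average of f at n uniformly random points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f([rng.random() for _ in range(dim)])
    return total / n

# A triple integral that is tedious by iterated quadrature but trivial here:
# the integral of x*y*z over [0,1]^3 is (1/2)^3 = 0.125.
est = mc_integrate(lambda x: x[0] * x[1] * x[2], dim=3)
```

The standard error decays as n^(-1/2) regardless of dimension, which is why the approach remains viable where iterated classical quadrature is, as the author puts it, "ordinarily hopeless."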
Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo
Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.
2014-10-01
We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing Density Functional Theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between density functional theory (DFT) calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and that the high-pressure melt curve is well described by a Lindemann behavior up to at least 80 GPa, a finding in stark contrast to DAC data.
Quantum Mechanical Single Molecule Partition Function from Path Integral Monte Carlo Simulations
Chempath, Shaji; Bell, Alexis T.; Predescu, Cristian
2008-01-01
... calculated from path integral Monte Carlo (PIMC) and harmonic ...
Grossman, Jeffrey C.
We analyze the density-functional theory (DFT) description of weak interactions by employing diffusion and reptation quantum Monte Carlo (QMC) calculations, for a set of benzene-molecule complexes. While the binding energies ...
Monte Carlo Tools for Jet Quenching
Korinna Zapp
2011-09-07
A thorough understanding of jet quenching on the basis of multi-particle final states and jet observables requires new theoretical tools. This talk summarises the status and prospects of the theoretical description of jet quenching in terms of Monte Carlo generators.
Monte Carlo event reconstruction implemented with artificial neural networks
Tolley, Emma Elizabeth
2011-01-01
I implemented event reconstruction of a Monte Carlo simulation using neural networks. The OLYMPUS Collaboration is using a Monte Carlo simulation of the OLYMPUS particle detector to evaluate systematics and reconstruct ...
A MONTE CARLO SIMULATION OF WATER FLOW IN VARIABLY ...
1910-10-30
A Monte Carlo simulation method is employed to study groundwater flow in ...
Smart detectors for Monte Carlo radiative transfer
Maarten Baes
2008-09-11
Many optimization techniques have been invented to reduce the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo simulations do not take into account all the information contained in the impacting photon packages, there is still room to optimize this detection process and the corresponding estimate of the surface brightness distributions. We want to investigate how all the information contained in the distribution of impacting photon packages can be optimally used to decrease the noise in the surface brightness distributions and hence to increase the efficiency of Monte Carlo radiative transfer simulations. We demonstrate that the estimate of the surface brightness distribution in a Monte Carlo radiative transfer simulation is similar to the estimate of the density distribution in an SPH simulation. Based on this similarity, a recipe is constructed for smart detectors that take full advantage of the exact location of the impact of the photon packages. Several types of smart detectors, each corresponding to a different smoothing kernel, are presented. We show that smart detectors, while preserving the same effective resolution, reduce the noise in the surface brightness distributions compared to the classical detectors. The most efficient smart detector realizes a noise reduction of about 10%, which corresponds to a reduction of the required number of photon packages (i.e. a reduction of the simulation run time) of 20%. As the practical implementation of the smart detectors is straightforward and the additional computational cost is completely negligible, we recommend the use of smart detectors in Monte Carlo radiative transfer simulations.
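The kernel idea can be made concrete in one dimension: a classical detector bins impacts like a histogram, while a smart detector spreads each impact with a smoothing kernel, as in SPH density estimation. A toy sketch (1D, Gaussian kernel, all names ours; the paper's detectors work on 2D surface brightness maps):

```python
import math
import random

def classical_detector(impacts, nbins, lo=0.0, hi=1.0):
    """Classical detector: each photon package contributes only to the
    pixel (bin) it lands in; returns a density estimate per pixel."""
    w = (hi - lo) / nbins
    counts = [0.0] * nbins
    for x in impacts:
        i = min(max(int((x - lo) / w), 0), nbins - 1)
        counts[i] += 1.0
    return [c / (len(impacts) * w) for c in counts]

def smart_detector(impacts, nbins, h=0.05, lo=0.0, hi=1.0):
    """Smart detector: spread each impact over nearby pixels with a
    Gaussian smoothing kernel of width h, using its exact location."""
    w = (hi - lo) / nbins
    centers = [lo + (i + 0.5) * w for i in range(nbins)]
    norm = 1.0 / (len(impacts) * h * math.sqrt(2.0 * math.pi))
    est = [0.0] * nbins
    for x in impacts:
        for i, c in enumerate(centers):
            est[i] += norm * math.exp(-0.5 * ((c - x) / h) ** 2)
    return est

# Photon impacts drawn from a smooth "surface brightness" profile:
rng = random.Random(2)
impacts = [rng.gauss(0.5, 0.1) for _ in range(2000)]
hist = classical_detector(impacts, 50)
smooth = smart_detector(impacts, 50)
```

Both estimates integrate to unity, but the kernel estimate trades the histogram's bin-to-bin shot noise for a controlled smoothing scale h, which is the effective-resolution trade-off the abstract describes.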
Deterministic Simulation for Risk Management Quasi-Monte Carlo beats
Papageorgiou, Anargyros
Deterministic Simulation for Risk Management: Quasi-Monte Carlo beats Monte Carlo for Value at Risk. Monte Carlo methods are widely used in pricing and risk management of complex financial instruments. Recently, quasi-Monte Carlo ... and accuracy. In this paper we address the application of these deterministic methods to risk management. Our ...
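The deterministic-versus-random comparison can be reproduced with a low-discrepancy sequence; a sketch using a 2D Halton sequence and a toy integrand (our choice for illustration; the paper's tests are Value-at-Risk computations, not this):

```python
import random

def van_der_corput(i, base):
    """i-th term (i >= 1) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton_2d(n):
    """First n points of the 2D Halton sequence (bases 2 and 3)."""
    return [(van_der_corput(i, 2), van_der_corput(i, 3))
            for i in range(1, n + 1)]

# Compare both rules on a smooth integrand with known integral 1/4.
f = lambda p: p[0] * p[1]
n = 4096
qmc_est = sum(f(p) for p in halton_2d(n)) / n
rng = random.Random(0)
mc_est = sum(f((rng.random(), rng.random())) for _ in range(n)) / n
```

The deterministic points achieve an error of roughly O((log n)²/n) for smooth integrands, versus O(n^(-1/2)) for pseudo-random sampling, which is the speed/accuracy advantage the abstract claims for risk computations.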
John von Neumann Institute for Computing Monte Carlo Protein Folding
Hsu, Hsiao-Ping
John von Neumann Institute for Computing. Monte Carlo Protein Folding: Simulations of Met-Enkephalin with Solvent-Accessible Area (www.fz-juelich.de/nic-series/volume20) ... difficulties in applying Monte Carlo methods to protein folding. The solvent-accessible area method, a popular ...
Chemical accuracy from quantum Monte Carlo for the Benzene Dimer
Azadi, Sam
2015-01-01
We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory (DFT) using different van der Waals (vdW) functionals. In our QMC calculations, we use accurate correlated trial wave functions including three-body Jastrow factors, and backflow transformations. We consider two benzene molecules in the parallel displaced (PD) geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate of the CCSD(T)/CBS limit is -2.65(2) kcal/mol [E. Miliordos et al, J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, compar...
Quantum Monte Carlo Calculations of Light Nuclei
Steven C. Pieper
2004-10-27
Variational Monte Carlo and Green's function Monte Carlo are powerful tools for calculations of properties of light nuclei using realistic two-nucleon and three-nucleon potentials. Recently the GFMC method has been extended to multiple states with the same quantum numbers. The combination of the Argonne v_18 two-nucleon and Illinois-2 three-nucleon potentials gives a good prediction of many energies of nuclei up to 12C. A number of other recent results are presented: comparison of binding energies with those obtained by the no-core shell model; the incompatibility of modern nuclear Hamiltonians with a bound tetra-neutron; difficulties in computing RMS radii of very weakly bound nuclei, such as 6He; center-of-mass effects on spectroscopic factors; and the possible use of an artificial external well in calculations of neutron-rich isotopes.
Quantum Monte Carlo by message passing
Bonca, J.; Gubernatis, J.E.
1993-01-01
We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.
Status of Monte-Carlo Event Generators
Hoeche, Stefan; /SLAC
2011-08-11
Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state particles and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E.; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
Marcus, Ryan C. [Los Alamos National Laboratory]
2012-07-24
Overview of this presentation is (1) Exascale computing - different technologies, getting there; (2) high-performance proof-of-concept MCMini - features and results; and (3) OpenCL toolkit - Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware agnostic tool. MCMini demonstrates the possibility for GPGPU-based Monte Carlo methods - it shows great scaling for HPC application and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.
Monte Carlo errors with less errors
Ulli Wolff
2006-11-29
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more reliable error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable for benchmarking the efficiency of simulation algorithms with regard to specific observables of interest. A Matlab code that implements the method is offered for download. It can also combine independent runs (replica), allowing one to judge their consistency.
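A stripped-down version of the procedure, summing the normalized autocorrelation function with a crude cutoff window rather than Wolff's automatic windowing, and in Python rather than the Matlab code the paper offers:

```python
import math
import random

def error_with_autocorr(series):
    """Error of the mean of a Monte Carlo time series, obtained by
    summing the normalized autocorrelation function rho(t) up to the
    first non-positive estimate (a crude window criterion)."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    c0 = sum(d * d for d in dev) / n          # variance (lag 0)
    tau_int = 0.5
    for t in range(1, min(n // 10, 200)):
        ct = sum(dev[i] * dev[i + t] for i in range(n - t)) / (n - t)
        if ct <= 0.0:                          # stop once noise dominates
            break
        tau_int += ct / c0
    err = math.sqrt(2.0 * tau_int * c0 / n)    # error of the mean
    return mean, err, tau_int

# AR(1) test chain with known tau_int = 0.5 * (1 + a) / (1 - a) = 4.5:
rng = random.Random(7)
a, x, chain = 0.8, 0.0, []
for _ in range(20_000):
    x = a * x + rng.gauss(0.0, 1.0)
    chain.append(x)
mean, err, tau = error_with_autocorr(chain)
```

For this chain the naive error sqrt(c0/n) would be too small by a factor of sqrt(2 tau_int), about 3, which is precisely the underestimate the autocorrelation sum corrects.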
Multicanonical hybrid Monte Carlo for compact QED
G. Arnold; Th. Lippert; K. Schilling
1999-09-14
We demonstrate that substantial progress can be achieved in the study of the phase structure of 4-dimensional compact QED by a joint use of hybrid Monte Carlo and multicanonical algorithms, through an efficient parallel implementation. This is borne out by the observation of considerable speedup of tunnelling between the metastable states, close to the phase transition, on the Wilson line. Our approach leads to a general parallelization scheme for the efficient stochastic sampling of systems where (a part of) the Hamiltonian involves the total action or energy in each update step.
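The tunnelling bottleneck described above can be illustrated with the related replica-exchange (parallel tempering) idea on a toy double-well energy U(x) = (x² - 1)², not the multicanonical hybrid Monte Carlo of the paper: hot replicas carry the chain across the barrier, and swaps feed those crossings down to the cold replica.

```python
import math
import random

def parallel_tempering(nsweeps=20_000, temps=(0.1, 0.3, 1.0, 3.0), seed=3):
    """Metropolis chains at several temperatures on the double-well
    energy U(x) = (x^2 - 1)^2, with neighbour swap attempts each sweep.
    Returns the samples of the coldest chain."""
    rng = random.Random(seed)
    U = lambda x: (x * x - 1.0) ** 2
    xs = [1.0] * len(temps)       # all replicas start in the right-hand well
    cold = []
    for _ in range(nsweeps):
        for k, T in enumerate(temps):           # local Metropolis updates
            prop = xs[k] + rng.gauss(0.0, 0.5)
            if rng.random() < math.exp(min(0.0, -(U(prop) - U(xs[k])) / T)):
                xs[k] = prop
        k = rng.randrange(len(temps) - 1)       # one neighbour-swap attempt
        d = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (U(xs[k]) - U(xs[k + 1]))
        if rng.random() < math.exp(min(0.0, d)):
            xs[k], xs[k + 1] = xs[k + 1], xs[k]
        cold.append(xs[0])
    return cold

samples = parallel_tempering()
```

At T = 0.1 an isolated chain would essentially never cross the unit barrier (acceptance ~ e^(-10)), yet the coldest replica here visits both wells, the same qualitative speedup of tunnelling between metastable states that the abstract reports for the multicanonical approach.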
Energy Monte Carlo (EMCEE) | Open Energy Information
Monte Carlo Methods (Métodos de Monte Carlo)
Paulo Roberto de Carvalho Júnior
Example: calculation of ... Equation of the circle: x² + y² = r²; for the unit circle, x² + y² = 1. Algorithm: ... double calc ...
Evaluation of Monte Carlo Electron-Transport Algorithms in the...
Office of Scientific and Technical Information (OSTI)
Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations. Citation Details In-Document Search Title: Evaluation...
Quantum Monte Carlo Calculations of Light Nuclei Using Chiral...
Office of Scientific and Technical Information (OSTI)
Details In-Document Search This content will become publicly available on November 4, 2015 Title: Quantum Monte Carlo Calculations of Light Nuclei Using Chiral Potentials...
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.
2014-10-01
We present a multilevel Monte Carlo numerical method, new to plasma physics, for efficiently simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. We discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
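The multilevel idea, telescoping the expectation across discretizations of increasing resolution and taking most samples on the cheap coarse levels, can be sketched on a scalar SDE (geometric Brownian motion with Euler–Maruyama, not the Landau–Fokker–Planck system of the paper; all names and parameters are illustrative):

```python
import math
import random

def level_diff(l, nsamp, T=1.0, mu=0.05, sigma=0.2, s0=1.0):
    """Monte Carlo average of P_l - P_(l-1) for the payoff P = S_T of
    geometric Brownian motion: Euler-Maruyama with 2^l steps on the fine
    level and 2^(l-1) on the coarse level, driven by the SAME Brownian
    increments (the coupling that makes the variance shrink with l)."""
    rng = random.Random(100 + l)
    nf = 2 ** l
    hf = T / nf
    acc = 0.0
    for _ in range(nsamp):
        sf, sc, dw2 = s0, s0, 0.0
        for step in range(nf):
            dw = rng.gauss(0.0, math.sqrt(hf))
            sf += mu * sf * hf + sigma * sf * dw
            if l > 0:
                dw2 += dw
                if step % 2 == 1:          # coarse step = two fine steps
                    sc += mu * sc * (2 * hf) + sigma * sc * dw2
                    dw2 = 0.0
        acc += sf - (sc if l > 0 else 0.0)
    return acc / nsamp

def mlmc(L=5, n0=20_000):
    """Telescoping sum over levels; sample counts fall with level because
    per-sample cost grows while the variance of the difference falls."""
    return sum(level_diff(l, max(n0 >> l, 1_000)) for l in range(L + 1))

est = mlmc()   # true value E[S_T] = s0 * exp(mu * T) = exp(0.05)
```

The cost saving comes entirely from the sample allocation: almost all of the work is done with one or two timesteps per path, while the expensive fine levels only correct a small-variance difference.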
Quantum Monte Carlo methods for nuclear physics
J. Carlson; S. Gandolfi; F. Pederiva; Steven C. Pieper; R. Schiavilla; K. E. Schmidt; R. B. Wiringa
2015-04-29
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Metallic lithium by quantum Monte Carlo
Sugiyama, G.; Zerah, G.; Alder, B.J.
1986-12-01
Lithium was chosen as the simplest known metal for the first application of quantum Monte Carlo methods in order to evaluate the accuracy of conventional one-electron band theories. Lithium has been extensively studied using such techniques. Band theory calculations have certain limitations in general and specifically in their application to lithium. Results depend on such factors as charge shape approximations (muffin tins), pseudopotentials (a special problem for lithium, where the lack of p core states requires a strong pseudopotential), and the form and parameters chosen for the exchange potential. The calculations are all one-electron methods in which the correlation effects are included in an ad hoc manner. This approximation may be particularly poor in the high compression regime, where the core states become delocalized. Furthermore, band theory provides only self-consistent results rather than strict limits on the energies. The quantum Monte Carlo method is a totally different technique, using a many-body rather than a mean-field approach, which yields an upper bound on the energies. 18 refs., 4 figs., 1 tab.
Quantum Monte Carlo methods for nuclear physics
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E.; Wiringa, Robert B.
2014-10-19
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
CERN-TH.6275/91 Monte Carlo Event Generation
Sjöstrand, Torbjörn
CERN-TH.6275/91. Monte Carlo Event Generation for LHC. T. Sjöstrand, CERN, Geneva. Abstract: The necessity of event generators for LHC physics studies is illustrated, and the Monte Carlo approach is outlined. A survey is presented of existing event generators, followed by a more detailed study ...
RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS
RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS. A. Kersch, W. Morokoff, A. Schuster (Siemens). ... the application of Quasi-Monte Carlo to this problem. 1.1 Radiative Heat Transfer Reactors: In the manufacturing ... one of the problems which can be solved by such a simulation is high-accuracy modeling of the radiative heat transfer ...
Exploring theory space with Monte Carlo reweighting
Gainer, James S. [Univ. of Florida, Gainesville, FL (United States); Lykken, Joseph [Fermi National Accelerator Laboratory, Batavia, IL (United States); Matchev, Konstantin T. [Univ. of Florida, Gainesville, FL (United States); Mrenna, Stephen [Fermi National Accelerator Laboratory, Batavia, IL (United States); Park, Myeonghun [The Univ. of Tokyo, Kashiwa (Japan)
2014-10-01
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. In particular, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Monte Carlo Methods for Uncertainty Quantification. Mike Giles, Mathematical Institute, University of Oxford. ERCOFTAC course on Mathematical Methods and Tools in Uncertainty Management and Quantification: Introduction and Monte Carlo basics; some model applications; random number generation; Monte Carlo estimation.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
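The fission matrix at the heart of the first acceleration method can be illustrated apart from any transport code: once a matrix F is tallied whose entry F[i][j] is the expected number of next-generation fission neutrons born in region i per source neutron in region j, k-effective and the fundamental-mode source are its dominant eigenpair, obtainable by power iteration (toy 2-region matrix invented for illustration, not an MCNP tally):

```python
def dominant_eigenpair(F, iters=200):
    """Power iteration: k converges to the dominant eigenvalue
    (k-effective) and the normalized iterate to the fundamental-mode
    fission source shape."""
    n = len(F)
    s = [1.0 / n] * n            # initial flat source guess, sums to 1
    k = 1.0
    for _ in range(iters):
        new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(new)             # valid because s is normalized to sum 1
        s = [x / k for x in new]
    return k, s

# Invented 2-region fission matrix; each column sums to 1, so every source
# neutron yields one next-generation neutron on average (exactly critical).
F = [[0.8, 0.4],
     [0.2, 0.6]]
k_eff, source = dominant_eigenpair(F)
```

The convergence rate of this iteration is set by the ratio of the second eigenvalue to the first (the dominance ratio), which is exactly why the unaccelerated source iteration in the thesis is slow for systems with dominance ratios near one.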
Kinetic Monte Carlo simulations of nanocrystalline film deposition
Ruan, Shiyun
A full diffusion kinetic Monte Carlo algorithm is used to model nanocrystalline film deposition, and study the mechanisms of grain nucleation and microstructure formation in such films. The major finding of this work is ...
Monte Carlo Hauser-Feshbach Calculations of Prompt Fission Neutrons...
Office of Scientific and Technical Information (OSTI)
Technical Report: Monte Carlo Hauser-Feshbach Calculations of Prompt Fission Neutrons and Gamma Rays: Application to Thermal Neutron-Induced Fission Reactions on U-235 and Pu-239...
A Monte Carlo tool for multi-node reliability evaluation
Thalasila, Chander Pravin
1993-01-01
...-Area Reliability Program (NARP) is based on the random sampling of generator and transmission line status for each hour. The Monte Carlo Approach for Estimating Contingency Statistics along with the Evaluation Subroutine (MACS-ES) advances the generation ...
Shift: A Massively Parallel Monte Carlo Radiation Transport Package
Pandya, Tara M [ORNL; Johnson, Seth R [ORNL; Davidson, Gregory G [ORNL; Evans, Thomas M [ORNL; Hamilton, Steven P [ORNL
2015-01-01
This paper discusses the massively parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
ERCOFTAC course on Mathematical Methods and Tools in Uncertainty Management and Quantification. Lecture 1: Introduction and Monte Carlo basics; some model applications; random number generation; ...
Quantum Monte Carlo Calculations of Light Nuclei Using Chiral Potentials
J. E. Lynn; J. Carlson; E. Epelbaum; S. Gandolfi; A. Gezerlis; A. Schwenk
2014-11-09
We present the first Green's function Monte Carlo calculations of light nuclei with nuclear interactions derived from chiral effective field theory up to next-to-next-to-leading order. Up to this order, the interactions can be constructed in a local form and are therefore amenable to quantum Monte Carlo calculations. We demonstrate a systematic improvement with each order for the binding energies of $A=3$ and $A=4$ systems. We also carry out the first few-body tests to study perturbative expansions of chiral potentials at different orders, finding that higher-order corrections are more perturbative for softer interactions. Our results confirm the necessity of a three-body force for correct reproduction of experimental binding energies and radii, and pave the way for studying few- and many-nucleon systems using quantum Monte Carlo methods with chiral interactions.
Monte Carlo: in the beginning and some great expectations
Metropolis, N.
1985-01-01
The central theme will be on the historical setting and origins of the Monte Carlo Method. The scene was post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo Event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the Method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest perhaps is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences.
Kinetic Monte Carlo approach to modeling dislocation mobility
Cai, Wei
... surface diffusion and growth processes [3], in which the energy barriers for the atomic mechanisms ... the evolution of a physical system through numerical sampling of (Markovian) stochastic processes. While the traditional Monte Carlo (MC) method is applied to sample systems in or close to thermal equilibrium, kMC ...
ENVIRONMENTAL MODELING: 1 APPLICATIONS: MONTE CARLO SENSITIVITY SIMULATIONS
Dimov, Ivan
Chapter 1. Applications: Monte Carlo sensitivity simulations to the problem of air pollution transport. 1.1 The Danish Eulerian Model. ... of pollutants in a real-life scenario of air-pollution transport over Europe. First, the developed technique ...
Path Integral Monte-Carlo Calculations for Relativistic Oscillator
Alexandr Ivanov; Oleg Pavlovsky
2014-11-11
The problem of the relativistic oscillator has been studied in the framework of the Path Integral Monte Carlo (PIMC) approach. Ultra-relativistic and non-relativistic limits have been discussed. We show that the PIMC method can be effectively used for the investigation of relativistic systems.
Monte Carlo Simulations of Thermal Conductivity in Nanoporous Si Membranes
Stefanie Wolf et al. ... thermal transport in Si nanomeshes. Phonons are treated semiclassically as particles of specific energy and velocity ... (ii) the roughness amplitude of the pore surfaces on the thermal conductivity of the nanomeshes ...
A Monte Carlo Approach for Football Play Generation Kennard Laviers
Sukthankar, Gita Reese
... adversarial games, and demonstrate its utility at generating American football plays for Rush Football 2008. In football, like in many other multi-agent games, the actions of all of the agents are not equally crucial ...
Evolutionary Monte Carlo for protein folding simulations
Liang, Faming
... Department of Statistics ... to simulations of protein folding on simple lattice models, and to finding the ground state of a protein. ... structures in protein folding. The numerical results show that it is drastically superior to other methods ...
Particle Physics Phenomenology 1. Introduction and Monte Carlo techniques
Sjöstrand, Torbjörn
Torbjörn Sjöstrand, Department of Astronomy and Theoretical Physics, Lund University, Sölvegatan 14A, SE-223 62 Lund, Sweden. Course objectives: improve understanding of how physics at the LHC ...
Monte Carlo sampling from the quantum state space. II
Yi-Lin Seah; Jiangwei Shang; Hui Khoon Ng; David John Nott; Berthold-Georg Englert
2015-04-27
High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the Markov-chain Monte Carlo method known as Hamiltonian Monte Carlo, or hybrid Monte Carlo, can be adapted to this context. It is applicable when an efficient parameterization of the state space is available. The resulting random walk is entirely inside the physical parameter space, and the Hamiltonian dynamics enable us to take big steps, thereby avoiding strong correlations between successive sample points while enjoying a high acceptance rate. We use examples of single and double qubit measurements for illustration.
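The sampler the abstract describes alternates momentum resampling, integration of Hamiltonian dynamics, and a Metropolis accept/reject step. A minimal sketch follows; it targets a standard normal distribution rather than a parameterized quantum state space, and the step size and trajectory length are arbitrary illustrative choices.

```python
# Minimal Hamiltonian (hybrid) Monte Carlo sketch. Toy target: a
# standard-normal distribution, NOT the quantum-state parameterization
# of the paper; step size and trajectory length are illustrative.
import numpy as np

def hmc_sample(logp, grad_logp, x0, n_samples, step=0.2, n_leap=20, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)            # fresh momentum
        x_new, p_new = x.copy(), p.copy()
        # leapfrog integration of the Hamiltonian dynamics
        p_new += 0.5 * step * grad_logp(x_new)
        for _ in range(n_leap - 1):
            x_new += step * p_new
            p_new += step * grad_logp(x_new)
        x_new += step * p_new
        p_new += 0.5 * step * grad_logp(x_new)
        # Metropolis accept/reject on the total "energy"
        h_old = -logp(x) + 0.5 * float(p @ p)
        h_new = -logp(x_new) + 0.5 * float(p_new @ p_new)
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

logp = lambda x: -0.5 * float(x @ x)   # standard normal, up to a constant
grad = lambda x: -x
chain = hmc_sample(logp, grad, np.zeros(2), 2000)
```

The long leapfrog trajectories are what let successive samples decorrelate while keeping a high acceptance rate, which is the advantage the abstract highlights.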
Thermal Properties of Supercritical Carbon Dioxide by Monte Carlo Simulations
Lisal, Martin
... and speed of sound for carbon dioxide (CO2) in the supercritical region, using the fluctuation method based on Monte Carlo simulations in the isothermal-isobaric ensemble. ... properties of CO2 at supercritical conditions. The molecular simulation results are compared to an analytical ... We model CO2 as a quadrupolar two-...
Types of random numbers and Monte Carlo Methods Pseudorandom number generation
Mascagni, Michael
WE246: Random Number Generation: A Practitioner's Overview. Prof. Michael Mascagni. Outline: types of random numbers and Monte Carlo methods; pseudorandom number generation; quasirandom number generation; conclusions.
Romano, Paul K. (Paul Kollath)
2013-01-01
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there ...
Hybrid Probabilistic Roadmap and Monte Carlo Methods for Biomolecule Conformational Changes
Han, Li
1 Hybrid Probabilistic Roadmap and Monte Carlo Methods for Biomolecule Conformational Changes Li Han 1 Keywords: Conformation space, conformational changes, Monte Carlo, probabilistic roadmaps. 1. In this work, we have developed a hybrid Probabilistic Roadmap and Monte Carlo planner for biomolecule
Molecular physics and chemistry applications of quantum Monte Carlo
Reynolds, P.J.; Barnett, R.N.; Hammond, B.L.; Lester, W.A. Jr.
1985-09-01
We discuss recent work with the diffusion quantum Monte Carlo (QMC) method in its application to molecular systems. The formal correspondence of the imaginary time Schroedinger equation to a diffusion equation allows one to calculate quantum mechanical expectation values as Monte Carlo averages over an ensemble of random walks. We report work on atomic and molecular total energies, as well as properties including electron affinities, binding energies, reaction barriers, and moments of the electronic charge distribution. A brief discussion is given on how standard QMC must be modified for calculating properties. Calculated energies and properties are presented for a number of molecular systems, including He, F, F , H2, N, and N2. Recent progress in extending the basic QMC approach to the calculation of ''analytic'' (as opposed to finite-difference) derivatives of the energy is presented, together with an H2 potential-energy curve obtained using analytic derivatives. 39 refs., 1 fig., 2 tabs.
Calculations of pair production by Monte Carlo methods
Bottcher, C.; Strayer, M.R.
1991-01-01
We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.
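Monte Carlo evaluation of a multi-dimensional integral, as used above for Feynman diagrams, reduces to averaging the integrand at random points. A sketch with a deliberately simple integrand (not a physics cross section) over the unit hypercube:

```python
# Plain Monte Carlo estimate of a multi-dimensional integral over the
# unit hypercube. The integrand here is a toy example, not a Feynman
# diagram; the estimator is the sample mean times the volume (= 1).
import random

def mc_integrate(f, dim, n, seed=1):
    """Estimate the integral of f over [0, 1]^dim with n samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n

# integral of sum(x_i) over [0,1]^4 is exactly 2.0
est = mc_integrate(lambda x: sum(x), dim=4, n=100_000)
```

The statistical error shrinks as 1/sqrt(n) regardless of dimension, which is why Monte Carlo wins over quadrature for high-dimensional integrals like these.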
The hybrid Monte Carlo Algorithm and the chiral transition
Gupta, R.
1987-01-01
In this talk the author describes tests of the Hybrid Monte Carlo Algorithm for QCD done in collaboration with Greg Kilcup and Stephen Sharpe. We find that the acceptance in the global Metropolis step for staggered fermions can be tuned and kept large without having to make the step size prohibitively small. We present results for the finite temperature transition on 4{sup 4} and 4 x 6{sup 3} lattices using this algorithm.
Testing trivializing maps in the Hybrid Monte Carlo algorithm
Georg P. Engel; Stefan Schaefer
2011-02-09
We test a recent proposal to use approximate trivializing maps in a field theory to speed up Hybrid Monte Carlo simulations. Simulating the CP^{N-1} model, we find a small improvement with the leading order transformation, which is however compensated by the additional computational overhead. The scaling of the algorithm towards the continuum is not changed. In particular, the effect of the topological modes on the autocorrelation times is studied.
Coupled Electron-Ion Monte Carlo Calculations of Dense Metallic Hydrogen
Pierleoni, Carlo
(Received May 2004; published 27 September 2004) We present an efficient new Monte Carlo method which couples ... structure and higher melting temperatures of the proton crystal than do Car-Parrinello molecular dynamics ... is unsatisfactory because energy differences among different crystalline phases are small, requiring accurate total ...
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Mike Giles (Oxford), Monte Carlo methods, May 30-31, 2013. SDEs in Finance: In computational finance, stochastic differential equations are used to model the behaviour of stocks, interest rates, exchange rates, weather, electricity/gas demand, crude oil prices, ...
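A typical Monte Carlo treatment of such an SDE is Euler-Maruyama time stepping of geometric Brownian motion, the standard stock-price model; the drift, volatility, and path counts below are invented for illustration.

```python
# Monte Carlo simulation of geometric Brownian motion dS = mu*S*dt +
# sigma*S*dW via Euler-Maruyama. Parameter values are made up for
# illustration; real applications calibrate them to market data.
import numpy as np

def gbm_mc(s0, mu, sigma, T, n_steps, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0, dtype=float)
    for _ in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        s += mu * s * dt + sigma * s * dw   # Euler-Maruyama step
    return s

final = gbm_mc(s0=100.0, mu=0.05, sigma=0.2, T=1.0, n_steps=252, n_paths=50_000)
mean_final = final.mean()   # close to s0 * exp(mu * T), i.e. about 105.1
```

Averaging a payoff over the simulated terminal prices gives Monte Carlo option-price estimates, the workhorse computation of the lecture's finance applications.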
Joint International Conference on Supercomputing in Nuclear Applications and Monte Carlo 2013 (SNA...). ...-Cr alloys are investigated using the Density Functional Theory (DFT) formalism, in the form of constrained non-... temperature, represent the key unknown entities critical to the development of a viable fusion reactor design.
FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation
Hackel, B M; Nielsen Jr., D E; Procassini, R J
2009-02-25
The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.
Properties of reactive oxygen species by quantum Monte Carlo
Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo
2014-07-07
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully optimised basis sets and with a computational cost which scales as N{sup 3} to N{sup 4}, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species by first principles.
Global neutrino parameter estimation using Markov Chain Monte Carlo
Steen Hannestad
2007-10-10
We present a Markov Chain Monte Carlo global analysis of neutrino parameters using both cosmological and experimental data. Results are presented for the combination of all presently available data from oscillation experiments, cosmology, and neutrinoless double beta decay. In addition we explicitly study the interplay between cosmological, tritium decay and neutrinoless double beta decay data in determining the neutrino mass parameters. We furthermore discuss how the inference of non-neutrino cosmological parameters can benefit from future neutrino mass experiments such as the KATRIN tritium decay experiment or neutrinoless double beta decay experiments.
Quantitative Monte Carlo-based holmium-166 SPECT reconstruction
Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de [Department of Radiology and Nuclear Medicine, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands)]; Viergever, Max A. [Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands)]
2013-11-15
Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 ({sup 166}Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative {sup 166}Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum. Methods: A fast Monte Carlo (MC) simulator was developed for simulation of {sup 166}Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full {sup 166}Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (A{sup est}) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six {sup 166}Ho RE patients. Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ?17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC).
ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80% (SPECT-ppMC+DSW) to 76%–103% (SPECT-fMC). Furthermore, SPECT-fMC recovered whole-body activities were most accurate (A{sup est} = 1.06 × A − 5.90 MBq, R{sup 2} = 0.97) and SPECT-fMC tumor absorbed doses were significantly higher than with SPECT-DSW (p = 0.031) and SPECT-ppMC+DSW (p = 0.031). Conclusions: The quantitative accuracy of {sup 166}Ho SPECT is improved by Monte Carlo-based modeling of the image degrading factors. Consequently, the proposed reconstruction method enables accurate estimation of the radiation absorbed dose in clinical practice.
Monte Carlo tests of Orbital-Free Density Functional Theory
D. I. Palade
2014-12-12
The relationship between the exact kinetic energy density of a quantum system in the framework of Density Functional Theory and the semiclassical functional expression for the same quantity is investigated. The analysis is performed with Monte Carlo simulations of the Kohn-Sham potentials. We find that the semiclassical form represents the statistical expectation value of its exact quantum counterpart. Based on the numerical results, we propose an empirical correction to the existing functional and an associated method to improve the Orbital-Free results.
Quantum Monte Carlo Simulation of Overpressurized Liquid {sup 4}He
Vranjes, L.; Boronat, J.; Casulleras, J.; Cazorla, C.
2005-09-30
A diffusion Monte Carlo simulation of superfluid {sup 4}He at zero temperature and pressures up to 275 bar is presented. Increasing the pressure beyond freezing ({approx}25 bar), the liquid enters the overpressurized phase in a metastable state. In this regime, we report results of the equation of state and the pressure dependence of the static structure factor, the condensate fraction, and the excited-state energy corresponding to the roton. Along this large pressure range, both the condensate fraction and the roton energy decrease but do not become zero. The roton energies obtained are compared with recent experimental data in the overpressurized regime.
Markov Chain Monte Carlo Method without Detailed Balance
Hidemaro Suwa; Synge Todo
2010-10-13
We present a specific algorithm that generally satisfies the balance condition without imposing detailed balance in Markov chain Monte Carlo. In our algorithm, the average rejection rate is minimized, and even reduced to zero in many relevant cases. The absence of detailed balance also introduces a net stochastic flow in configuration space, which further boosts convergence. We demonstrate that the autocorrelation time of the Potts model becomes more than 6 times shorter than that of the conventional Metropolis algorithm. Based on the same concept, a bounce-free worm algorithm for generic quantum spin models is formulated as well.
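For reference, the conventional detailed-balance baseline in that Potts-model comparison is a Metropolis sweep over the lattice. A minimal sketch follows; the lattice size, number of states q, and inverse temperature are arbitrary illustrative values, not those of the paper's benchmark.

```python
# Conventional Metropolis update for the q-state Potts model on a small
# 2D lattice with periodic boundaries -- the detailed-balance baseline
# that rejection-minimizing algorithms are compared against.
# L, q, beta, and sweep count below are arbitrary illustrative values.
import random, math

def metropolis_potts(L=8, q=3, beta=1.0, sweeps=200, seed=0):
    rng = random.Random(seed)
    spins = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]
    def local_energy(i, j, s):
        # -1 per nearest neighbour agreeing with state s
        nbrs = [spins[(i + 1) % L][j], spins[(i - 1) % L][j],
                spins[i][(j + 1) % L], spins[i][(j - 1) % L]]
        return -sum(1 for n in nbrs if n == s)
    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                s_new = rng.randrange(q)
                dE = local_energy(i, j, s_new) - local_energy(i, j, spins[i][j])
                # Metropolis acceptance: satisfies detailed balance
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    spins[i][j] = s_new
    return spins

state = metropolis_potts()
```

Each proposal is accepted or rejected against exp(-beta * dE); it is exactly this per-site rejection step that the abstract's algorithm minimizes or eliminates.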
Validation of Phonon Physics in the CDMS Detector Monte Carlo
McCarthy, K.A.; Leman, S.W.; Anderson, A.J.; /MIT; Brandt, D.; /SLAC; Brink, P.L.; Cabrera, B.; Cherry, M.; /Stanford U.; Do Couto E Silva, E.; /SLAC; Cushman, P.; /Minnesota U.; Doughty, T.; /UC, Berkeley; Figueroa-Feliciano, E.; /MIT; Kim, P.; /SLAC; Mirabolfathi, N.; /UC, Berkeley; Novak, L.; /Stanford U.; Partridge, R.; /SLAC; Pyle, M.; /Stanford U.; Reisetter, A.; /Minnesota U. /St. Olaf Coll.; Resch, R.; /SLAC; Sadoulet, B.; Serfass, B.; Sundqvist, K.M.; /UC, Berkeley /Stanford U.
2012-06-06
The SuperCDMS collaboration is a dark matter search effort aimed at detecting the scattering of WIMP dark matter from nuclei in cryogenic germanium targets. The CDMS Detector Monte Carlo (CDMS-DMC) is a simulation tool aimed at achieving a deeper understanding of the performance of the SuperCDMS detectors and aiding the dark matter search analysis. We present results from validation of the phonon physics described in the CDMS-DMC and outline work towards utilizing it in future WIMP search analyses.
Monte Carlo Tools for charged Higgs boson production
K. Kovarik
2014-12-18
In this short review we discuss two implementations of the charged Higgs boson production process in association with a top quark in Monte Carlo event generators at next-to-leading order in QCD. We introduce the MC@NLO and POWHEG methods of matching next-to-leading-order matrix elements with parton showers, and compare both methods by analyzing the charged Higgs boson production process in association with a top quark. We briefly discuss the case of a light charged Higgs boson, where the associated charged Higgs production interferes with charged Higgs production via t tbar production and subsequent decay of the top quark.
Multicanonical Hybrid Monte Carlo: Boosting Simulations of Compact QED
G. Arnold; Th. Lippert; K. Schilling
1998-09-21
We demonstrate that substantial progress can be achieved in the study of the phase structure of 4-dimensional compact QED by a joint use of hybrid Monte Carlo and multicanonical algorithms, through an efficient parallel implementation. This is borne out by the observation of considerable speedup of tunnelling between the metastable states, close to the phase transition, on the Wilson line. We estimate that the creation of adequate samples (with order 100 flip-flops) becomes a matter of half a year's runtime at 2 Gflops sustained performance for lattices of size up to 24^4.
A Look at general cavity theory through a code incorporating Monte Carlo techniques
Weyland, Mark Duffy
1989-01-01
... material, the wall, being exponentially attenuated into the dosimeter, or the cavity. This assumption was investigated in this research using Monte Carlo techniques in a modern computer code, EGS4. Appropriate geometries were defined in the code and ... to relate the measured dose to that within the material, Monte Carlo techniques have been used to simulate the irradiation of various materials. The computer code EGS4 uses Monte Carlo techniques to simulate the randomness of radiation interactions ...
Moffitt, John Russell
1972-01-01
... for finite atmospheres with phase functions ranging from isotropic to the extremely anisotropic nimbostratus model. The main advantages of the Monte Carlo method were illustrated. One such advantage is that parameters, such as the single scattering ... as an isotropic one. Another is that a single "computer run" can produce radiance values for a large number of ground albedos for any reasonable number of detectors placed at any desired depth in the atmosphere. ... Monte Carlo, in all ...
Quantum Monte Carlo calculations of spectroscopic overlaps in $A \\leq 7$ nuclei
I. Brida; Steven C. Pieper; R. B. Wiringa
2011-06-15
We present Green's function Monte Carlo calculations of spectroscopic overlaps for $A \\leq 7$ nuclei. The realistic Argonne v18 two-nucleon and Illinois-7 three-nucleon interactions are used to generate the nuclear states. The overlap matrix elements are extrapolated from mixed estimates between variational Monte Carlo and Green's function Monte Carlo wave functions. The overlap functions are used to obtain spectroscopic factors and asymptotic normalization coefficients, and they can serve as an input for low-energy reaction calculations.
Direct Monte Carlo simulation of chemical reaction systems: Dissociation and recombination
Anderson, James B.
... direct Monte Carlo simulations of a chemical reaction system with bimolecular and termolecular dissociation ... to be well suited for treating chemical reaction systems with nonequilibrium distributions, coupled gas ...
Four-quark energies in SU(2) lattice Monte Carlo using a tetrahedral geometry
A. M. Green; J. Lukkarinen; P. Pennanen; C. Michael; S. Furui
1994-12-05
This contribution -- a continuation of earlier work -- reports on recent developments in the calculation and understanding of 4-quark energies generated using lattice Monte Carlo techniques.
Monte Carlo model for electron degradation in methane
Bhardwaj, Anil
2015-01-01
We present a Monte Carlo model for the degradation of 1-10,000 eV electrons in an atmosphere of methane. The electron impact cross sections for CH4 are compiled, and analytical representations of these cross sections are used as input to the model. Yield spectra, which provide information about the number of inelastic events that have taken place in each energy bin, are used to calculate the yield (or population) of various inelastic processes. The numerical yield spectra, obtained from the Monte Carlo simulations, are represented analytically, thus generating the Analytical Yield Spectra (AYS). AYS is employed to obtain the mean energy per ion pair and the efficiencies of various inelastic processes. The mean energy per ion pair for neutral CH4 is found to be 26 (27.8) eV at 10 (0.1) keV. Efficiency calculations showed that ionization is the dominant process at energies >50 eV, for which more than 50% of the incident electron energy is used. Above 25 eV, dissociation has an efficiency of 27%. Below 10 eV, vibrational e...
Monte Carlo simulation of quantum Zeno effect in the brain
Danko Georgiev
2014-12-11
Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
Quantum Monte Carlo calculations of $A=9,10$ nuclei
Steven C. Pieper; K. Varga; R. B. Wiringa
2002-06-24
We report on quantum Monte Carlo calculations of the ground and low-lying excited states of $A=9,10$ nuclei using realistic Hamiltonians containing the Argonne $v_{18}$ two-nucleon potential alone or with one of several three-nucleon potentials, including Urbana IX and three of the new Illinois models. The calculations begin with correlated many-body wave functions that have an $\\alpha$-like core and multiple p-shell nucleons, $LS$-coupled to the appropriate $(J^{\\pi};T)$ quantum numbers for the state of interest. After optimization, these variational trial functions are used as input to a Green's function Monte Carlo calculation of the energy, using a constrained path algorithm. We find that the Hamiltonians that include Illinois three-nucleon potentials reproduce ten states in $^9$Li, $^9$Be, $^{10}$Be, and $^{10}$B with an rms deviation as little as 900 keV. In particular, we obtain the correct 3$^+$ ground state for $^{10}$B, whereas the Argonne $v_{18}$ alone or with Urbana IX predicts a 1$^+$ ground state. In addition, we calculate isovector and isotensor energy differences, electromagnetic moments, and one- and two-body density distributions.
Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data
Zourari, K.; Peppa, V.; Papagiannis, P.; Ballester, Facundo; Siebert, Frank-André
2014-04-15
Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20–1090 keV, through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thickness of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20–1090 keV, in 10 keV intervals). The three-parameter empirical model introduced by Archer et al. [“Diagnostic x-ray shielding design based on an empirical model of photon attenuation,” Health Phys. 44, 507–517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. [“Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities,” Med. Phys. 34, 1398–1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. Good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed.
Conclusions: The data of this work allow for the accurate calculation of structural shielding thickness, taking into account the spectral variation with shield thickness, and broad beam conditions, in a realistic geometry. The simplicity of calculations also obviates the need for the use of crude transmission data estimates such as the half and tenth value layer indices. Although this study was primarily designed for brachytherapy, results might also be useful for radiology and nuclear medicine facility design, provided broad beam conditions apply.
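As a sketch of the superposition approach described above: the Archer et al. model writes the broad-beam transmission as B(x) = [(1 + β/α)·exp(αγx) − β/α]^(−1/γ), and a polyenergetic transmission curve is then a spectrum-weighted sum of monoenergetic curves. The parameter and weight values in the example below are arbitrary placeholders for illustration, not fitted values from this work.

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Archer et al. three-parameter broad-beam transmission model:
    B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]**(-1/gamma)."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

def spectrum_transmission(x, components):
    """Superpose monoenergetic transmission curves for a polyenergetic
    spectrum: components is a list of (weight, (alpha, beta, gamma)),
    with the spectral weights summing to 1."""
    return sum(w * archer_transmission(x, a, b, g) for w, (a, b, g) in components)
```

At zero thickness every component transmits fully, so the superposed curve starts at 1 and decreases monotonically with shield thickness.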
Population Monte Carlo algorithms Yukito Iba The Institute of Statistical Mathematics
Iba, Yukito
Summary: We give a cross-disciplinary survey on "population" Monte Carlo algorithms. In these algorithms, a set of "walkers" or "particles" is used as a representation of a high-dimensional vector ...
MONTE CARLO SIMULATION METHOD By Ronald R. Charpentier and Timothy R. Klett
Laughlin, Robert B.
EMCEE and Emc2 are Monte Carlo simulation programs for assessing undiscovered conventional oil and gas. Chapter MC, Monte Carlo Simulation Method, by Ronald R. Charpentier and Timothy R. Klett, in U.S. Geological Survey World Petroleum Assessment 2000 -- Description and Results.
Path Integral Monte Carlo Calculation of the Deuterium Hugoniot B. Militzer and D. M. Ceperley
Militzer, Burkhard
B. Militzer and D. M. Ceperley, Urbana, IL 61801 (January 21, 2000). Restricted path integral Monte Carlo simulations have been used ... effects and the dependence on the time step of the path integral. Further, we compare the results ...
Monte Carlo methods for design and analysis of radiation detectors
Shultis, J. Kenneth
William L. ... Keywords: Radiation detectors; Inverse problems; Detector design. Abstract: An overview of Monte Carlo as a practical method for designing and analyzing radiation detectors is provided. The emphasis is on detectors ...
Direct Monte Carlo simulation of chemical reaction systems: Simple bimolecular reactions
Anderson, James B.
Shannon D. ... and understanding the behavior of gas phase chemical reaction systems. This Monte Carlo method, originated by Bird ... Extension to chemical reactions offers a powerful tool for treating reaction systems with nonthermal ...
Communication: Monte Carlo calculation of the exchange energy Roi Baer and Daniel Neuhauser
Baer, Roi
THE JOURNAL OF CHEMICAL PHYSICS 137, 051103 (2012). Communication: Monte Carlo calculation of the exchange energy. Roi Baer and Daniel Neuhauser.
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Mike Giles (Oxford), Monte Carlo methods, May 30-31, 2013. SDEs in Finance: In computational finance, stochastic differential equations are used to model the behaviour of stocks, interest rates, exchange rates, weather, electricity/gas demand, crude oil prices, ...
Schulze, Tim
An Energy Localization Principle and its Application to Fast Kinetic Monte Carlo Simulation of ... Ann Arbor, MI 48109-1109. Abstract: Simulation of heteroepitaxial growth using kinetic Monte Carlo (KMC) is often based on rates determined by differences in elastic energy between two configurations ...
Kinetic Monte Carlo simulations of the response of carbon nanotubes to electron irradiation
Krasheninnikov, Arkady V.
... of Technology, Finland (Dated: January 12, 2007). Irradiation is increasingly used nowadays to tailor ... of nanotubes to irradiation is still lacking, we have implemented the kinetic Monte Carlo method with Bortz ...
A New Monte Carlo Simulation Method for Tolerance Analysis of Kinematically Constrained Assemblies
A New Monte Carlo Simulation Method for Tolerance Analysis of Kinematically Constrained Assemblies Abstract A generalized Monte Carlo simulation method is presented for tolerance analysis of mechanical assemblies with small kinematic adjustments. This is a new tool for assembly tolerance analysis based ...
Hybrid Probabilistic RoadMap -Monte Carlo Motion Planning for Closed Chain Systems with
Han, Li
Abstract: In this paper we propose a hybrid Probabilistic RoadMap - Monte Carlo (PRM-MC) motion planner ... and connect a large number of robot configurations in order to build a roadmap that reflects the properties ...
Continuous Contour Monte Carlo for Marginal Density Estimation With an Application to a
Liang, Faming
... (Gelman and Meng 1998), reverse logistic regression (Geyer 1994), marginal likelihood (Chib 1995) ... Keywords: Reversible jump Markov chain Monte Carlo; Stochastic approximation; Wang-Landau algorithm. 1. INTRODUCTION ... a variety of approaches including reversible jump MCMC (Green 1995) ...
Monte Carlo Simulation of Dense Polymer Melts Using Event Chain Algorithms
Tobias Alexander Kampmann; Horst-Holger Boltz; Jan Kierfeld
2015-07-23
We propose an efficient Monte Carlo algorithm for the off-lattice simulation of dense hard sphere polymer melts using cluster moves, called event chains, which allow for a rejection-free treatment of the excluded volume. Event chains also allow for an efficient preparation of initial configurations in polymer melts. We parallelize the event chain Monte Carlo algorithm to further increase simulation speeds and suggest additional local topology-changing moves ("swap" moves) to accelerate equilibration. By comparison with other Monte Carlo and molecular dynamics simulations, we verify that the event chain algorithm reproduces the correct equilibrium behavior of polymer chains in the melt. By comparing intrapolymer diffusion time scales, we show that event chain Monte Carlo algorithms can achieve simulation speeds comparable to optimized molecular dynamics simulations. The event chain Monte Carlo algorithm exhibits Rouse dynamics on short time scales. In the absence of swap moves, we find reptation dynamics on intermediate time scales for long chains.
Monte Carlo Simulation Tool Installation and Operation Guide
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document, and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
Improved version of the PHOBOS Glauber Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Loizides, C.; Nagle, J.; Steinberg, P.
2015-09-01
“Glauber” models are used to calculate geometric quantities in the initial state of heavy ion collisions, such as impact parameter, number of participating nucleons and initial eccentricity. Experimental heavy-ion collaborations, in particular at RHIC and LHC, use Glauber Model calculations for various geometric observables for determination of the collision centrality. In this document, we describe the assumptions inherent to the approach, and provide an updated implementation (v2) of the Monte Carlo based Glauber Model calculation, which originally was used by the PHOBOS collaboration. The main improvement w.r.t. the earlier version (v1) (Alver et al. 2008) is the inclusion of Tritium, Helium-3, and Uranium, as well as the treatment of deformed nuclei and Glauber–Gribov fluctuations of the proton in p+A collisions. A users’ guide (updated to reflect changes in v2) is provided for running various calculations.
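A minimal Monte Carlo Glauber sketch in the spirit of the model described above, assuming a Woods-Saxon nuclear density and a black-disk nucleon-nucleon cross section. The radius, diffusivity, and cross-section values are illustrative placeholders, not the PHOBOS v2 defaults.

```python
import math, random

def sample_nucleus(A, R=6.62, a=0.546, rmax_factor=3.0):
    """Sample A nucleon positions (x, y, z) from a Woods-Saxon density
    rho(r) ~ 1 / (1 + exp((r - R)/a)) via rejection sampling (lengths in fm):
    draw r uniformly on [0, rmax] and accept with probability
    (r/rmax)^2 / (1 + exp((r - R)/a)), then pick an isotropic direction."""
    rmax = rmax_factor * R
    pts = []
    while len(pts) < A:
        r = rmax * random.random()
        if random.random() < (r / rmax) ** 2 / (1.0 + math.exp((r - R) / a)):
            cos_t = 2.0 * random.random() - 1.0
            sin_t = math.sqrt(1.0 - cos_t ** 2)
            phi = 2.0 * math.pi * random.random()
            pts.append((r * sin_t * math.cos(phi), r * sin_t * math.sin(phi), r * cos_t))
    return pts

def n_participants(nucl_a, nucl_b, b, sigma_nn_fm2=4.2):
    """Count participants: nucleons from opposite nuclei collide when their
    transverse distance squared is below sigma_nn/pi (black-disk picture).
    Nucleus b is shifted by the impact parameter b along x."""
    d2 = sigma_nn_fm2 / math.pi
    hit_a, hit_b = set(), set()
    for i, (xa, ya, _) in enumerate(nucl_a):
        for j, (xb, yb, _) in enumerate(nucl_b):
            if (xa - xb - b) ** 2 + (ya - yb) ** 2 < d2:
                hit_a.add(i)
                hit_b.add(j)
    return len(hit_a) + len(hit_b)
```

Averaging `n_participants` over many sampled nucleon configurations at a given impact parameter gives the familiar N_part centrality estimator.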
The Quantum Energy Density: Improved Efficiency for Quantum Monte Carlo
Krogel, Jaron T; Kim, Jeongnim; Ceperley, David M
2013-01-01
We establish a physically meaningful representation of a quantum energy density for use in Quantum Monte Carlo calculations. The energy density operator, defined in terms of Hamiltonian components and density operators, returns the correct Hamiltonian when integrated over a volume containing a cluster of particles. This property is demonstrated for a helium-neon "gas," showing that atomic energies obtained from the energy density correspond to eigenvalues of isolated systems. The formation energies of defects or interfaces are typically calculated as total energy differences. Using a model of delta-doped silicon (where dopant atoms form a thin plane) we show how interfacial energies can be calculated more efficiently with the energy density, since the region of interest is small. We also demonstrate how the energy density correctly transitions to the bulk limit away from the interface where the correct energy is obtainable from a separate total energy calculation.
Strain in the mesoscale kinetic Monte Carlo model for sintering
Bjørk, R; Tikare, V; Olevsky, E; Pryds, N
2014-01-01
Shrinkage strains measured from microstructural simulations using the mesoscale kinetic Monte Carlo (kMC) model for solid state sintering are discussed. This model represents the microstructure using digitized discrete sites that are either grain or pore sites. The algorithm used to simulate densification by vacancy annihilation removes an isolated pore site at a grain boundary and collapses a column of sites extending from the vacancy to the surface of the sintering compact, through the center of mass of the nearest grain. Using this algorithm, the existing published kMC models are shown to produce anisotropic strains for homogeneous powder compacts with aspect ratios different from unity. It is shown that the line direction biases shrinkage strains in proportion to the compact dimension aspect ratios. A new algorithm that corrects this bias in strains is proposed; the direction for collapsing the column is determined by choosing a random sample face and subsequently a random point on that face as the end point for...
Quantum Monte Carlo Calculations of $A\\leq6$ Nuclei
B. S. Pudliner; V. R. Pandharipande; J. Carlson; R. B. Wiringa
1995-02-13
The energies of $^{3}H$, $^{3}He$, and $^{4}He$ ground states, the ${\\frac{3}{2}}^{-}$ and ${\\frac{1}{2}}^{-}$ scattering states of $^{5}He$, the ground states of $^{6}He$, $^{6}Li$, and $^{6}Be$ and the $3^{+}$ and $0^{+}$ excited states of $^{6}Li$ have been accurately calculated with the Green's function Monte Carlo method using realistic models of two- and three-nucleon interactions. The splitting of the $A=3$ isospin $T=\\frac{1}{2}$ and $A=6$ isospin $T=1$, $J^{\\pi} = 0^{+}$ multiplets is also studied. The observed energies and radii are generally well reproduced; however, some definite differences between theory and experiment can be identified.
Quantum Monte Carlo simulation of spin-polarized H
Markic, L. Vranjes; Boronat, J.; Casulleras, J.
2007-02-01
The ground-state properties of spin polarized hydrogen H{down_arrow} are obtained by means of diffusion Monte Carlo calculations. Using the most accurate ab initio H{down_arrow}-H{down_arrow} interatomic potential available to date, we have studied its gas phase, from the very dilute regime to densities above its freezing point. At very small densities, the equation of state of the gas is very well described in terms of the gas parameter {rho}a{sup 3}, with a the s-wave scattering length. The solid phase has also been studied up to high pressures. The gas-solid phase transition occurs at a pressure of 173 bar, a much higher value than suggested by previous approximate descriptions.
Improving multivariate Horner schemes with Monte Carlo tree search
J. Kuipers; J. A. M. Vermaseren; A. Plaat; H. J. van den Herik
2012-07-30
Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
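For reference, the univariate case that the multivariate schemes generalize: Horner's rule evaluates a degree-n polynomial with n multiplications and n additions by nesting the factorization a_n x^n + ... + a_0 = (...((a_n)x + a_{n-1})x + ...)x + a_0.

```python
def horner(coeffs, x):
    """Evaluate a_n*x^n + ... + a_1*x + a_0 with Horner's rule.
    coeffs are given highest degree first: [a_n, ..., a_1, a_0]."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc
```

In the multivariate case the same nesting is applied variable by variable, and the freedom studied in the paper is the order in which variables are factored out (greedy most-occurring-first versus orders found by Monte Carlo tree search).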
Lifting -- A Nonreversible Markov Chain Monte Carlo Algorithm
Vucelja, Marija
2015-01-01
Markov Chain Monte Carlo algorithms are invaluable numerical tools for exploring the stationary properties of physical systems, in particular when direct sampling is not feasible. They are widely used in many areas of physics and other sciences. Most common implementations use reversible Markov chains, chains that obey detailed balance. Reversibility is sufficient for the physical system to relax to equilibrium, but it is not necessary. Here we review several works that use "lifted" or nonreversible Markov chains, which violate detailed balance yet still converge to the correct stationary distribution (they obey the global balance condition). In certain cases the acceleration is at most a square-root improvement over conventional reversible Markov chains. We introduce the problem in a way that makes it accessible to non-specialists. We illustrate the method on several representative examples (sampling on a ring, sampling on a torus, an Ising model on a complete graph, ...).
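A minimal illustration of lifting, assuming the standard ring example: the state is augmented with a direction variable, and the walker reverses direction only rarely. Detailed balance is violated, but the uniform distribution over sites remains stationary (global balance holds). The site count and flip probability below are arbitrary choices for the sketch.

```python
import random

def lifted_ring_walk(n_sites, n_steps, flip_prob=0.1, seed=0):
    """Nonreversible 'lifted' walk on a ring: the chain state is
    (site, direction). The walker keeps moving in its current direction
    and reverses only with probability flip_prob, so it sweeps the ring
    ballistically instead of diffusing. Returns per-site visit counts."""
    rng = random.Random(seed)
    site, direction = 0, 1
    counts = [0] * n_sites
    for _ in range(n_steps):
        if rng.random() < flip_prob:
            direction = -direction
        site = (site + direction) % n_sites
        counts[site] += 1
    return counts
```

Because the walker traverses O(1/flip_prob) sites per ballistic segment, it explores the ring in roughly n steps rather than the n^2 steps a reversible random walk needs.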
SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans
Enright, S; Asprinio, A; Lu, L
2014-06-01
Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm will improve dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic{sup ™} EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse{sup ™} radiotherapy treatment planning system. These measurements were obtained for 6MeV, 9MeV and 12MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid water template with holes for bonelike and lung-like plugs was used. Different configurations were used with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. Accuracy of Electron Monte Carlo algorithm was also compared to that of Pencil Beam. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film. The pass rate for Monte Carlo was in the 80%–99% range, where the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.
Koh, Wonshill
2013-02-22
The light propagation in highly scattering turbid media composed of particles with different size distributions is studied using a Monte Carlo simulation model implemented in Standard C. The Monte Carlo method has been widely utilized to study...
Automatic Generation of a JET 3D Neutronics Model from CAD Geometry Data for Monte Carlo Calculations
Straub, John E.
Statistical-Temperature Monte Carlo and Molecular Dynamics Algorithms. Jaegil Kim, John E. Straub ... A novel molecular dynamics algorithm (STMD) applicable to complex systems and a Monte Carlo algorithm ... PhysRevLett.97.050601. PACS numbers: 05.10.-a, 02.70.Rr, 87.18.Bb. The Wang-Landau (WL) Monte Carlo (MC) algorithm ...
Using Stochastic Discounted Cash Flow and Real Option Monte Carlo Simulation to Analyse the Impacts ... in the presence of a windfall profits tax. Real options Monte Carlo simulation is used to characterise ... from the project. The results highlight that Monte Carlo simulation paired with the real option ...
Complete Monte Carlo Simulation of Neutron Scattering Experiments
Drosg, M.
2011-12-13
In the past, it was not possible to accurately correct for the finite geometry and the finite sample size of a neutron scattering set-up. The limited computing power of early computers, the lack of powerful Monte Carlo codes, and the limitations of the databases available then prevented a complete simulation of the actual experiment. Using e.g. the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can now be simulated almost completely, with a high degree of precision, on a modern PC, which has a computing power ten thousand times that of a supercomputer of the early 1970s. Thus, (better) corrections can also be obtained easily for previously published data provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g. atomic mass, relativistic corrections, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments from liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two thirds of all differential scattering cross section data of {sup 3}He(n,n){sup 3}He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interaction in the air, the interaction with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are smaller by a factor of at least five, making this work relevant.
Even more important are the corrections to the Karlsruhe data due to the inclusion of the missing outgoing self-attenuation that amounts to up to 15%.
A study of the contrast of a submerged disc using Monte Carlo techniques
Hagan, Donald Frank
1980-01-01
in the simulation of light interactions within the Earth's ocean system. Using the Monte Carlo computer program the contrast of a Secchi disc and its ocean background was calculated. A Secchi disc is a horizontal disc in the ocean that is viewed from the surface... of samples which requires more computation time. Before the advent of high speed computers, the Monte Carlo Method was generally useless because of the massive amount of computation it required. The Monte Carlo Method is fairly simple in application...
Auxiliary Field Diffusion Monte Carlo calculation of nuclei with A<40 with tensor interactions
S. Gandolfi; F. Pederiva; S. Fantoni; K. E. Schmidt
2007-04-13
We calculate the ground-state energy of 4He, 8He, 16O, and 40Ca using the auxiliary field diffusion Monte Carlo method in the fixed phase approximation and the Argonne v6' interaction which includes a tensor force. Comparison of our light nuclei results to those of Green's function Monte Carlo calculations shows the accuracy of our method for both open and closed shell nuclei. We also apply it to 16O and 40Ca to show that quantum Monte Carlo methods are now applicable to larger nuclei.
Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Monte Carlo simulation of the terrestrial hydrogen exosphere
Hodges, R.R. Jr. [Univ. of Texas, Dallas, TX (United States)
1994-12-01
Methods for Monte Carlo simulation of planetary exospheres have evolved from early work on the lunar atmosphere, where the regolith surface provides a well defined exobase. A major limitation of the successor simulations of the exospheres of Earth and Venus is the use of an exobase surface as an artifice to separate the collisional processes of the thermosphere from a collisionless exosphere. In this paper a new generalized approach to exosphere simulation is described, wherein the exobase is replaced by a barometric depletion of the major constituents of the thermosphere. Exospheric atoms in the thermosphere-exosphere transition region, and in the outer exosphere as well, travel in ballistic trajectories that are interrupted by collisions with the background gas, and by charge exchange interactions with ionospheric particles. The modified simulator has been applied to the terrestrial hydrogen exosphere problem, using velocity dependent differential cross sections to provide statistically correct collisional scattering in H-O and H-H(+) interactions. Global models are presented for both solstice and equinox over the effective solar cycle range of the F{sub 10.7} index (80 to 230). Simulation results show significant differences with previous terrestrial exosphere models, as well as with the H distributions of the MSIS-86 thermosphere model.
Nuclear Force from Monte Carlo Simulations of Lattice Quantum Chromodynamics
S. Aoki; T. Hatsuda; N. Ishii
2008-10-24
The nuclear force acting between protons and neutrons is studied in Monte Carlo simulations of the fundamental theory of the strong interaction, quantum chromodynamics, defined on the hypercubic space-time lattice. After a brief summary of the empirical nucleon-nucleon (NN) potentials which can fit the NN scattering experiments with high precision, we outline the basic formulation to derive the potential between extended objects such as nucleons composed of quarks. The equal-time Bethe-Salpeter amplitude is a key ingredient for defining the NN potential on the lattice. We show the results of numerical simulations on a $32^4$ lattice with the lattice spacing $a \\simeq 0.137$ fm (lattice volume (4.4 fm)$^4$) in the quenched approximation. The calculation was carried out using the massively parallel computer Blue Gene/L at KEK. We found that the calculated NN potential at low energy has the basic features expected from the empirical NN potentials: attraction at long and medium distances and a repulsive core at short distance. Various future directions along this line of research are also summarized.
Performance of three-photon PET imaging: Monte Carlo simulations
Kacperski, K; Kacperski, Krzysztof; Spyrou, Nicholas M.
2005-01-01
We have recently introduced the idea of making use of three-photon positron annihilations in positron emission tomography. In this paper the basic characteristics of the three-gamma imaging in PET are studied by means of Monte Carlo simulations and analytical computations. Two typical configurations of human and small animal scanners are considered. Three-photon imaging requires high energy resolution detectors. Parameters currently attainable by CdZnTe semiconductor detectors, the technology of choice for the future development of radiation imaging, are assumed. Spatial resolution is calculated as a function of detector energy resolution and size, position in the field of view, scanner size, and the energies of the three gamma annihilation photons. Possible ways to improve the spatial resolution obtained for nominal parameters: 1.5 cm and 3.2 mm FWHM for human and small animal scanners, respectively, are indicated. Counting rates of true and random three-photon events for typical human and small animal scann...
Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo
White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; Mozyrsky, Dmitry
2015-07-07
Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet as efficient as is needed to be used for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm by up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms will provide practical numerical solutions for simulating non-adiabatic dynamics in realistic molecular systems.
High order Chin actions in path integral Monte Carlo
Sakkos, K.; Casulleras, J.; Boronat, J.
2009-05-28
High order actions proposed by Chin have been used for the first time in path integral Monte Carlo simulations. Contrary to the Takahashi-Imada action, which is accurate to the fourth order only for the trace, the Chin action is fully fourth order, with the additional advantage that the leading fourth-order error coefficients are finely tunable. By optimizing two free parameters entering in the new action, we show that the time step error dependence achieved is best fitted with a sixth order law. The computational effort per bead is increased but the total number of beads is greatly reduced and the efficiency improvement with respect to the primitive approximation is approximately a factor of 10. The Chin action is tested in a one-dimensional harmonic oscillator, a H{sub 2} drop, and bulk liquid {sup 4}He. In all cases a sixth-order law is obtained with values of the number of beads that compare well with the pair action approximation in the stringent test of superfluid {sup 4}He.
Monte Carlo sampling from the quantum state space. I
Jiangwei Shang; Yi-Lin Seah; Hui Khoon Ng; David John Nott; Berthold-Georg Englert
2015-04-27
High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the standard strategies of rejection sampling, importance sampling, and Markov-chain sampling can be adapted to this context, where the samples must obey the constraints imposed by the positivity of the statistical operator. For a comparison of these sampling methods, we generate sample points in the probability space for two-qubit states probed with a tomographically incomplete measurement, and then use the sample for the calculation of the size and credibility of the recently-introduced optimal error regions [see New J. Phys. 15 (2013) 123026]. Another illustration is the computation of the fractional volume of separable two-qubit states.
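The two-qubit setting of the paper is more involved, but the rejection-sampling idea under a positivity constraint can be illustrated with single-qubit states: rho = (I + r.sigma)/2 is a valid density matrix exactly when the Bloch vector satisfies |r| <= 1, so sampling the cube and rejecting points outside the ball yields uniform samples over the state space of this 3-parameter family. This is a generic sketch, not the paper's two-qubit procedure.

```python
import random

def sample_qubit_states(n, seed=0):
    """Rejection sampling of single-qubit density matrices: draw a Bloch
    vector r uniformly from the cube [-1, 1]^3 and accept when |r| <= 1,
    since rho = (I + r.sigma)/2 is positive semidefinite exactly for
    |r| <= 1. Accepted points are uniform over the Bloch ball."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        r = [2.0 * rng.random() - 1.0 for _ in range(3)]
        if sum(c * c for c in r) <= 1.0:
            samples.append(r)
    return samples
```

The acceptance rate is the ball-to-cube volume ratio, pi/6 (about 52%); in higher-dimensional state spaces this rate collapses, which is why the paper also considers importance and Markov-chain sampling.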
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
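For context, the BMA predictive density whose weights and variances are being trained (by EM or MCMC) is a weighted mixture of component densities centered on the individual model forecasts. A minimal Gaussian-mixture sketch follows; the forecast, weight, and spread values in the test are illustrative placeholders.

```python
import math

def bma_predictive_pdf(y, forecasts, weights, sigmas):
    """BMA predictive density: a weighted mixture of Gaussians centered on
    the individual (bias-corrected) model forecasts. weights sum to 1;
    sigmas are the per-model predictive standard deviations."""
    return sum(
        w * math.exp(-0.5 * ((y - f) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
        for f, w, s in zip(forecasts, weights, sigmas)
    )
```

Training then amounts to choosing the weights and sigmas that maximize the log-likelihood of observed outcomes under this mixture, which is where EM and DREAM-MCMC differ in robustness and in the uncertainty information they return.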
Monte Carlo Simulations of Lattice Models for Single Polymer Systems
Hsiao-Ping Hsu
2015-03-03
Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to the chain length $N \\sim {\\cal O}(10^4)$. Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between $2$ and $\\sqrt{10}$, we investigate the conformations of polymer chains described by self-avoiding walks (SAWs) on the simple cubic lattice, and by random walks (RWs) and non-reversible random walks (NRRWs) in the absence of excluded volume (EV) interactions. In addition to flexible chains, we also extend our study to semiflexible chains for different stiffness controlled by a bending potential. The persistence lengths of chains extracted from the orientational correlations are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
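A minimal sketch of the underlying chain-growth idea: plain Rosenbluth sampling of self-avoiding walks on the simple cubic lattice, without the pruning and enrichment steps that PERM adds on top.

```python
import random

def rosenbluth_saw(n_steps, rng):
    """Grow one self-avoiding walk on the simple cubic lattice with the
    Rosenbluth method: at each step choose uniformly among the unoccupied
    neighbor sites and accumulate the weight prod_k (number of free choices),
    which corrects for the growth bias. Returns (walk, weight); a weight of
    0.0 signals a dead end (attrition)."""
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    walk = [(0, 0, 0)]
    occupied = {(0, 0, 0)}
    weight = 1.0
    for _ in range(n_steps):
        x, y, z = walk[-1]
        free = [(x + dx, y + dy, z + dz) for dx, dy, dz in moves
                if (x + dx, y + dy, z + dz) not in occupied]
        if not free:
            return walk, 0.0
        weight *= len(free)
        nxt = rng.choice(free)
        walk.append(nxt)
        occupied.add(nxt)
    return walk, weight
```

Observables are weight-averaged over many grown chains; PERM improves on this by cloning high-weight partial chains and pruning low-weight ones during growth, which is what makes chain lengths of order 10^4 reachable.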
A review of Monte Carlo simulations of polymers with PERM
Hsiao-Ping Hsu; Peter Grassberger
2011-07-06
In this review, we describe applications of the pruned-enriched Rosenbluth method (PERM), a sequential Monte Carlo algorithm with resampling, to various problems in polymer physics. PERM produces samples according to any given prescribed weight distribution, by growing configurations step by step with controlled bias, and correcting "bad" configurations by "population control". The latter is implemented, in contrast to other population-based algorithms such as genetic algorithms, by depth-first recursion, which avoids storing all members of the population in computer memory at the same time. The problems we discuss all concern single polymers (with one exception), but under various conditions: homopolymers in good solvents and at the $\Theta$ point, semi-stiff polymers, polymers in confining geometries, stretched polymers undergoing a forced globule-linear transition, star polymers, bottle brushes, lattice animals as a model for randomly branched polymers, DNA melting, and finally, as the only system at low temperatures, lattice heteropolymers as simple models for protein folding. For some of these problems PERM is the method of choice, but it can also fail. We discuss how to recognize when a result is reliable, and we also discuss some types of bias that can be crucial in guiding the growth in the right direction.
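The growth-with-population-control idea can be illustrated on the simplest PERM use case: estimating the number of short self-avoiding walks on the square lattice. This is a stripped-down sketch, not code from the review; the threshold constants (2.0 and 0.2) and the tour count are illustrative choices:

```python
import random

random.seed(2)
N = 4                        # target walk length (steps)
Z = [0.0] * (N + 1)          # running weight sums, indexed by length
tours = 0

def grow(pos, walk, w, n):
    """Grow a self-avoiding walk one step at a time, cloning high-weight
    configurations and pruning low-weight ones by depth-first recursion."""
    Z[n] += w
    if n == N:
        return
    x, y = pos
    free = [p for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if p not in walk]
    if not free:
        return                            # trapped walk: attrition
    nxt = random.choice(free)
    w *= len(free)                        # Rosenbluth weight update
    avg = Z[n + 1] / tours if Z[n + 1] > 0 else w
    if w > 2.0 * avg:                     # enrichment: two half-weight copies
        grow(nxt, walk | {nxt}, 0.5 * w, n + 1)
        grow(nxt, walk | {nxt}, 0.5 * w, n + 1)
    elif w < 0.2 * avg and random.random() < 0.5:
        return                            # pruning: kill half of them ...
    else:
        if w < 0.2 * avg:
            w *= 2.0                      # ... and double the survivors
        grow(nxt, walk | {nxt}, w, n + 1)

for _ in range(20000):
    tours += 1
    grow((0, 0), {(0, 0)}, 1.0, 0)

c4_estimate = Z[N] / tours   # the exact number of 4-step SAWs is 100
```

Enrichment splits a high-weight configuration into two half-weight copies through the recursion, and pruning kills half of the low-weight ones while doubling the survivors, so the weighted estimate Z[N]/tours stays unbiased; the depth-first recursion is exactly what lets PERM avoid holding the whole population in memory at once.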
MARKOV CHAIN MONTE CARLO FOR AUTOMATED TRACKING OF GENEALOGY IN MICROSCOPY VIDEOS
Kathleen Champion
... of the nuclei in the images and their genealogies. Evan Tice '09 has already developed some code that aims ...
Parallel Markov Chain Monte Carlo Methods for Large Scale Statistical Inverse Problems
Wang, Kainan
2014-04-18
but also the uncertainty of these estimations. Markov chain Monte Carlo (MCMC) is a useful technique to sample the posterior distribution and information can be extracted from the sampled ensemble. However, MCMC is very expensive to compute, especially...
Exponentially-convergent Monte Carlo for the One-dimensional Transport Equation
Peterson, Jacob Ross
2014-04-23
singular problems. Computational results are presented demonstrating the efficacy of the new approach. We tested our ECMC algorithm against standard Monte Carlo and found the ECMC method to be generally much more efficient. For a manufactured solution...
Improvements and applications of the Uniform Fission Site method in Monte Carlo
Hunter, Jessica Lynn
2014-01-01
Monte Carlo methods for reactor analysis have been in development with the eventual goal of full-core analysis. To attain results with reasonable uncertainties, large computational resources are needed. Variance reduction ...
APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula
Hwang, M.; Bae, S.; Chung, B. D. [Korea Atomic Energy Research Inst., 150 Dukjin-dong, Yuseong-gu, Daejeon (Korea, Republic of)
2012-07-01
An analysis of the uncertainty quantification for the PWR LBLOCA by the Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined by the PIRT results from the BEMUSE project. The Monte Carlo method shows that the 95th percentile PCT value can be obtained reliably with a 95% confidence level using Wilks' formula. The extra margin given by Wilks' formula over the true 95th percentile PCT from the Monte Carlo method was rather large. Even using the 3rd-order formula, the value calculated using Wilks' formula is nearly 100 K over the true value. It is shown that, with the ever increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame. (authors)
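The Wilks sample sizes behind such comparisons follow from a short order-statistics computation: the r-th largest of N runs bounds the 95th percentile with 95% confidence once a binomial tail condition holds. A sketch (the function name is ours):

```python
from math import comb

def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest N such that the order-th largest of N independent runs
    exceeds the `coverage` quantile with probability >= `confidence`."""
    n = order
    while True:
        n += 1
        # Probability that fewer than `order` runs land above the quantile,
        # i.e. that the one-sided tolerance bound fails to cover it.
        miss = sum(comb(n, k) * (1.0 - coverage) ** k * coverage ** (n - k)
                   for k in range(order))
        if 1.0 - miss >= confidence:
            return n
```

This reproduces the standard 95%/95% run counts: 59 runs at first order and 124 at third order. The higher-order formula demands more runs but yields a tighter, less conservative bound, which is the trade-off the abstract examines against a full Monte Carlo estimate of the 95th percentile.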
Walsh, Jonathan A. (Jonathan Alan)
2014-01-01
This thesis presents the development and analysis of computational methods for efficiently accessing and utilizing nuclear data in Monte Carlo neutron transport code simulations. Using the OpenMC code, profiling studies ...
Pasciak, Alexander Samuel
2007-04-25
Advancements in parallel and cluster computing have made many complex Monte Carlo simulations possible in the past several years. Unfortunately, cluster computers are large, expensive, and still not fast enough to make the ...
Wang, Li-Fang, Ph. D. Massachusetts Institute of Technology
2007-01-01
In this thesis research, a coherent scattering model for microwave remote sensing of vegetation canopy is developed on the basis of Monte Carlo simulations. An accurate model of vegetation structure is essential for the ...
Matrix Elements with Vetoes in the CASCADE Monte Carlo Event Generator
Michal Deak; Francesco Hautmann; Hannes Jung; Krzysztof Kutak
2012-06-08
We illustrate a study based on a veto technique to match parton showers and matrix elements in the Cascade Monte Carlo event generator, and present a numerical application to gluon matrix elements for jet production.
Shifting Preferences and Time-Varying Parameters in Demand Analysis: A Monte Carlo Study
Kanyama, Isaac Kalonda
2011-05-31
Using Monte Carlo experiments, I address two issues in demand analysis. The first relates to the performance of local flexible functional forms in recovering the time-varying elasticities of a true model, and in correctly identifying goods...
Monte Carlo and thermal hydraulic coupling using low-order nonlinear diffusion acceleration
Herman, Bryan R. (Bryan Robert)
2014-01-01
Monte Carlo (MC) methods for reactor analysis are most often employed as a benchmark tool for other transport and diffusion methods. In this work, we identify and resolve a few of the issues associated with using MC as a ...
Show me the way to Monte Carlo: density-based trajectory ...
Strachan, Steven; Murray-Smith, Roderick
with a combination of Global Positioning System data, a music player, inertial sensing, magnetic bearing data, and Monte Carlo sampling, and modulates a listener's music in order to guide them
Xu, Sheng, S.M. Massachusetts Institute of Technology
2013-01-01
In order to use Monte Carlo methods for reactor simulations beyond benchmark activities, the traditional way of preparing and using nuclear cross sections needs to be changed, since large datasets of cross sections at many ...
Protein folding and phylogenetic tree reconstruction using stochastic approximation Monte Carlo
Cheon, Sooyoung
2007-09-17
Recently, the stochastic approximation Monte Carlo algorithm has been proposed by Liang et al. (2005) as a general-purpose stochastic optimization and simulation algorithm. An annealing version of this algorithm was developed for real small protein...
Northum, Jeremy Dell
2011-08-08
The purpose of this study was to determine how well the Monte Carlo transport code FLUKA can simulate a tissue-equivalent proportional counter (TEPC) and produce the expected delta ray events when exposed to high energy ...
Quadratic Diffusion Monte-Carlo Algorithms for Solving Atomic Many-Body Problems
Chin, Siu A.
1990-01-01
The diffusion Monte Carlo algorithm with and without importance sampling is analyzed in terms of the algorithm's underlying transfer matrix. The crucial role played by the Langevin algorithm in the importance-sampling ...
Fourth-order diffusion Monte Carlo algorithms for solving quantum many-body problems
Forbert, HA; Chin, Siu A.
2001-01-01
By decomposing the importance-sampled imaginary-time Schrodinger evolution operator to fourth order with positive coefficients, we derived a number of distinct fourth-order diffusion Monte Carlo algorithms. These sophisticated ...
Radiative transfer in the earth's atmosphere-ocean system using Monte Carlo techniques
Bradley, Paul Andrew
1987-01-01
... radiance values in both the atmosphere and the ocean from the scattering functions and other input data, with a Monte Carlo computer code. The polarization of the radiation was taken into account by Kattawar et al. in their computation ...
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
THEORETICAL STUDY OF MULTILAYER LUMINESCENT SOLAR CONCENTRATORS USING A MONTE CARLO APPROACH
... cost is subject to a highly volatile market. Solar concentrators usually make use of mobile mirrors able ... a theoretical study of luminescent solar concentrators (LSCs) based on a ray-tracing technique with a Monte Carlo ...
Quantum Monte Carlo methods and lithium cluster properties
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.
MONTE CARLO SIMULATION OF METASTABLE OXYGEN PHOTOCHEMISTRY IN COMETARY ATMOSPHERES
Bisikalo, D. V.; Shematovich, V. I. [Institute of Astronomy of the Russian Academy of Sciences, Moscow (Russian Federation); Gérard, J.-C.; Hubert, B. [Laboratory for Planetary and Atmospheric Physics (LPAP), University of Liège, Liège (Belgium); Jehin, E.; Decock, A. [Origines Cosmologiques et Astrophysiques (ORCA), University of Liège (Belgium); Hutsemékers, D. [Extragalactic Astrophysics and Space Observations (EASO), University of Liège (Belgium); Manfroid, J., E-mail: B.Hubert@ulg.ac.be [High Energy Astrophysics Group (GAPHE), University of Liège (Belgium)
2015-01-01
Cometary atmospheres are produced by the outgassing of material, mainly H{sub 2}O, CO, and CO{sub 2}, from the nucleus of the comet under the energy input from the Sun. Subsequent photochemical processes lead to the production of other species generally absent from the nucleus, such as OH. Although all comets are different, they all have a highly rarefied atmosphere, which is an ideal environment for nonthermal photochemical processes to take place and influence the detailed state of the atmosphere. We develop a Monte Carlo model of the coma photochemistry. We compute the energy distribution functions (EDF) of the metastable O({sup 1}D) and O({sup 1}S) species and obtain the red (630 nm) and green (557.7 nm) spectral line shapes of the full coma, consistent with the computed EDFs and the expansion velocity. We show that both species have a severely non-Maxwellian EDF, which results in broad spectral lines in which suprathermal broadening dominates over that due to the expansion motion. We apply our model to the atmospheres of comets C/1996 B2 (Hyakutake) and 103P/Hartley 2. The computed width of the green line, expressed in terms of speed, is lower than that of the red line. This result is comparable to previous theoretical analyses, but in disagreement with observations. We explain that the spectral line shape depends not only on the exothermicity of the photochemical production mechanisms, but also on thermalization by elastic collisions, which reduces the width of the emission line from the O({sup 1}D) level, which has a longer lifetime.
Improved quantum Monte Carlo calculation of the ground-state energy of the hydrogen molecule
Anderson, James B.
... Monte Carlo calculation of the nonrelativistic ground-state energy of the hydrogen molecule, without the use of ... calculations of the energy of the hydrogen molecule and increasingly accurate experimental measurements ...
Structural Stability and Defect Energetics of ZnO from Diffusion Quantum Monte Carlo
Santana Palacio, Juan A [ORNL; Krogel, Jaron T [ORNL; Kim, Jeongnim [ORNL; Kent, Paul R [ORNL; Reboredo, Fernando A [ORNL
2015-01-01
We have applied the many-body ab-initio diffusion quantum Monte Carlo (DMC) method to study Zn and ZnO crystals under pressure, and the energetics of the oxygen vacancy, zinc interstitial, and hydrogen impurities in ZnO. We show that DMC is an accurate and practical method that can be used to characterize multiple properties of materials that are challenging for density functional theory approximations. DMC agrees with experimental measurements to within 0.3 eV, including the band-gap of ZnO, the ionization potential of O and Zn, and the atomization energy of O2, the ZnO dimer, and wurtzite ZnO. DMC predicts the oxygen vacancy as a deep donor with a formation energy of 5.0(2) eV under O-rich conditions and thermodynamic transition levels located between 1.8 and 2.5 eV from the valence band maximum. Our DMC results indicate that the concentration of zinc interstitial and hydrogen impurities in ZnO should be low under n-type, and Zn- and H-rich conditions because these defects have formation energies above 1.4 eV under these conditions. Comparison of DMC and hybrid functionals shows that these DFT approximations can be parameterized to yield a generally correct qualitative description of ZnO. However, the formation energy of defects in ZnO evaluated with DMC and hybrid functionals can differ by more than 0.5 eV.
Final Report: 06-LW-013, Nuclear Physics the Monte Carlo Way
Ormand, W E
2009-03-01
This document reports the progress and accomplishments achieved in 2006-2007 with LDRD funding under the proposal 06-LW-013, 'Nuclear Physics the Monte Carlo Way'. The project was a theoretical study to explore a novel approach to dealing with a persistent problem in Monte Carlo approaches to quantum many-body systems. The goal was to implement a solution to the notorious 'sign problem', which, if successful, would permit, for the first time, exact solutions to quantum many-body systems that cannot be addressed with other methods. This project was funded under the Lab Wide LDRD competition at Lawrence Livermore National Laboratory. The primary objective of this project was to test the feasibility of implementing a novel approach to solving the generic quantum many-body problem, which is one of the most important problems being addressed in theoretical physics today. Instead of traditional methods based on matrix diagonalization, this proposal focused on a Monte Carlo method. The principal difficulty with Monte Carlo methods is the so-called 'sign problem'. The sign problem, which will be discussed in some detail later, is endemic to Monte Carlo approaches to the quantum many-body problem, and is the principal reason that they have not been completely successful in the past. Here, we outline our research in the 'shifted-contour method' applied to the Auxiliary Field Monte Carlo (AFMC) method.
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
Farlotti, M.; Larsen, E. W.
2013-07-01
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
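The core of the fission matrix method is an eigenvalue problem: once Monte Carlo transport has tallied a matrix F, where F[i][j] estimates the fissions produced in region i per source fission born in region j, the criticality and the critical source shape are the dominant eigenpair of F. A minimal power-iteration sketch on an illustrative 3-region matrix (not a real pool tally):

```python
def power_iteration(F, iters=200):
    """Dominant eigenvalue (the criticality estimate) and eigenvector
    (the critical fission source shape) of a fission matrix F."""
    n = len(F)
    s = [1.0 / n] * n                     # flat initial source guess
    k = 1.0
    for _ in range(iters):
        t = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(t)                        # eigenvalue estimate: s sums to 1
        s = [x / k for x in t]
    return k, s

# Illustrative 3-region matrix: F[i][j] ~ fissions born in region i per
# source fission in region j (NOT a tallied pool matrix).
F = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
k_eff, source = power_iteration(F)        # converges to 2 + sqrt(2)
```

Because the matrix is small compared to the transport problem, this iteration is essentially free, which is why the method can report the eigenpair, and an error estimate on it, at every cycle.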
Crossing the mesoscale no-mans land via parallel kinetic Monte Carlo.
Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
2009-10-01
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
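The rejection-free step shared by kinetic Monte Carlo variants selects one event per step with probability proportional to its rate and advances the clock by an exponentially distributed increment. A minimal sketch of that step, not the SPPARKS implementation; the two-channel rate list is illustrative:

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free KMC step: choose an event with probability
    proportional to its rate, then draw the exponential waiting time."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1               # fallback guards against roundoff
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total   # Exp(total) increment
    return chosen, dt

rng = random.Random(3)
rates = [1.0, 3.0]                        # two competing event channels
counts = [0, 0]
t = 0.0
for _ in range(20000):
    i, dt = kmc_step(rates, rng)
    counts[i] += 1
    t += dt
# channel 1 should fire about 75% of the time; mean waiting time ~ 1/4
```

In a materials model the rate list is rebuilt (or incrementally updated) after each event as the local configuration changes; parallelizing that update and the event selection across domains is the hard part the report addresses.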
Monte Carlo implementation of a guiding-center Fokker-Planck kinetic equation
Hirvijoki, E.; Snicker, A.; Kurki-Suonio, T. [Department of Applied Physics, Aalto University, FI-00076 Aalto (Finland)] [Department of Applied Physics, Aalto University, FI-00076 Aalto (Finland); Brizard, A. [Department of Physics, Saint Michael's College, Colchester, Vermont 05439 (United States)] [Department of Physics, Saint Michael's College, Colchester, Vermont 05439 (United States)
2013-09-15
A Monte Carlo method for the collisional guiding-center Fokker-Planck kinetic equation is derived in the five-dimensional guiding-center phase space, where the effects of magnetic drifts due to the background magnetic field nonuniformity are included. It is shown that, in the limit of a homogeneous magnetic field, our guiding-center Monte Carlo collision operator reduces to the guiding-center Monte Carlo Coulomb operator previously derived by Xu and Rosenbluth [Phys. Fluids B 3, 627 (1991)]. Applications of the present work will focus on the collisional transport of energetic ions in complex nonuniform magnetized plasmas in the large mean-free-path (collisionless) limit, where magnetic drifts must be retained.
A Proposal for a Standard Interface Between Monte Carlo Tools And One-Loop Programs
Binoth, T.; Boudjema, F.; Dissertori, G.; Lazopoulos, A.; Denner, A.; Dittmaier, S.; Frederix, R.; Greiner, N.; Hoeche, Stefan; Giele, W.; Skands, P.; Winter, J.; Gleisberg, T.; Archibald, J.; Heinrich, G.; Krauss, F.; Maitre, D.; Huber, M.; Huston, J.; Kauer, N.; Maltoni, F.; /Louvain U., CP3 /Milan Bicocca U. /INFN, Turin /Turin U. /Granada U., Theor. Phys. Astrophys. /CERN /NIKHEF, Amsterdam /Heidelberg U. /Oxford U., Theor. Phys.
2011-11-11
Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently been undergoing enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarises the discussion of the next-to-leading order multi-leg (NLM) working group on this issue which has been taking place during the workshop on Physics at TeV Colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs.
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, William P. (Tracy, CA); Hartmann-Siantar, Christine L. (San Ramon, CA); Rathkopf, James A. (Livermore, CA)
1999-01-01
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
Advanced Mesh-Enabled Monte Carlo Capability for Multi-Physics Reactor Analysis
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24
This project will accumulate high-precision fluxes throughout the reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in the high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate the adjoint neutron fluxes required to properly determine important reactivity parameters.
Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well-suited to coupling with the unstructured meshes that are used in other physics simulations.
Monte Carlo techniques of simulation applied to a single item inventory system
Aldred, William Murray
1965-01-01
... as it operates. Now that the basic principles and requirements of a simulation study have been outlined, it seems appropriate to discuss one of the better methods of reducing the data to a form suitable for simulation by a computer: the Monte Carlo technique ...
Monte Carlo simulations of the HP model (the "Ising model" of protein folding)
Li, Ying Wai; Landau, David P; 10.1016/j.cpc.2010.12.049
2011-01-01
Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.
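Wang-Landau sampling estimates the density of states g(E) directly with a random walk biased toward rarely visited energies. As a self-contained illustration we sketch it for a small 1D Ising ring rather than the HP model (so no pull or bond-rebridging moves are needed); the flatness criterion (80%) and the modification-factor schedule are common textbook choices, not values from the paper:

```python
import math
import random

random.seed(4)
N = 8                                     # spins on a periodic ring
spins = [1] * N
E = -N                                    # energy of the all-up state

ln_g = {}                                 # running estimate of ln g(E)
ln_f = 1.0                                # modification factor
while ln_f > 1e-3:
    hist = {}
    flat = False
    while not flat:
        for _ in range(20000):
            i = random.randrange(N)
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
            # accept with min(1, g(E)/g(E+dE)) to flatten visits over E
            if random.random() < math.exp(
                    min(0.0, ln_g.get(E, 0.0) - ln_g.get(E + dE, 0.0))):
                spins[i] = -spins[i]
                E += dE
            ln_g[E] = ln_g.get(E, 0.0) + ln_f
            hist[E] = hist.get(E, 0) + 1
        counts = list(hist.values())
        flat = min(counts) > 0.8 * sum(counts) / len(counts)
    ln_f *= 0.5                           # refine and repeat
```

For this ring the exact density of states is g(-8) = g(8) = 2, g(-4) = g(4) = 56, g(0) = 140, which the estimated ln g reproduces up to an additive constant; thermodynamic quantities at any temperature then follow from g(E) by reweighting, just as in the HP-model study.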
Spin-orbit induced backflow in neutron matter with auxiliary field diffusion Monte Carlo
L. Brualla; S. Fantoni; A. Sarsa; K. E Schmidt; S. A. Vitiello
2003-04-14
The energy per particle of zero-temperature neutron matter is investigated, with particular emphasis on the role of the $\vec{L} \cdot \vec{S}$ interaction. An analysis of the importance of explicit spin-orbit correlations in the description of the system is carried out by the auxiliary field diffusion Monte Carlo method. The improved nodal structure of the guiding function, constructed by explicitly considering these correlations, lowers the energy. The proposed spin-backflow orbitals can conveniently be used also in Green's Function Monte Carlo calculations of light nuclei.
Chung, Kiwhan
1996-01-01
While the use of the Monte Carlo method has been prevalent in nuclear engineering, it has yet to fully blossom in the study of solute transport in porous media. By using an etched-glass micromodel, an attempt is made to apply the Monte Carlo method...
Sailhac, Pascal
Inversion of surface nuclear magnetic resonance data by an adapted Monte Carlo method applied ...
Inversion of surface nuclear magnetic resonance (SNMR) data provides important information ...
Anderson, James B.
Direct Monte Carlo simulation of chemical reaction systems: internal energy transfer and an energy-dependent ... a direct Monte Carlo simulation of an energy-dependent termolecular reaction system of the type A + B ... simulation of a unimolecular reaction with an energy-dependent rate constant k3 and with explicit treatment ...
Mezei, Mihaly
Efficient Monte Carlo sampling for long molecular chains using local moves, tested on a solvated ...
An improved acceptance criterion for the local-move Monte Carlo method, in which trial steps change only seven ...
Received 20 February 2002; accepted 27 November 2002
Usefulness of the reversible jump Markov chain Monte Carlo model in regional flood frequency
Ribatet, Mathieu
Revised 3 May 2007; accepted 17 May 2007; published 3 August 2007. Regional flood frequency analysis and the index flood approach ... Results show that the proposed estimator is well suited to regional ...
Introduction to Markov Chain Monte Carlo Simulations and their Statistical Analysis
Bernd A. Berg
2004-10-19
This article is a tutorial on Markov chain Monte Carlo simulations and their statistical analysis. The theoretical concepts are illustrated through many numerical assignments from the author's book on the subject. Computer code (in Fortran) is available for all subjects covered and can be downloaded from the web.
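As a minimal illustration of the kind of Markov chain Monte Carlo simulation the tutorial covers (the tutorial's own code is in Fortran; this Python sketch is not from the article), here is a random-walk Metropolis sampler for a standard normal target:

```python
import math
import random

def metropolis(log_pdf, x0, step, n_samples, burn_in=1000, seed=0):
    """Random-walk Metropolis: propose x' = x + U(-step, step) and accept
    with probability min(1, pi(x')/pi(x)); on rejection the current state
    is repeated in the chain."""
    rng = random.Random(seed)
    x, lp = x0, log_pdf(x0)
    chain = []
    for i in range(n_samples + burn_in):
        y = x + rng.uniform(-step, step)                 # symmetric proposal
        ly = log_pdf(y)
        if rng.random() < math.exp(min(0.0, ly - lp)):   # Metropolis test
            x, lp = y, ly
        if i >= burn_in:
            chain.append(x)
    return chain

# draw from a standard normal via its unnormalized log density
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 2.0, 50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The statistical analysis of such a chain (autocorrelations, binning and jackknife error bars) is the main subject of the tutorial; the raw sample variance above ignores the correlation between successive draws.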
Calculating Risk of Cost Using Monte Carlo Simulations with Fuzzy Parameters in Civil Engineering
Pownuk, Andrzej
Risk is a part of almost all civil engineering projects; sources of cost risk include the contractor's lack of experience, poor labor productivity, and project changes [10, 6]. Contact: pownuk@zeus.polsl.gliwice.pl, http://zeus.polsl.gliwice.pl/~pownuk. August 1, 2004.
Use of single scatter electron Monte Carlo transport for medical radiation sciences
Svatos, Michelle M. (Oakland, CA)
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
Assessing fire risk using Monte Carlo simulations of fire spread
Carmel, Yohay; Paz, Shlomit (University of Haifa, Haifa, Israel)
Fires are a major source of forest destruction in the Mediterranean. Mediterranean fires are largely determined by climatic conditions: long, dry summers with high ...
Autologistic Regression Analysis of Spatial-Temporal Binary Data via Monte Carlo Maximum Likelihood
Aukema, Brian
We consider regression analysis of binary data that are measured on a spatial lattice and repeatedly over discrete time points. We propose a spatial-temporal autologistic regression model and draw statistical inference via Monte Carlo maximum likelihood.
Quantum Monte Carlo study of a disordered 2D Josephson junction array
Stroud, David
W.A. Al-Saidi and D. Stroud. ... could not be established even ...
Monte-Carlo simulations of polymer crystallization in dilute solution
Chen, Chi-Ming
C.-M. Chen and Paul G. ... Received 7 July 1997; accepted 8 December 1997. Polymer crystallization in dilute solution is studied ... carbon atoms, and we also investigate chain folding of very long polymers for monodisperse flexible ...
Monte Carlo Tree Search for Simulated Car Racing
Togelius, Julian
Jacob Fischer, Nikolaj Falsted, Mathias ... In this paper, we investigate the application of MCTS to simulated car racing and how the algorithm must be modified to achieve this. Simulated car racing presents interesting challenges to artificial intelligence (AI) ...
Performance Characteristics of Cathode Materials for Lithium-Ion Batteries: A Monte Carlo Strategy
Subramanian, Venkat
Published September 26, 2008. A Monte Carlo strategy to study the performance of cathode materials in lithium-ion batteries. The methodology takes into account ... Lithium-ion batteries are state-of-the-art power sources for portable ...
Alcouffe, R.E.
1985-01-01
A difficult class of problems for the discrete-ordinates neutral particle transport method is to accurately compute the flux due to a spatially localized source. Because the transport equation is solved for discrete directions, the so-called ray effect causes the flux at space points far from the source to be inaccurate. Thus, in general, discrete ordinates would not be the method of choice to solve such problems; it is better suited for calculating problems with significant scattering. The Monte Carlo method is suited to localized source problems, particularly if the amount of collisional interaction is minimal. However, if there are many scattering collisions and the flux at all space points is desired, then the Monte Carlo method becomes expensive. To take advantage of the attributes of both approaches, we have devised a first collision source method to combine the Monte Carlo and discrete-ordinates solutions. That is, particles are tracked from the source to their first scattering collision and tallied to produce a source for the discrete-ordinates calculation. A scattered flux is then computed by discrete ordinates, and the total flux is the sum of the Monte Carlo and discrete-ordinates calculated fluxes. In this paper, we present calculational results using the MCNP and TWODANT codes for selected two-dimensional problems that show the effectiveness of this method.
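The first leg of the hybrid scheme described above, tracking particles analog from the source to their first scattering collision and tallying them as a volumetric source, can be sketched in one dimension. This is an illustrative reconstruction, not the MCNP/TWODANT implementation: the cross section, bin count, and planar-source geometry are assumed, and the subsequent discrete-ordinates sweep is omitted.

```python
import math
import random

def first_collision_source(sigma_t=1.0, n_particles=200000, n_bins=20,
                           slab_width=5.0, seed=2):
    """Tally first-collision sites of particles born at x = 0 moving in +x.

    Toy 1-D illustration of the first-collision-source idea: the binned
    tallies would feed a discrete-ordinates solver as a fixed source.
    Returns (source density per unit length, leakage fraction).
    """
    rng = random.Random(seed)
    dx = slab_width / n_bins
    source = [0.0] * n_bins
    leaked = 0
    for _ in range(n_particles):
        # exponential free-flight distance to the first collision
        d = -math.log(1.0 - rng.random()) / sigma_t
        if d >= slab_width:
            leaked += 1          # escapes without colliding
        else:
            source[int(d / dx)] += 1.0
    # normalize to collisions per unit length per source particle
    return [s / (n_particles * dx) for s in source], leaked / n_particles

src, leak = first_collision_source()
```

The binned tallies reproduce the analytic first-collision density sigma_t * exp(-sigma_t * x), with the statistical noise that motivates handing the scattered flux over to a deterministic solver.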
Monte Carlo Adaptive Technique for Sensitivity Analysis of a Large-scale Air Pollution Model
Dimov, Ivan
Ivan Dimov ... Sensitivity analysis of the contribution of input parameters to the output variability of a large-scale air pollution model. This model simulates the transport of air pollutants and has been developed by Dr. Z. Zlatev and his ...
Monte Carlo simulation of liquid bridge rupture: Application to lung physiology
Alencar, Adriano Mesquita
In certain lung diseases, the surface properties and the amount of fluid coating the airways change ... similar bridges that exist in diseased lungs. DOI: 10.1103/PhysRevE.74.026311.
arXiv:physics/0001047 (22 Jan 2000): Path Integral Monte Carlo Calculation of the Deuterium Hugoniot
Militzer, Burkhard
B. Militzer ..., University of Illinois at Urbana-Champaign, Urbana, IL 61801 (January 21, 2000). Restricted path integral ... of the path integral. Further, we compare the results obtained with a free-particle nodal restriction ...
10,000 STANDARD SOLAR MODELS: A MONTE CARLO SIMULATION
Bahcall, John
We describe whether a given prediction from solar models agrees or disagrees with a measured value. We proceed by constructing quantities to describe the statistical significance of comparisons between solar model predictions and measurements; this is a systematic attempt to use Monte Carlo simulations to determine the uncertainties in solar model predictions.
Monte Carlo Methods for Equilibrium and Nonequilibrium Problems in Interfacial Electrochemistry
Gregory Brown; Per Arne Rikvold; S. J. Mitchell; M. A. Novotny
1998-05-11
We present a tutorial discussion of Monte Carlo methods for equilibrium and nonequilibrium problems in interfacial electrochemistry. The discussion is illustrated with results from simulations of three specific systems: bromine adsorption on silver (100), underpotential deposition of copper on gold (111), and electrodeposition of urea on platinum (100).
Ryan, Dominic
Monte Carlo simulations of transverse spin freezing in the three-dimensional frustrated Heisenberg model. The transverse components of the spins freeze, leading to a noncollinear spin structure dominated by ferromagnetic correlations ... as the transverse degrees of freedom order. Theoretical support for a transverse spin-freezing transition ...
A Monte Carlo Method Used for the Identification of the Muscle Spindle
Rigas, Alexandros
A Monte Carlo method is used to identify the behavior of the muscle spindle by using a logistic regression model. The system receives input from ... The muscle spindle is part of the skeletal muscles and is responsible for the initiation of movement and the maintenance ... Key words: exact logistic regression, likelihood function, Monte Carlo technique, muscle spindle.
MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation
Meyer, Arnd
2010-02-10
A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.
Monte Carlo Characterization of a Pulsed Laser-Wakefield Driven Monochromatic X-Ray Source
Umstadter, Donald
The Diocles facility at the University of Nebraska-Lincoln (UNL) is a 100-TW, 30-fs pulsed Ti:sapphire laser system. Diocles is routinely used to accelerate electron beams by means of laser-wakefield acceleration, which ...
Menut, Laurent
Bayesian Monte Carlo analysis applied to regional-scale inverse emission modeling for reactive ... The a priori uncertainties in anthropogenic NOx and volatile organic compound (VOC) emissions are addressed: (1) the a posteriori probability density function (pdf) for NOx emissions is not modified in its average ...
A quantum Monte Carlo calculation of the ground state energy of the hydrogen molecule
Anderson, James B.
Carol A. ... Received 20 August 1990; accepted 6 November 1990. We report here calculations of the ground state energy for the relatively simple system of the hydrogen molecule. We have calculated the ground state energy ...
Thermodynamics and quark susceptibilities: a Monte-Carlo approach to the PNJL model
Weise, Wolfram
... on the thermodynamics of the model, both in the case of pure gauge theory and including two quark flavors. In the two-flavor case, we calculate the second-order Taylor expansion coefficients of the thermodynamic grand potential ...
Bayes and Big Data: The Consensus Monte Carlo Algorithm
Cortes, Corinna
Steven L. Scott, Alexander W. Blocker, ... October 31, 2013. A useful definition of "big data" is data that is too big to comfortably ... by splitting data across multiple machines. Communication between large numbers of machines is expensive ...
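The combination step at the heart of Consensus Monte Carlo is compact enough to sketch: each machine samples its own "subposterior" from a shard of the data, and the draws are merged by a precision-weighted average, which is exact when the subposteriors are Gaussian. The shard parameters below are illustrative assumptions, not numbers from the paper.

```python
import random

def consensus_draws(shard_draws, shard_vars):
    """Merge per-shard posterior draws with precision weights w_s = 1/var_s.

    This is the Consensus Monte Carlo combination rule; for Gaussian
    subposteriors the merged draws follow the full-data posterior exactly.
    """
    weights = [1.0 / v for v in shard_vars]
    wsum = sum(weights)
    n_draws = len(shard_draws[0])
    return [sum(w * d[i] for w, d in zip(weights, shard_draws)) / wsum
            for i in range(n_draws)]

# illustrative example: two machines hold Gaussian subposteriors
# N(1, 4) and N(3, 1); their product density is N(2.6, 0.8)
rng = random.Random(0)
shard1 = [rng.gauss(1.0, 2.0) for _ in range(100000)]
shard2 = [rng.gauss(3.0, 1.0) for _ in range(100000)]
combined = consensus_draws([shard1, shard2], [4.0, 1.0])
m = sum(combined) / len(combined)
v = sum((x - m) ** 2 for x in combined) / len(combined)
```

Only the final draws cross the network, which is the point of the algorithm: the expensive per-machine sampling happens with no communication at all.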
The polarized emissivity of a wind-roughened sea surface: A Monte Carlo model
Theiler, James
Bradley G. Henderson ... A Monte Carlo model of the ...-infrared emissivity of a wind-roughened sea surface. The model includes the effects of both shadowing and the reflected component of surface emission. By using Stokes vectors to quantify the radiation along a given ray ...
Auxiliary Field Diffusion Monte Carlo calculation of ground state properties of neutron drops
Francesco Pederiva; A. Sarsa; K. E. Schmidt; S. Fantoni
2004-03-23
The Auxiliary Field Diffusion Monte Carlo method has been applied to simulate droplets of 7 and 8 neutrons. Results for realistic nucleon-nucleon interactions, which include tensor, spin--orbit and three--body forces, plus a standard one--body confining potential, have been compared with analogous calculations obtained with Green's Function Monte Carlo methods. We have studied the dependence of the binding energy, the one--body density and the spin--orbit splittings of $^7n$ on the depth of the confining potential. The results obtained show an overall agreement between the two quantum Monte Carlo methods, although there persist differences in the evaluation of spin--orbit forces, as previously indicated by bulk neutron matter calculations. Energy density functional models, largely used in astrophysical applications, seem to provide results significantly different from those of quantum simulations. Given its scaling behavior in the number of nucleons, the Auxiliary Field Diffusion Monte Carlo method seems to be one of the best candidates to perform {\sl ab initio} calculations on neutron-rich nuclei.
Alfè, Dario
Structural properties and enthalpy of formation of magnesium hydride from quantum Monte Carlo calculations. Quantum Monte Carlo calculations are used to study the structural properties of magnesium hydride (MgH2), including the pressure ... The energetics of metal hydrides has recently become an issue of large scientific ...
First-row hydrides: Dissociation and ground state energies using quantum Monte Carlo
Anderson, James B.
Arne Lu ... Received 20 May 1996; accepted 24 July 1996. Accurate ground state energies comparable ... FN-DQMC method. The residual energy, the nodal error due to the error in the nodal structure ...
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
SDEs in Finance: in computational finance, stochastic differential equations are used to model the behaviour of stocks, interest rates, exchange rates, weather, electricity/gas demand, crude oil prices, ...
Multivariate Population Balances via Moment and Monte Carlo Simulation Methods: An Important Sol... For applications of current/future importance, a multivariate description is required, for which the existing ... will, hopefully, motivate a broader attack on important multivariate population balance problems, including those ...
Rotating and static sources for gamma knife radiosurgery systems: Monte Carlo studies
Yu, Peter K.N.
J. Y. C. ... the 201 static sources of the Leksell gamma knife (LGK). The rotating sources of RGSs simulate an infinite number ... by the surrounding normal brain tissues, which is a resultant of 201 static 60Co sources. Each individual beam ...
QUEEG: A Monte Carlo Event Generator for Quasielastic Scattering on Deuterium
Gilfoyle, Jerry
G.P. Gilfoyle, J. ... QUEEG is an event generator for quasielastic scattering off nucleons in deuterium. Examples of the use of the event generator are shown. The source and Makefiles are available in the CLAS12 ... This work was motivated ...
Monte Carlo Simulation of Alzheimer's Disease in the United States: 2010-2060
Feres, Renato
Michael Blech ... Alzheimer's disease is among the ... concerns facing the United States over the next 50 years. This progressive disease is currently the sixth ... The simulation is based, first, on the United States population, and second, it models both prevalence and mortality. Both ...
Monte Carlo model for analysis of thermal runaway electrons in streamer tips in transient luminous events
Pasko, Victor
Thermal runaway electrons are studied in streamer tips in transient luminous events (TLEs) termed sprites, which occur in the altitude range 40-90 km in the Earth['s atmosphere]. The modeling results indicate that the ~10 Ek fields are able to accelerate a fraction of low-energy (several eV) ...
Monte Carlo study of a luminosity detector for the International Linear Collider
H. Abramowicz; R. Ingbir; S. Kananov; A. Levy
2005-08-11
This paper presents the status of Monte Carlo simulation of one of the luminosity detectors considered for the future e+e- International Linear Collider (ILC). The detector consists of a tungsten/silicon sandwich calorimeter with pad readout. The study was performed for Bhabha scattering events assuming a zero crossing angle for the beams.
A Scalable Parallel Monte Carlo Method for Free Energy Simulations of Molecular Systems
Chan, Derek Y C
Malek O. ... The method applies for problems where the energy dominates the entropy. An example is parallel tempering, in which simulations ... the free energy of the system as a direct output of the simulation. Traditional Metropolis MC samples phase space ...
Ab-initio Kinetic Monte Carlo Model of Ionic Conduction in Bulk Yttria-stabilized Zirconia
Cai, Wei
Eunseok ... An interacting energy-barrier model is developed for ionic conduction in bulk single-crystal yttria-stabilized zirconia (YSZ) ... dynamics to simulate the vacancy diffusion in YSZ. They concluded ...
Hale, Barbara N.
CALCULATION OF SCALED NUCLEATION RATES FOR WATER USING MONTE CARLO GENERATED CLUSTER FREE ENERGY ... ABSTRACT: Helmholtz free energy differences, -dFn, are calculated ... inconsistent with the experimental properties of water. Summation of the scaled TIP4P free energy differences ...
Local and chain dynamics in miscible polymer blends: A Monte Carlo simulation study
Luettmer-Strathmann, Jutta
Jutta Luettmer-Strathmann. Received 7 November 2005; accepted 1 March 2006; published online 5 May 2006. Local chain ... of the chains. These are combined with a local mobility determined from the acceptance rate and the effective ...
Green's function Monte Carlo calculation for the ground state of helium trimers
Cabral, F.; Kalos, M.H.
1981-02-01
The ground state energy of weakly bound boson trimers interacting via Lennard-Jones (12,6) pair potentials is calculated using a Monte Carlo Green's Function Method. Threshold coupling constants for self binding are obtained by extrapolation to zero binding.
Instabilities in Molecular Dynamics Integrators used in Hybrid Monte Carlo Simulations
B. Joo; UKQCD Collaboration
2001-10-11
We discuss an instability in the leapfrog integration algorithm, widely used in current Hybrid Monte Carlo (HMC) simulations of lattice QCD. We demonstrate the instability in the simple harmonic oscillator (SHO) system, where it is manifest. We then demonstrate the instability in HMC simulations of lattice QCD with dynamical Wilson-Clover fermions and discuss implications for future simulations of lattice QCD.
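The SHO instability the authors demonstrate is easy to reproduce: for H = p^2/2 + x^2/2 the leapfrog update is linearly stable only for step sizes below 2/omega, and the energy error explodes beyond that. A minimal sketch (not the UKQCD code; units with m = omega = 1 are assumed):

```python
import math

def leapfrog_energy_drift(dt, n_steps=2000, x0=1.0, p0=0.0):
    """Integrate the SHO  H = p^2/2 + x^2/2  with leapfrog and return the
    maximum |H - H0| along the trajectory.  Linear stability requires
    dt < 2/omega (= 2 here); beyond that the energy grows exponentially,
    which is the instability discussed for HMC."""
    x, p = x0, p0
    h0 = 0.5 * (p * p + x * x)
    drift = 0.0
    for _ in range(n_steps):
        p -= 0.5 * dt * x          # half kick (force = -x)
        x += dt * p                # drift
        p -= 0.5 * dt * x          # half kick
        drift = max(drift, abs(0.5 * (p * p + x * x) - h0))
    return drift
```

Below the threshold the energy error stays bounded (the shadow Hamiltonian is conserved), so HMC acceptance stays high; just above it the error grows exponentially with trajectory length.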
The effects of mapping CT images to Monte Carlo materials on GEANT4 proton simulation accuracy
Barnes, Samuel; McAuley, Grant; Slater, James; Wroe, Andrew
2013-04-15
Purpose: Monte Carlo simulations of radiation therapy require conversion from Hounsfield units (HU) in CT images to an exact tissue composition and density. The number of discrete densities (or density bins) used in this mapping affects the simulation accuracy, execution time, and memory usage in GEANT4 and other Monte Carlo codes. The relationship between the number of density bins and CT noise was examined in general for all simulations that use HU conversion to density. Additionally, the effect of this on simulation accuracy was examined for proton radiation. Methods: Relative uncertainty from CT noise was compared with uncertainty from density binning to determine an upper limit on the number of density bins required in the presence of CT noise. Error propagation analysis was also performed on continuously slowing down approximation range calculations to determine the proton range uncertainty caused by density binning. These results were verified with Monte Carlo simulations. Results: In the presence of even modest CT noise (5 HU or 0.5%), 450 density bins were found to cause only a 5% increase in the density uncertainty (i.e., 95% of density uncertainty from CT noise, 5% from binning). Larger numbers of density bins are not required as CT noise will prevent increased density accuracy; this applies across all types of Monte Carlo simulations. Examining uncertainty in proton range, only 127 density bins are required for a proton range error of <0.1 mm in most tissue and <0.5 mm in low density tissue (e.g., lung). Conclusions: By considering CT noise and actual range uncertainty, the number of required density bins can be restricted to a very modest 127 depending on the application. Reducing the number of density bins provides large memory and execution time savings in GEANT4 and other Monte Carlo packages.
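The trade-off quantified above can be illustrated by the binning itself. In this sketch a linear HU-to-density ramp is an assumption for illustration (not the paper's calibration): 127 bins over a 3000 HU range give a half-bin quantization error of about 12 HU, already comparable to typical CT noise.

```python
def binned_density(hu, hu_min=-1000.0, hu_max=2000.0, n_bins=127):
    """Quantize a Hounsfield value onto n_bins discrete levels, mimicking
    the HU -> density lookup used when voxelizing CT data for Monte Carlo
    transport (linear HU-to-density ramp assumed for illustration)."""
    hu = min(max(hu, hu_min), hu_max)            # clamp to the CT range
    width = (hu_max - hu_min) / n_bins
    k = min(int((hu - hu_min) / width), n_bins - 1)
    hu_center = hu_min + (k + 0.5) * width       # bin-center HU value
    return 1.0 + hu_center / 1000.0              # density in g/cm^3

# worst-case quantization error is half a bin width: 3000 HU / 127 / 2
# ~ 11.8 HU, i.e. ~0.012 g/cm^3 on this ramp, the same order as a
# 5-10 HU CT noise level, so finer binning buys little accuracy.
```

Doubling the bin count halves the quantization error but, as the paper argues, gains nothing once that error is well below the CT noise floor.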
Baes, Maarten
2008-01-01
Smart detectors ... Mon. Not. R. Astron. Soc. 391, 617-623 (2008), doi:10.1111/j.1365-2966.2008.13941.x. Addresses the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo ... negligible, we recommend the use of smart detectors in Monte Carlo radiative transfer simulations. Key words: ...
Chrissanthopoulos, A.; Jovari, P.; Kaban, I.; Gruner, S.; Kavetskyy, T.; Borc, J.; Wang, W.; Ren, J.; Chen, G.; Yannopoulos, S.N.
2012-08-15
We report an investigation of the structure and vibrational modes of Ge-In-S-AgI bulk glasses using X-ray diffraction, EXAFS spectroscopy, Reverse Monte-Carlo (RMC) modelling, Raman spectroscopy, and density functional theory (DFT) calculations. The combination of these techniques made it possible to elucidate the short- and medium-range structural order of these glasses. Data interpretation revealed that the AgI-free glass structure is composed of a network where GeS{sub 4/2} tetrahedra are linked with trigonal InS{sub 3/2} units; S{sub 3/2}Ge-GeS{sub 3/2} ethane-like species linked with InS{sub 4/2}{sup -} tetrahedra form sub-structures which are dispersed in the network structure. The addition of AgI into the Ge-In-S glassy matrix causes appreciable structural changes, enriching the indium species with iodine terminal atoms. The existence of trigonal species InS{sub 2/2}I and tetrahedral units InS{sub 3/2}I{sup -} and InS{sub 2/2}I{sub 2}{sup -} is compatible with the EXAFS and RMC analysis. Their vibrational properties (harmonic frequencies and Raman activities) calculated by DFT are in very good agreement with the experimental values determined by Raman spectroscopy. Graphical abstract: experiment (XRD, EXAFS, RMC, Raman scattering) and density functional calculations are employed to study the structure of AgI-doped Ge-In-S glasses; the role of mixed structural units is elucidated. Highlights:
- Doping Ge-In-S glasses with AgI causes significant changes in glass structure.
- Experiment and DFT are combined to elucidate short- and medium-range structural order.
- Indium atoms form both (InS{sub 4/2}){sup -} tetrahedra and InS{sub 3/2} planar triangles.
- (InS{sub 4/2}){sup -} tetrahedra bond to (S{sub 3/2}Ge-GeS{sub 3/2}){sup 2+} ethane-like units forming neutral sub-structures.
- Mixed chalcohalide species (InS{sub 3/2}I){sup -} offer vulnerable sites for the uptake of Ag{sup +}.
Miura, Shinichi [Institute for Molecular Science, 38 Myodaiji, Okazaki 444-8585 (Japan)
2007-03-21
In this paper, we present a path integral hybrid Monte Carlo (PIHMC) method for rotating molecules in quantum fluids. This is an extension of our PIHMC for correlated Bose fluids [S. Miura and J. Tanaka, J. Chem. Phys. 120, 2160 (2004)] to handle the molecular rotation quantum mechanically. A novel technique, referred to as an effective potential of quantum rotation, is introduced to incorporate the rotational degree of freedom in the path integral molecular dynamics or hybrid Monte Carlo algorithm. For a permutation move to satisfy Bose statistics, we devise a multilevel Metropolis method combined with a configurational-bias technique for efficiently sampling the permutation and the associated atomic coordinates. Then, we have applied the PIHMC to a helium-4 cluster doped with a carbonyl sulfide molecule. The effects of the quantum rotation on the solvation structure and energetics were examined. Translational and rotational fluctuations of the dopant in the superfluid cluster were also analyzed.
Numerical thermalization in particle-in-cell simulations with Monte-Carlo collisions
Lai, P. Y.; Lin, T. Y.; Lin-Liu, Y. R.; Chen, S. H.
2014-12-15
Numerical thermalization in collisional one-dimensional (1D) electrostatic (ES) particle-in-cell (PIC) simulations was investigated. Two collision models, the pitch-angle scattering of electrons by the stationary ion background and large-angle collisions between the electrons and the neutral background, were included in the PIC simulation using Monte-Carlo methods. The numerical results show that the thermalization times in both models were considerably reduced by the additional Monte-Carlo collisions, as demonstrated by comparisons with Turner's previous simulation results based on a head-on collision model [M. M. Turner, Phys. Plasmas 13, 033506 (2006)]. However, the breakdown of Dawson's scaling law in the collisional 1D ES PIC simulation is more complicated than that observed by Turner, and the revised scaling law of the numerical thermalization time with numerical parameters is derived on the basis of the simulation results obtained in this study.
Calculating alpha Eigenvalues in a Continuous-Energy Infinite Medium with Monte Carlo
Betzler, Benjamin R. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory
2012-09-04
The {alpha} eigenvalue has implications for time-dependent problems where the system is sub- or supercritical. We present methods and results from calculating the {alpha}-eigenvalue spectrum for a continuous-energy infinite medium with a simplified Monte Carlo transport code. We formulate the {alpha}-eigenvalue problem, detail the Monte Carlo code physics, and provide verification and results. We have a method for calculating the {alpha}-eigenvalue spectrum in a continuous-energy infinite-medium. The continuous-time Markov process described by the transition rate matrix provides a way of obtaining the {alpha}-eigenvalue spectrum and kinetic modes. These are useful for the approximation of the time dependence of the system.
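The transition-rate-matrix formulation can be illustrated on a two-group infinite medium: write the group balance as dn/dt = A n, and the alpha spectrum is the set of eigenvalues of A. Everything below (the group constants and the closed-form 2x2 eigensolver) is an illustrative construction, not the paper's code or data.

```python
import math

def rate_matrix(v, sigma_t, scatter, nu_sigma_f, chi):
    """Build the multigroup infinite-medium rate matrix A in dn/dt = A n.

    scatter[j][i] is the group j -> i scattering cross section; all the
    numbers fed to this are made-up illustrative data, not a real material.
    """
    g = len(v)
    A = [[0.0] * g for _ in range(g)]
    for i in range(g):            # neutrons appearing in group i ...
        for j in range(g):        # ... from events caused by group j
            A[i][j] = v[j] * (scatter[j][i] + chi[i] * nu_sigma_f[j])
        A[i][i] -= v[i] * sigma_t[i]          # total removal from group i
    return A

def alpha_eigenvalues_2x2(A):
    """Both roots of the 2x2 characteristic polynomial (the alpha spectrum)."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = tr * tr - 4.0 * det
    if disc < 0.0:
        raise ValueError("complex pair: use a general eigensolver")
    r = math.sqrt(disc)
    return (tr + r) / 2.0, (tr - r) / 2.0

# subcritical two-group example: fast group 0, thermal group 1
A = rate_matrix(
    v=[2.0e9, 2.2e5],                      # group speeds, cm/s
    sigma_t=[0.10, 0.12],                  # total cross sections, 1/cm
    scatter=[[0.05, 0.02], [0.00, 0.06]],  # within-group + downscatter
    nu_sigma_f=[0.005, 0.015],
    chi=[1.0, 0.0],                        # fission neutrons born fast
)
alphas = alpha_eigenvalues_2x2(A)
```

For this subcritical material both eigenvalues are negative: the larger one is the fundamental alpha governing the asymptotic decay, the other a fast-decaying kinetic mode, which is the spectral picture the abstract describes in continuous energy.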
Rasch, Kevin M.; Hu, Shuming; Mitas, Lubos [Center for High Performance Simulation and Department of Physics, North Carolina State University, Raleigh, North Carolina 27695 (United States)]
2014-01-28
We elucidate the origin of large differences (two-fold or more) in the fixed-node errors between the first- vs second-row systems for single-configuration trial wave functions in quantum Monte Carlo calculations. This significant difference in the valence fixed-node biases is studied across a set of atoms, molecules, and also Si, C solid crystals. We show that the key features which affect the fixed-node errors are the differences in electron density and the degree of node nonlinearity. The findings reveal how the accuracy of the quantum Monte Carlo varies across a variety of systems, provide new perspectives on the origins of the fixed-node biases in calculations of molecular and condensed systems, and carry implications for pseudopotential constructions for heavy elements.
Quantum Monte Carlo calculations of excited states in A = 6--8 nuclei
Steven C. Pieper; R. B. Wiringa; J. Carlson
2004-10-13
A variational Monte Carlo method is used to generate sets of orthogonal trial functions, Psi_T(J^pi,T), for given quantum numbers in various light p-shell nuclei. These Psi_T are then used as input to Green's function Monte Carlo calculations of first, second, and higher excited (J^pi,T) states. Realistic two- and three-nucleon interactions are used. We find that if the physical excited state is reasonably narrow, the GFMC energy converges to a stable result. With the combined Argonne v_18 two-nucleon and Illinois-2 three-nucleon interactions, the results for many second and higher states in A = 6--8 nuclei are close to the experimental values.
Tringe, J. W.; Ileri, N.; Levie, H. W.; Stroeve, P.; Ustach, V.; Faller, R.; Renaud, P.
2015-08-01
We use Molecular Dynamics and Monte Carlo simulations to examine molecular transport phenomena in nanochannels, explaining four orders of magnitude difference in wheat germ agglutinin (WGA) protein diffusion rates observed by fluorescence correlation spectroscopy (FCS) and by direct imaging of fluorescently-labeled proteins. We first use the ESPResSo Molecular Dynamics code to estimate the surface transport distance for neutral and charged proteins. We then employ a Monte Carlo model to calculate the paths of protein molecules on surfaces and in the bulk liquid transport medium. Our results show that the transport characteristics depend strongly on the degree of molecular surface coverage. Atomic force microscope characterization of surfaces exposed to WGA proteins for 1000 s shows large protein aggregates consistent with the predicted coverage. These calculations and experiments provide useful insight into the details of molecular motion in confined geometries.
MCViNE -- An object oriented Monte Carlo neutron ray tracing simulation package
Lin, Jiao Y Y; Granroth, Garrett E; Abernathy, Douglas L; Lumsden, Mark D; Winn, Barry; Aczel, Adam A; Aivazis, Michael; Fultz, Brent
2015-01-01
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is a versatile Monte Carlo (MC) neutron ray-tracing program that provides researchers with tools for performing computer modeling and simulations that mirror real neutron scattering experiments. By adopting modern software engineering practices such as using composite and visitor design patterns for representing and accessing neutron scatterers, and using recursive algorithms for multiple scattering, MCViNE is flexible enough to handle sophisticated neutron scattering problems including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can take advantage of simulation components in linear-chain-based MC ray tracing packages widely used in instrument design and optimization, as well as NumPy-based components that make prototypes useful and easy to develop. These developments have enabled us to carry out detailed simulations of neutron scattering ...
Monte Carlo simulation to investigate the formation of molecular hydrogen and its deuterated forms
Sahu, Dipen; Majumdar, Liton; Chakrabarti, Sandip K.
2015-01-01
$H_2$ is the most abundant interstellar species. Its deuterated forms ($HD$ and $D_2$) are also significantly abundant. The huge abundances of these molecules can be explained by considering the chemistry occurring on interstellar dust. Because of its simplicity, the rate equation method is widely used to study the formation of grain-surface species. However, since the recombination efficiency for the formation of any surface species depends heavily on various physical and chemical parameters, the Monte Carlo method is best suited to capture the randomness of these processes. We perform Monte Carlo simulations to study the formation of $H_2$, $HD$ and $D_2$ on interstellar ices. Adsorption energies of surface species are the key inputs for the formation of any species on interstellar dust, but the binding energies of deuterated species are not yet known with certainty. A zero-point energy correction exists between hydrogenated and deuterated species which should be considered while modeling the chemistry on the ...
Rubery, M. S.; Horsfield, C. J. [Plasma Physics Department, AWE plc, Reading RG7 4PR (United Kingdom)]; Herrmann, H.; Kim, Y.; Mack, J. M.; Young, C.; Evans, S.; Sedillo, T.; McEvoy, A.; Caldwell, S. E. [Plasma Physics Department, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)]; Grafil, E.; Stoeffl, W. [Physics, Lawrence Livermore National Laboratory, Livermore, California 94551 (United States)]; Milnes, J. S. [Photek Limited UK, 26 Castleham Road, St. Leonards-on-sea TN38 9NS (United Kingdom)]
2013-07-15
The gas Cherenkov detectors at NIF and Omega measure several ICF burn characteristics by detecting multi-MeV nuclear γ emissions from the implosion. Of primary interest are γ bang-time (GBT), defined as the time between the initial laser-plasma interaction and the peak of the fusion reaction history, and burn width, defined as the FWHM of the reaction history. To accurately calculate such parameters the collaboration relies on Monte Carlo codes, such as GEANT4 and ACCEPT, for diagnostic properties that cannot be measured directly. This paper describes a series of experiments performed at the High Intensity γ Source (HIγS) facility at Duke University to validate the geometries and material data used in the Monte Carlo simulations. Results published here show that model-driven parameters such as intensity and temporal response can be used with less than 50% uncertainty for all diagnostics and facilities.
A Monte Carlo study of double logarithms in the small x region
Chachamis, G
2015-01-01
We investigate the effect of the resummation of collinear double logarithms in the BFKL gluon Green function using the Monte Carlo event generator BFKLex. The resummed collinear terms in transverse momentum space were calculated in Ref. [1] and correspond to the addition to the NLO BFKL kernel of a Bessel function of the first kind whose argument contains the strong coupling and a double logarithm of the ratio of the squared transverse momenta of the reggeized gluons. We discuss how these additional terms improve the collinear convergence of the whole approach and reduce the asymptotic growth with energy of cross sections. Taking advantage of the Monte Carlo implementation, we show how the new results reduce the diffusion of the gluon ladder into infrared and ultraviolet transverse momentum scales, while strongly affecting final state configurations by reducing the mini-jet multiplicity.
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
Hall, Clifford [Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States); School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States)]; Ji, Weixiao [Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States)]; Blaisten-Barojas, Estela, E-mail: blaisten@gmu.edu [Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States); School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States)]
2014-02-01
We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. -- Highlights: •We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet. •The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation. •Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. •The testbed involves a polymeric system of oligopyrroles in the condensed phase. •The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.
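As background for the engine described above, the serial Metropolis Monte Carlo step that it parallelizes can be sketched in a few lines (illustrative only, not the paper's GPU code; the harmonic test energy and the step parameters are assumptions, not the oligopyrrole force field):

```python
import math
import random

def metropolis_chain(energy, x0, beta, step, n_steps, rng):
    """Random-walk Metropolis sampling of the distribution exp(-beta * energy(x))."""
    x, e = x0, energy(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)   # symmetric proposal
        e_new = energy(x_new)
        # Metropolis rule: always accept downhill, else with prob exp(-beta * dE)
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            x, e = x_new, e_new
        samples.append(x)
    return samples

# Example: a harmonic well, whose equilibrium variance is 1/beta
chain = metropolis_chain(lambda x: 0.5 * x * x, 0.0, 1.0, 1.0, 50000,
                         random.Random(2014))
```

In the paper's setting the expensive part of each step is the energy evaluation over all molecule pairs, which is exactly what the CPU–GPU engine keeps resident on the card.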
Fully Differential Monte-Carlo Generator Dedicated to TMDs and Bessel-Weighted Asymmetries
Aghasyan, Mher M.; Avakian, Harut A.
2013-10-01
We present studies of double longitudinal spin asymmetries in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator, which includes quark intrinsic transverse momentum within the generalized parton model based on the fully differential cross section for the process. Additionally, we apply Bessel-weighting to the simulated events to extract transverse momentum dependent parton distribution functions and also discuss possible uncertainties due to kinematic correlation effects.
Equation of state of strongly coupled quark--gluon plasma -- Path integral Monte Carlo results
V. S. Filinov; M. Bonitz; Y. B. Ivanov; V. V. Skokov; P. R. Levashov; V. E. Fortov
2009-05-04
A strongly coupled plasma of quark and gluon quasiparticles at temperatures from $ 1.1 T_c$ to $3 T_c$ is studied by path integral Monte Carlo simulations. This method extends previous classical nonrelativistic simulations based on a color Coulomb interaction to the quantum regime. We present the equation of state and find good agreement with lattice results. Further, pair distribution functions and color correlation functions are computed indicating strong correlations and liquid-like behavior.
Wang, Huihui; Meng, Lin; Liu, Dagang; Liu, Laqun [School of Physical Electronics, University of Electronic Science and Technology of China, Chengdu 610054 (China)]
2013-12-15
A particle-in-cell/Monte Carlo code is developed to rescale the microwave breakdown theory put forward by Vyskrebentsev and Raizer. The simulations show that there is a distinct error in this theory when the high-energy tail of the electron energy distribution function grows. A rescaling factor is proposed to modify the theory, and the variation of this rescaling factor is presented.
An analysis of 4-quark energies in SU(2) lattice Monte Carlo
Sadataka Furui; Bilal Masud
1998-09-12
Energies of four-quark systems with tetrahedral geometry, measured by the static quenched SU(2) lattice Monte Carlo method, are analyzed by parametrizing the gluon overlap factor in the form $\\exp(-[b_s E A + \\sqrt{b_s} F P])$, where $A$ and $P$ are the area and the perimeter defined mainly by the positions of the four quarks, $b_s$ is the string constant in the 2-quark potentials, and $E$, $F$ are constants.
Hybrid Monte Carlo with Wilson Dirac operator on the Fermi GPU
Abhijit Chakrabarty; Pushan Majumdar
2012-07-10
In this article we present our implementation of a Hybrid Monte Carlo algorithm for Lattice Gauge Theory using two degenerate flavours of Wilson-Dirac fermions on a Fermi GPU. We find that using registers instead of global memory speeds up the code by almost an order of magnitude. To map the array variables to scalars, so that the compiler puts them in registers, we use code generators. Our final program is more than 10 times faster than a generic single-CPU implementation.
Maximum likelihood parameter estimation in time series models using sequential Monte Carlo
Yildirim, Sinan
2013-06-11
This approach is useful for handling the case where the columns of Y are generated sequentially in time, such as in audio signal processing. A very large number of columns in Y makes online algorithms necessary to learn the model. The thesis also presents histograms of Monte Carlo estimates of gradients of the log-likelihood with respect to the parameters of the α-stable distribution.
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
Using a Monte-Carlo-based approach to evaluate the uncertainty on fringe projection technique
Molimard, Jérôme
2013-01-01
A complete uncertainty analysis of a given fringe projection set-up has been performed using a Monte Carlo approach. In particular, the calibration procedure is taken into account. Two applications are given: at the macroscopic scale, phase noise is predominant, whilst at the microscopic scale, both phase noise and calibration errors are important. Finally, the uncertainty found at the macroscopic scale is close to some experimental tests (~100 $\\mu$m).
NuWro Monte Carlo generator of neutrino interactions - first electron scattering results
Jakub Zmuda; Krzysztof M. Graczyk; Cezary Juszczak; Jan T. Sobczyk
2015-11-05
The NuWro Monte Carlo generator of events is presented. It is a numerical environment containing all the ingredients necessary to simulate interactions of neutrinos with nucleons and nuclei in realistic experimental situations over a wide neutrino energy range. It can be used both for data analysis and for studies of nuclear effects in neutrino interactions. The first results and functionalities of eWro, the module of NuWro dedicated to electron-nucleus scattering, are also presented.
Pérez-Andújar, Angélica [Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 (United States)]; Zhang, Rui; Newhauser, Wayne [Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and The University of Texas Graduate School of Biomedical Sciences at Houston, 6767 Bertner Avenue, Houston, Texas 77030 (United States)]
2013-12-15
Purpose: Stray neutron radiation is of concern after radiation therapy, especially in children, because of the high risk it might carry for secondary cancers. Several previous studies predicted the stray neutron exposure from proton therapy, mostly using Monte Carlo simulations. Promising attempts to develop analytical models have also been reported, but these were limited to only a few proton beam energies. The purpose of this study was to develop an analytical model to predict leakage neutron equivalent dose from passively scattered proton beams in the 100–250 MeV interval. Methods: To develop and validate the analytical model, the authors used values of equivalent dose per therapeutic absorbed dose (H/D) predicted with Monte Carlo simulations. The authors also characterized the behavior of the mean neutron radiation-weighting factor, $w_R$, as a function of depth in a water phantom and distance from the beam central axis. Results: The simulated and analytical predictions agreed well. On average, the percentage difference between the analytical model and the Monte Carlo simulations was 10% for the energies and positions studied. The authors found that $w_R$ was highest at the shallowest depth and decreased with depth until around 10 cm, where it started to increase slowly with depth. This was consistent among all energies. Conclusion: Simple analytical methods are promising alternatives to complex and slow Monte Carlo simulations to predict H/D values. The authors' results also provide improved understanding of the behavior of $w_R$, which strongly depends on depth but is nearly independent of lateral distance from the beam central axis.
Perfetti, Christopher M [ORNL]; Martin, William R [University of Michigan]; Rearden, Bradley T [ORNL]; Williams, Mark L [ORNL]
2012-01-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.
Study of predominant hadronic modes of $\\tau$-lepton using a Monte Carlo generator TAUOLA
O. Shekhovtsova
2015-08-22
TAUOLA is a Monte Carlo generator dedicated to generating tau-lepton decays, and it is used in the analysis of experimental data both at B-factories and at the LHC. TAUOLA is a long-term project that started in the 1990s and has been under development ever since. In this note we discuss the status of the predominant hadronic tau-lepton decays into two ($Br \\simeq 25.52\\%$) and three pions ($Br \\simeq 18.67\\%$).
Dornheim, Tobias; Groth, Simon; Filinov, Alexey; Bonitz, Michael
2015-01-01
The uniform electron gas (UEG) at finite temperature is of high current interest due to its key relevance for many applications including dense plasmas and laser-excited solids. In particular, density functional theory heavily relies on accurate thermodynamic data for the UEG. Until recently, the only existing first-principles results had been obtained for $N=33$ electrons with restricted path integral Monte Carlo (RPIMC), for low to moderate density, $r_s = \\overline{r}/a_B \\gtrsim 1$. These data have been complemented by Configuration path integral Monte Carlo (CPIMC) simulations for $r_s \\leq 1$ that substantially deviate from RPIMC towards smaller $r_s$ and low temperature. In this work, we present results from an independent third method, the recently developed permutation blocking path integral Monte Carlo (PB-PIMC) approach [T. Dornheim \\textit{et al.}, NJP \\textbf{17}, 073017 (2015)], which we extend to the UEG. Interestingly, PB-PIMC allows us to perform simulations over the entire density range down to...
Hart, S. W. D.; Maldonado, G. Ivan; Celik, Cihangir; Leal, Luiz C
2014-01-01
For many Monte Carlo codes, cross sections are generally created only at a set of predetermined temperatures. This causes an increase in error as one moves further away from these temperatures in the Monte Carlo model. This paper discusses recent progress in the Scale Monte Carlo module KENO on creating problem-dependent, Doppler-broadened cross sections. Currently only broadening of the 1D cross sections and probability tables is addressed. The approach uses a finite difference method to calculate the temperature-dependent cross sections for the 1D data, and a simple linear-logarithmic interpolation in the square root of temperature for the probability tables. Work is also ongoing to address broadening the $S(\\alpha,\\beta)$ tables. With the current approach the temperature-dependent cross sections are Doppler broadened before transport starts, and, for all but a few isotopes, the impact on cross section loading is negligible. Results can be compared with those obtained by using multigroup libraries, as KENO currently interpolates on the multigroup cross sections to determine temperature-dependent cross sections. Current results compare favorably with these expected results.
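The probability-table interpolation mentioned above, linear in the square root of temperature and logarithmic in the tabulated value, can be sketched as a two-point rule (a sketch under stated assumptions; the function name and the two-point form are illustrative, not KENO's actual implementation):

```python
import math

def interp_sqrt_T(T, T1, v1, T2, v2):
    """Interpolate a tabulated value between two library temperatures T1 and T2:
    linear in sqrt(T), logarithmic in the tabulated value itself."""
    f = (math.sqrt(T) - math.sqrt(T1)) / (math.sqrt(T2) - math.sqrt(T1))
    return math.exp((1.0 - f) * math.log(v1) + f * math.log(v2))
```

At the two library temperatures the rule reproduces the tabulated values exactly, and in between the value varies smoothly on a log scale, which suits cross-section-like quantities that span orders of magnitude.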
Nonequilibrium candidate Monte Carlo: A new tool for efficient equilibrium simulation
Nilmeier, Jerome P.; Crooks, Gavin E.; Minh, David D. L.; Chodera, John D.
2011-11-08
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from both a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
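The acceptance rule described above can be illustrated with a toy one-dimensional example: a candidate is built by driving the coordinate in small deterministic increments (each contributing its energy change to the protocol work W) interleaved with ordinary Metropolis relaxation steps, and the whole trajectory is accepted with probability min(1, exp(-βW)). This is a minimal sketch of the idea, not the authors' implementation; the harmonic system, step sizes, and relaxation kernel are our assumptions:

```python
import math
import random

def ncmc_move(x, energy, beta, shift, n_sub, rng):
    """One nonequilibrium-candidate Monte Carlo move in one dimension.

    Deterministic drives accumulate protocol work; interleaved Metropolis
    relaxation steps preserve equilibrium and add no work.  The candidate
    trajectory is accepted with probability min(1, exp(-beta * W))."""
    d = rng.choice((-1.0, 1.0)) * shift / n_sub   # symmetric protocol choice
    y, work = x, 0.0
    for _ in range(n_sub):
        work += energy(y + d) - energy(y)          # perturbation: drive, adds work
        y += d
        y_try = y + rng.uniform(-0.5, 0.5)          # propagation: relax, no work
        de = energy(y_try) - energy(y)
        if de <= 0.0 or rng.random() < math.exp(-beta * de):
            y = y_try
    if work <= 0.0 or rng.random() < math.exp(-beta * work):
        return y                                    # accept candidate
    return x                                        # reject: keep old state
```

Because the driving direction is chosen symmetrically and the relaxation kernel satisfies detailed balance, the work-based acceptance preserves the equilibrium distribution even though individual candidates are generated out of equilibrium.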
Alhassan, Erwin; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael
2013-01-01
Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-formatted libraries generated using the TALYS based system were processed into ACE format with the NJOY99.336 code and used as input into the Serpent Monte Carlo neutron transport code to obtain a distribution in keff. The keff distribution obtained was compared with the latest major nuclear data libraries - JEFF-3.1.2, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on chi-square values obtained using the Pu-239 Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably, from 748 to 443 pcm, by applying a more rigid criterion for accepting random files.
Erwin Alhassan; Henrik Sjöstrand; Junfeng Duan; Cecilia Gustavsson; Arjan Koning; Stephan Pomp; Dimitri Rochman; Michael Österlund
2013-04-04
Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-formatted libraries generated using the TALYS based system were processed into ACE format with the NJOY99.336 code and used as input into the Serpent Monte Carlo neutron transport code to obtain a distribution in keff. The keff distribution obtained was compared with the latest major nuclear data libraries - JEFF-3.1.2, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on chi-square values obtained using the Pu-239 Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably, from 748 to 443 pcm, by applying a more rigid criterion for accepting random files.
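The accept/reject step of the Total Monte Carlo approach described above can be sketched as follows: each random nuclear data file carries a chi-square value against a criticality benchmark, and only files below a cutoff contribute to the keff distribution. This is a schematic with synthetic numbers; the helper function, cutoffs, and data are our assumptions, not the ELECTRA values:

```python
import math
import random

def filtered_keff_spread(keff, chi2, chi2_max):
    """Spread (sample standard deviation, in pcm) of keff over the random
    nuclear data files whose benchmark chi-square is below the cutoff."""
    kept = [k for k, c in zip(keff, chi2) if c <= chi2_max]
    mean = sum(kept) / len(kept)
    var = sum((k - mean) ** 2 for k in kept) / (len(kept) - 1)
    return 1.0e5 * math.sqrt(var)  # 1 pcm = 1e-5 in keff
```

Tightening the chi-square cutoff discards the outlying files, so the keff spread shrinks, mirroring the 748 to 443 pcm reduction reported in the abstract.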
Majumdar, Amit
There is interest in simulating enormously large Monte Carlo particle transport problems for neutrons and photons. Besides absorption, the photons may undergo Thomson scattering.
Xu, Zao
We present a numerical study of the near-surface underwater solar light statistics using the state-of-the-art Monte Carlo radiative transfer (RT) simulations in the coupled atmosphere-ocean system. Advanced variance-reduction ...
Kurebayashi, Shinya, 1976-
2004-01-01
Measurements from three classes of direct-drive implosions at the OMEGA laser system [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)] were combined with Monte-Carlo simulations to investigate models for determining ...
Tutt, Teresa Elizabeth
2009-05-15
Monte Carlo method is an invaluable tool in the field of radiation protection, used to calculate shielding effectiveness, as well as dose for medical applications. With few exceptions, most of the objects currently simulated ...
Erickson, Lori
1995-01-01
Monte Carlo modeling techniques using mean information fields (MIF), developed by Torsten Hagerstrand in the 1950s, were integrated with a geographic information system (GIS) to simulate lost person behavior in wilderness areas. Big Bend Ranch State...
A Positive-Weight Next-to-Leading-Order Monte Carlo for e+e- Annihilation to Hadrons
Oluseyi Latunde-Dada; Stefan Gieseke; Bryan Webber
2007-02-20
We apply the positive-weight Monte Carlo method of Nason for simulating QCD processes accurate to next-to-leading order to the case of e+e- annihilation to hadrons. The method entails generating the hardest gluon emission first and then adding a `truncated' shower before the emission. We have interfaced our result to the Herwig++ shower Monte Carlo program and obtained better results than Herwig++ at leading order with a matrix element correction.
A Monte-Carlo Method without Grid to Compute the Exchange Coefficient in the Double Porosity Model
Boyer, Edmond
Classification: 76S05 (65C05, 76M35). Published in Monte Carlo Methods and Applications 8(2), 129-147 (2002). F. Campillo and A. Lejay: the method consists in transforming the double porosity model into a coupled system for the matrix pressure $P_m$ and the fracture pressure $P_f$, with an exchange term proportional to $(P_m - P_f)$.
Radiation doses in cone-beam breast computed tomography: A Monte Carlo simulation study
Yi Ying; Lai, Chao-Jen; Han Tao; Zhong Yuncheng; Shen Youtao; Liu Xinming; Ge Shuaiping; You Zhicheng; Wang Tianpeng; Shaw, Chris C.
2011-02-15
Purpose: In this article, we describe a method to estimate the spatial dose variation, average dose and mean glandular dose (MGD) for a real breast using Monte Carlo simulation based on cone beam breast computed tomography (CBBCT) images. We present and discuss the dose estimation results for 19 mastectomy breast specimens, 4 homogeneous breast models, 6 ellipsoidal phantoms, and 6 cylindrical phantoms. Methods: To validate the Monte Carlo method for dose estimation in CBBCT, we compared the Monte Carlo dose estimates with the thermoluminescent dosimeter measurements at various radial positions in two polycarbonate cylinders (11- and 15-cm in diameter). Cone-beam computed tomography (CBCT) images of 19 mastectomy breast specimens, obtained with a bench-top experimental scanner, were segmented and used to construct 19 structured breast models. Monte Carlo simulation of CBBCT with these models was performed and used to estimate the point doses, average doses, and mean glandular doses for unit open air exposure at the iso-center. Mass based glandularity values were computed and used to investigate their effects on the average doses as well as the mean glandular doses. Average doses for 4 homogeneous breast models were estimated and compared to those of the corresponding structured breast models to investigate the effect of tissue structures. Average doses for ellipsoidal and cylindrical digital phantoms of identical diameter and height were also estimated for various glandularity values and compared with those for the structured breast models. Results: The absorbed dose maps for structured breast models show that doses in the glandular tissue were higher than those in the nearby adipose tissue. Estimated average doses for the homogeneous breast models were almost identical to those for the structured breast models (p=1). 
Normalized average doses estimated for the ellipsoidal phantoms were similar to those for the structured breast models (root mean square (rms) percentage difference=1.7%; p=0.01), whereas those for the cylindrical phantoms were significantly lower (rms percentage difference=7.7%; p<0.01). Normalized MGDs were found to decrease with increasing glandularity. Conclusions: Our results indicate that it is sufficient to use homogeneous breast models derived from CBCT generated structured breast models to estimate the average dose. This investigation also shows that ellipsoidal digital phantoms of similar dimensions (diameter and height) and glandularity to actual breasts may be used to represent a real breast to estimate the average breast dose with Monte Carlo simulation. We have also successfully demonstrated the use of structured breast models to estimate the true MGDs and shown that the normalized MGDs decreased with the glandularity as previously reported by other researchers for CBBCT or mammography.
Doebling, S.W.; Farrar, C.R. [Los Alamos National Lab., NM (United States); Cornwell, P.J. [Rose Hulman Inst. of Tech., Terre Haute, IN (United States)
1998-02-01
This paper presents a comparison of two techniques used to estimate the statistical confidence intervals on modal parameters identified from measured vibration data. The first technique is Monte Carlo simulation, which involves the repeated simulation of random data sets based on the statistics of the measured data and an assumed distribution of the variability in the measured data. A standard modal identification procedure is repeatedly applied to the randomly perturbed data sets to form a statistical distribution on the identified modal parameters. The second technique is the Bootstrap approach, where individual Frequency Response Function (FRF) measurements are randomly selected with replacement to form an ensemble average. This procedure, in effect, randomly weights the various FRF measurements. These weighted averages of the FRFs are then put through the modal identification procedure. The modal parameters identified from each randomly weighted data set are then used to define a statistical distribution for these parameters. The basic difference in the two techniques is that the Monte Carlo technique requires the assumption on the form of the distribution of the variability in the measured data, while the bootstrap technique does not. Also, the Monte Carlo technique can only estimate random errors, while the bootstrap statistics represent both random and bias (systematic) variability such as that arising from changing environmental conditions. However, the bootstrap technique requires that every frequency response function be saved for each average during the data acquisition process. Neither method can account for bias introduced during the estimation of the FRFs. This study has been motivated by a program to develop vibration-based damage identification procedures.
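The bootstrap procedure described above, resampling individual measurements with replacement before each ensemble average, can be sketched generically (a sketch only: the statistic shown is a plain mean of scalar values rather than a full FRF-based modal identification, and the function names are ours):

```python
import random

def bootstrap_distribution(measurements, statistic, n_resamples, rng):
    """Empirical distribution of `statistic` obtained by repeatedly drawing
    len(measurements) items with replacement and re-evaluating."""
    n = len(measurements)
    stats = []
    for _ in range(n_resamples):
        resample = [measurements[rng.randrange(n)] for _ in range(n)]
        stats.append(statistic(resample))
    stats.sort()
    return stats

def percentile_interval(sorted_stats, level=0.95):
    """Symmetric percentile confidence interval from sorted bootstrap stats."""
    n = len(sorted_stats)
    k = int(n * (1.0 - level) / 2.0)
    return sorted_stats[k], sorted_stats[n - 1 - k]
```

In the paper's setting, `measurements` would be the individual FRF measurements and `statistic` the full modal identification applied to their weighted average; no distributional assumption on the measurement variability is required, which is the key difference from the Monte Carlo technique.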
Charged-Particle Thermonuclear Reaction Rates: I. Monte Carlo Method and Statistical Distributions
Richard Longland; Christian Iliadis; Art Champagne; Joe Newton; Claudio Ugalde; Alain Coc; Ryan Fitzgerald
2010-04-23
A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering in the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended "classical" rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless "minimum" (or "lower limit") and "maximum" (or "upper limit") reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters $\\mu$ and $\\sigma$. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this series (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this series (Paper III). In the fourth paper of this series (Paper IV) we compare our new reaction rates to previous results.
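The sampling scheme described above can be sketched for a rate that is a product of lognormally distributed factors: sample the rate many times, take the median as the recommended rate and the 0.16/0.84 quantiles as the low and high rates, and recover the lognormal parameters μ and σ from the logs of the samples. This is an illustrative sketch; the factor values below are invented, not taken from the paper's tables:

```python
import math
import random
import statistics

def monte_carlo_rate(factors, n_samples, rng):
    """Sample a reaction rate that is a product of independent lognormal
    factors, each given as (median, factor_uncertainty).

    Returns (low, median, high, mu, sigma): the 0.16/0.84 quantiles, the
    median rate, and the parameters of the fitted lognormal."""
    samples = []
    for _ in range(n_samples):
        rate = 1.0
        for med, fu in factors:
            # lognormal draw: median `med`, log-space spread ln(fu)
            rate *= med * math.exp(math.log(fu) * rng.gauss(0.0, 1.0))
        samples.append(rate)
    samples.sort()
    low = samples[int(0.16 * n_samples)]
    med_rate = samples[n_samples // 2]
    high = samples[int(0.84 * n_samples)]
    logs = [math.log(s) for s in samples]
    mu, sigma = statistics.fmean(logs), statistics.pstdev(logs)
    return low, med_rate, high, mu, sigma
```

For a single factor the fitted μ and σ recover the input median and log-spread; for several factors the product is again lognormal, which is why the lognormal approximation of the output rate works well in this setting.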
Transport in open spin chains: A Monte Carlo wave-function approach
Mathias Michel; Ortwin Hess; Hannu Wichterich; Jochen Gemmer
2008-03-07
We investigate energy transport in several two-level atom or spin-1/2 models by a direct coupling to heat baths of different temperatures. The analysis is carried out on the basis of a recently derived quantum master equation which describes the nonequilibrium properties of internally weakly coupled systems appropriately. For the computation of the stationary state of the dynamical equations, we employ a Monte Carlo wave-function approach. The analysis directly indicates normal diffusive or ballistic transport in finite models and hints toward an extrapolation of the transport behavior of infinite models.
S. Frixione; E. Laenen; P. Motylinski; B. R. Webber
2007-02-20
We explain how angular correlations in leptonic decays of vector bosons and top quarks can be included in Monte Carlo parton showers, in particular those matched to NLO QCD computations. We consider the production of $n$ pairs of leptons, originating from the decays of $n$ electroweak vector bosons or of $n$ top quarks, in the narrow-width approximation. In the latter case, the information on the $n$ $b$ quarks emerging from the decays is also retained. We give results of implementing this procedure in MC@NLO.
Monte Carlo Generators for Studies of the 3D Structure of the Nucleon
Avagyan, Harut A. [JLAB
2015-01-01
Extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self-consistent analysis framework, accounting for evolution effects, and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte-Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.
Monte Carlo simulations of channeling spectra recorded for samples containing complex defects
Jagielski, Jacek; Turos, Prof. Andrzej; Nowicki, Lech; Jozwik, P.; Shutthanandan, Vaithiyalingam; Zhang, Yanwen; Sathish, N.; Thome, Lionel; Stonert, A.; Jozwik-Biala, Iwona
2012-01-01
The aim of the present paper is to describe the current status of the development of McChasy, a Monte Carlo simulation code, to make it suitable for the analysis of dislocations and dislocation loops in crystals. Factors such as the shape of the bent channel and geometrical distortions of the crystalline structure in the vicinity of a dislocation are discussed. The results obtained demonstrate that the new procedure, applied to spectra recorded on crystals containing dislocations, yields damage profiles which are independent of the energy of the analyzing beam.
Monte Carlo simulations of channeling spectra recorded for samples containing complex defects
Jagielski, Jacek K.; Turos, Andrzej W.; Nowicki, L.; Jozwik, Przemyslaw A.; Shutthanandan, V.; Zhang, Yanwen; Sathish, N.; Thome, Lionel; Stonert, A.; Jozwik Biala, Iwona
2012-02-15
The main aim of the present paper is to describe the current status of the development of McChasy, a Monte Carlo simulation code, to make it suitable for the analysis of dislocations and dislocation loops in crystals. Factors such as the shape of the bent channel and geometrical distortions of the crystalline structure in the vicinity of a dislocation are discussed. Several examples of the analysis performed at different energies of analyzing ions are presented. The results obtained demonstrate that the new procedure, applied to spectra recorded on crystals containing dislocations, yields damage profiles which are independent of the energy of the analyzing beam.
The Imprints of IMBHs on the Structure of Globular Clusters: Monte-Carlo Simulations
Stefan Umbreit; John M. Fregeau; Frederic A. Rasio
2008-03-06
We present the first results of a series of Monte-Carlo simulations investigating the imprint of a central black hole on the core structure of a globular cluster. We investigate the three-dimensional and the projected density profile of the inner regions of idealized as well as more realistic globular cluster models, taking into account a stellar mass spectrum, stellar evolution and allowing for a larger, more realistic, number of stars than was previously possible with direct N-body methods. We compare our results to other N-body simulations published previously in the literature.
Four-Quark Binding Energies from SU(2) Lattice Monte Carlo
A. M. Green; C. Michael; M. E. Sainio
1994-04-11
Energies of four-quark systems have been extracted in a static quenched SU(2) lattice Monte Carlo calculation for six different geometries, both planar and non-planar, with $\beta=2.4$ and lattice size $16^3\times 32$. In all cases, it is found that the binding energy is greatly enhanced when the four quarks can be partitioned in two ways with comparable energies. Also it is shown that the energies of the four-quark states cannot be understood simply in terms of two-quark potentials.
Study of DCX reaction on medium nuclei with Monte-Carlo Shell Model
Wu, H. C.; Gibbs, W. R.
2010-08-04
In this work a method is introduced to calculate the DCX reaction in the framework of the Monte-Carlo Shell Model (MCSM). To facilitate the use of the zero-temperature formalism of MCSM, the Double-Isobaric-Analog State (DIAS) is derived from the ground state by using the isospin shifting operator. The validity of this method is tested by comparing the MCSM results to those of the SU(3) symmetry case. Application of this method to DCX on ⁵⁶Fe and ⁹³Nb is discussed.
Monte-Carlo study of the phase transition in the AA-stacked bilayer graphene
A. A. Nikolaev; M. V. Ulybyshev
2014-12-04
The tight-binding model of AA-stacked bilayer graphene with screened electron-electron interactions has been studied using Hybrid Monte Carlo simulations on the original double-layer hexagonal lattice. The instantaneous screened Coulomb potential is taken into account using the Hubbard-Stratonovich transformation. G-type antiferromagnetic ordering has been studied, and a phase transition with spontaneous generation of the mass gap has been observed. The dependence of the antiferromagnetic condensate on the on-site electron-electron interaction is examined.
Monte Carlo generators for studies of the 3D structure of the nucleon
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Avakian, Harut; D'Alesio, U.; Murgia, F.
2015-01-23
In this study, extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self-consistent analysis framework, accounting for evolution effects, and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte-Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.
Nuclear Level Density of ${}^{161}$Dy in the Shell Model Monte Carlo Method
Cem Özen; Yoram Alhassid; Hitoshi Nakada
2012-06-27
We extend the shell-model Monte Carlo applications to the rare-earth region to include the odd-even nucleus ${}^{161}$Dy. The projection on an odd number of particles leads to a sign problem at low temperatures making it impractical to extract the ground-state energy in direct calculations. We use level counting data at low energies and neutron resonance data to extract the shell model ground-state energy to good precision. We then calculate the level density of ${}^{161}$Dy and find it in very good agreement with the level density extracted from experimental data.
A Hybrid (Monte-Carlo/Deterministic) Approach for Multi-Dimensional Radiation Transport
Guillaume Bal; Anthony Davis; Ian Langmore
2011-05-07
A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently acceleration) is achieved in the presence of atmospheric interactions.
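The variance-reduction idea used here, reweighting samples with an approximate importance function, can be illustrated in one dimension. The exponential integrand and the deliberately imperfect importance density below are hypothetical stand-ins for the adjoint transport solution, not the paper's scheme:

```python
import math
import random

def mc_plain(f, n, rng):
    # Crude Monte Carlo estimate of the integral of f over [0, 1].
    vals = [f(rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, var

def mc_importance(f, n, rng, a=8.0):
    # Importance sampling with p(x) = a*exp(-a*x)/(1 - exp(-a)) on [0, 1],
    # an imperfect approximation of the integrand's shape (like a cheap
    # adjoint estimate): variance shrinks even though p is not exact.
    norm = 1.0 - math.exp(-a)
    vals = []
    for _ in range(n):
        x = -math.log(1.0 - rng.random() * norm) / a   # inverse-CDF sampling
        p = a * math.exp(-a * x) / norm
        vals.append(f(x) / p)                          # importance weight
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, var
```

For a sharply peaked integrand the per-sample variance drops by orders of magnitude, which is the same effect the deterministic adjoint approximation provides for the transport estimator.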
A Hybrid (Monte-Carlo/Deterministic) Approach for Multi-Dimensional Radiation Transport
Bal, Guillaume; Langmore, Ian
2011-01-01
A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently acceleration) is achieved in the presence of atmospheric interactions.
Quantized vortices in ⁴He droplets: A quantum Monte Carlo study
Sola, E.; Casulleras, J.; Boronat, J.
2007-08-01
We present a diffusion Monte Carlo study of a vortex line excitation attached to the center of a ⁴He droplet at zero temperature. The vortex energy is estimated for droplets of increasing number of atoms, from N = 70 up to 300, showing a monotonic increase with N. The evolution of the core radius and its associated energy, the core energy, is also studied as a function of N. The core radius is ≈1 Å in the center and increases when approaching the droplet surface; the core energy per unit volume stabilizes at a value 2.8 K σ⁻³ (σ = 2.556 Å) for N ≥ 200.
Quantum Monte Carlo simulation of a two-dimensional Bose gas
Pilati, S.; Boronat, J.; Casulleras, J.; Giorgini, S.
2005-02-01
The equation of state of a homogeneous two-dimensional Bose gas is calculated using quantum Monte Carlo methods. The low-density universal behavior is investigated using different interatomic model potentials, both finite-ranged and strictly repulsive, and zero-ranged, supporting a bound state. The condensate fraction and the pair distribution function are calculated as a function of the gas parameter, ranging from the dilute to the strongly correlated regime. In the case of the zero-range pseudopotential we discuss the stability of the gaslike state for large values of the two-dimensional scattering length, and we calculate the critical density where the system becomes unstable against cluster formation.
Betzler, Benjamin R.; Kiedrowski, Brian C.; Brown, Forrest B.; Martin, William R.
2015-08-28
The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α eigenvalues and eigenvectors in an infinite medium. In this study, a research Monte Carlo code called "TORTE" (To Obtain Real Time Eigenvalues) was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum.
Boris Tomasik
2009-01-09
A Monte Carlo generator of the final state of hadrons emitted from an ultrarelativistic nuclear collision is introduced. An important feature of the generator is the possible fragmentation of the fireball and emission of the hadrons from fragments. The phase space distribution of the fragments is based on the blast wave model extended to azimuthally non-symmetric fireballs. Parameters of the model can be tuned, and this allows one to generate final states from various kinds of fireballs. An optional output in the OSCAR1999A format allows for a comprehensive analysis of phase-space distributions and/or use as an input for an afterburner.
Thermonuclear reaction rate of $^{18}$Ne($\\alpha$,$p$)$^{21}$Na from Monte-Carlo calculations
Mohr, P; Iliadis, C
2014-01-01
The $^{18}$Ne($\alpha$,$p$)$^{21}$Na reaction impacts the break-out from the hot CNO-cycles to the $rp$-process in type I X-ray bursts. We present a revised thermonuclear reaction rate, which is based on the latest experimental data. The new rate is derived from Monte-Carlo calculations, taking into account the uncertainties of all nuclear physics input quantities. In addition, we present the reaction rate uncertainty and probability density versus temperature. Our results are also consistent with estimates obtained using different indirect approaches.
Thermonuclear reaction rate of $^{18}$Ne($\alpha$,$p$)$^{21}$Na from Monte-Carlo calculations
P. Mohr; R. Longland; C. Iliadis
2014-12-14
The $^{18}$Ne($\alpha$,$p$)$^{21}$Na reaction impacts the break-out from the hot CNO-cycles to the $rp$-process in type I X-ray bursts. We present a revised thermonuclear reaction rate, which is based on the latest experimental data. The new rate is derived from Monte-Carlo calculations, taking into account the uncertainties of all nuclear physics input quantities. In addition, we present the reaction rate uncertainty and probability density versus temperature. Our results are also consistent with estimates obtained using different indirect approaches.
Monte-Carlo Simulation of Exclusive Channels in e+e- Annihilation at Low Energy
D. Anipko; S. Eidelman; A. Pak
2003-12-25
A software package for Monte-Carlo simulation of e+e- exclusive annihilation channels, written in the C++ language for Linux/Solaris platforms, has been developed. It incorporates matrix elements for several mechanisms of multipion production in a model of consequent two and three-body resonance decays. Possible charge states of intermediate and final particles are accounted for automatically under the assumption of isospin conservation. Interference effects can be taken into account. The package structure allows adding new matrix elements written in a gauge-invariant form.
Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus; Vogel, Thomas; Landau, David P
2015-01-01
We describe the study of thermodynamics of materials using replica-exchange Wang-Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang-Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first-principles calculations. We demonstrate that our framework leads to a significant speedup without compromising accuracy or precision, and facilitates the study of much larger systems than is possible with its serial counterpart.
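The (serial) Wang-Landau kernel that REWL parallelizes can be sketched on a toy model, here a 4-spin periodic Ising chain rather than the Heisenberg model of the paper; the flatness criterion and the refinement schedule below are simplified assumptions:

```python
import math
import random

def wang_landau_1d_ising(n_spins=4, flat=0.8, lnf_final=1e-4, seed=2):
    # Wang-Landau sampling: a random walk in energy space accepted with
    # probability g(E_old)/g(E_new), which drives the walker toward rare
    # energies and builds an estimate of the density of states g(E).
    rng = random.Random(seed)
    spins = [1] * n_spins

    def energy(s):
        return -sum(s[i] * s[(i + 1) % n_spins] for i in range(n_spins))

    lng = {}   # running estimate of ln g(E)
    hist = {}  # visit histogram for the flatness check
    e = energy(spins)
    lnf = 1.0  # modification factor, refined toward 0
    while lnf > lnf_final:
        for _ in range(1000):
            i = rng.randrange(n_spins)
            spins[i] *= -1
            e_new = energy(spins)
            if (lng.get(e_new, 0.0) <= lng.get(e, 0.0)
                    or rng.random() < math.exp(lng.get(e, 0.0) - lng.get(e_new, 0.0))):
                e = e_new          # accept the flip
            else:
                spins[i] *= -1     # reject: undo the flip
            lng[e] = lng.get(e, 0.0) + lnf
            hist[e] = hist.get(e, 0) + 1
        counts = [hist.get(level, 0) for level in lng]
        if counts and min(counts) > flat * (sum(counts) / len(counts)):
            lnf /= 2.0             # histogram flat enough: refine and reset
            hist = {}
    return lng
```

For this chain the exact degeneracies are g(-4) = g(4) = 2 and g(0) = 12, so the estimated ln g values can be checked directly; REWL distributes overlapping energy windows of exactly this walk across many processes.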
Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII
McKinney, Gregg W [Los Alamos National Laboratory
2012-07-17
Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.
Quantum Monte Carlo algorithms for electronic structure at the petascale; the endstation project.
Kim, J; Ceperley, D M; Purwanto, W; Walter, E J; Krakauer, H; Zhang, S W; Kent, P.R. C; Hennig, R G; Umrigar, C; Bajdich, M; Kolorenc, J; Mitas, L; Srinivasan, A
2008-10-01
Over the past two decades, continuum quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles. By solving the Schrodinger equation through a stochastic projection, it achieves the greatest accuracy and reliability of methods available for physical systems containing more than a few quantum particles. QMC enjoys scaling favorable to quantum chemical methods, with a computational effort which grows with the second or third power of system size. This accuracy and scalability has enabled scientific discovery across a broad spectrum of disciplines. The current methods perform very efficiently at the terascale. The quantum Monte Carlo Endstation project is a collaborative effort among researchers in the field to develop a new generation of algorithms, and their efficient implementations, which will take advantage of the upcoming petaflop architectures. Some aspects of these developments are discussed here. These tools will expand the accuracy, efficiency and range of QMC applicability and enable us to tackle challenges which are currently out of reach. The methods will be applied to several important problems including electronic and structural properties of water, transition metal oxides, nanosystems and ultracold atoms.
Berg, John M.; Veirs, D. Kirk; Vaughn, Randolph B.; Cisneros, Michael R.; Smith, Coleman A.
2000-06-01
Standard modeling approaches can produce the most likely values of the formation constants of metal-ligand complexes if a particular set of species containing the metal ion is known or assumed to exist in solution equilibrium with complexing ligands. Identifying the most likely set of species when more than one set is plausible is a more difficult problem to address quantitatively. A Monte Carlo method of data analysis is described that measures the relative abilities of different speciation models to fit optical spectra of open-shell actinide ions. The best model(s) can be identified from among a larger group of models initially judged to be plausible. The method is demonstrated by analyzing the absorption spectra of aqueous Pu(IV) titrated with nitrate ion at constant 2 molal ionic strength in aqueous perchloric acid. The best speciation model supported by the data is shown to include three Pu(IV) species with nitrate coordination numbers 0, 1, and 2. Formation constants are β₁ = 3.2 ± 0.5 and β₂ = 11.2 ± 1.2, where the uncertainties are 95% confidence limits estimated by propagating raw data uncertainties using Monte Carlo methods. Principal component analysis independently indicates three Pu(IV) complexes in equilibrium. (c) 2000 Society for Applied Spectroscopy.
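The Monte Carlo propagation of raw-data uncertainties into confidence limits on fitted parameters can be sketched generically: perturb each data point within its error bar, refit, and take quantiles of the refitted values. The straight-line model below is a hypothetical stand-in for the spectral speciation fit:

```python
import random

def mc_confidence(x, y, sigma_y, fit, n_trials=2000, seed=5):
    # Perturb each data point within its (Gaussian) error bar, refit the
    # model each time, and take the 2.5% / 97.5% quantiles of the refitted
    # parameter as 95% confidence limits.
    rng = random.Random(seed)
    estimates = sorted(
        fit(x, [yi + rng.gauss(0.0, sigma_y) for yi in y])
        for _ in range(n_trials)
    )
    return estimates[int(0.025 * n_trials)], estimates[int(0.975 * n_trials) - 1]

def slope_fit(x, y):
    # Least-squares slope of a line through the origin: a toy "model fit".
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
```

The same loop applies unchanged to a nonlinear speciation fit: only `fit` and the data change, which is why the approach generalizes so readily.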
Monte Carlo Simulations of Globular Cluster Evolution. IV. Direct Integration of Strong Interactions
John M. Fregeau; Frederic A. Rasio
2006-12-06
We study the dynamical evolution of globular clusters containing populations of primordial binaries, using our newly updated Monte Carlo cluster evolution code with the inclusion of direct integration of binary scattering interactions. We describe the modifications we have made to the code, as well as improvements we have made to the core Monte Carlo method. We present several test calculations to verify the validity of the new code, and perform many comparisons with previous analytical and numerical work in the literature. We simulate the evolution of a large grid of models, with a wide range of initial cluster profiles, and with binary fractions ranging from 0 to 1, and compare with observations of Galactic globular clusters. We find that our code yields very good agreement with direct N-body simulations of clusters with primordial binaries, but yields some results that differ significantly from other approximate methods. Notably, the direct integration of binary interactions reduces their energy generation rate relative to the simple recipes used in Paper III, and yields smaller core radii. Our results for the structural parameters of clusters during the binary-burning phase are now in the tail of the range of parameters for observed clusters, implying that either clusters are born significantly more or less centrally concentrated than has been previously considered, or that there are additional physical processes beyond two-body relaxation and binary interactions that affect the structural characteristics of clusters.
Monte-Carlo simulations of neutron shielding for the ATLAS forward region
Stekl, I; Kovalenko, V E; Vorobel, V; Leroy, C; Piquemal, F; Eschbach, R; Marquet, C
2000-01-01
The effectiveness of different types of neutron shielding for the ATLAS forward region has been studied by means of Monte-Carlo simulations and compared with the results of an experiment performed at the CERN PS. The simulation code is based on GEANT, FLUKA, MICAP and GAMLIB. GAMLIB is a new library including processes with gamma-rays produced in (n, gamma), (n, n'gamma) neutron reactions and is interfaced to the MICAP code. The effectiveness of different types of shielding against neutrons and gamma-rays, composed from different types of material, such as pure polyethylene, borated polyethylene, lithium-filled polyethylene, lead and iron, were compared. The results from Monte-Carlo simulations were compared to the results obtained from the experiment. The simulation results reproduce the experimental data well. This agreement supports the correctness of the simulation code used to describe the generation, spreading and absorption of neutrons (up to thermal energies) and gamma-rays in the shielding materials....
Energy density matrix formalism for interacting quantum systems: a quantum Monte Carlo study
Krogel, Jaron T; Kim, Jeongnim; Reboredo, Fernando A
2014-01-01
We develop an energy density matrix that parallels the one-body reduced density matrix (1RDM) for many-body quantum systems. Just as the density matrix gives access to the number density and occupation numbers, the energy density matrix yields the energy density and orbital occupation energies. The eigenvectors of the matrix provide a natural orbital partitioning of the energy density while the eigenvalues comprise a single particle energy spectrum obeying a total energy sum rule. For mean-field systems the energy density matrix recovers the exact spectrum. When correlation becomes important, the occupation energies resemble quasiparticle energies in some respects. We explore the occupation energy spectrum for the finite 3D homogeneous electron gas in the metallic regime and an isolated oxygen atom with ground state quantum Monte Carlo techniques implemented in the QMCPACK simulation code. The occupation energy spectrum for the homogeneous electron gas can be described by an effective mass below the Fermi level. Above the Fermi level evanescent behavior in the occupation energies is observed in similar fashion to the occupation numbers of the 1RDM. A direct comparison with total energy differences demonstrates a quantitative connection between the occupation energies and electron addition and removal energies for the electron gas. For the oxygen atom, the association between the ground state occupation energies and particle addition and removal energies becomes only qualitative. The energy density matrix provides a new avenue for describing energetics with quantum Monte Carlo methods which have traditionally been limited to total energies.
M. Alvioli; H. -J. Drescher; M. Strikman
2009-09-04
We developed a Monte Carlo event generator for production of nucleon configurations in complex nuclei consistently including effects of Nucleon-Nucleon (NN) correlations. Our approach is based on the Metropolis search for configurations satisfying essential constraints imposed by short- and long-range NN correlations, guided by the findings of realistic calculations of one- and two-body densities for medium-heavy nuclei. The produced event generator can be used for Monte Carlo (MC) studies of pA and AA collisions. We perform several tests of consistency of the code and comparison with previous models, in the case of high energy proton-nucleus scattering on an event-by-event basis, using nucleus configurations produced by our code and Glauber multiple scattering theory both for the uncorrelated and the correlated configurations; fluctuations of the average number of collisions are shown to be affected considerably by the introduction of NN correlations in the target nucleus. We also use the generator to estimate maximal possible gluon nuclear shadowing in a simple geometric model.
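A heavily simplified version of generating nucleon configurations subject to a short-range correlation constraint (here, plain rejection of any configuration with a pair inside a hard core, rather than the Metropolis search of the paper) might look like:

```python
import random

def sample_configuration(n_nucleons, nucleus_radius, core, rng, max_tries=10000):
    # Rejection sampling: draw uniform positions inside a sphere and keep
    # only configurations in which every nucleon pair lies outside a hard
    # core, a crude proxy for short-range NN repulsion.
    for _ in range(max_tries):
        pts = []
        while len(pts) < n_nucleons:
            p = tuple(rng.uniform(-nucleus_radius, nucleus_radius)
                      for _ in range(3))
            if sum(c * c for c in p) <= nucleus_radius ** 2:
                pts.append(p)
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= core ** 2
               for k, p in enumerate(pts) for q in pts[k + 1:]):
            return pts
    raise RuntimeError("no admissible configuration found")
```

Plain rejection scales poorly with mass number, which is precisely why a Metropolis search guided by realistic one- and two-body densities, as in the abstract, is used for medium-heavy nuclei.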
Cosmological parameters from CMB and other data: a Monte-Carlo approach
Antony Lewis; Sarah Bridle
2002-10-14
We present a fast Markov Chain Monte-Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3eV), equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, the effect of the prior, assess the goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.
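A minimal Metropolis sketch of the Markov Chain Monte-Carlo exploration, in one parameter with a hypothetical Gaussian log-posterior standing in for the CMB likelihood:

```python
import math
import random

def metropolis_chain(log_post, start, step, n_steps=20000, seed=3):
    # Markov Chain Monte Carlo: propose symmetric Gaussian steps in
    # parameter space and accept with the Metropolis rule, so the chain
    # samples the posterior; quantiles of the chain give the constraints.
    rng = random.Random(seed)
    x, lp = start, log_post(start)
    chain = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        lq = log_post(y)
        if lq >= lp or rng.random() < math.exp(lq - lp):
            x, lp = y, lq          # accept the proposal
        chain.append(x)            # rejected steps repeat the old point
    return chain
```

The importance-sampling uses described in the appendices amount to reweighting such a chain by the ratio of a new posterior to the one it was generated under, which avoids rerunning the exploration when new data arrive.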
Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations
Perfetti, Christopher M. [ORNL]; Rearden, Bradley T. [ORNL]
2013-01-01
The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.
Surface Structures of Cubo-octahedral Pt-Mo Catalyst Nanoparticles from Monte Carlo Simulations
Wang, Guofeng; Van Hove, M.A.; Ross, P.N.; Baskes, M.I.
2005-03-31
The surface structures of cubo-octahedral Pt-Mo nanoparticles have been investigated using the Monte Carlo method and modified embedded atom method potentials that we developed for Pt-Mo alloys. The cubo-octahedral Pt-Mo nanoparticles are constructed with disordered fcc configurations, with sizes from 2.5 to 5.0 nm, and with Pt concentrations from 60 to 90 at. percent. The equilibrium Pt-Mo nanoparticle configurations were generated through Monte Carlo simulations allowing both atomic displacements and element exchanges at 600 K. We predict that the Pt atoms weakly segregate to the surfaces of such nanoparticles. The Pt concentrations in the surface are calculated to be 5 to 14 at. percent higher than the Pt concentrations of the nanoparticles. Moreover, the Pt atoms preferentially segregate to the facet sites of the surface, while the Pt and Mo atoms tend to alternate along the edges and vertices of these nanoparticles. We found that decreasing the size or increasing the Pt concentration leads to higher Pt concentrations but fewer Pt-Mo pairs in the Pt-Mo nanoparticle surfaces.
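The element-exchange moves used in the equilibration can be sketched on a toy one-dimensional lattice; the surface-energy bonus `eps`, the temperature, and the lattice itself are illustrative assumptions, not the modified embedded atom method energetics of the paper:

```python
import math
import random

def segregate(n_sites=100, n_surface=20, n_pt=60, eps=0.3, kT=0.05,
              n_steps=20000, seed=4):
    # Metropolis element-exchange moves at fixed composition: swap two
    # atoms and accept with exp(-dE/kT).  A surface-site energy bonus for
    # Pt drives segregation of Pt to the surface sites.
    rng = random.Random(seed)
    atoms = ['Pt'] * n_pt + ['Mo'] * (n_sites - n_pt)
    rng.shuffle(atoms)

    def site_energy(site, atom):
        # hypothetical energetics: surface sites favor Pt by eps
        return -eps if site < n_surface and atom == 'Pt' else 0.0

    for _ in range(n_steps):
        i, j = rng.randrange(n_sites), rng.randrange(n_sites)
        de = (site_energy(i, atoms[j]) + site_energy(j, atoms[i])
              - site_energy(i, atoms[i]) - site_energy(j, atoms[j]))
        if de <= 0.0 or rng.random() < math.exp(-de / kT):
            atoms[i], atoms[j] = atoms[j], atoms[i]
    return atoms
```

The nanoparticle simulations additionally allow atomic displacements; here only the exchange channel is kept, which is enough to show the segregation mechanism.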
A Monte Carlo Study of Multiplicity Fluctuations in Pb-Pb Collisions at LHC Energies
Ramni Gupta
2015-01-15
With large volumes of data available from the LHC, it has become possible to study multiplicity distributions for the various possible behaviours of multiparticle production in relativistic heavy-ion collisions, where a system of dense and hot partons has been created. In this context it is both important and interesting to check how well Monte Carlo generators can describe the properties of multiparticle production processes. One such possible behaviour is self-similarity in particle production, which can be studied with intermittency studies and further with chaoticity/erraticity in heavy-ion collisions. We analyse the behaviour of the erraticity index in central Pb-Pb collisions at a centre-of-mass energy of 2.76 TeV per nucleon using the AMPT Monte Carlo event generator, following the recent proposal by R.C. Hwa and C.B. Yang concerning the local multiplicity fluctuation study as a signature of critical hadronization in heavy-ion collisions. We report the values of the erraticity index for the two versions of the model with default settings and their dependence on the size of the phase space region. Results presented here may serve as a reference sample for the experimental data from heavy ion collisions at these energies.
SIMDET Version 4: A Parametric Monte Carlo for a TESLA Detector
M. Pohl; H. J. Schreiber
2002-06-05
A new release of the parametric detector Monte Carlo program SIMDET (version 4.01) is now available. We describe the principles of operation and the usage of this program to simulate the response of a detector for the TESLA linear collider. The detector components are implemented according to the TESLA Technical Design Report. All detector component responses are treated in a realistic way using a parametrisation of results from the ab initio Monte Carlo program BRAHMS. Pattern recognition is emulated using a complete cross reference between generated particles and detector response. Also, for charged particles, the covariance matrix and $dE/dx$ information are made available. An idealised energy flow algorithm defines the output of the program, consisting of particles generically classified as electrons, photons, muons, charged and neutral hadrons as well as unresolved clusters. The program parameters adjustable by the user are described in detail. User hooks inside the program and the output data structure are documented.
Massively parallel Monte Carlo for many-particle simulations on GPUs
Anderson, Joshua A.; Jankowski, Eric [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Grubb, Thomas L. [Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Engel, Michael [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Glotzer, Sharon C., E-mail: sglotzer@umich.edu [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]
2013-12-01
Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
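The detailed-balance constraint described above is what makes Monte Carlo of hard particles delicate: a trial move may only be accepted if it creates no overlap. A minimal serial sketch of hard-disk Metropolis sweeps in a periodic box (illustrative only; the function names and parameters are hypothetical, and the paper's GPU checkerboard decomposition is not reproduced here) could look like:

```python
import math
import random

def no_overlap(disks, i, x, y, sigma, L):
    """Return True if disk i placed at (x, y) overlaps no other disk
    (minimum-image convention in a periodic L x L box, disk diameter sigma)."""
    for j, (xj, yj) in enumerate(disks):
        if j == i:
            continue
        dx = (x - xj + L / 2) % L - L / 2
        dy = (y - yj + L / 2) % L - L / 2
        if dx * dx + dy * dy < sigma * sigma:
            return False
    return True

def sweep(disks, sigma, L, delta, rng):
    """One Metropolis sweep of single-disk trial moves; for hard disks the
    acceptance rule is simply 'no overlap', which preserves detailed balance."""
    accepted = 0
    for i in range(len(disks)):
        x, y = disks[i]
        xt = (x + rng.uniform(-delta, delta)) % L
        yt = (y + rng.uniform(-delta, delta)) % L
        if no_overlap(disks, i, xt, yt, sigma, L):
            disks[i] = (xt, yt)
            accepted += 1
    return accepted
```

Starting from a dilute lattice and sweeping repeatedly gives a valid (if slow) equilibrium sampler; the paper's contribution is doing this at scale on a GPU without violating the balance conditions.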
Ibrahim, Ahmad M; Wilson, P.; Sawan, M.; Mosher, Scott W; Peplow, Douglas E.; Grove, Robert E
2013-01-01
Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class supercomputer.
Mayers, Matthew Z.; Berkelbach, Timothy C.; Hybertsen, Mark S.; Reichman, David R.
2015-10-09
Ground-state diffusion Monte Carlo is used to investigate the binding energies and intercarrier radial probability distributions of excitons, trions, and biexcitons in a variety of two-dimensional transition-metal dichalcogenide materials. We compare these results to approximate variational calculations, as well as to analogous Monte Carlo calculations performed with simplified carrier interaction potentials. Our results highlight the successes and failures of approximate approaches as well as the physical features that determine the stability of small carrier complexes in monolayer transition-metal dichalcogenide materials. In conclusion, we discuss points of agreement and disagreement with recent experiments.
Pilati, S.; Giorgini, S.; Sakkos, K.; Boronat, J.; Casulleras, J.
2006-10-15
By using exact path-integral Monte Carlo methods we calculate the equation of state of an interacting Bose gas as a function of temperature both below and above the superfluid transition. The universal character of the equation of state for dilute systems and low temperatures is investigated by modeling the interatomic interactions using different repulsive potentials corresponding to the same s-wave scattering length. The results obtained for the energy and the pressure are compared to the virial expansion for temperatures larger than the critical temperature. At very low temperatures we find agreement with the ground-state energy calculated using the diffusion Monte Carlo method.
Nakano, Y.; Yamazaki, A.; Watanabe, K.; Uritani, A.; Ogawa, K.; Isobe, M.
2014-11-15
Neutron monitoring is important for managing the safety of fusion experiment facilities because neutrons are generated in fusion reactions. Monte Carlo simulations play an important role in evaluating the influence of neutron scattering from various structures and in correcting differences between deuterium plasma experiments and in situ calibration experiments. We evaluated these influences based on differences between the two experiments at the Large Helical Device using the Monte Carlo simulation code MCNP5. The difference between the two experiments in the absolute detection efficiency of the fission chamber between O-ports is estimated to be the largest of all monitors. We additionally evaluated correction coefficients for some neutron monitors.
Le Roy, Robert J.
Path-integral Monte Carlo simulation of nu3 vibrational shifts for CO2 in (He)N clusters critically tests the He-CO2 potential energy surface. Hui Li; Nicholas Blinov; Pierre-Nicholas Roy. Accepted 20 February 2009; published online 9 April 2009.
Glyde, Henry R.
Natural orbitals and Bose-Einstein condensates in traps: A diffusion Monte Carlo analysis. The one-body density matrix is calculated using diffusion Monte Carlo methods and diagonalized to obtain the "natural" single-particle orbitals.
Alfè, Dario
Hydrogen dissociation on Mg(0001) studied via quantum Monte Carlo calculations. M. Pozzo and D. Alfè. Diffusion Monte Carlo (DMC) simulations are used to calculate the energy barrier for H2 dissociation; the kinetics of hydrogen intake by Mg is quite slow because of a relatively large energy barrier.
Auxiliary field Monte-Carlo simulation of strong coupling lattice QCD for QCD phase diagram
Terukazu Ichihara; Akira Ohnishi; Takashi Z. Nakano
2014-10-07
We study the QCD phase diagram in the strong coupling limit with fluctuation effects by using the auxiliary field Monte-Carlo method. We apply the chiral angle fixing technique in order to obtain a finite chiral condensate in the chiral limit in finite volume. The behavior of the order parameters suggests that the chiral phase transition is second order or a crossover at low chemical potential and first order at high chemical potential. Compared with the mean-field results, the hadronic phase is suppressed at low chemical potential and extended at high chemical potential, as already suggested by the monomer-dimer-polymer simulations. We find that the sign problem originating from the bosonization procedure is weakened by a phase-cancellation mechanism: a complex phase from one site tends to be canceled by the phase of the nearest-neighbor site as long as low-momentum auxiliary-field contributions dominate.
Hybrid Monte-Carlo simulation of interacting tight-binding model of graphene
Dominik Smith; Lorenz von Smekal
2013-11-05
In this work we present results of Hybrid-Monte-Carlo simulations of the tight-binding Hamiltonian of graphene, coupled to an instantaneous long-range two-body potential which is modeled by a Hubbard-Stratonovich auxiliary field. We investigate the spontaneous breaking of the sublattice symmetry, which corresponds to a phase transition from a conducting to an insulating phase and which occurs when the effective fine-structure constant $\\alpha$ of the system crosses above a certain threshold $\\alpha_C$. Qualitative comparisons to earlier works on the subject (which used larger system sizes and higher statistics) are made, and it is established that $\\alpha_C$ is of a plausible magnitude in our simulations. We also discuss differences between simulations using compact and non-compact variants of the Hubbard field and present a quantitative comparison of distinct discretization schemes for the Euclidean time-like dimension in the fermion operator.
Size and habit evolution of PETN crystals - a lattice Monte Carlo study
Zepeda-Ruiz, L A; Maiti, A; Gee, R; Gilmer, G H; Weeks, B
2006-02-28
Starting from an accurate inter-atomic potential, we develop a simple scheme for generating an "on-lattice" molecular potential of short range, which is then incorporated into a lattice Monte Carlo code for simulating the size and shape evolution of nanocrystallites. As a specific example, we test this procedure on the morphological evolution of a molecular crystal of interest to us, Pentaerythritol Tetranitrate (PETN), and obtain realistic faceted structures in excellent agreement with experimental morphologies. We investigate several interesting effects, including the evolution of the initial shape of a "seed" to an equilibrium configuration and the variation of growth morphology as a function of the rate of particle addition relative to diffusion.
Billion-atom synchronous parallel kinetic Monte Carlo simulations of critical 3D Ising systems
Martinez, E.; Monasterio, P.R.; Marian, J.
2011-02-20
An extension of the synchronous parallel kinetic Monte Carlo (spkMC) algorithm developed by Martinez et al. [J. Comp. Phys. 227 (2008) 3804] to discrete lattices is presented. The method solves the master equation synchronously by recourse to null events that keep all processors' time clocks current in a global sense. Boundary conflicts are resolved by adopting a chessboard decomposition into non-interacting sublattices. We find that the bias introduced by the spatial correlations attendant to the sublattice decomposition is within the standard deviation of serial calculations, which confirms the statistical validity of our algorithm. We have analyzed the parallel efficiency of spkMC and find that it scales consistently with problem size and sublattice partition. We apply the method to the calculation of scale-dependent critical exponents in billion-atom 3D Ising systems, with very good agreement with state-of-the-art multispin simulations.
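The chessboard decomposition mentioned above works because sites of one color on a square lattice share no bonds with each other, so they can be updated independently. A hedged serial sketch of a checkerboard Metropolis sweep for a 2D Ising model (illustrative only; not the spkMC algorithm itself, which additionally handles continuous-time kinetics and synchronized processor clocks):

```python
import math
import random

def checkerboard_sweep(spins, beta, rng):
    """Metropolis sweep of a periodic 2D Ising model (J = 1), updating one
    sublattice color at a time; same-color sites share no bonds, so their
    updates are mutually independent and could run in parallel without
    conflicts."""
    n = len(spins)
    for color in (0, 1):
        for i in range(n):
            for j in range(n):
                if (i + j) % 2 != color:
                    continue
                s = spins[i][j]
                nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
                      + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
                dE = 2.0 * s * nb  # energy cost of flipping spin (i, j)
                if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                    spins[i][j] = -s

def magnetization(spins):
    """Total magnetization of the configuration."""
    return sum(sum(row) for row in spins)
```

The two-color pass is the serial analogue of assigning each sublattice to a different processor; boundary conflicts cannot occur because no two simultaneously updated sites interact.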
ASCOT: redesigned Monte Carlo code for simulations of minority species in tokamak plasmas
Hirvijoki, Eero; Koskela, Tuomas; Kurki-Suonio, Taina; Miettunen, Juho; Sipilä, Seppo; Snicker, Antti; Äkäslompolo, Simppa
2013-01-01
A comprehensive description of methods for Monte Carlo studies of fast ions and impurity species in tokamak plasmas is presented. The described methods include Hamiltonian orbit-following in particle and guiding-center phase space, test-particle or guiding-center solution of the kinetic equation applying stochastic differential equations in the presence of Coulomb collisions, neoclassical tearing modes and Alfvén eigenmodes as electromagnetic perturbations relevant for fast ions, together with plasma flow and atomic reactions relevant for impurity studies. Applying these methods, a complete reimplementation of a well-established minority-species code is carried out in response both to the increase in computing power during the last twenty years and to the weakly structured growth of the previous code, which had made implementation of additional models impractical. A thorough benchmark between the previous code and the reimplementation is also accomplished, showing good agreement between the codes.
Ab-initio molecular dynamics simulation of liquid water by Quantum Monte Carlo
Andrea Zen; Ye Luo; Guglielmo Mazzola; Leonardo Guidoni; Sandro Sorella
2015-04-21
Although liquid water is ubiquitous in the chemical reactions at the roots of life and of the climate on Earth, the prediction of its properties by high-level ab initio molecular dynamics simulations still represents a formidable task for quantum chemistry. In this article we present a room-temperature simulation of liquid water based on the potential energy surface obtained from a many-body wave function through quantum Monte Carlo (QMC) methods. The simulated properties are in good agreement with recent neutron-scattering and X-ray experiments, particularly concerning the position of the oxygen-oxygen peak in the radial distribution function, at variance with previous Density Functional Theory attempts. Given the excellent performance of QMC on large-scale supercomputers, this work opens new perspectives for predictive and reliable ab initio simulations of complex chemical systems.
Resonating Valence Bond Quantum Monte Carlo: Application to the ozone molecule
Azadi, Sam; Kühne, Thomas D
2015-01-01
We study the potential energy surface of the ozone molecule by means of Quantum Monte Carlo simulations based on the resonating valence bond concept. The trial wave function consists of an antisymmetrized geminal power arranged in a single determinant that is multiplied by a Jastrow correlation factor. Whereas the determinantal part incorporates static correlation effects, the augmented real-space correlation factor accounts for the dynamic electron correlation. The accuracy of this approach is demonstrated by computing the potential energy surface for the ozone molecule in three vibrational states: symmetric, asymmetric and scissoring. We find that the employed wave function provides a detailed description of rather strongly correlated multi-reference systems, which is in quantitative agreement with experiment.
Resonating Valence Bond Quantum Monte Carlo: Application to the ozone molecule
Sam Azadi; Ranber Singh; Thomas D. Kühne
2015-02-24
We study the potential energy surface of the ozone molecule by means of Quantum Monte Carlo simulations based on the resonating valence bond concept. The trial wave function consists of an antisymmetrized geminal power arranged in a single determinant that is multiplied by a Jastrow correlation factor. Whereas the determinantal part incorporates static correlation effects, the augmented real-space correlation factor accounts for the dynamic electron correlation. The accuracy of this approach is demonstrated by computing the potential energy surface for the ozone molecule in three vibrational states: symmetric, asymmetric and scissoring. We find that the employed wave function provides a detailed description of rather strongly correlated multi-reference systems, which is in quantitative agreement with experiment.
Clay, Raymond C.; Mcminis, Jeremy; McMahon, Jeffrey M.; Pierleoni, Carlo; Ceperley, David M.; Morales, Miguel A.
2014-05-01
The ab initio phase diagram of dense hydrogen is very sensitive to errors in the treatment of electronic correlation. Recently, it has been shown that the choice of the density functional has a large effect on the predicted location of both the liquid-liquid phase transition and the solid insulator-to-metal transition in dense hydrogen. To identify the most accurate functional for dense hydrogen applications, we systematically benchmark some of the most commonly used functionals using quantum Monte Carlo. By considering several measures of functional accuracy, we conclude that the van der Waals and hybrid functionals significantly outperform local density approximation and Perdew-Burke-Ernzerhof. We support these conclusions by analyzing the impact of functional choice on structural optimization in the molecular solid, and on the location of the liquid-liquid phase transition.
Pethes, Ildikó
2015-01-01
Although liquid water has been studied for many decades by (X-ray and neutron) diffraction measurements, new experimental results keep appearing, virtually every year. The reason is that neither X-ray nor neutron diffraction data are trivial to correct and interpret for this essential substance. Since X-rays are rather insensitive to hydrogen, neutron diffraction with (most frequently H/D) isotopic substitution is vital for investigating the most important feature of water: hydrogen bonding. Here, two very recent sets of neutron diffraction data are considered, both exploiting the contrast between light and heavy hydrogen, $^1$H and $^2$H, in different ways. Reverse Monte Carlo structural modeling is applied to construct large structural models that are as consistent as possible with all experimental information, in both real and reciprocal space. The method has also proven useful for revealing where small inconsistencies may appear during primary data processing: for one neutr...
Application analysis of Monte Carlo to estimate the capacity of geothermal resources in Lawu Mount
Supriyadi, E-mail: supriyadi-uno@yahoo.co.nz [Physics, Faculty of Mathematics and Natural Sciences, University of Jember, Jl. Kalimantan Kampus Bumi Tegal Boto, Jember 68181 (Indonesia); Srigutomo, Wahyu [Complex system and earth physics, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Munandar, Arif [Kelompok Program Penelitian Panas Bumi, PSDG, Badan Geologi, Kementrian ESDM, Jl. Soekarno Hatta No. 444 Bandung 40254 (Indonesia)
2014-03-24
Monte Carlo analysis has been applied to the calculation of geothermal resource capacity based on the volumetric method issued by Standar Nasional Indonesia (SNI). A deterministic formula is converted into a stochastic one to take into account the uncertainties in the input parameters. The method yields a probability distribution of the potential power stored beneath the Lawu Mount geothermal area. For 10,000 iterations, the capacity of the geothermal resource lies in the range 139.30-218.24 MWe, with a most likely value of 177.77 MWe. The risk of the resource capacity exceeding 196.19 MWe is less than 10%. The power density of the prospect area covering 17 km² is 9.41 MWe/km² with probability 80%.
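The stochastic volumetric approach can be illustrated with a toy Monte Carlo: draw the uncertain inputs from assumed distributions, multiply, and read percentiles off the sorted results. The distributions and parameter values below are hypothetical placeholders, not the SNI inputs used in the study:

```python
import random

def simulate_capacity(n_iter, rng):
    """Stochastic volumetric estimate: capacity = area * power density, with
    both inputs drawn from illustrative triangular distributions (placeholder
    ranges, NOT the study's actual SNI parameters)."""
    results = []
    for _ in range(n_iter):
        area = rng.triangular(15.0, 19.0, 17.0)   # km^2 (assumed range)
        density = rng.triangular(7.0, 12.0, 9.4)  # MWe/km^2 (assumed range)
        results.append(area * density)
    return sorted(results)

def percentile(sorted_vals, p):
    """Nearest-rank percentile of an already-sorted sample."""
    k = min(len(sorted_vals) - 1, int(p / 100.0 * len(sorted_vals)))
    return sorted_vals[k]
```

Reading, say, the 10th and 90th percentiles off the sorted sample gives the kind of probabilistic capacity range the abstract reports.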
A new time quantifiable Monte Carlo method in simulating magnetization reversal process
X. Z. Cheng; M. B. A. Jalil; H. K. Lee; Y. Okabe
2005-04-14
We propose a new time-quantifiable Monte Carlo (MC) method to simulate thermally induced magnetization reversal for an isolated single-domain particle system. The MC method involves the determination of the density of states and the use of the master equation for time evolution. We derive an analytical factor to convert MC steps into real time intervals. Unlike a previous time-quantified MC method, our method is readily scalable to arbitrarily long time scales and can be repeated for different temperatures with minimal computational effort. Based on the conversion factor, we are able to make a direct comparison between the results obtained from MC and Langevin dynamics methods, and find excellent agreement between them. An analytical formula for the magnetization reversal time is also derived, which agrees very well with both numerical Langevin and time-quantified MC results over a large temperature range and for parallel and oblique easy-axis orientations.
Update of the MCSANC Monte Carlo Integrator, v.1.20
A. Arbuzov; D. Bardin; S. Bondarenko; P. Christova; L. Kalinovskaya; U. Klein; V. Kolesnikov; R. Sadykov; A. Sapronov; F. Uskov
2015-09-10
This article presents new features of the MCSANC v.1.20 program, a Monte Carlo tool for calculation of the next-to-leading order electroweak and QCD corrections to various Standard Model processes. The extensions concern implementation of Drell--Yan-like processes and include a systematic treatment of the photon-induced contribution in proton--proton collisions and electroweak corrections beyond NLO approximation. There are also technical improvements such as calculation of the forward-backward asymmetry for the neutral current Drell--Yan process. The updated code is suitable for studies of the effects due to EW and QCD radiative corrections to Drell--Yan (and several other) processes at the LHC and for forthcoming high energy proton--proton colliders.
Monte-Carlo study of quasiparticle dispersion relation in monolayer graphene
P. V. Buividovich
2013-01-07
The density of electronic one-particle states in monolayer graphene is studied by performing Hybrid Monte-Carlo simulations of the tight-binding model for electrons on the pi orbitals of the carbon atoms that make up the graphene lattice. The density of states is approximated as the derivative of the number of particles with respect to the chemical potential at sufficiently small temperature. Simulations are performed in the partially quenched approximation, in which virtual particles and holes have zero chemical potential. It is found that the Van Hove singularity becomes much sharper than in the free tight-binding model. Simulation results also suggest that the Fermi velocity increases with interaction strength up to the transition to the phase with spontaneously broken chiral symmetry.
Monte Carlo simulation of the experiment MAMBO I and possible correction of neutron lifetime result
A. P. Serebrov; A. K. Fomin
2009-04-14
We discuss the present situation with neutron lifetime measurements. There is a serious discrepancy between the previous experiments and the recent precise experiment [1]. A possible reason for the discrepancy may be quasi-elastic scattering of UCN on the surface of the liquid fomblin that was used in most of the previous experiments. A Monte Carlo simulation of one of the previous experiments [2] shows that its result has to be corrected: instead of the previous value of 887.6 +/- 3 s, a new value of 880.4 +/- 3 s has to be claimed. [1] A.P. Serebrov et al., Phys. Lett. B 605 (2005) 72. [2] W. Mampe et al., Phys. Rev. Lett. 63 (1989) 593.
Validation of the Monte Carlo Criticality Program KENO V.a for highly-enriched uranium systems
Knight, J.R.
1984-11-01
A series of calculations based on critical experiments have been performed using the KENO V.a Monte Carlo Criticality Program for the purpose of validating KENO V.a for use in evaluating Y-12 Plant criticality problems. The experiments were reflected and unreflected systems of single units and arrays containing highly enriched uranium metal or uranium compounds. Various geometrical shapes were used in the experiments. The SCALE control module CSAS25 with the 27-group ENDF/B-4 cross-section library was used to perform the calculations. Some of the experiments were also calculated using the 16-group Hansen-Roach Library. Results are presented in a series of tables and discussed. Results show that the criteria established for the safe application of the KENO IV program may also be used for KENO V.a results.
The tau leptons theory and experimental data: Monte Carlo, fits, software and systematic errors
Zbigniew Was
2014-12-09
The status of the tau lepton decay Monte Carlo generator TAUOLA is reviewed. Recent efforts on the development of new hadronic currents are presented. A multitude of new channels for anomalous tau decay modes, and a parametrization based on the defaults used by the BaBar collaboration, are introduced. A parametrization based on theoretical considerations is also presented as an alternative. Lessons from comparisons and fits to the BaBar and Belle data are recalled. It was found that, as in the past, in particular at the time of comparisons with CLEO and ALEPH data, proper fitting to as detailed as possible a representation of the experimental data is essential for appropriate development of models of tau decays. In the latter part of the presentation, the use of the TAUOLA program for the phenomenology of W, Z, and H decays at the LHC is addressed. Some new results relevant for QED bremsstrahlung in such decays are presented as well.
MaGe - a Geant4-based Monte Carlo framework for low-background experiments
Yuen-Dat Chan; Jason A. Detwiler; Reyco Henning; Victor M. Gehman; Rob A. Johnson; David V. Jordan; Kareem Kazkaz; Markus Knapp; Kevin Kroninger; Daniel Lenz; Jing Liu; Xiang Liu; Michael G. Marino; Akbar Mokhtarani; Luciano Pandola; Alexis G. Schubert; Claudia Tomei
2008-02-06
A Monte Carlo framework, MaGe, has been developed based on the Geant4 simulation toolkit. Its purpose is to simulate physics processes in low-energy and low-background radiation detectors, specifically for the Majorana and Gerda $^{76}$Ge neutrinoless double-beta decay experiments. This jointly-developed tool is also used to verify the simulation of physics processes relevant to other low-background experiments in Geant4. The MaGe framework contains simulations of prototype experiments and test stands, and is easily extended to incorporate new geometries and configurations while still using the same verified physics processes, tunings, and code framework. This reduces duplication of efforts and improves the robustness of and confidence in the simulation output.
Monte Carlo Neutrino Transport Through Remnant Disks from Neutron Star Mergers
Richers, S; O'Connor, Evan; Fernandez, Rodrigo; Ott, Christian
2015-01-01
We present Sedonu, a new open-source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two-dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the cases of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45 degrees from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentiall...
Ground State Calculations of Confined Hydrogen Molecule H_2 Using Variational Monte Carlo Method
Doma, S B; Amer, A A
2015-01-01
The variational Monte Carlo method is used to evaluate the ground-state energy of the confined hydrogen molecule, H_2. Accordingly, we considered the case of hydrogen molecule confined by a hard prolate spheroidal cavity when the nuclear positions are clamped at the foci (on-focus case). Also, the case of off-focus nuclei in which the two nuclei are not clamped to the foci is studied. This case provides flexibility for the treatment of the molecular properties by selecting an arbitrary size and shape of the confining spheroidal box. An accurate trial wave function depending on many variational parameters is used for this purpose. The obtained results are in good agreement with the most recent results.
Introduction to Computational Physics and Monte Carlo Simulations of Matrix Field Theory
Ydri, Badis
2015-01-01
This book is divided into two parts. In the first part we give an elementary introduction to computational physics consisting of 21 simulations which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories, which involve finite-dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of t...
WEB Portal for Monte Carlo Simulations in High Energy Physics - HEPWEB
E. I. Alexandrov; V. M. Kotov; V. V. Uzhinsky; P. V. Zrelov
2012-08-31
The WEB portal HepWeb allows users to perform the most popular calculations in high energy physics: calculations of hadron-hadron, hadron-nucleus and nucleus-nucleus interaction cross sections, as well as calculations of secondary-particle characteristics in these interactions using Monte Carlo event generators. The list of generators includes the Dubna version of the intra-nuclear cascade model (CASCADE), the FRITIOF model, the ultra-relativistic quantum molecular dynamics model (UrQMD), the HIJING model, and the AMPT model. Setting the properties of the colliding particles/nuclei (collision energy, mass numbers and charges of the nuclei, impact parameters of the interactions, and number of generated events) is realized through a WEB interface. A query is processed by a server, and the results are presented to the user as a WEB page. Short descriptions of the installed generators, the WEB interface implementation and the server operation are given.
Combining Stochastics and Analytics for a Fast Monte Carlo Decay Chain Generator
Kareem Kazkaz; Nick Walsh
2011-04-14
Various Monte Carlo programs, developed either by small groups or widely available, have been used to calculate the effects of decays of radioactive chains, from the original parent nucleus to the final stable isotopes. These chains include uranium, thorium, radon, and others, and generally have long-lived parent nuclei. Generating decays within these chains requires a certain amount of computing overhead related to simulating unnecessary decays, time-ordering the final results in post-processing, or both. We present a combined analytic/stochastic algorithm for creating a time-ordered set of decays with position and time correlations, starting from an arbitrary source age. The simulation costs are thus greatly reduced, while chronological post-processing is avoided at the same time. We discuss optimization methods within the approach to minimize calculation time.
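A minimal version of the stochastic half of such a scheme samples each step of the chain by inverse-transform sampling of the exponential survival law; because the sampled times accumulate down the chain, the output is time-ordered by construction and needs no chronological post-processing. The chain data format and function name here are hypothetical, not the paper's actual interface:

```python
import math
import random

def decay_chain_events(chain, rng, start_time=0.0):
    """Sample one realization of a sequential decay chain. `chain` is a list
    of (isotope_name, mean_lifetime) pairs (hypothetical input format). Each
    step's waiting time is drawn by inverse-transform sampling of the
    exponential law, t = -tau * ln(1 - u), and times accumulate, so the
    returned list of (time, isotope) events is time-ordered by construction."""
    t = start_time
    events = []
    for isotope, tau in chain:
        t += -tau * math.log(1.0 - rng.random())  # exponential waiting time
        events.append((t, isotope))
    return events
```

Passing a nonzero `start_time` corresponds to the arbitrary source age mentioned in the abstract; a full implementation would also handle branching ratios and secular equilibrium analytically.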
Thomas, Robert E; Overy, Catherine; Knowles, Peter J; Alavi, Ali; Booth, George H
2015-01-01
Unbiased stochastic sampling of the one- and two-body reduced density matrices is achieved in full configuration interaction quantum Monte Carlo with the introduction of a second, "replica" ensemble of walkers, whose population evolves in imaginary time independently from the first and which entails only modest additional computational overhead. The matrices obtained from this approach are shown to be of full configuration-interaction quality, and hence provide a realistic opportunity to achieve high-quality results for a range of properties whose operators do not necessarily commute with the Hamiltonian. With a density-matrix-formulated quasi-variational energy estimator having already been proposed and investigated, the present work extends the scope of the theory to studies of analytic nuclear forces, molecular dipole moments and polarisabilities, with extensive comparison to exact results where possible. These new results confirm the suitability of the sampling technique and, where suf...
I. B. Bischofs; U. S. Schwarz
2006-01-16
Compliant environments can mediate interactions between mechanically active cells like fibroblasts. Starting with a phenomenological model for the behaviour of single cells, we use extensive Monte Carlo simulations to predict non-trivial structure formation for cell communities on soft elastic substrates as a function of elastic moduli, cell density, noise and cell position geometry. In general, we find a disordered structure as well as ordered string-like and ring-like structures. The transition between ordered and disordered structures is controlled both by cell density and noise level, while the transition between string- and ring-like ordered structures is controlled by the Poisson ratio. Similar effects are observed in three dimensions. Our results suggest that in regard to elastic effects, healthy connective tissue usually is in a macroscopically disordered state, but can be switched to a macroscopically ordered state by appropriate parameter variations, in a way that is reminiscent of wound contraction or diseased states like contracture.
Monte Carlo study of Lefschetz thimble structure in one-dimensional Thirring model at finite density
Fujii, Hirotsugu; Kikukawa, Yoshio
2015-01-01
We consider the one-dimensional massive Thirring model formulated on the lattice with staggered fermions and an auxiliary compact vector (link) field. The model is exactly solvable and shows a phase transition as the fermion-number chemical potential increases: a crossover at finite temperature and a first-order transition at zero temperature. We complexify its path integration onto Lefschetz thimbles and examine the phase transition by hybrid Monte Carlo simulations on the single dominant thimble. We observe a discrepancy between the numerical and exact results in the crossover region for small inverse coupling $\beta$ and/or large lattice size $L$, while they are in good agreement in the lower- and higher-density regions. We also observe that the discrepancy persists in the continuum limit at fixed finite temperature and becomes more significant toward the low-temperature limit. This numerical result is consistent with our analytical study of the model's thimble structure. And these results imply...
A bottom collider vertex detector design, Monte-Carlo simulation and analysis package
Lebrun, P.
1990-10-01
A detailed simulation of the BCD vertex detector is underway. Specifications and global design issues are briefly reviewed. The BCD design, based on double-sided strip detectors, is described in more detail. The GEANT3-based Monte-Carlo program and the analysis package used to estimate detector performance are discussed in detail. The current status of the expected resolution and signal-to-noise ratio for the "golden" CP-violating mode B_d → π⁺π⁻ is presented. These calculations have been done at FNAL energy (√s = 2.0 TeV). Emphasis is placed on design issues, analysis techniques and related software rather than physics potential. 20 refs., 46 figs.
Monte Carlo simulation of O(2) phi^4 field theory in three dimensions
Peter Arnold; Guy D. Moore
2003-07-24
Using standard numerical Monte Carlo lattice methods, we study non-universal properties of the phase transition of three-dimensional phi^4 theory of a 2-component real field phi = (phi_1, phi_2) with O(2) symmetry. Specifically, we extract the renormalized values of ⟨phi^2⟩/u and r/u^2 at the phase transition, where the continuum action of the theory is \int d^3x [ (1/2)|\nabla\phi|^2 + (1/2) r \phi^2 + (u/4!) \phi^4 ]. These values have applications to calculating the phase-transition temperature of dilute or weakly interacting Bose gases (both relativistic and non-relativistic). In passing, we also provide perturbative calculations of various O(a) lattice-spacing errors in three-dimensional O(N) scalar field theory, where a is the lattice spacing.
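The "standard numerical Monte Carlo lattice methods" invoked here reduce, in their simplest form, to a Metropolis sweep over the lattice sites. The following is a minimal illustrative sketch of such a sweep for the O(2) phi^4 action on a periodic 3D lattice, not the authors' production code; all names and parameter values are our assumptions.

```python
import math
import random

def metropolis_phi4_o2(L=4, r=-0.5, u=6.0, sweeps=20, step=0.5, rng=random):
    """Metropolis sketch for 3D O(2) phi^4 with the naive lattice action
    S = sum_x [ (1/2)|grad phi|^2 + (1/2) r phi^2 + (u/4!) (phi^2)^2 ],
    gradients discretized as nearest-neighbour differences (lattice units).
    Returns the volume-averaged phi^2 and the Metropolis acceptance rate."""
    N = L ** 3
    phi = [[0.0, 0.0] for _ in range(N)]  # cold start

    def neighbours(x):
        # the six nearest neighbours of site x on a periodic L^3 lattice
        i, j, k = x % L, (x // L) % L, x // (L * L)
        for di, dj, dk in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            yield ((i + di) % L) + ((j + dj) % L) * L + ((k + dk) % L) * L * L

    def local_action(x, f):
        # the part of S that changes when site x takes the value f
        phi2 = f[0] ** 2 + f[1] ** 2
        s = 0.5 * r * phi2 + (u / 24.0) * phi2 ** 2
        for y in neighbours(x):
            s += 0.5 * ((f[0] - phi[y][0]) ** 2 + (f[1] - phi[y][1]) ** 2)
        return s

    accepted = 0
    for _ in range(sweeps):
        for x in range(N):
            old = phi[x]
            new = [old[0] + rng.uniform(-step, step),
                   old[1] + rng.uniform(-step, step)]
            dS = local_action(x, new) - local_action(x, old)
            if dS <= 0.0 or rng.random() < math.exp(-dS):
                phi[x] = new
                accepted += 1
    phi2_avg = sum(f[0] ** 2 + f[1] ** 2 for f in phi) / N
    return phi2_avg, accepted / (sweeps * N)
```

A production study like the one above would add cluster or overrelaxation updates, thermalization cuts, and careful continuum/infinite-volume extrapolations; the sketch only shows the elementary update.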
Stimuli-responsive brushes with active minority components: Monte Carlo study and analytical theory
Shuanhu Qi; Leonid I. Klushin; Alexander M. Skvortsov; Alexey A. Polotsky; Friederike Schmid
2015-05-07
Using a combination of analytical theory, Monte Carlo simulations, and three dimensional self-consistent field calculations, we study the equilibrium properties and the switching behavior of adsorption-active polymer chains included in a homopolymer brush. The switching transition is driven by a conformational change of a small fraction of minority chains, which are attracted by the substrate. Depending on the strength of the attractive interaction, the minority chains assume one of two states: An exposed state characterized by a stem-crown-like conformation, and an adsorbed state characterized by a flat two-dimensional structure. Comparing the Monte Carlo simulations, which use an Edwards-type Hamiltonian with density dependent interactions, with the predictions from self-consistent-field theory based on the same Hamiltonian, we find that thermal density fluctuations affect the system in two different ways. First, they renormalize the excluded volume interaction parameter $v_\\mathrm{\\tiny bare}$ inside the brush. The properties of the brushes can be reproduced by self-consistent field theory if one replaces $v_\\mathrm{\\tiny bare}$ by an effective parameter $v_{\\mathrm{\\tiny eff}}$, where the ratio of second virial coefficients $B_{\\mathrm{\\tiny eff}}/B_\\mathrm{\\tiny bare}$ depends on the range of monomer interactions, but not on the grafting density, the chain length, and $v_\\mathrm{\\tiny bare}$. Second, density fluctuations affect the conformations of chains at the brush surface and have a favorable effect on the characteristics of the switching transition: In the interesting regime where the transition is sharp, they reduce the free energy barrier between the two states significantly. The scaling behavior of various quantities is also analyzed and compared with analytical predictions.
Sci—Thur AM: YIS - 04: Gold Nanoparticle Enhanced Arc Radiotherapy: A Monte Carlo Feasibility Study
Koger, B; Kirkby, C
2014-08-15
Introduction: The use of gold nanoparticles (GNPs) in radiotherapy has shown promise for therapeutic enhancement. In this study, we explore the feasibility of enhancing radiotherapy with GNPs in an arc-therapy context. We use Monte Carlo simulations to quantify the macroscopic dose-enhancement ratio (DER) and tumour-to-normal-tissue ratio (TNTR) as functions of photon energy over various tumour and body geometries. Methods: GNP-enhanced arc radiotherapy (GEART) was simulated using the PENELOPE Monte Carlo code and penEasy main program. We simulated 360° arc-therapy with monoenergetic photon energies of 50–1000 keV and several clinical spectra, used to treat a spherical tumour containing uniformly distributed GNPs in a cylindrical tissue phantom. Various geometries were used to simulate different tumour sizes and depths. Voxel dose was used to calculate DERs and TNTRs. Inhomogeneity effects were examined through skull dose in brain tumour treatment simulations. Results: Below 100 keV, DERs greater than 2.0 were observed. Compared to 6 MV, tumour dose at low energies was more conformal, with lower normal tissue dose and higher TNTRs. Both the DER and TNTR increased with increasing cylinder radius and decreasing tumour radius. The inclusion of bone showed excellent tumour conformality at low energies, though with an increase in skull dose (40% of tumour dose with 100 keV compared to 25% with 6 MV). Conclusions: Even in the presence of inhomogeneities, our results show promise for the treatment of deep-seated tumours with low-energy GEART, with greater tumour dose conformality and lower normal tissue dose than 6 MV.
Sunny, E. E.; Martin, W. R. [University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States)]
2013-07-01
Current Monte Carlo codes use one of three models for neutron scattering in the epithermal energy range, depending on the neutron energy and the specific code: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model. The free gas scattering model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei, whose scattering cross sections may have several resonances in the epithermal region. Several researchers in the field have shown that using the free gas scattering model in the vicinity of the resonances in the lower epithermal range can under-predict resonance absorption due to the up-scattering phenomenon. Existing methods all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame. In this paper, we present a new sampling methodology that (1) accounts for the energy-dependent scattering cross sections in the collision analysis and (2) acts in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on the fly during the random walk of the neutron. Results of criticality studies on fuel-pin and fuel-assembly calculations using these methods compare very closely with results using the reference Doppler-broadened rejection correction (DBRC) scheme. (authors)
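For contrast with the paper's method, the constant-cross-section free gas model it improves upon is commonly implemented with a textbook rejection scheme: draw a target velocity from a Maxwellian and accept it with probability proportional to the neutron-target relative speed. A hedged sketch (parameter names are ours; the constant-cross-section assumption is precisely what the paper relaxes):

```python
import math
import random

def sample_target_velocity(v_n, sigma, rng=random):
    """Rejection-sample a target speed V and direction cosine mu for
    free-gas scattering with a constant scattering cross section.

    v_n   : neutron speed.
    sigma : 1-D thermal speed spread of the target Maxwellian
            (sigma^2 proportional to kT/A in consistent units).

    Returns (V, mu) sampled proportional to v_rel * Maxwellian(V),
    where mu is the cosine between neutron and target velocities.
    """
    while True:
        # Maxwellian speed: magnitude of a 3-D isotropic Gaussian velocity
        V = math.sqrt(rng.gauss(0.0, sigma) ** 2 +
                      rng.gauss(0.0, sigma) ** 2 +
                      rng.gauss(0.0, sigma) ** 2)
        mu = rng.uniform(-1.0, 1.0)               # isotropic direction
        v_rel = math.sqrt(v_n * v_n + V * V - 2.0 * v_n * V * mu)
        # accept with probability v_rel / (v_n + V), which is <= 1
        if rng.random() < v_rel / (v_n + V):
            return V, mu
```

The energy-dependent, laboratory-frame sampling proposed in the paper replaces this relative-speed weighting with moments of the differential scattering PDFs, so no center-of-mass conversion is needed.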
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
Arampatzis, Georgios [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003]; Katsoulakis, Markos A.
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems, in particular lattice kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by constructing a strongly correlated ("coupled") stochastic process for the perturbed and unperturbed stochastic processes, defined on a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled continuous-time Markov chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by minimizing a functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm admits an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach.
We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
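The variance-reduction idea underlying such couplings can be illustrated with the simpler Common Random Number (CRN) baseline the paper compares against: run the perturbed and unperturbed simulations on paired random streams so that their difference, and hence the finite-difference sensitivity estimator, has low variance. A minimal sketch on a generic birth-death process (all names, rates, and parameters are our assumptions, not from the paper):

```python
import random

def simulate_population(birth, death, x0=10, T=1.0, rng=random):
    """Gillespie simulation of a birth-death process with constant birth
    rate and per-capita death rate; returns the population X(T)."""
    t, x = 0.0, x0
    while True:
        rate = birth + death * x
        if rate <= 0.0:
            return x
        t += rng.expovariate(rate)      # time to next event
        if t > T:
            return x
        if rng.random() < birth / rate:
            x += 1                      # birth event
        else:
            x -= 1                      # death event

def fd_sensitivity(birth, death, h=0.1, n=200, coupled=True):
    """Finite-difference estimate of d E[X(T)] / d(death rate).
    coupled=True pairs the nominal and perturbed runs on the same seed
    (Common Random Numbers); coupled=False uses independent seeds."""
    diffs = []
    for i in range(n):
        seed_a = i
        seed_b = i if coupled else n + i
        xa = simulate_population(birth, death, rng=random.Random(seed_a))
        xb = simulate_population(birth, death + h, rng=random.Random(seed_b))
        diffs.append((xb - xa) / h)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean, var
```

With coupling, most paired trajectories coincide until the perturbation first changes an event, so the difference estimator's variance drops sharply; the goal-oriented couplings of the paper push this further by optimizing the coupling for the chosen observable.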
Cluster expansion modeling and Monte Carlo simulation of alnico 5–7 permanent magnets
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Nguyen, Manh Cuong; Zhao, Xin; Wang, Cai-Zhuang; Ho, Kai-Ming
2015-03-05
Concerns about the supply of rare earth (RE) metals have generated a lot of interest in the search for high-performance RE-free permanent magnets. Alnico alloys are traditional non-RE permanent magnets and have received much attention recently due to their good performance at high temperature. In this paper, we develop an accurate and efficient cluster expansion energy model for alnico 5–7. Monte Carlo simulations using the cluster expansion method are performed to investigate the structure of alnico 5–7 at the atomistic and nano scales. The alnico 5–7 master alloy is found to decompose into FeCo-rich and NiAl-rich phases at low temperature. The boundary between these two phases is quite sharp (~2 nm) over a wide range of temperature. The compositions of the main constituents in these two phases become higher as the temperature is lowered. Both the FeCo-rich and NiAl-rich phases show B2 ordering, with Fe and Al on one sublattice and Ni and Co on the other. The degree of order of the NiAl-rich phase is much higher than that of the FeCo-rich phase. In addition, a small magnetic moment is observed in the NiAl-rich phase, but the moment is reduced as the temperature is lowered, implying that the magnetic properties of alnico 5–7 could be improved by lowering the annealing temperature to diminish the magnetism in the NiAl-rich phase. Furthermore, the results from our Monte Carlo simulations are consistent with available experimental results.
Neutrinos from WIMP Annihilations Obtained Using a Full Three-Flavor Monte Carlo Approach
Mattias Blennow; Joakim Edsjo; Tommy Ohlsson
2008-03-12
Weakly Interacting Massive Particles (WIMPs) are one of the main candidates for the dark matter in the Universe. If these particles make up the dark matter, then they can be captured by the Sun or the Earth, sink to the respective cores, annihilate, and produce neutrinos. Thus, these neutrinos can be a striking dark matter signature at neutrino telescopes looking towards the Sun and/or the Earth. Here, we improve previous analyses on computing the neutrino yields from WIMP annihilations in several respects. We include neutrino oscillations in a full three-flavor framework as well as all effects from neutrino interactions on the way through the Sun (absorption, energy loss, and regeneration from tau decays). In addition, we study the effects of non-zero values of the mixing angle $\\theta_{13}$ as well as the normal and inverted neutrino mass hierarchies. Our study is performed in an event-based setting which makes these results very useful both for theoretical analyses and for building a neutrino telescope Monte Carlo code. All our results for the neutrino yields, as well as our Monte Carlo code, are publicly available. We find that the yield of muon-type neutrinos from WIMP annihilations in the Sun is enhanced or suppressed, depending on the dominant WIMP annihilation channel. This effect is due to an effective flavor mixing caused by neutrino oscillations. For WIMP annihilations inside the Earth, the distance from source to detector is too small to allow for any significant amount of oscillations at the neutrino energies relevant for neutrino telescopes.
SU-E-T-578: MCEBRT, A Monte Carlo Code for External Beam Treatment Plan Verifications
Chibani, O; Ma, C; Eldib, A
2014-06-01
Purpose: To present a new Monte Carlo code (MCEBRT) for patient-specific dose calculations in external beam radiotherapy. The code's MLC model is benchmarked, and real patient plans are re-calculated using MCEBRT and compared with a commercial TPS. Methods: MCEBRT is based on the GEPTS system (Med. Phys. 29 (2002) 835–846). Phase space data generated for Varian linac photon beams (6–15 MV) are used as the source term. MCEBRT uses a realistic MLC model (tongue and groove, rounded ends). Patient CT and DICOM RT files are used to generate a 3D patient phantom and simulate the treatment configuration (gantry, collimator and couch angles; jaw positions; MLC sequences; MUs). MCEBRT dose distributions and DVHs are compared with those from the TPS in absolute terms (Gy). Results: Calculations based on the developed MLC model closely match transmission measurements (pin-point ionization chamber at selected positions and film for the lateral dose profile). See Fig. 1. Dose calculations for two clinical cases (whole brain irradiation with opposed beams and a lung case with eight fields) are carried out and the outcomes compared with the Eclipse AAA algorithm. Good agreement is observed for the brain case (Figs. 2-3) except at the surface, where the MCEBRT dose can be higher by 20%. This is due to better modeling of electron contamination by MCEBRT. For the lung case an overall good agreement (91% gamma index passing rate with 3%/3mm DTA criterion) is observed (Fig. 4), but dose in lung can be over-estimated by up to 10% by AAA (Fig. 5). CTV and PTV DVHs from the TPS and MCEBRT are nevertheless close (Fig. 6). Conclusion: A new Monte Carlo code is developed for plan verification. Contrary to phantom-based QA measurements, MCEBRT simulates the exact patient geometry and tissue composition. MCEBRT can be used as an extra verification layer for plans where surface dose and tissue heterogeneity are an issue.
SU-E-T-277: Raystation Electron Monte Carlo Commissioning and Clinical Implementation
Allen, C; Sansourekidou, P; Pavord, D
2014-06-01
Purpose: To evaluate the Raystation v4.0 Electron Monte Carlo algorithm for an Elekta Infinity linear accelerator and commission it for clinical use. Methods: A total of 199 tests were performed (75 Export and Documentation, 20 PDD, 30 Profiles, 4 Obliquity, 10 Inhomogeneity, 55 MU Accuracy, and 5 Grid and Particle History). Export and documentation tests were performed with respect to MOSAIQ (Elekta AB) and RadCalc (Lifeline Software Inc). Mechanical jaw parameters and cutout magnifications were verified. PDDs and profiles for open cones and cutouts were extracted and compared with water tank measurements. Obliquity and inhomogeneity calculations for bone and air were compared to film dosimetry. MU calculations for open cones and cutouts were performed and compared to both RadCalc and simple hand calculations. Grid size and particle histories were evaluated per energy for statistical uncertainty performance. Acceptability was categorized as follows: performs as expected, negligible impact on workflow, marginal impact, critical impact or safety concern, and catastrophic impact or safety concern. Results: Overall results are: 88.8% perform as expected, 10.2% negligible, 2.0% marginal, 0% critical and 0% catastrophic. Results per test category are as follows: Export and Documentation: 100% perform as expected; PDD: 100% perform as expected; Profiles: 66.7% perform as expected, 33.3% negligible; Obliquity: 100% marginal; Inhomogeneity: 50% perform as expected, 50% negligible; MU Accuracy: 100% perform as expected; Grid and Particle Histories: 100% negligible. To achieve distributions with a satisfactory smoothness level, 5,000,000 particle histories were used. Calculation time was approximately 1 hour. Conclusion: Raystation electron Monte Carlo is acceptable for clinical use. All of the issues encountered have acceptable workarounds. Known issues were reported to RaySearch and will be resolved in upcoming releases.
SU-E-T-344: Validation and Clinical Experience of Eclipse Electron Monte Carlo Algorithm (EMC)
Pokharel, S [21st Century Oncology, Fort Myers, FL (United States)]; Rana, S [Procure Proton Therapy Center, Oklahoma City, OK (United States)]
2014-06-01
Purpose: The purpose of this study is to validate the Eclipse Electron Monte Carlo (EMC) algorithm for routine clinical use. Methods: The PTW inhomogeneity phantom (T40037) with different combinations of heterogeneous slabs was CT-scanned with a Philips Brilliance 16-slice scanner. The phantom contains blocks of Rando Alderson materials mimicking lung, polystyrene (tissue), PTFE (bone) and PMMA. The phantom has a 30×30×2.5 cm base plate with 2 cm recesses to insert inhomogeneities. The detector systems used in this study are diodes, TLDs and Gafchromic EBT2 films. The diodes and TLDs were included in the CT scans. The CT sets were transferred to the Eclipse treatment planning system. Several plans were created with the Eclipse Monte Carlo (EMC) algorithm 11.0.21. Measurements were carried out on a Varian TrueBeam machine for energies from 6–22 MeV. Results: The measured and calculated doses agreed very well for tissue-like media. The agreement was reasonable in the presence of lung inhomogeneity. The point dose agreement was within 3.5%, and the gamma passing rate at 3%/3mm was greater than 93% except for 6 MeV (85%). The disagreement can reach as high as 10% in the presence of bone inhomogeneity. This is because Eclipse reports dose to the medium, as opposed to dose to water as in conventional calculation engines. Conclusion: Care must be taken when using the Varian Eclipse EMC algorithm for routine clinical dose calculation. The algorithm does not report dose to water, on which most clinical experience is based; rather, it reports dose to medium directly. In the presence of an inhomogeneity such as bone, the dose discrepancy can be as high as 10% or even more, depending on the location of the normalization point or volume. As radiation oncology is an empirical science, care must be taken before using EMC-reported monitor units for clinical use.
Vrugt, Jasper A.
...duce considerable uncertainty in the model parameters and predictions. This is in part due... increasingly popular for aquifer and reservoir characterization... parameter and model predictive statistical analysis of uncertainty [Kennedy and O'Hagan, 2001]... use Markov chain Monte Carlo (MCMC)...
Int. J. Mod. Phys. C (1999), accepted for publication. A Monte Carlo Study of the Specific Heat
Usadel, K. D.
1999-01-01
...is suppressed in the FC case. The specific heat shows a noncritical broad maximum above the transition, whereas our interpretation of the data is different. Keywords: critical-point effects, specific heats
Shuster, David L.
Abrupt changes in the rate of Andean Plateau uplift from reversible jump Markov chain Monte Carlo... 19 February 2015; Accepted 21 February 2015; Available online 1 March 2015. Keywords: Andean uplift... of surface uplift of the central Andean Plateau provides important boundary conditions for regional...
A Monte Carlo Based Analysis of Optimal Design Criteria
H. T. Banks, Kathleen J. Holm and Franz... compare a recent design criterion, SE-optimal design (standard error optimal design [8]), with the more... ([11, 14, 15, 22]). Since one has a number of different design criteria from which to choose...
Monte Carlo data-driven tight frame for seismic data
Shiwei Yu; Jianwei Ma; Stanley Osher
Ferguson, Thomas S.
...-DDTF), and tested the trained filter bank derived from this process by conducting seismic data denoising, one of the preprocessing steps in the seismic data processing chain. Methods to attenuate random noise can generally...
Morton, David
Encyclopedia of Optimization, C.A. Floudas & P.M. Pardalos (eds.), Kluwer, 2001. Monte Carlo Simulations for Stochastic Optimization. 1. Introduction: Many important real-world problems contain stochastic elements and require optimization. Stochastic programming and simulation-based optimization are two...
Monte Carlo study of the CO-poisoning dynamics in a model for the catalytic oxidation of CO
Marro, Joaquín
The poisoning dynamics of the Ziff-Gulari-Barshad model [Phys. Rev. Lett. 56, 2553 (1986)] are studied for a monomer absorbing state and close to the coexistence point. Analysis of the average poisoning time (τ_p) allows us...
Hiatt, Matthew Torgerson
2009-06-02
...links three external codes together to create these libraries. The code creates an MCNP (Monte Carlo N-Particle) model of the reactor and calculates the zone-averaged scalar flux in various tally regions and a core-averaged scalar flux tallied by energy...
Bendele, Travis Henry
2013-02-22
A honeycomb probe was designed to measure the optical properties of biological tissues using the single Monte Carlo method. The ongoing project is intended to be a multi-wavelength, real-time, in-vivo technique for detecting breast cancer. Preliminary...
Boas, David
Optics Letters, Vol. 26, No. 17, p. 1335 (September 1, 2001). Perturbation Monte Carlo methods to solve... with respect to perturbations in background tissue optical properties. We then feed this derivative information to a nonlinear optimization algorithm to determine the optical properties of the tissue heterogeneity under...
Tafreshi, Hooman Vahedi
Analytical Monte Carlo Ray Tracing simulation of radiative heat transfer through bimodal fibrous... steady-state radiative heat transfer through fibrous insulation materials. The simulations are conducted in 3-D disordered... radiation and conduction to be the only modes of heat transfer in fibrous insulation materials...
Crawford, John R.
Testing for Suspected Impairments and Dissociations in Single-Case Studies in Neuropsychology: Evaluation of Alternatives Using Monte Carlo Simulations and Revised Tests for Dissociations
John R. Crawford
...a patient is compared with a small control sample. Methods of testing for a deficit on Task X...
MO-G-BRF-09: Investigating Magnetic Field Dose Effects in Mice: A Monte Carlo Study
Rubinstein, A; Guindani, M; Followill, D; Melancon, A; Hazle, J; Court, L
2014-06-15
Purpose: In MRI-linac treatments, radiation dose distributions are affected by magnetic fields, especially at high-density/low-density interfaces. Radiobiological consequences of magnetic field dose effects are presently unknown; therefore, preclinical studies are needed to ensure the safe clinical use of MRI-linacs. This study investigates the optimal combination of beam energy and magnetic field strength needed for preclinical murine studies. Methods: The Monte Carlo code MCNP6 was used to simulate the effects of a magnetic field when irradiating a mouse-sized lung phantom with a 1.0 cm × 1.0 cm photon beam. Magnetic field effects were examined using various beam energies (225 kVp, 662 keV [Cs-137], and 1.25 MeV [Co-60]) and magnetic field strengths (0.75 T, 1.5 T, and 3 T). The resulting dose distributions were compared to Monte Carlo results for humans with various field sizes and patient geometries using a 6 MV/1.5 T MRI-linac. Results: In human simulations, the addition of a 1.5 T magnetic field caused an average dose increase of 49% (range: 36%–60%) to lung at the soft tissue-to-lung interface and an average dose decrease of 30% (range: 25%–36%) at the lung-to-soft tissue interface. In mouse simulations, the magnetic fields had no effect on the 225 kVp dose distribution. The dose increases for the Cs-137 beam were 12%, 33%, and 49% for 0.75 T, 1.5 T, and 3.0 T magnetic fields, respectively, while the dose decreases were 7%, 23%, and 33%. For the Co-60 beam, the dose increases were 14%, 45%, and 41%, and the dose decreases were 18%, 35%, and 35%. Conclusion: The magnetic field dose effects observed in mouse phantoms using a Co-60 beam with 1.5 T or 3 T fields and a Cs-137 beam with a 3 T field compare well with those seen in simulated human treatments with an MRI-linac. These irradiator/magnet combinations are suitable for preclinical studies investigating potential biological effects of delivering radiation therapy in the presence of a magnetic field. Partially funded by Elekta.
Monte Carlo simulation based study of a proposed multileaf collimator for a telecobalt machine
Sahani, G.; Dash Sharma, P. K.; Hussain, S. A.; Dutt Sharma, Sunil; Sharma, D. N.
2013-02-15
Purpose: The objective of the present work was to propose a design for a secondary multileaf collimator (MLC) for a telecobalt machine and optimize its design features through Monte Carlo simulation. Methods: The proposed MLC design consists of 72 leaves (36 leaf pairs) with additional jaws perpendicular to the leaf motion, capable of shaping a maximum square field size of 35 × 35 cm². The projected widths at isocenter of each of the central 34 leaf pairs and 2 peripheral leaf pairs are 10 and 5 mm, respectively. The ends of the leaves and the x-jaws were optimized to obtain acceptable values of dosimetric and leakage parameters. The Monte Carlo N-Particle code was used for generating beam profiles and depth dose curves and estimating the leakage radiation through the MLC. A water phantom of dimension 50 × 50 × 40 cm³ with an array of voxels (4 × 0.3 × 0.6 cm³ = 0.72 cm³) was used for the study of the dosimetric and leakage characteristics of the MLC. Output files generated for beam profiles were exported to the PTW radiation field analyzer software through locally developed software for analysis of beam profiles, in order to evaluate radiation field width, beam flatness, symmetry, and beam penumbra. Results: The optimized version of the MLC can define radiation fields of up to 35 × 35 cm² within the prescribed tolerance value of 2 mm. The flatness and symmetry were found to be well within the acceptable tolerance value of 3%. The penumbra for a 10 × 10 cm² field size is 10.7 mm, which is less than the generally acceptable value of 12 mm for a telecobalt machine. The maximum and average radiation leakage through the MLC were found to be 0.74% and 0.41%, well below the International Electrotechnical Commission recommended tolerance values of 2% and 0.75%, respectively.
The maximum leakage through the leaf ends in closed condition was observed to be 8.6% which is less than the values reported for other MLCs designed for medical linear accelerators. Conclusions: It is concluded that dosimetric parameters and the leakage radiation of the optimized secondary MLC design are well below their recommended tolerance values. The optimized design of the proposed MLC can be integrated into a telecobalt machine by replacing the existing adjustable secondary collimator for conformal radiotherapy treatment of cancer patients.
Monte Carlo inversion of hydrogen and metal lines from QSO absorption spectra
Sergei A. Levshakov; Irina I. Agafonova; Wilhelm H. Kegel
2000-03-06
A new method, based on the simulated annealing algorithm and aimed at the inverse problem in the analysis of intergalactic (interstellar) complex spectra of hydrogen and metal lines, is presented. We consider the process of line formation in clumpy stochastic media accounting for fluctuating velocity and density fields (mesoturbulence). This approach generalizes our previous Reverse Monte Carlo and Entropy-Regularized Minimization methods which were applied to velocity fluctuations only. The method allows one to estimate, from an observed system of spectral lines, both the physical parameters of the absorbing gas and appropriate structures of the velocity and density distributions along the line of sight. The validity of the computational procedure is demonstrated using a series of synthetic spectra that emulate the up-to-date best quality data. HI, CII, SiII, CIV, SiIV, and OVI lines, exhibiting complex profiles, were fitted simultaneously. The adopted physical parameters have been recovered with a sufficiently high accuracy. The results obtained encourage the application of the proposed procedure to the analysis of real observational data.
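The simulated-annealing core of such an inversion can be sketched in a few lines; the toy χ² cost, step size, and cooling schedule below are illustrative assumptions for a one-parameter fit, not the authors' settings:

```python
import math
import random

def simulated_annealing(cost, x0, neighbor, t0=1.0, cooling=0.95,
                        steps_per_t=100, t_min=1e-3):
    """Generic simulated-annealing minimizer: Metropolis acceptance with a
    geometrically cooled temperature schedule, tracking the best state seen."""
    x, e = x0, cost(x0)
    best_x, best_e = x, e
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            x_new = neighbor(x)
            e_new = cost(x_new)
            # Downhill moves are always accepted; uphill moves with
            # Boltzmann probability exp(-dE/t).
            if e_new < e or random.random() < math.exp(-(e_new - e) / t):
                x, e = x_new, e_new
                if e < best_e:
                    best_x, best_e = x, e
        t *= cooling
    return best_x, best_e

# Toy inverse problem: recover the slope b from "observed" data y = b * x.
random.seed(0)  # reproducible demo
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]          # generated with b = 2
chi2 = lambda b: sum((y - b * x) ** 2 for x, y in zip(xs, ys))
b_fit, _ = simulated_annealing(chi2, 0.0, lambda b: b + random.uniform(-0.5, 0.5))
```

In the actual inversion the "state" is the full set of physical parameters and the mesoturbulent velocity/density fields, and the cost compares synthetic line profiles with the observed spectrum.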
Äkäslompolo, Simppa; Tardini, Giovanni; Kurki-Suonio, Taina
2015-01-01
The activation probe is a robust tool to measure the flux of fusion products from a magnetically confined plasma. A carefully chosen solid sample is exposed to the flux, and the impinging ions transmute the material, making it radioactive. Ultra-low-level gamma-ray spectroscopy is used post mortem to measure the activity and, thus, the number of fusion products. This contribution presents the numerical analysis of the first measurement in the ASDEX Upgrade tokamak, which was also the first experiment to measure the flux from a single discharge. The ASCOT suite of codes was used to perform adjoint/reverse Monte-Carlo calculations of the fusion products. The analysis facilitated, for the first time, a comparison of numerical and experimental values for an absolutely calibrated flux. The results agree to within 40%, which can be considered remarkable given that not all features of the plasma can be accounted for in the simulations. An alternative probe orientation was also studied. The results suggest that a better optimized...
Auxiliary-field quantum Monte Carlo calculations of molecular systems with a Gaussian basis
Al-Saidi, W.A.; Zhang Shiwei; Krakauer, Henry [Department of Physics, College of William and Mary, Williamsburg, Virginia 23187-8795 (United States)
2006-06-14
We extend the recently introduced phaseless auxiliary-field quantum Monte Carlo (QMC) approach to any single-particle basis and apply it to molecular systems with Gaussian basis sets. QMC methods in general scale favorably with the system size as a low power. A QMC approach with auxiliary fields, in principle, allows an exact solution of the Schroedinger equation in the chosen basis. However, the well-known sign/phase problem causes the statistical noise to increase exponentially. The phaseless method controls this problem by constraining the paths in the auxiliary-field path integrals with an approximate phase condition that depends on a trial wave function. In the present calculations, the trial wave function is a single Slater determinant from a Hartree-Fock calculation. The calculated all-electron total energies show typical systematic errors of no more than a few millihartrees compared to exact results. At equilibrium geometries in the molecules we studied, this accuracy is roughly comparable to that of coupled cluster with single and double excitations and with noniterative triples [CCSD(T)]. For stretched bonds in H{sub 2}O, our method exhibits a better overall accuracy and a more uniform behavior than CCSD(T).
Comparison of hybrid and pure Monte Carlo shower generators on an event by event basis
Jeff Allen; Hans-Joachim Drescher; Glennys Farrar
2007-08-21
SENECA is a hybrid air shower simulation written by H. Drescher that utilizes both Monte Carlo simulation and cascade equations. By using the cascade equations only in the high energy portion of the shower, where the shower is inherently one-dimensional, SENECA is able to utilize the advantages in speed from the cascade equations yet still produce complete, three dimensional particle distributions at ground level which capture the shower to shower variations coming from the early interactions. We present a comparison, on an event by event basis, of SENECA and CORSIKA, a well trusted MC simulation code. By using the same first interaction in both SENECA and CORSIKA, the effect of the cascade equations can be studied within a single shower, rather than averaged over many showers. Our study shows that for showers produced in this manner, SENECA agrees with CORSIKA to a very high accuracy with respect to densities, energies, and timing information for individual species of ground-level particles from both iron and proton primaries with energies between 1 EeV and 100 EeV. Used properly, SENECA produces ground particle distributions virtually indistinguishable from those of CORSIKA in a fraction of the time. For example, for a shower induced by a 10 EeV proton, SENECA is 10 times faster than CORSIKA, with comparable accuracy.
Cascade annealing simulations of bcc iron using object kinetic Monte Carlo
Xu, Haixuan; Osetskiy, Yury N; Stoller, Roger E
2012-01-01
Simulations of displacement cascade annealing were carried out using object kinetic Monte Carlo based on an extensive MD database including various primary knock-on atom energies and directions. The sensitivity of the results to a broad range of material and model parameters was examined. The diffusion mechanism of interstitial clusters has been identified to have the most significant impact on the fraction of stable interstitials that escape the cascade region. The maximum level of recombination was observed for the limiting case in which all interstitial clusters exhibit 3D random walk diffusion. The OKMC model was parameterized using two alternative sets of defect migration and binding energies, one from ab initio calculations and the second from an empirical potential. The two sets of data predict essentially the same fraction of surviving defects but different times associated with the defect escape processes. This study provides a comprehensive picture of the first phase of long-term defect evolution in bcc iron and generates information that can be used as input data for mean field rate theory (MFRT) to predict the microstructure evolution of materials under irradiation. In addition, the limitations of the current OKMC model are discussed and a potential way to overcome these limitations is outlined.
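For readers unfamiliar with object KMC, the underlying residence-time (BKL/Gillespie) step can be sketched as follows; the Arrhenius prefactor and the two barrier values are illustrative placeholders, not parameters from the MD database used in the study:

```python
import math
import random

def kmc_step(rates, rng=random):
    """One residence-time kinetic Monte Carlo step: choose an event with
    probability proportional to its rate, then advance the clock by an
    exponentially distributed waiting time with mean 1/sum(rates)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    event = len(rates) - 1           # fallback guards against float round-off
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            event = i
            break
    # 1 - random() lies in (0, 1], so the log is always defined.
    dt = -math.log(1.0 - rng.random()) / total
    return event, dt

# Hypothetical example: two defect hop processes with Arrhenius rates
# k = nu0 * exp(-Ea / (kB * T)); prefactor and barriers are illustrative.
kB, T, nu0 = 8.617e-5, 300.0, 1e13          # eV/K, K, 1/s
rates = [nu0 * math.exp(-Ea / (kB * T)) for Ea in (0.3, 0.5)]

random.seed(0)
t = 0.0
for _ in range(100):
    event, dt = kmc_step(rates)
    t += dt
```

An object KMC code such as the one described above wraps this step in bookkeeping for the positions, sizes, and reactions of the defect objects.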
An excited-state approach within full configuration interaction quantum Monte Carlo
Blunt, N S; Booth, George H; Alavi, Ali
2015-01-01
We present a new approach to calculate excited states with the full configuration interaction quantum Monte Carlo (FCIQMC) method. The approach uses a Gram-Schmidt procedure, instantaneously applied to the stochastically evolving distributions of walkers, to orthogonalize higher energy states against lower energy ones. It can thus be used to study several of the lowest-energy states of a system within the same symmetry. This additional step is particularly simple and computationally inexpensive, requiring only a small change to the underlying FCIQMC algorithm. No trial wave functions or partitioning of the space is needed. The approach should allow excited states to be studied for systems similar to those accessible to the ground-state method, due to a comparable computational cost, while the excited states follow a similar sub-linear scaling of computational effort with system size to converge. As a first application we consider the carbon dimer in basis sets up to quadruple-zeta quality, and compare to exis...
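The Gram-Schmidt step described above amounts to projecting each higher state against the already-orthogonalized lower ones. A dense-vector sketch follows (the FCIQMC version applies this instantaneously to stochastic walker populations, which this toy does not model):

```python
def gram_schmidt_states(states):
    """Orthogonalize state vectors in order, lowest-energy first: each state
    has its overlap with every already-processed lower state projected out."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    ortho = []
    for v in states:
        w = list(v)
        for u in ortho:
            c = dot(u, w) / dot(u, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]   # project out lower state
        ortho.append(w)
    return ortho

# Two overlapping "walker distributions" on a three-determinant toy space.
psi0 = [3.0, 1.0, 0.0]
psi1 = [1.0, 2.0, 1.0]
q0, q1 = gram_schmidt_states([psi0, psi1])
overlap = sum(a * b for a, b in zip(q0, q1))   # should vanish
```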
Ildikó Pethes; László Pusztai
2015-08-25
Although liquid water has been studied for many decades by (X-ray and neutron) diffraction measurements, new experimental results keep appearing, virtually every year. The reason for this is that neither X-ray, nor neutron diffraction data are trivial to correct and interpret for this essential substance. Since X-rays are somewhat insensitive to hydrogen, neutron diffraction with (most frequently, H/D) isotopic substitution is vital for investigating the most important feature in water: hydrogen bonding. Here, the two very recent sets of neutron diffraction data are considered, both exploiting the contrast between light and heavy hydrogen, $^1$H and $^2$H, in different ways. Reverse Monte Carlo structural modeling is applied for constructing large structural models that are as consistent as possible with all experimental information, both in real and reciprocal space. The method has also proven to be useful for revealing where possible small inconsistencies appear during primary data processing: for one neutron data set, it is the molecular geometry that may not be maintained within reasonable limits, whereas for the other set, it is one of the (composite) radial distribution functions that cannot be modeled at the same (high) level as the other three functions. Nevertheless, details of the local structure around the hydrogen bonds appear very much the same for both data sets: the most probable hydrogen bond angle is straight, and the nearest oxygen neighbours of a central oxygen atom occupy approximately tetrahedral positions.
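The Reverse Monte Carlo acceptance rule that drives such structural modeling can be stated compactly; this is the generic textbook form (Metropolis-like acceptance on the χ² misfit to the experimental data), not the exact criterion of the software used in the study:

```python
import math
import random

def rmc_accept(chi2_old, chi2_new, rng=random):
    """Generic Reverse Monte Carlo acceptance: a trial particle move that
    improves agreement with experiment (lower chi^2) is always accepted;
    a worsening move is accepted with probability exp(-dchi2 / 2)."""
    if chi2_new <= chi2_old:
        return True
    return rng.random() < math.exp(-(chi2_new - chi2_old) / 2.0)

# Improving moves are always kept; a hugely worsening move essentially never is.
kept = rmc_accept(5.0, 4.0)
```

In practice χ² sums squared deviations between model and measured structure factors (and here also the real-space functions), and additional geometric constraints keep the molecular geometry intact.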
von Wittenau, A; Aufderheide, M B; Henderson, G L
2010-05-07
Given the cost and lead-times involved in high-energy proton radiography, it is prudent to model proposed radiographic experiments to see if the images predicted would return useful information. We recently modified our raytracing transmission radiography modeling code HADES to perform simplified Monte Carlo simulations of the transport of protons in a proton radiography beamline. Beamline objects include the initial diffuser, vacuum magnetic fields, windows, angle-selecting collimators, and objects described as distorted 2D (planar or cylindrical) meshes or as distorted 3D hexahedral meshes. We present an overview of the algorithms used for the modeling and code timings for simulations through typical 2D and 3D meshes. We next calculate expected changes in image blur as scattering materials are placed upstream and downstream of a resolution test object (a 3 mm thick sheet of tantalum, into which 0.4 mm wide slits have been cut), and as the current supplied to the focusing magnets is varied. We compare and contrast the resulting simulations with the results of measurements obtained at the 800 MeV Los Alamos LANSCE Line-C proton radiography facility.
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^{3} - 10^{5} times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
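The variance-reduction idea behind hybrid MC-DT methods is easiest to see in a one-response importance-sampling toy; the tail-probability problem and biasing density below are assumptions for illustration and are not the SUBSPACE method itself:

```python
import math
import random

def mc_estimate(score, sampler, n):
    """Sample mean and variance of a Monte Carlo score function."""
    vals = [score(sampler()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, var

# Toy "detector" response: P(X > 3) for X ~ Exp(1); exact answer exp(-3).
random.seed(4)
exact = math.exp(-3.0)

# Analog sampling: most histories score zero, so the variance is large.
analog = lambda: -math.log(1.0 - random.random())
mean_a, var_a = mc_estimate(lambda x: 1.0 if x > 3.0 else 0.0, analog, 50_000)

# Importance sampling from a flatter Exp(0.25) density; each history carries
# a weight f(x)/g(x) so the estimator stays unbiased while hitting the tail
# far more often.
lam = 0.25
biased = lambda: -math.log(1.0 - random.random()) / lam
weight = lambda x: math.exp(-x) / (lam * math.exp(-lam * x))
mean_i, var_i = mc_estimate(lambda x: weight(x) if x > 3.0 else 0.0, biased, 50_000)
```

Global variance-reduction schemes such as SUBSPACE extend this single-response idea so that the variance is controlled everywhere in the phase space of interest, not just at one detector.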
Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons
Mei, S., E-mail: smei4@wisc.edu; Knezevic, I., E-mail: knezevic@engr.wisc.edu [Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Maurer, L. N. [Department of Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Aksamija, Z. [Department of Electrical and Computer Engineering, University of Massachusetts-Amherst, Amherst, Massachusetts 01003 (United States)
2014-10-28
We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
Electrolyte pore/solution partitioning by expanded grand canonical ensemble Monte Carlo simulation
Moucka, Filip; Bratko, Dusan; Luzar, Alenka
2015-03-28
Using a newly developed grand canonical Monte Carlo approach based on fractional exchanges of dissolved ions and water molecules, we studied equilibrium partitioning of both components between laterally extended apolar confinements and surrounding electrolyte solution. Accurate calculations of the Hamiltonian and tensorial pressure components at anisotropic conditions in the pore required the development of a novel algorithm for a self-consistent correction of nonelectrostatic cut-off effects. At pore widths above the kinetic threshold to capillary evaporation, the molality of the salt inside the confinement grows in parallel with that of the bulk phase, but presents a nonuniform width-dependence, being depleted at some and elevated at other separations. The presence of the salt enhances the layered structure in the slit and lengthens the range of inter-wall pressure exerted by the metastable liquid. Solvation pressure becomes increasingly repulsive with growing salt molality in the surrounding bath. Depending on the sign of the excess molality in the pore, the wetting free energy of pore walls is either increased or decreased by the presence of the salt. Because of simultaneous rise in the solution surface tension, which increases the free-energy cost of vapor nucleation, the rise in the apparent hydrophobicity of the walls has not been shown to enhance the volatility of the metastable liquid in the pores.
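The insertion/deletion moves at the heart of grand canonical Monte Carlo reduce, for a non-interacting system, to a few lines; this ideal-gas sketch shows only the move skeleton and omits the fractional-exchange moves and cut-off corrections that are the paper's actual contributions:

```python
import math
import random

def gcmc_ideal_gas(z, V, steps, rng=random):
    """Grand canonical MC for an ideal gas (zero interaction energy).
    z = exp(beta*mu)/Lambda^3 is the activity; at equilibrium the particle
    number is Poisson-distributed with mean z*V, so <N> -> z*V."""
    N = 0
    total = 0
    for _ in range(steps):
        if rng.random() < 0.5:                        # attempt insertion
            # acceptance min(1, z*V/(N+1)) satisfies detailed balance
            if rng.random() < min(1.0, z * V / (N + 1)):
                N += 1
        elif N > 0:                                   # attempt deletion
            if rng.random() < min(1.0, N / (z * V)):
                N -= 1
        total += N
    return total / steps

random.seed(1)
avg_N = gcmc_ideal_gas(z=2.0, V=5.0, steps=200_000)   # expect <N> near z*V = 10
```

For an interacting electrolyte the acceptance factors also carry exp(-beta*dU), and, as the abstract notes, straightforward whole-ion exchanges become inefficient, which motivates the fractional-exchange scheme.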
Da, B.; Li, Z. Y.; Chang, H. C.; Ding, Z. J.; Mao, S. F.
2014-09-28
It has been experimentally found that carbon surface contamination strongly influences the spectrum signals in reflection electron energy loss spectroscopy (REELS), especially at low primary electron energy. However, there is still little theoretical work dealing with the carbon contamination effect in REELS. Such a work is required to predict the REELS spectrum for a layered structural sample, providing an understanding of the experimental phenomena observed. In this study, we present a numerical calculation of the spatially varying differential inelastic mean free path for a sample made of a carbon contamination layer of varied thickness on a SrTiO{sub 3} substrate. A Monte Carlo simulation model for electron interaction with a layered structural sample is built by combining this inelastic scattering cross-section with the Mott cross-section for electron elastic scattering. The simulation results clearly show that the contribution of the electron energy loss from carbon surface contamination increases with decreasing primary energy, due to the increased number of individual scattering processes along the parts of the trajectory inside the carbon contamination layer. Comparison of the simulated spectra for different thicknesses of the carbon contamination layer and for different primary electron energies with experimental spectra clearly identifies that the carbon contamination in the measured sample was in the form of discontinuous islands rather than a uniform film.
Chatterjee, Abhijit [Los Alamos National Laboratory; Voter, Arthur [Los Alamos National Laboratory
2009-01-01
We develop a variation of the temperature accelerated dynamics (TAD) method, called the p-TAD method, that efficiently generates an on-the-fly kinetic Monte Carlo (KMC) process catalog with control over the accuracy of the catalog. It is assumed that transition state theory is valid. The p-TAD method guarantees that processes relevant at the timescales of interest to the simulation are present in the catalog with a chosen confidence. A confidence measure associated with the process catalog is derived. The dynamics is then studied using the process catalog with the KMC method. Effective accuracy of a p-TAD calculation is derived when a KMC catalog is reused for conditions different from those the catalog was originally generated for. Different KMC catalog generation strategies that exploit the features of the p-TAD method and ensure higher accuracy and/or computational efficiency are presented. The accuracy and the computational requirements of the p-TAD method are assessed. Comparisons to the original TAD method are made. As an example, we study dynamics in sub-monolayer Ag/Cu(110) at the time scale of seconds using the p-TAD method. It is demonstrated that the p-TAD method overcomes several challenges plaguing the conventional KMC method.
A Monte Carlo simulation study on the wetting behavior of water on graphite surface
Xiongce Zhao
2012-09-20
This paper is an expanded edition of the rapid communication published several years ago by the author (Phys. Rev. B, v76, 041402(R), 2007) on the simulation of wetting transition of water on graphite, aiming to provide more details on the methodology, parameters, and results of the study which might be of interest to certain readers. We calculate adsorption isotherms of water on graphite using grand canonical Monte Carlo simulations combined with multiple histogram reweighting, based on the empirical potentials of SPC/E for water, the 10-4-3 van der Waals model, and a recently developed induction and multipolar potential for water and graphite. Our results show that wetting transition of water on graphite occurs at 475-480 K, and the prewetting critical temperature lies in the range of 505-510 K. The calculated wetting transition temperature agrees quantitatively with a previously predicted value using a simple model. The observation of the coexistence of stable and metastable states at temperatures between the wetting transition temperature and prewetting critical temperature indicates that the transition is first order.
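The histogram-reweighting step combined with GCMC above can be illustrated on a two-level toy system; the example below is the generic single-histogram estimator (reusing samples drawn at one temperature to predict a nearby one), not the multiple-histogram machinery of the study:

```python
import math
import random

def reweight_mean_energy(energies, beta0, beta):
    """Single-histogram reweighting: estimate <E> at inverse temperature beta
    from energy samples drawn at beta0, using weights exp(-(beta-beta0)*E)."""
    log_w = [-(beta - beta0) * e for e in energies]
    m = max(log_w)                                   # stabilize the exponentials
    w = [math.exp(lw - m) for lw in log_w]
    return sum(wi * e for wi, e in zip(w, energies)) / sum(w)

# Toy check with a two-level system (E = 0 or 1) sampled at beta0 = 1.
random.seed(2)
beta0 = 1.0
p1 = math.exp(-beta0) / (1.0 + math.exp(-beta0))     # P(E=1) at beta0
samples = [1.0 if random.random() < p1 else 0.0 for _ in range(100_000)]
e_at_2 = reweight_mean_energy(samples, beta0, beta=2.0)
# exact <E> at beta = 2 is exp(-2)/(1+exp(-2))
```

Multiple-histogram (WHAM-type) reweighting combines runs at several state points in the same spirit, which is what makes tracing the prewetting line practical.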
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y.; Yun, S.
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross-section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross-section data generation module was added to the MCNP5 code. Cross-section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied to the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross-section data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model. The cross-section data were generated using MCNP5. The k{sub eff} and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of the k{sub eff}, the 9-group MCNP5/DIF3D analysis shows a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams
Vandervoort, Eric J.; Cygler, Joanna E.; The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5; Department of Physics, Carleton University, Ottawa, Ontario K1S 5B6; Tchistiakova, Ekaterina; Department of Medical Biophysics, University of Toronto, Ontario M5G 2M9; Heart and Stroke Foundation Centre for Stroke Recovery, Sunnybrook Research Institute, University of Toronto, Ontario M4N 3M5; La Russa, Daniel J.; The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5
2014-02-15
Purpose: In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Methods: Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measurements and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Results: Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators, and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 × 5 cm{sup 2}. Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (the minimum percentage of measurements that pass a 2%/2 mm 2D gamma index criterion for any applicator or energy was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a three-dimensional 3%/2 mm γ-criterion) provided that the steep dose gradient in the depth direction is considered. Conclusions: Clinically acceptable agreement (at the 2%/2 mm level) between the measurements and calculated data for measurements in water is obtained for this dose calculation algorithm. Radiochromic film is a useful tool to evaluate the accuracy of electron MC treatment planning systems in heterogeneous media.
Feasibility Study of Neutron Dose for Real Time Image Guided Proton Therapy: A Monte Carlo Study
Kim, Jin Sung; Kim, Daehyun; Shin, EunHyuk; Chung, Kwangzoo; Cho, Sungkoo; Ahn, Sung Hwan; Ju, Sanggyu; Chung, Yoonsun; Jung, Sang Hoon; Han, Youngyih
2015-01-01
Two full rotating gantries with different nozzles (a multipurpose nozzle with MLC and a scanning-dedicated nozzle) and a conventional cyclotron system are installed and under commissioning for various proton treatment options at Samsung Medical Center in Korea. The purpose of this study is to investigate the neutron dose equivalent per therapeutic dose, H/D, to x-ray imaging equipment under various treatment conditions with Monte Carlo simulation. First, we investigated H/D with various modifications of the beam line devices (scattering, scanning, multi-leaf collimator, aperture, compensator) at isocenter and at 20, 40, and 60 cm distance from isocenter, and compared with other research groups. Next, we investigated the neutron dose at the x-ray equipment used for real-time imaging under various treatment conditions. Our investigation showed 0.07 ~ 0.19 mSv/Gy at the x-ray imaging equipment according to various treatment options and, interestingly, a 50% neutron dose reduction effect of the flat panel detector was observed due to multi-lea...
Monte Carlo modeling of neutron and gamma-ray imaging systems
Hall, J.
1996-04-01
Detailed numerical prototypes are essential to the design of efficient and cost-effective neutron and gamma-ray imaging systems. We have exploited the unique capabilities of an LLNL-developed radiation transport code (COG) to develop code modules capable of simulating the performance of neutron and gamma-ray imaging systems over a wide range of source energies. COG allows us to simulate complex, energy-, angle-, and time-dependent radiation sources, model 3-dimensional system geometries with "real world" complexity, specify detailed elemental and isotopic distributions, and predict the responses of various types of imaging detectors with full Monte Carlo accuracy. COG references detailed, evaluated nuclear interaction databases, allowing users to account for multiple scattering, energy straggling, and secondary particle production phenomena which may significantly affect the performance of an imaging system but may be difficult or even impossible to estimate using simple analytical models. This work presents examples illustrating the use of these routines in the analysis of industrial radiographic systems for thick target inspection, nonintrusive luggage- and cargo-scanning systems, and international treaty verification.
Thermodynamics and quark susceptibilities: a Monte-Carlo approach to the PNJL model
M. Cristoforetti; T. Hell; B. Klein; W. Weise
2010-02-11
The Monte-Carlo method is applied to the Polyakov-loop extended Nambu--Jona-Lasinio (PNJL) model. This leads beyond the saddle-point approximation in a mean-field calculation and introduces fluctuations around the mean fields. We study the impact of fluctuations on the thermodynamics of the model, both in the case of pure gauge theory and including two quark flavors. In the two-flavor case, we calculate the second-order Taylor expansion coefficients of the thermodynamic grand canonical partition function with respect to the quark chemical potential and present a comparison with extrapolations from lattice QCD. We show that the introduction of fluctuations produces only small changes in the behavior of the order parameters for chiral symmetry restoration and the deconfinement transition. On the other hand, we find that fluctuations are necessary in order to reproduce lattice data for the flavor non-diagonal quark susceptibilities. Of particular importance are pion fields, the contribution of which is strictly zero in the saddle point approximation.
Krueger, Rachel A.; Haibach, Frederick G.; Fry, Dana L.; Gomez, Maria A.
2015-04-21
A centrality measure based on the time of first returns rather than the number of steps is developed and applied to finding proton traps and access points to proton highways in the doped perovskite oxides: AZr{sub 0.875}D{sub 0.125}O{sub 3}, where A is Ba or Sr and the dopant D is Y or Al. The high centrality region near the dopant is wider in the SrZrO{sub 3} systems than the BaZrO{sub 3} systems. In the aluminum-doped systems, a region of intermediate centrality (secondary region) is found in a plane away from the dopant. Kinetic Monte Carlo (kMC) trajectories show that this secondary region is an entry to fast conduction planes in the aluminum-doped systems in contrast to the highest centrality area near the dopant trap. The yttrium-doped systems do not show this secondary region because the fast conduction routes are in the same plane as the dopant and hence already in the high centrality trapped area. This centrality measure complements kMC by highlighting key areas in trajectories. The limiting activation barriers found via kMC are in very good agreement with experiments and related to the barriers to escape dopant traps.
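A mean first-return-time estimate of the kind underlying this centrality measure can be sketched with a plain random walk on a graph; the star graph and walk counts below are illustrative choices (for an unbiased walk on a connected undirected graph the exact mean return time is 2|E|/deg(node), so short return times flag trap-like, high-centrality sites):

```python
import random

def mean_first_return_time(neighbors, start, n_walks=2000, max_steps=10_000,
                           rng=random):
    """Estimate the mean first-return time of an unbiased random walk to
    `start` by averaging over many walks; walks exceeding max_steps are
    dropped (none are, on the small graph below)."""
    total, counted = 0, 0
    for _ in range(n_walks):
        node, steps = start, 0
        while steps < max_steps:
            node = rng.choice(neighbors[node])
            steps += 1
            if node == start:
                total += steps
                counted += 1
                break
    return total / counted

# Star graph: "hub" node 0 connected to leaves 1, 2, 3 (|E| = 3).
random.seed(3)
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
t_hub = mean_first_return_time(star, 0)    # theory: 2*3/3 = 2 (exact)
t_leaf = mean_first_return_time(star, 1)   # theory: 2*3/1 = 6
```

In the proton-conduction setting, the walk runs on the kMC hop network with Boltzmann-weighted transition rates rather than uniform neighbor choices.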
Boscoboinik, A. M.; Manzi, S. J.; Tysoe, W. T.; Pereyra, V. D.; Boscoboinik, J. A.
2015-09-10
The influence of directing agents in the self-assembly of molecular wires to produce two-dimensional electronic nanoarchitectures is studied here using a Monte Carlo approach to simulate the effect of arbitrarily locating nodal points on a surface, from which the growth of self-assembled molecular wires can be nucleated. This is compared to experimental results reported for the self-assembly of molecular wires when 1,4-phenylenediisocyanide (PDI) is adsorbed on Au(111). The latter results in the formation of (Au-PDI)n organometallic chains, which were shown to be conductive when linked between gold nanoparticles on an insulating substrate. The present study analyzes, by means of stochastic methods, the influence of variables that affect the growth and design of self-assembled conductive nanoarchitectures, such as the distance between nodes, the coverage of the monomeric units that leads to the formation of the desired architectures, and the interaction between the monomeric units. As a result, this study proposes an approach and sets the stage for the production of complex 2D nanoarchitectures using a bottom-up strategy but including the use of current state-of-the-art top-down technology as an integral part of the self-assembly strategy.
MONTE CARLO SIMULATIONS OF NONLINEAR PARTICLE ACCELERATION IN PARALLEL TRANS-RELATIVISTIC SHOCKS
Ellison, Donald C.; Warren, Donald C. [Physics Department, North Carolina State University, Box 8202, Raleigh, NC 27695 (United States); Bykov, Andrei M., E-mail: don_ellison@ncsu.edu, E-mail: ambykov@yahoo.com [Ioffe Institute for Physics and Technology, 194021 St. Petersburg (Russian Federation)
2013-10-10
We present results from a Monte Carlo simulation of a parallel collisionless shock undergoing particle acceleration. Our simulation, which contains parameterized scattering and a particular thermal leakage injection model, calculates the feedback between accelerated particles ahead of the shock, which influence the shock precursor and 'smooth' the shock, and thermal particle injection. We show that there is a transition between nonrelativistic shocks, where the acceleration efficiency can be extremely high and the nonlinear compression ratio can be substantially greater than the Rankine-Hugoniot value, and fully relativistic shocks, where diffusive shock acceleration is less efficient and the compression ratio remains at the Rankine-Hugoniot value. This transition occurs in the trans-relativistic regime and, for the particular parameters we use, occurs around a shock Lorentz factor γ{sub 0} = 1.5. We also find that nonlinear shock smoothing dramatically reduces the acceleration efficiency presumed to occur with large-angle scattering in ultra-relativistic shocks. Our ability to seamlessly treat the transition from ultra-relativistic to trans-relativistic to nonrelativistic shocks may be important for evolving relativistic systems, such as gamma-ray bursts and Type Ibc supernovae. We expect a substantial evolution of shock accelerated spectra during this transition from soft early on to much harder when the blast-wave shock becomes nonrelativistic.
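The Rankine-Hugoniot compression ratio referred to above follows from the nonrelativistic jump conditions; a one-function check of the standard formula (not the simulation's nonlinear result):

```python
def rh_compression(mach, gamma=5.0 / 3.0):
    """Rankine-Hugoniot density compression ratio r = rho2/rho1 for a
    nonrelativistic hydrodynamic shock of sonic Mach number `mach`:
    r = (gamma+1) M^2 / ((gamma-1) M^2 + 2)."""
    m2 = mach * mach
    return (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)

# Strong-shock limit for a monatomic gas: r -> (gamma+1)/(gamma-1) = 4.
r_strong = rh_compression(1000.0)
```

Nonlinear diffusive shock acceleration can exceed this value in the nonrelativistic regime because the accelerated particles soften the effective equation of state of the precursor.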
Collapse transitions in thermosensitive multi-block copolymers: A Monte Carlo study
Rissanou, Anastassia N.; Tzeli, Despoina S.; Anastasiadis, Spiros H.; Bitsanis, Ioannis A.
2014-05-28
Monte Carlo simulations are performed on a simple cubic lattice to investigate the behavior of a single linear multiblock copolymer chain of various lengths N. The chain of type (A{sub n}B{sub n}){sub m} consists of alternating A and B blocks, where A are solvophilic and B are solvophobic and N = 2nm. The conformations are classified in five cases of globule formation by the solvophobic blocks of the chain. The dependence of the globule characteristics on the molecular weight and on the number of blocks which participate in their formation is examined. The focus is on relatively high molecular weight blocks (i.e., N in the range of 500–5000 units) and very different energetic conditions for the two blocks (very good, almost athermal, solvent for A and bad solvent for B). A rich phase behavior is observed as a result of the alternating architecture of the multiblock copolymer chain. We trust that thermodynamic equilibrium has been reached for chains of N up to 2000 units; however, for longer chains kinetic entrapments are observed. The comparison among equivalent globules consisting of different numbers of B-blocks shows that the more solvophobic blocks constitute the globule, the bigger its radius of gyration and the looser its structure. Comparisons between globules formed by the solvophobic blocks of the multiblock copolymer chain and their homopolymer analogs highlight the important role of the solvophilic A-blocks.
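The radius of gyration used above as the globule size measure is straightforward to compute from monomer coordinates; a minimal sketch with an illustrative eight-site configuration:

```python
import math

def radius_of_gyration(coords):
    """Radius of gyration of a set of monomer coordinates (lattice sites):
    root-mean-square distance of the sites from their center of mass."""
    n = len(coords)
    cx = sum(x for x, y, z in coords) / n
    cy = sum(y for x, y, z in coords) / n
    cz = sum(z for x, y, z in coords) / n
    rg2 = sum((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
              for x, y, z in coords) / n
    return math.sqrt(rg2)

# Eight monomers on the corners of a unit cube; Rg = sqrt(3)/2 exactly.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
rg = radius_of_gyration(cube)
```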
A kinetic Monte Carlo method for the simulation of massive phase transformations
Bos, C.; Sommer, F.; Mittemeijer, E.J
2004-07-12
A multi-lattice kinetic Monte Carlo method has been developed for the atomistic simulation of massive phase transformations. Besides sites on the crystal lattices of the parent and product phase, randomly placed sites are incorporated as possible positions. These random sites allow the atoms to take favourable intermediate positions, essential for a realistic description of transformation interfaces. The transformation from fcc to bcc starting from a flat interface with the fcc(1 1 1)//bcc(1 1 0) and fcc[1 1 1-bar]//bcc[0 0 1-bar] orientation in a single component system has been simulated. Growth occurs in two different modes depending on the chosen values of the bond energies. For larger fcc-bcc energy differences, continuous growth is observed with a rough transformation front. For smaller energy differences, plane-by-plane growth is observed. In this growth mode two-dimensional nucleation is required in the next fcc plane after completion of the transformation of the previous fcc plane.
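As an aside on the method class, the residence-time (BKL) step at the heart of most kinetic Monte Carlo codes can be sketched in a few lines. The two-event rate list below is a hypothetical toy for illustration, not the multi-lattice model of the paper:

```python
import math
import random

def kmc_step(rates, rng=random):
    """One residence-time (BKL) kinetic Monte Carlo step: pick an event
    with probability proportional to its rate and advance the clock by
    an exponentially distributed waiting time."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    else:
        chosen = len(rates) - 1  # guard against float round-off
    dt = -math.log(1.0 - rng.random()) / total  # exponential waiting time
    return chosen, dt

random.seed(0)
rates = [1.0, 3.0]  # hypothetical rates for two competing events
counts = [0, 0]
t = 0.0
for _ in range(20000):
    event, dt = kmc_step(rates)
    counts[event] += 1
    t += dt
ratio = counts[1] / counts[0]  # should approach rates[1]/rates[0] = 3
```

The same selection loop scales to the long rate lists a real lattice simulation maintains; efficient codes replace the linear scan with a tree or binned search.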
Monte-Carlo simulations of different concepts for shielding in the ATLAS experiment forward region
Stekl, I; Eschbach, R; Kovalenko, V E; Leroy, C; Marquet, C; Palla, J; Piquemal, F; Pospísil, S; Shupe, M A; Sodomka, J; Tourneur, S; Vorobel, V
2001-01-01
The role and performance of various layers (steel, cast iron (CI), concrete, lead, borated polyethylene (BPE), lithium filled polyethylene (LiPE)) and their combinations as shielding against neutrons and photons in the ATLAS experiment forward region (JF shielding) have been studied by means of Monte-Carlo simulations. These simulations permitted one to determine the locations of appearance and disappearance of neutrons and photons and their numbers at these locations. In particular, the determination of the number of newly born neutrons and photons, the number of stopped neutrons and photons, as well as the number of neutrons and photons crossing the borders of shielding layers allowed the assessment of the efficiency of the JF shielding. It provided a basis for comparing the merits of different configurations of shielding layers. The simulation code is based on GEANT, FLUKA, MICAP and GAMLIB. The results of the study give strong support to a segmented shielding made of five layers (steel, CI, BPE, steel, LiPE).
Composition PDF/photon Monte Carlo modeling of moderately sooting turbulent jet flames
Mehta, R.S.; Haworth, D.C.; Modest, M.F. [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA 16802 (United States)
2010-05-15
A comprehensive model for luminous turbulent flames is presented. The model features detailed chemistry, radiation and soot models and state-of-the-art closures for turbulence-chemistry interactions and turbulence-radiation interactions. A transported probability density function (PDF) method is used to capture the effects of turbulent fluctuations in composition and temperature. The PDF method is extended to include soot formation. Spectral gas and soot radiation is modeled using a (particle-based) photon Monte Carlo method coupled with the PDF method, thereby capturing both emission and absorption turbulence-radiation interactions. An important element of this work is that the gas-phase chemistry and soot models that have been thoroughly validated across a wide range of laminar flames are used in turbulent flame simulations without modification. Six turbulent jet flames are simulated with Reynolds numbers varying from 6700 to 15,000, two fuel types (pure ethylene, 90% methane-10% ethylene blend) and different oxygen concentrations in the oxidizer stream (from 21% O{sub 2} to 55% O{sub 2}). All simulations are carried out with a single set of physical and numerical parameters (model constants). Uniformly good agreement between measured and computed mean temperatures, mean soot volume fractions and (where available) radiative fluxes is found across all flames. This demonstrates that with the combination of a systematic approach and state-of-the-art physical models and numerical algorithms, it is possible to simulate a broad range of luminous turbulent flames with a single model. (author)
Saha, Krishnendu; Straus, Kenneth J.; Glick, Stephen J.; Chen, Yu.
2014-08-28
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
Introduction to Computational Physics and Monte Carlo Simulations of Matrix Field Theory
Badis Ydri
2015-06-05
This book is divided into two parts. In the first part we give an elementary introduction to computational physics consisting of 21 simulations which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories which involve finite dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of the first part is added at the end of the book.
MONTE CARLO SIMULATIONS OF THE PHOTOSPHERIC EMISSION IN GAMMA-RAY BURSTS
Begue, D.; Siutsou, I. A.; Vereshchagin, G. V. [University of Roma ''Sapienza'', I-00185, p.le A. Moro 5, Rome (Italy)
2013-04-20
We studied the decoupling of photons from ultra-relativistic spherically symmetric outflows expanding with constant velocity by means of Monte Carlo simulations. For outflows with finite widths we confirm the existence of two regimes: photon-thick and photon-thin, introduced recently by Ruffini et al. (RSV). The probability density function of the last scattering of photons is shown to be very different in these two cases. We also obtained spectra as well as light curves. In the photon-thick case, the time-integrated spectrum is much broader than the Planck function and its shape is well described by the fuzzy photosphere approximation introduced by RSV. In the photon-thin case, we confirm the crucial role of photon diffusion, hence the probability density of decoupling has a maximum near the diffusion radius well below the photosphere. The time-integrated spectrum of the photon-thin case has a Band shape that is produced when the outflow is optically thick and its peak is formed at the diffusion radius.
Byun, H. S.; Pirbadian, S.; Nakano, Aiichiro; Shi, Liang; El-Naggar, Mohamed Y.
2014-09-05
Microorganisms overcome the considerable hurdle of respiring extracellular solid substrates by deploying large multiheme cytochrome complexes that form 20 nanometer conduits to traffic electrons through the periplasm and across the cellular outer membrane. Here we report the first kinetic Monte Carlo simulations and single-molecule scanning tunneling microscopy (STM) measurements of the Shewanella oneidensis MR-1 outer membrane decaheme cytochrome MtrF, which can perform the final electron transfer step from cells to minerals and microbial fuel cell anodes. We find that the calculated electron transport rate through MtrF is consistent with previously reported in vitro measurements of the Shewanella Mtr complex, as well as in vivo respiration rates on electrode surfaces assuming a reasonable (experimentally verified) coverage of cytochromes on the cell surface. The simulations also reveal a rich phase diagram in the overall electron occupation density of the hemes as a function of electron injection and ejection rates. Single molecule tunneling spectroscopy confirms MtrF's ability to mediate electron transport between an STM tip and an underlying Au(111) surface, but at rates higher than expected from previously calculated heme-heme electron transfer rates for solvated molecules.
Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem
Du, X.; Liu, T.; Ji, W.; Xu, X. G. [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Brown, F. B. [Monte Carlo Codes Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2013-07-01
Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', which means that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely affects the performance of GPUs. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - was used for this study. ARCHER facilitates the development and testing of a MC code based on the vectorized MC algorithm implemented on GPUs by using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER{sub GPU} code was designed to solve a neutron eigenvalue problem and was tested on a NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances the warp execution efficiency, the overall simulation speed is ten times slower than the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large amount of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed. (authors)
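The event-based idea described above, processing one event type for a whole bank of particles at once so that every "thread" executes the same operation, can be illustrated with a small vectorized sketch. NumPy stands in for the GPU here; this is not the ARCHER code, and the one-dimensional absorption/scattering physics is an illustrative assumption:

```python
import numpy as np

def event_based_walk(n_particles, sigma_t=1.0, absorb_frac=0.3, seed=0):
    """Event-based Monte Carlo sketch: keep a bank of particle states and
    apply one event type at a time to the whole bank, avoiding the
    per-history branching of conventional MC."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)                 # 1-D particle positions
    alive = np.ones(n_particles, dtype=bool)  # particles still in flight
    while alive.any():
        n = int(alive.sum())
        # event 1: free flight with exponential path length, random direction
        x[alive] += rng.exponential(1.0 / sigma_t, n) * rng.choice([-1.0, 1.0], n)
        # event 2: collision outcome decided for the whole batch at once
        absorbed = rng.random(n) < absorb_frac
        idx = np.flatnonzero(alive)
        alive[idx[absorbed]] = False
    return x

positions = event_based_walk(20000)
mean_sq = float(np.mean(positions ** 2))  # spread of absorption sites
```

Each pass of the loop performs one event for every surviving particle, which is exactly the structure that keeps GPU warps convergent; the memory-access cost the abstract identifies shows up here as the gather/scatter on the `alive` mask.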
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
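The population-based proposal underlying DREAM can be sketched with its ancestor, plain differential-evolution Metropolis (DE-MC). This is a minimal illustration of the difference-vector proposal only; it omits DREAM's randomized subspace sampling and self-adaptive crossover:

```python
import math
import random

def de_mc_sample(logp, n_chains=10, n_iter=4000, dim=1, rng=random):
    """Differential-evolution Metropolis: each chain proposes a jump built
    from the difference of two other randomly chosen chains, so the
    proposal scale and orientation adapt to the current population."""
    gamma = 2.38 / math.sqrt(2 * dim)  # standard DE-MC jump factor
    chains = [[rng.gauss(0, 5) for _ in range(dim)] for _ in range(n_chains)]
    samples = []
    for _ in range(n_iter):
        for i in range(n_chains):
            a, b = rng.sample([j for j in range(n_chains) if j != i], 2)
            prop = [chains[i][d] + gamma * (chains[a][d] - chains[b][d])
                    + rng.gauss(0, 1e-6) for d in range(dim)]
            # Metropolis accept/reject (the DE proposal is symmetric)
            if math.log(rng.random() + 1e-300) < logp(prop) - logp(chains[i]):
                chains[i] = prop
            samples.append(chains[i][0])
    return samples[len(samples) // 2:]  # discard first half as burn-in

random.seed(1)
draws = de_mc_sample(lambda x: -0.5 * x[0] ** 2)  # target: standard normal
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Because the jump is built from the spread of the population itself, a badly scaled initial guess (here, chains started with standard deviation 5) is corrected automatically as the population contracts onto the target.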
Monte Carlo modeling of transport in PbSe nanocrystal films
Carbone, I. Carter, S. A.; Zimanyi, G. T.
2013-11-21
A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance and that site energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using nearly the same parameter values as the electron simulations, hole mobility simulations reproduce measured mobilities that increase monotonically with particle size over two orders of magnitude.
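A stripped-down version of such thermally activated hopping can be written in a few lines. The one-dimensional ring, the Miller-Abrahams-like rates, and all parameter values below are illustrative assumptions, not the published model:

```python
import math
import random

def hop_drift(disorder, n_steps=100000, kT=0.025, field=0.01, rng=random):
    """Toy 1-D thermally activated hopping: a carrier on a ring of sites
    with Gaussian-random energies hops left or right with
    Boltzmann-weighted rates; a small field bias drives a net drift
    that energetic disorder suppresses."""
    energies = [rng.gauss(0.0, disorder) for _ in range(1000)]
    x, drift = 500, 0
    for _ in range(n_steps):
        dE_r = energies[(x + 1) % 1000] - energies[x] - field
        dE_l = energies[(x - 1) % 1000] - energies[x] + field
        k_r = math.exp(-max(dE_r, 0.0) / kT)  # uphill hops are penalized
        k_l = math.exp(-max(dE_l, 0.0) / kT)
        if rng.random() < k_r / (k_r + k_l):
            x = (x + 1) % 1000
            drift += 1
        else:
            x = (x - 1) % 1000
            drift -= 1
    return drift / n_steps

random.seed(2)
ordered = hop_drift(disorder=0.0)     # no site-energy disorder
disordered = hop_drift(disorder=0.1)  # disorder of 4 kT traps the carrier
```

Even this caricature reproduces the qualitative finding quoted above: site-energy disorder comparable to a few kT sharply suppresses the field-driven drift per hop.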
A Monte Carlo Analysis of Gas Centrifuge Enrichment Plant Process Load Cell Data
Garner, James R; Whitaker, J Michael
2013-01-01
As uranium enrichment plants increase in number, capacity, and types of separative technology deployed (e.g., gas centrifuge, laser, etc.), more automated safeguards measures are needed to enable the IAEA to maintain safeguards effectiveness in a fiscally constrained environment. Monitoring load cell data can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards objective of confirming operations as declared (i.e., no undeclared activities), but care must be taken to fully protect the operator's proprietary and classified information related to operations. Staff at ORNL, LANL, JRC/ISPRA, and University of Glasgow are investigating monitoring the process load cells at feed and withdrawal (F/W) stations to improve international safeguards at enrichment plants. A key question that must be resolved is the necessary frequency of recording data from the process F/W stations. Several studies have analyzed data collected at a fixed frequency. This paper contributes to load cell process monitoring research by presenting an analysis of Monte Carlo simulations to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.
Tushar Kanti Bose; Jayashree Saha
2015-03-06
The realization of a spontaneous macroscopic ferroelectric order in fluids of anisotropic mesogens is a topic of both fundamental and technological interest. Recently, we demonstrated that a system of dipolar achiral disklike ellipsoids can exhibit long-sought ferroelectric liquid crystalline phases of dipolar origin. In the present work, extensive off-lattice Monte Carlo simulations are used to investigate the phase behavior of the system under the influences of the electrostatic boundary conditions that restrict any global polarization. We find that the system develops strongly ferroelectric slablike domains periodically arranged in an antiferroelectric fashion. Exploring the phase behavior at different dipole strengths, we find existence of the ferroelectric nematic and ferroelectric columnar order inside the domains. For higher dipole strengths, a biaxial phase is also obtained with a similar periodic array of ferroelectric slabs of antiparallel polarizations. We have studied the depolarizing effects by using both the Ewald summation and the spherical cut-off techniques. We present and compare the results of the two different approaches of considering the depolarizing effects in this anisotropic system. It is explicitly shown that the domain size increases with the system size as a result of considering longer range of dipolar interactions. The system exhibits pronounced system size effects for stronger dipolar interactions. The results provide strong evidence for the understanding that dipolar interactions are indeed sufficient to produce long range ferroelectric order in anisotropic fluids.
Numerical Methods for the QCD Overlap Operator IV: Hybrid Monte Carlo
N. Cundy; S. Krieg; G. Arnold; A. Frommer; Th. Lippert; K. Schilling
2008-12-18
The extreme computational costs of calculating the sign of the Wilson matrix within the overlap operator have so far prevented four dimensional dynamical overlap simulations on realistic lattice sizes, because the computational power required to invert the overlap operator, the time consuming part of the Hybrid Monte Carlo algorithm, is too high. In this series of papers we introduced the optimal approximation of the sign function and have been developing preconditioning and relaxation techniques which reduce the time needed for the inversion of the overlap operator by over a factor of four, bringing the simulation of dynamical overlap fermions on medium-size lattices within the range of Teraflop-computers. In this paper we adapt the HMC algorithm to overlap fermions. We approximate the matrix sign function using the Zolotarev rational approximation, treating the smallest eigenvalues of the Wilson operator exactly within the fermionic force. We then derive the fermionic force for the overlap operator, elaborating on the problem of Dirac delta-function terms from zero crossings of eigenvalues of the Wilson operator. The crossing scheme proposed shows energy violations which are better than O($\\Delta\\tau^2$) and thus are comparable with the violations of the standard leapfrog algorithm over the course of a trajectory. We explicitly prove that our algorithm satisfies reversibility and area conservation. Finally, we test our algorithm on small $4^4$, $6^4$, and $8^4$ lattices at large masses.
V. Dorvilien; C. N. Patra; L. B. Bhuiyan; C. W. Outhwaite
2013-12-17
The structure of cylindrical double layers is studied using a modified Poisson-Boltzmann theory and the density functional approach. In the model double layer, the electrode is a cylindrical polyion that is infinitely long, impenetrable, and uniformly charged. The polyion is immersed in a sea of equi-sized rigid ions embedded in a dielectric continuum. An in-depth comparison of the theoretically predicted zeta potentials, the mean electrostatic potentials, and the electrode-ion singlet density distributions is made with the corresponding Monte Carlo simulation data. The theories are seen to be consistent in their predictions that include variations in ionic diameters, electrolyte concentrations, and electrode surface charge densities, and are also capable of well reproducing some new and existing Monte Carlo results.
Asadi, Somayeh; Masoudi, S Farhad; Rahmani, Faezeh
2014-01-01
Materials of high atomic number, such as gold, can provide a high probability of photon interaction by photoelectric effects during radiation therapy. In cancer therapy, the object of brachytherapy, as a kind of radiotherapy, is to deliver an adequate radiation dose to the tumor while sparing surrounding healthy tissue. Several studies demonstrated that the preferential accumulation of gold nanoparticles within the tumor can enhance the dose absorbed by the tumor without increasing the radiation dose delivered externally. Accordingly, the required time for tumor irradiation decreases, as the estimated adequate radiation dose for the tumor is provided following this method. The dose delivered to healthy tissue is reduced when the time of irradiation is decreased. Here, GNP effects on choroidal melanoma dosimetry are discussed in a Monte Carlo study. Monte Carlo ophthalmic brachytherapy dosimetry is usually studied by simulation of a water phantom. Considering the composition and density of eye material instead of water in thes...
Shulenburger, Luke; Desjarlais, M P
2015-01-01
Motivated by the disagreement between recent diffusion Monte Carlo calculations and experiments on the phase transition pressure between the ambient and beta-Sn phases of silicon, we present a study of the HCP to BCC phase transition in beryllium. This lighter element provides an opportunity for directly testing many of the approximations required for calculations on silicon and may suggest a path towards increasing the practical accuracy of diffusion Monte Carlo calculations of solids in general. We demonstrate that the single largest approximation in these calculations is the pseudopotential approximation. After removing this we find excellent agreement with experiment for the ambient HCP phase and results similar to careful calculations using density functional theory for the phase transition pressure.
Simone Alioli; Christian W. Bauer; Calvin Berggren; Andrew Hornig; Frank J. Tackmann; Christopher K. Vermilion; Jonathan R. Walsh; Saba Zuberi
2013-05-22
We discuss the GENEVA Monte Carlo framework, which combines higher-order resummation (NNLL) of large Sudakov logarithms with multiple next-to-leading-order (NLO) matrix-element corrections and parton showering (using PYTHIA8) to give a complete description at the next higher perturbative accuracy in alpha_s at both small and large jet resolution scales. Results for e+e- -> jets compared to LEP data and for Drell-Yan production are presented.
Doma, S B; Amer, A A
2015-01-01
The ground state energy of hydrogen molecular ion H2+ confined by a hard prolate spheroidal cavity is calculated. The case in which the nuclear positions are clamped at the foci is considered. Our calculations are based on using the variational Monte Carlo method with an accurate trial wave function depending on many variational parameters. The calculations were extended also to include the HeH++ molecular ion. The obtained results are in good agreement with the recent results.
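The variational Monte Carlo workflow the abstract relies on, a Metropolis walk over |psi|^2 with the local energy averaged along the way, can be illustrated on the simpler, analytically solvable hydrogen atom (not the confined H2+ of the paper; all parameter values are illustrative):

```python
import math
import random

def vmc_hydrogen(alpha, n_samples=50000, step=0.6, rng=random):
    """Variational Monte Carlo for the hydrogen atom (atomic units) with
    trial wavefunction psi = exp(-alpha * r): a Metropolis walk samples
    |psi|^2 and averages the local energy
    E_L = -alpha^2/2 + (alpha - 1)/r."""
    pos = [1.0, 0.0, 0.0]
    r = 1.0
    e_sum = 0.0
    for _ in range(n_samples):
        trial = [c + rng.uniform(-step, step) for c in pos]
        r_new = math.sqrt(sum(c * c for c in trial))
        # accept with probability |psi(new)/psi(old)|^2
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r)):
            pos, r = trial, r_new
        e_sum += -0.5 * alpha ** 2 + (alpha - 1.0) / r
    return e_sum / n_samples

random.seed(3)
e_exact = vmc_hydrogen(1.0)  # alpha = 1 gives E_L = -0.5 everywhere
e_off = vmc_hydrogen(0.8)    # variational: lies above the exact -0.5
```

When the trial function is exact the local energy is constant, so the variance vanishes; a real calculation such as the one above for confined H2+ optimizes many parameters to approach that zero-variance limit.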
Approaching the Ground State of a Quantum Spin Glass using a Zero-Temperature Quantum Monte Carlo
Arnab Das; Bikas K. Chakrabarti
2008-03-31
Here we discuss the annealing behavior of an infinite-range $\\pm J$ Ising spin glass in the presence of a transverse field using a zero-temperature quantum Monte Carlo. Within the simulation scheme, we demonstrate that quantum annealing not only helps find the ground state of a classical spin glass, but can also help simulate the ground state of a quantum spin glass much more efficiently, particularly when the transverse field is low.
Williams, M. L.; Gehin, J. C.; Clarno, K. T. [Oak Ridge National Laboratory, Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)
2006-07-01
The TSUNAMI computational sequences currently in the SCALE 5 code system provide an automated approach to performing sensitivity and uncertainty analysis for eigenvalue responses, using either one-dimensional discrete ordinates or three-dimensional Monte Carlo methods. This capability has recently been expanded to address eigenvalue-difference responses such as reactivity changes. This paper describes the methodology and presents results obtained for an example advanced CANDU reactor design. (authors)
The two-phase issue in the O(n) non-linear $\sigma$-model: A Monte Carlo study
B. Alles; A. Buonanno; G. Cella
1996-08-01
We have performed a high statistics Monte Carlo simulation to investigate whether the two-dimensional O(n) non-linear sigma models are asymptotically free or show a Kosterlitz-Thouless-like phase transition. We have calculated the mass gap and the magnetic susceptibility in the O(8) model with standard action and the O(3) model with Symanzik action. Our results for O(8) support the asymptotic freedom scenario.
Çatlı, Serap; Tanır, Güneş
2013-10-01
The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS) at 6 and 18 MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and ionization chamber measurements. The present findings show that if a high-Z material is used in a prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased, backscattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method. The findings show that for accurate dose calculation the Monte Carlo-based TPS should be used in patients with hip prostheses.
Pasciak, Alexander Samuel
2009-05-15
There are two principal techniques for performing Monte Carlo electron transport computations. The first, and least common, is the full track-structure method. This method individually models all physical electron interactions ...
Kim, Beop-Min
1991-01-01
of the computer time is the Monte-Carlo algorithm. Because the scattering coefficient used in this study is large compared to the absorption coefficient, a photon launched into the tissue experiences more absorption-scattering events. Hence more calculations are needed, which results in larger computer time. Also, as time passes, because the increase in the scattering coefficient is dominant, the time needed to run one Monte-Carlo algorithm gradually increases. Due to the large number of calculations needed...
Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods
Henzen, D. Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-03-15
Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multi leaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and treatment with a bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single segment electron treatment plan in terms of organs at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V{sub 95%} increased from 90% to 96% and V{sub 107%} decreased from 8% to nearly 0%. 
For a clinical breast boost irradiation, the MERT plan led to a similar homogeneity in the PTV compared to the standard treatment plan while the mean body dose was lower for the MERT plan. Regarding the second clinical case, a whole breast treatment, MERT resulted in a reduction of the lung volume receiving more than 45% of the prescribed dose when compared to the standard plan. On the other hand, the MERT plan leads to a larger low-dose lung volume and a degraded dose homogeneity in the PTV. For the clinical cases evaluated in this work, treatment plans using the BolusECT technique resulted in a more homogenous PTV and CTV coverage but higher doses to the OARs than the MERT plans. Conclusions: MERT treatments were successfully planned for phantom and clinical cases, applying a newly developed intuitive and efficient forward planning strategy that employs a MC based electron beam model for pMLC shaped electron beams. It is shown that MERT can lead to a dose reduction in OARs compared to other methods. The process of feathering MERT segments results in an improvement of the dose homogeneity in the PTV.
Silva-Rodríguez, Jesús; Aguiar, Pablo; Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela, 15782, Galicia; Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias, Santiago de Compostela, 15706, Galicia; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor; Cortés, Julia; Garrido, Miguel; Pombar, Miguel; Ruibal, Álvaro; Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias, Santiago de Compostela, 15706, Galicia; Fundación Tejerina, 28003, Madrid
2014-05-15
Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) values for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with higher extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%) with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify this fraction of patients that might be corrected for paravenous injection effect. Conclusions: The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correct SUV quantification in those patients fulfilling the proposed criterion.
SU-E-T-238: Monte Carlo Estimation of Cerenkov Dose for Photo-Dynamic Radiotherapy
Chibani, O; Price, R; Ma, C; Eldib, A; Mora, G
2014-06-01
Purpose: Estimation of Cerenkov dose from high-energy megavoltage photon and electron beams in tissue and its impact on the radiosensitization using Protoporphyrin IX (PpIX) for tumor targeting enhancement in radiotherapy. Methods: The GEPTS Monte Carlo code is used to generate dose distributions from an 18MV Varian photon beam and generic high-energy (45-MV) photon and (45-MeV) electron beams in a voxel-based tissue-equivalent phantom. In addition to calculating the ionization dose, the code scores Cerenkov energy released in the wavelength range 375–425 nm, corresponding to the peak of the PpIX absorption spectrum (Fig. 1), using the Frank-Tamm formula. Results: The simulations show that the produced Cerenkov dose suitable for activating PpIX is 4000 to 5500 times lower than the overall radiation dose for all considered beams (18 MV, 45 MV, and 45 MeV). These results contradict the recent experimental studies by Axelsson et al. (Med. Phys. 38 (2011) p 4127), where Cerenkov dose was reported to be only two orders of magnitude lower than the radiation dose. Note that our simulation results can be corroborated by a simple model where the Frank and Tamm formula is applied for electrons with 2 MeV/cm stopping power generating Cerenkov photons in the 375–425 nm range, assuming these photons have less than 1 mm penetration in tissue. Conclusion: The Cerenkov dose generated by high-energy photon and electron beams may produce minimal clinical effect in comparison with the photon fluence (or dose) commonly used for photo-dynamic therapy. At the present time, it is unclear whether Cerenkov radiation is a significant contributor to the recently observed tumor regression for patients receiving radiotherapy and PpIX versus patients receiving radiotherapy only. The ongoing study will include animal experimentation and investigation of dose rate effects on PpIX response.
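For intuition on why the simulated Cerenkov dose is so small, the Frank-Tamm formula can be integrated over the 375-425 nm band in closed form: dN/dx = 2*pi*alpha*z^2*(1/lambda1 - 1/lambda2)*(1 - 1/(beta*n)^2). A hedged sketch (the water-like refractive index and the fully relativistic electron below are assumptions for illustration, not values quoted in the record):

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant

def cherenkov_photons_per_cm(beta, n, lam1_nm, lam2_nm, z=1):
    """Cerenkov photons emitted per cm of path in [lam1, lam2],
    from the Frank-Tamm formula integrated over wavelength
    (a constant refractive index n is assumed)."""
    if beta * n <= 1.0:
        return 0.0  # below the Cerenkov threshold
    lam1, lam2 = lam1_nm * 1e-7, lam2_nm * 1e-7  # nm -> cm
    return (2.0 * math.pi * ALPHA * z**2
            * (1.0 / lam1 - 1.0 / lam2)
            * (1.0 - 1.0 / (beta * n) ** 2))
```

For a near-relativistic electron in a water-like medium (n about 1.33), this gives roughly 60 photons per cm in the 375-425 nm band, i.e. only a handful per millimetre of path, which makes a very small Cerenkov-to-ionization dose ratio plausible.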
Minibeam radiation therapy for the management of osteosarcomas: A Monte Carlo study
Martínez-Rovira, I.; Prezado, Y.
2014-06-15
Purpose: Minibeam radiation therapy (MBRT) exploits the well-established tissue-sparing effect provided by the combination of submillimetric field sizes and a spatial fractionation of the dose. The aim of this work is to evaluate the feasibility and potential therapeutic gain of MBRT, in comparison with conventional radiotherapy, for osteosarcoma treatments. Methods: Monte Carlo simulations (PENELOPE/PENEASY code) were used to study the dose distributions resulting from MBRT irradiations of rat femur and realistic human femur phantoms. As figures of merit, peak and valley doses and peak-to-valley dose ratios (PVDR) were assessed. Conversion of absorbed dose to normalized total dose (NTD) was performed in the human case. Several field sizes and irradiation geometries were evaluated. Results: It is feasible to deliver a uniform dose distribution in the target while the healthy tissue benefits from a spatial fractionation of the dose. Very high PVDR values (∼20) were achieved in the entrance beam path in the rat case. PVDR values ranged from 2 to 9 in the human phantom. An NTD{sub 2.0} of 87 Gy might be reached in the tumor in the human femur while the healthy tissues might receive valley NTD{sub 2.0} lower than 20 Gy. The doses in the tumor and healthy tissues might thus be significantly higher and lower, respectively, than the ones commonly delivered in conventional radiotherapy. Conclusions: The obtained dose distributions indicate that a gain in normal tissue sparing might be expected. This would allow the use of higher (and potentially curative) doses in the tumor. Biological experiments are warranted.
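The PVDR figure of merit is simply the ratio of peak to valley dose across the spatially fractionated profile. A minimal sketch using the local extrema of a 1D lateral dose profile (clinical analyses typically average over the central region of each peak and valley instead, so this is an illustrative simplification):

```python
import numpy as np

def peak_valley_dose_ratio(lateral_profile):
    """Estimate the PVDR of a spatially fractionated dose profile.

    Peaks are taken as local maxima and valleys as local minima of the
    lateral profile; the PVDR is the ratio of their mean doses.
    """
    d = np.asarray(lateral_profile, dtype=float)
    interior = d[1:-1]
    peaks = interior[(interior > d[:-2]) & (interior > d[2:])]
    valleys = interior[(interior < d[:-2]) & (interior < d[2:])]
    return peaks.mean() / valleys.mean()
```

A profile alternating between 10 Gy peaks and 1 Gy valleys would thus report a PVDR of 10.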
BENCHMARK TESTS FOR MARKOV CHAIN MONTE CARLO FITTING OF EXOPLANET ECLIPSE OBSERVATIONS
Rogers, Justin; Lopez-Morales, Mercedes; Apai, Daniel; Adams, Elisabeth
2013-04-10
Ground-based observations of exoplanet eclipses provide important clues to the planets' atmospheric physics, yet systematics in light curve analyses are not fully understood. It is unknown if measurements suggesting near-infrared flux densities brighter than models predict are real, or artifacts of the analysis processes. We created a large suite of model light curves, using both synthetic and real noise, and tested the common process of light curve modeling and parameter optimization with a Markov Chain Monte Carlo algorithm. With synthetic white noise models, we find that input eclipse signals are generally recovered within 10% accuracy for eclipse depths greater than the noise amplitude, and to smaller depths for higher sampling rates and longer baselines. Red noise models see greater discrepancies between input and measured eclipse signals, often biased in one direction. In real data, we find that systematic biases result even with a complex model to account for trends, and significant false eclipse signals may appear in a non-Gaussian distribution. To quantify the bias and validate an eclipse measurement, we compare both the planet-hosting star and several of its neighbors to a separately chosen control sample of field stars. Re-examining the Rogers et al. Ks-band measurement of CoRoT-1b finds an eclipse 3190{sup +370}{sub -440} ppm deep centered at {phi}{sub me} = 0.50418{sup +0.00197}{sub -0.00203}. Finally, we provide and recommend the use of selected data sets we generated as a benchmark test for eclipse modeling and analysis routines, and propose criteria to verify eclipse detections.
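The fitting procedure being benchmarked can be illustrated end to end on a synthetic boxcar eclipse with white noise, recovering the injected depth with a single-parameter Metropolis sampler (all numbers below are illustrative choices, not the paper's data sets):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic light curve: unit flux with a boxcar eclipse plus white noise.
n, true_depth, sigma = 2000, 0.003, 0.002
in_eclipse = np.zeros(n, bool)
in_eclipse[800:1200] = True
flux = 1.0 - true_depth * in_eclipse + rng.normal(0.0, sigma, n)

def log_like(depth):
    """Gaussian log-likelihood of the boxcar eclipse model."""
    resid = flux - (1.0 - depth * in_eclipse)
    return -0.5 * np.sum(resid**2) / sigma**2

# Metropolis sampler over the single eclipse-depth parameter.
depth, ll = 0.0, log_like(0.0)
chain = []
for _ in range(20000):
    prop = depth + rng.normal(0.0, 2e-4)
    ll_prop = log_like(prop)
    if np.log(rng.random()) < ll_prop - ll:   # accept/reject step
        depth, ll = prop, ll_prop
    chain.append(depth)
post = np.array(chain[5000:])                 # discard burn-in
```

With white noise the posterior mean lands close to the injected depth; the red-noise biases the paper reports are precisely what this idealized setup does not reproduce.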
Adsorption of branched and dendritic polymers onto flat surfaces: A Monte Carlo study
Sommer, J.-U. [Leibniz Institute of Polymer Research Dresden e. V., 01069 Dresden (Germany); Institute for Theoretical Physics, Technische Universität Dresden, 01069 Dresden (Germany)]; Kłos, J. S. [Leibniz Institute of Polymer Research Dresden e. V., 01069 Dresden (Germany); Faculty of Physics, A. Mickiewicz University, Umultowska 85, 61-614 Poznań (Poland)]; Mironova, O. N. [Leibniz Institute of Polymer Research Dresden e. V., 01069 Dresden (Germany)]
2013-12-28
Using Monte Carlo simulations based on the bond fluctuation model we study the adsorption of starburst dendrimers with flexible spacers onto a flat surface. The calculations are performed for various generation numbers G and spacer lengths S in a wide range of the reduced temperature ε, the measure of the interaction strength between the monomers and the surface. Our simulations indicate a two-step adsorption scenario. Below the critical point of adsorption, ε{sub c}, a weakly adsorbed state of the dendrimer is found. Here, the dendrimer retains its shape but sticks to the surface by adsorbed spacers. By lowering the temperature below a spacer-length dependent value, ε*(S) < ε{sub c}, a step-like transition into a strongly adsorbed state takes place. In the flatly adsorbed state the shape of the dendrimer is well described by a mean field model of a dendrimer in two dimensions. We also performed simulations of star-polymers which display a simple crossover-behavior in full analogy to linear chains. By analyzing the order parameter of the adsorption transition, we determine the critical point of adsorption of the dendrimers, which is located close to the critical point of adsorption for star-polymers. While the order parameter for the adsorbed spacers displays a critical crossover scaling, the overall order parameter, which combines both critical and discontinuous transition effects, does not display simple scaling. The step-like transition from the weakly into the strongly adsorbed regime is confirmed by analyzing the shape-anisotropy of the dendrimers. We present a mean-field model based on the concept of spacer adsorption which predicts a discontinuous transition of dendrimers due to an excluded volume barrier. The latter results from an increased density of the dendrimer in the flatly adsorbed state which has to be overcome before this state is thermodynamically stable.
Local and chain dynamics in miscible polymer blends: A Monte Carlo simulation study
Jutta Luettmer-Strathmann; Manjeera Mantina
2005-11-07
Local chain structure and local environment play an important role in the dynamics of polymer chains in miscible blends. In general, the friction coefficients that describe the segmental dynamics of the two components in a blend differ from each other and from those of the pure melts. In this work, we investigate polymer blend dynamics with Monte Carlo simulations of a generalized bond-fluctuation model, where differences in the interaction energies between non-bonded nearest neighbors distinguish the two components of a blend. Simulations employing only local moves and respecting a non-bond crossing condition were carried out for blends with a range of compositions, densities, and chain lengths. The blends investigated here have long-chain dynamics in the crossover region between Rouse and entangled behavior. In order to investigate the scaling of the self-diffusion coefficients, characteristic chain lengths $N_\mathrm{c}$ are calculated from the packing length of the chains. These are combined with a local mobility $\mu$ determined from the acceptance rate and the effective bond length to yield characteristic self-diffusion coefficients $D_\mathrm{c}=\mu/N_\mathrm{c}$. We find that the data for both melts and blends collapse onto a common line in a graph of reduced diffusion coefficients $D/D_\mathrm{c}$ as a function of reduced chain length $N/N_\mathrm{c}$. The composition dependence of dynamic properties is investigated in detail for melts and blends with chains of length twenty at three different densities. For these blends, we calculate friction coefficients from the local mobilities and consider their composition and pressure dependence. The friction coefficients determined in this way show many of the characteristics observed in experiments on miscible blends.
Singh, Jayant K.
The Gibbs ensemble Monte Carlo method (GEMC) by Panagiotopoulos greatly enhanced our ability to predict the phase behavior ... energy is enhanced, and the likelihood of molecules overlapping is reduced ... densities and vapor pressures of select n-alkanes. Surface tension values for butane, hexane, and octane ...
Mei, Donghai; Neurock, Matthew; Smith, C Michael
2009-10-22
The kinetics for the selective hydrogenation of acetylene-ethylene mixtures over model Pd(111) and bimetallic Pd-Ag alloy surfaces were examined using first principles based kinetic Monte Carlo (KMC) simulations to elucidate the effects of alloying as well as process conditions (temperature and hydrogen partial pressure). The mechanisms that control the selective and unselective routes, which include hydrogenation, dehydrogenation, and C–C bond breaking pathways, were analyzed using first-principles density functional theory (DFT) calculations. The results were used to construct an intrinsic kinetic database that was used in a variable time step kinetic Monte Carlo simulation to follow the kinetics and the molecular transformations in the selective hydrogenation of acetylene-ethylene feeds over Pd and Pd-Ag surfaces. The lateral interactions between coadsorbates that occur through-surface and through-space were estimated using DFT-parameterized bond order conservation and van der Waals interaction models, respectively. The simulation results show that the rate of acetylene hydrogenation as well as the ethylene selectivity increase with temperature over both the Pd(111) and the Pd-Ag/Pd(111) alloy surfaces. The selective hydrogenation of acetylene to ethylene proceeds via the formation of a vinyl intermediate. The unselective formation of ethane is the result of the over-hydrogenation of ethylene as well as over-hydrogenation of vinyl to form ethylidene. Ethylidene further hydrogenates to form ethane and dehydrogenates to form ethylidyne. While ethylidyne is not reactive, it can block adsorption sites, which limits the availability of hydrogen on the surface and thus acts to enhance the selectivity.
Alloying Ag into the Pd surface decreases the overall rate but increases the ethylene selectivity significantly by promoting the selective hydrogenation of vinyl to ethylene and concomitantly suppressing the unselective path involving the hydrogenation of vinyl to ethylidene and the dehydrogenation of ethylidene to ethylidyne. This is consistent with experimental results which suggest that only the predominant hydrogenation path, involving the sequential addition of hydrogen to form vinyl and ethylene, exists over the Pd-Ag alloys. Ag enhances the desorption of ethylene and hydrogen from the surface, thus limiting their ability to undergo subsequent reactions. The simulated apparent activation barriers were calculated to be 32-44 kJ/mol on Pd(111) and 26-31 kJ/mol on Pd-Ag/Pd(111), respectively. The reaction was found to be essentially first order in hydrogen over Pd(111) and Pd-Ag/Pd(111) surfaces. The results reveal that increases in the hydrogen partial pressure increase the activity but decrease ethylene selectivity over both Pd and Pd-Ag/Pd(111) surfaces. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.
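A variable time step KMC of the kind used in this study advances the clock by exponentially distributed waiting times drawn from the total rate, then selects one event in proportion to its rate. A generic sketch of that loop (the event names and rate constants below are hypothetical stand-ins; in the paper the catalogue and rates come from the DFT-derived database and depend on the local adsorbate environment):

```python
import math
import random

def kmc_run(rates, steps, seed=1):
    """Variable time step (Gillespie-type) kinetic Monte Carlo.

    `rates` maps event names to rate constants (1/s) for a fixed event
    catalogue. Each step draws an exponentially distributed waiting time
    from the total rate and selects one event with probability
    proportional to its rate.
    """
    rng = random.Random(seed)
    total = sum(rates.values())
    t = 0.0
    counts = {k: 0 for k in rates}
    for _ in range(steps):
        t += -math.log(1.0 - rng.random()) / total   # exponential waiting time
        r, acc = rng.random() * total, 0.0
        for k, rate in rates.items():                # roulette-wheel selection
            acc += rate
            if r < acc:
                counts[k] += 1
                break
    return t, counts
```

Over many steps the event counts converge to the rate ratios, and the elapsed time to steps divided by the total rate.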
SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations
Ono, T; Araki, F
2014-06-01
Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using the GMctdospp (IMPS, Germany) based on the EGSnrc user code. The X-ray spectra and a bowtie filter for MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profile in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol =51.4 mGy), mean doses for brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol =35.6 mGy), mean doses for brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol =19.0 mGy), mean doses for lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol =14.4 mGy), the mean doses for kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol =6.76 mGy), mean doses for kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses were considerably different from CTDIvol values. Conclusion: MC dose distributions calculated using patient CT images are useful for evaluating organ doses absorbed by individual patients.
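The reduction from a dose-volume histogram to a mean organ dose, as used above, is just a volume-weighted average over the differential DVH bins. A minimal sketch (the bin values in the usage note are illustrative, not data from the record):

```python
def mean_dose_from_dvh(dose_bins, volume_fractions):
    """Mean organ dose from a differential dose-volume histogram (DVH).

    `dose_bins` are bin-centre doses (e.g. in mGy) and `volume_fractions`
    the fraction of the organ volume falling in each bin; the mean dose
    is their volume-weighted average.
    """
    total = sum(volume_fractions)
    return sum(d * v for d, v in zip(dose_bins, volume_fractions)) / total
```

For instance, an organ with 20% of its volume at 10 mGy, 50% at 20 mGy, and 30% at 30 mGy has a mean dose of 21 mGy.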
Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method
Basire, M.; Soudan, J.-M.; Angelié, C., E-mail: christian.angelie@cea.fr [Laboratoire Francis Perrin, CNRS-URA 2453, CEA/DSM/IRAMIS/LIDyL, F-91191 Gif-sur-Yvette Cedex (France)
2014-09-14
The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotube growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the ?-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, g{sub p}(E{sub p}), in the configurational space, in terms of the potential energy of the system, with good and well controlled convergence properties, particularly in the melting phase transition zone which is of interest in this work. To describe the system, an iron potential has been implemented, called “corrected EAM” (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at and a five times larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients S{sub ij} are exactly computed with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature T{sub m} is plotted in terms of the cluster atom number N{sub at}. The standard N{sub at}{sup -1/3} linear dependence (Pawlow law) is observed for N{sub at} >300, allowing an extrapolation up to the bulk metal at 1940 ± 50 K. For N{sub at} <150, a strong divergence is observed compared to the Pawlow law. The melting transition, which begins at the surface, is characterized by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I.
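The mapping algorithm itself is specific to the cited papers, but the flat-histogram idea it shares with standard methods can be illustrated with a bare-bones Wang-Landau run (a different, classic flat-histogram scheme, swapped in here for illustration) on a toy system whose density of states is known exactly; the histogram flatness check of a production implementation is omitted:

```python
import math
import random

# Wang-Landau flat-histogram estimate of the density of states g(E)
# for an 8-spin periodic Ising chain. Exactly, g = 2*C(8, u) for u
# unsatisfied bonds, so g(E=0)/g(E=-8) = 140/2 = 70.
rng = random.Random(2)
N = 8
levels = [-8, -4, 0, 4, 8]           # allowed energies for N = 8
log_g = {E: 0.0 for E in levels}     # running estimate of ln g(E)
s = [1] * N
E = -N                               # all spins aligned
f = 1.0                              # current ln-modification factor
while f > 1e-4:
    for _ in range(20000):
        i = rng.randrange(N)
        dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % N])
        # Accept with probability min(1, g(E)/g(E_new)) to flatten visits.
        if math.log(rng.random() + 1e-300) < log_g[E] - log_g[E + dE]:
            s[i] = -s[i]
            E += dE
        log_g[E] += f                # update the visited level
    f /= 2                           # refine the modification factor

ratio = math.exp(log_g[0] - log_g[-8])   # estimate of g(0)/g(-8)
```

Because every level is visited with roughly equal frequency, the estimate converges even across the rare low-entropy states, which is exactly the property that makes flat-histogram methods attractive near melting transitions.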
Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy
Henzen, D. Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-02-15
Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm{sup 2} fields differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies. 
For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. Conclusions: The results of the dose comparisons suggest that the developed beam model is suitable to accurately reconstruct photon MLC shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.
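The 2%/2 mm style of comparison quoted in this record combines a dose-difference test with a distance-to-agreement (DTA) search. A simplified 1D stand-in (real implementations interpolate the profiles and often use the gamma index; reusing the dose tolerance for the DTA match below is a deliberate simplification):

```python
import numpy as np

def passes_dd_or_dta(measured, calculated, spacing_mm, dd=0.02, dta_mm=2.0):
    """Per-point check of a 1D profile against a dose-difference OR
    distance-to-agreement criterion (simplified illustration)."""
    m = np.asarray(measured, dtype=float)
    c = np.asarray(calculated, dtype=float)
    tol = dd * m.max()                     # dose tolerance, % of max dose
    ok = np.abs(c - m) <= tol              # dose-difference test
    k = int(round(dta_mm / spacing_mm))    # DTA search window in samples
    for i in np.where(~ok)[0]:             # DTA rescue for failing points
        lo, hi = max(0, i - k), min(len(c), i + k + 1)
        if np.min(np.abs(c[lo:hi] - m[i])) <= tol:
            ok[i] = True
    return ok
```

A spatially shifted but otherwise correct profile fails the pure dose-difference test in the steep gradient region yet passes once the DTA search is allowed, which is why the two criteria are combined.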
Search for New Heavy Higgs Boson in B-L model at the LHC using Monte Carlo Simulation
Hesham Mansour; Nady Bakhet
2013-04-24
The aim of this work is to search for a new heavy Higgs boson in the B-L extension of the Standard Model at the LHC, using data produced from simulated collisions between two protons at different center-of-mass energies by Monte Carlo event generator programs, in order to find new Higgs boson signatures. We also study the production and decay channels of the Higgs boson in this model and its interactions with the model's other new particles, namely the new massive neutral gauge boson and the new heavy right-handed fermionic neutrinos.
Leman, S.W.; McCarthy, K.A.; /MIT, MKI; Brink, P.L.; Cabrera, B.; Cherry, M.; /Stanford U., Phys. Dept.; Silva, E.Do Couto E; /SLAC; Figueroa-Feliciano, E.; /MIT, MKI; Kim, P.; /SLAC; Mirabolfathi, N.; /UC, Berkeley; Pyle, M.; /Stanford U., Phys. Dept.; Resch, R.; /SLAC; Sadoulet, B.; Serfass, B.; Sundqvist, K.M.; /UC, Berkeley; Tomada, A.; /Stanford U., Phys. Dept.; Young, B.A.; /Santa Clara U.
2012-06-05
We present results on phonon quasidiffusion and Transition Edge Sensor (TES) studies in a large, 3-inch diameter, 1-inch thick [100] high purity germanium crystal, cooled to 50 mK in the vacuum of a dilution refrigerator and exposed to 59.5 keV gamma-rays from an Am-241 calibration source. We compare calibration data with results from a Monte Carlo simulation which includes phonon quasidiffusion and the generation of phonons created by charge carriers as they are drifted across the detector by ionization readout channels. The phonon energy is then parsed into TES based phonon readout channels and input into a TES simulator.
Ulybyshev, M V
2015-01-01
We study the electronic properties of graphene with a finite concentration of vacancies or other resonant scatterers by straightforward lattice quantum Monte Carlo calculations. Taking into account the realistic long-range Coulomb interaction, we calculate the distribution of spin density associated with midgap states and demonstrate antiferromagnetic ordering. Energy gaps are opened due to the interaction effects, both in the bare graphene spectrum and in the vacancy/impurity bands. In the case of a 5% concentration of resonant scatterers the latter gap is estimated as 0.7 eV and 1.1 eV for graphene on boron nitride and freely suspended graphene, respectively.
M. V. Ulybyshev; M. I. Katsnelson
2015-05-22
We study the electronic properties of graphene with a finite concentration of vacancies or other resonant scatterers by straightforward lattice quantum Monte Carlo calculations. Taking into account the realistic long-range Coulomb interaction, we calculate the distribution of spin density associated with midgap states and demonstrate antiferromagnetic ordering. Energy gaps are opened due to the interaction effects, both in the bare graphene spectrum and in the vacancy/impurity bands. In the case of a 5% concentration of resonant scatterers the latter gap is estimated as 0.7 eV and 1.1 eV for graphene on boron nitride and freely suspended graphene, respectively.
Monte Carlo study of very weak first-order transitions in the three-dimensional Ashkin-Teller model
Peter Arnold; Yan Zhang
1997-07-10
We propose numerical simulations of the Ashkin-Teller model as a foil for theoretical techniques for studying very weakly first-order phase transitions in three dimensions. The Ashkin-Teller model is a simple two-spin model whose parameters can be adjusted so that it has an arbitrarily weakly first-order phase transition. In this limit, there are quantities characterizing the first-order transition which are universal: we measure the relative discontinuity of the specific heat, the correlation length, and the susceptibility across the transition by Monte Carlo simulation.
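A minimal Metropolis simulation of the two-spin model the authors study can be sketched as follows; the lattice size, couplings, temperature, and sweep count below are arbitrary illustrative choices, and a production study would use cluster updates and far longer runs:

```python
import math
import random

def metropolis_ashkin_teller(L=6, J=1.0, K=0.5, beta=0.2, sweeps=200, seed=3):
    """Metropolis sampling of the 3D Ashkin-Teller model on an L^3 lattice.

    Each site carries two Ising spins (s, t), coupled through the standard
    Hamiltonian H = -sum_<ij> [ J (s_i s_j + t_i t_j) + K s_i s_j t_i t_j ].
    Returns the energy per site as a minimal observable.
    """
    rng = random.Random(seed)
    n = L * L * L
    s, t = [1] * n, [1] * n

    def neighbors(i):
        x, y, z = i % L, (i // L) % L, i // (L * L)
        return [((x + dx) % L) + ((y + dy) % L) * L + ((z + dz) % L) * L * L
                for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1))]

    def local_e(i):
        return sum(-(J * (s[i] * s[j] + t[i] * t[j])
                     + K * s[i] * s[j] * t[i] * t[j])
                   for j in neighbors(i))

    for _ in range(sweeps * n):
        i = rng.randrange(n)
        spin = s if rng.random() < 0.5 else t   # flip either spin species
        e_old = local_e(i)
        spin[i] = -spin[i]
        dE = local_e(i) - e_old
        if dE > 0 and rng.random() >= math.exp(-beta * dE):
            spin[i] = -spin[i]                  # reject the move
    return sum(local_e(i) for i in range(n)) / (2 * n)  # bonds counted twice
```

Tuning J and K is what moves the model along the line where the transition becomes arbitrarily weakly first order, which is the regime the paper proposes as a benchmark.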
Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419
Hulett, David T.; Nosbisch, Michael R.
2012-07-01
This discussion of the recommended practice (RP) 57R-09 of AACE International defines the integrated analysis of schedule and cost risk to estimate the appropriate level of cost and schedule contingency reserve on projects. The main contribution of this RP is to include the impact of schedule risk on cost risk and hence on the need for cost contingency reserves. Additional benefits include the prioritizing of the risks to cost, some of which are risks to schedule, so that risk mitigation may be conducted in a cost-effective way; scatter diagrams of time-cost pairs for developing joint targets of time and cost; and probabilistic cash flow, which shows cash flow at different levels of certainty. Integrating cost and schedule risk into one analysis based on the project schedule loaded with costed resources from the cost estimate provides both: (1) more accurate cost estimates than if the schedule risk were ignored or incorporated only partially, and (2) an illustration of the importance of schedule risk to cost risk when the durations of activities using labor-type (time-dependent) resources are risky. Many activities such as detailed engineering, construction or software development are mainly conducted by people who need to be paid even if their work takes longer than scheduled. Level-of-effort resources, such as the project management team, are extreme examples of time-dependent resources, since if the project duration exceeds its planned duration the cost of these resources will increase over their budgeted amount. The integrated cost-schedule risk analysis is based on: - A high quality CPM schedule with logic tight enough so that it will provide the correct dates and critical paths during simulation automatically without manual intervention. - A contingency-free estimate of project costs that is loaded on the activities of the schedule. - Resolution of inconsistencies between the cost estimate and schedule that often creep into those documents as project execution proceeds.
- Good-quality risk data that are usually collected in risk interviews of the project team, management and others knowledgeable in the risk of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method. The risk driver method is based on the fundamental principle that identifiable risks drive overall cost and schedule risk. - A Monte Carlo simulation software program that can simulate schedule risk, burn rate risk and time-independent resource risk. The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates including contingency reserves are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, deciding which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and of time, calculated at a level that represents an acceptable degree of certainty and uncertainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserve of time and cost that are the main results of this analysis apply if that plan is to be followed.
Of course project managers have the option of re-planning and re-scheduling in the face of new facts, in part by m
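The mechanics of the simulation the RP describes can be illustrated with a toy network: sample activity durations from three-point estimates, let the longest parallel path set the finish, and read the contingency off a percentile of the resulting distribution. All activity names and estimates below are hypothetical, not taken from the RP:

```python
import random

def simulate_cpm(n_iter=20000, seed=4):
    """Monte Carlo of a toy CPM network A -> (B parallel with C) -> D.

    Durations are triangular three-point estimates (min, most likely, max);
    each iteration re-evaluates which parallel path is critical, so the
    schedule logic, not manual intervention, drives the finish date.
    """
    rng = random.Random(seed)
    acts = {  # hypothetical three-point estimates, in days
        "A": (8, 10, 14), "B": (18, 20, 30), "C": (15, 22, 28), "D": (4, 5, 8),
    }
    finishes = []
    for _ in range(n_iter):
        d = {k: rng.triangular(lo, hi, mode) for k, (lo, mode, hi) in acts.items()}
        finishes.append(d["A"] + max(d["B"], d["C"]) + d["D"])
    finishes.sort()
    p80 = finishes[int(0.8 * n_iter)]      # 80th-percentile finish date
    deterministic = 10 + max(20, 22) + 5   # plan built from most-likely values
    return deterministic, p80
```

The P-80 finish exceeds the deterministic plan both because the duration distributions are right-skewed and because the merge of parallel paths biases the maximum upward; the gap between the two is the schedule contingency at that confidence level.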
Dominik Smith; Lorenz von Smekal
2014-03-14
We report on Hybrid-Monte-Carlo simulations of the tight-binding model with long-range Coulomb interactions for the electronic properties of graphene. We investigate the spontaneous breaking of sublattice symmetry corresponding to a transition from the semimetal to an antiferromagnetic insulating phase. Our short-range interactions thereby include the partial screening due to electrons in higher energy states from ab initio calculations based on the constrained random phase approximation [T. O. Wehling {\it et al.}, Phys. Rev. Lett. {\bf 106}, 236805 (2011)]. In contrast to a similar previous Monte-Carlo study [M. V. Ulybyshev {\it et al.}, Phys. Rev. Lett. {\bf 111}, 056801 (2013)] we also include a phenomenological model which describes the transition to the unscreened bare Coulomb interactions of graphene at half filling in the long-wavelength limit. Our results show, however, that the critical coupling for the antiferromagnetic Mott transition is largely insensitive to the strength of these long-range Coulomb tails. They hence confirm the prediction that suspended graphene remains in the semimetal phase when a realistic static screening of the Coulomb interactions is included.
Griesheimer, D. P. [Bettis Atomic Power Laboratory, P.O. Box 79, West Mifflin, PA 15122 (United States); Stedry, M. H. [Knolls Atomic Power Laboratory, P.O. Box 1072, Schenectady, NY 12301 (United States)
2013-07-01
A rigorous treatment of energy deposition in a Monte Carlo transport calculation, including coupled transport of all secondary and tertiary radiations, increases the computational cost of a simulation dramatically, making fully-coupled heating impractical for many large calculations, such as 3-D analysis of nuclear reactor cores. However, in some cases, the added benefit from a full-fidelity energy-deposition treatment is negligible, especially considering the increased simulation run time. In this paper we present a generalized framework for the in-line calculation of energy deposition during steady-state Monte Carlo transport simulations. This framework gives users the ability to select among several energy-deposition approximations with varying levels of fidelity. The paper describes the computational framework, along with derivations of four energy-deposition treatments. Each treatment uses a unique set of self-consistent approximations, which ensure that energy balance is preserved over the entire problem. By providing several energy-deposition treatments, each with different approximations for neglecting the energy transport of certain secondary radiations, the proposed framework provides users the flexibility to choose between accuracy and computational efficiency. Numerical results are presented, comparing heating results among the four energy-deposition treatments for a simple reactor/compound shielding problem. The results illustrate the limitations and computational expense of each of the four energy-deposition treatments. (authors)
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Matthew G. Baring; Keith Ogilvie; Donald Ellison; Robert Forsyth
1996-10-02
The most stringent test of theoretical models of the first-order Fermi mechanism at collisionless astrophysical shocks is a comparison of the theoretical predictions with observational data on particle populations. Such comparisons have yielded good agreement between observations at the quasi-parallel portion of the Earth's bow shock and three theoretical approaches, including Monte Carlo kinetic simulations. This paper extends such model testing to the realm of oblique interplanetary shocks: here observations of proton and alpha particle distributions made by the SWICS ion mass spectrometer on Ulysses at nearby interplanetary shocks are compared with test particle Monte Carlo simulation predictions of accelerated populations. The plasma parameters used in the simulation are obtained from measurements of solar wind particles and the magnetic field upstream of individual shocks. Good agreement between downstream spectral measurements and the simulation predictions is obtained for two shocks by allowing the ratio of the mean free scattering length to the ionic gyroradius to vary in an optimization of the fit to the data. Generally small values of this ratio are obtained, corresponding to the case of strong scattering. The acceleration process appears to be roughly independent of the mass or charge of the species.
Axel Hoefer; Oliver Buss; Maik Hennebach; Michael Schmid; Dieter Porsch
2014-11-12
MOCABA is a combination of Monte Carlo sampling and Bayesian updating algorithms for the prediction of integral functions of nuclear data, such as reactor power distributions or neutron multiplication factors. Similarly to the established Generalized Linear Least Squares (GLLS) methodology, MOCABA offers the capability to utilize integral experimental data to reduce the prior uncertainty of integral observables. The MOCABA approach, however, does not involve any series expansions and, therefore, does not suffer from the breakdown of first-order perturbation theory for large nuclear data uncertainties. This is related to the fact that, in contrast to the GLLS method, the updating mechanism within MOCABA is applied directly to the integral observables without having to "adjust" any nuclear data. A central part of MOCABA is the nuclear data Monte Carlo program NUDUNA, which performs random sampling of nuclear data evaluations according to their covariance information and converts them into libraries for transport code systems like MCNP or SCALE. What is special about MOCABA is that it can be applied to any integral function of nuclear data, and any integral measurement can be taken into account to improve the prediction of an integral observable of interest. In this paper we present two example applications of the MOCABA framework: the prediction of the neutron multiplication factor of a water-moderated PWR fuel assembly based on 21 criticality safety benchmark experiments and the prediction of the power distribution within a toy model reactor containing 100 fuel assemblies.
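The observable-space updating at the heart of MOCABA can be illustrated with a minimal linear-Gaussian sketch. This is not the MOCABA or NUDUNA code; the sample size, sensitivities, and measured benchmark value below are invented for illustration. A shared "nuclear-data" variable induces a correlation between a measured benchmark and an application observable, and conditioning on the measurement reduces the posterior uncertainty without adjusting any nuclear data:

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared "nuclear-data" degree of freedom induces correlated Monte Carlo
# samples of a measured benchmark k-eff and an application k-eff.
n_samples = 20000
data = rng.standard_normal(n_samples)
k_benchmark = 1.000 + 0.010 * data + 0.002 * rng.standard_normal(n_samples)
k_application = 0.950 + 0.008 * data + 0.002 * rng.standard_normal(n_samples)

# Linear-Gaussian Bayesian update applied directly to the observables:
# condition the application observable on the measured benchmark value,
# using moments estimated from the Monte Carlo sample.
measured = 1.003                                  # hypothetical measurement
cov = np.cov(k_application, k_benchmark)
gain = cov[0, 1] / cov[1, 1]                      # regression (Kalman-like) gain
post_mean = k_application.mean() + gain * (measured - k_benchmark.mean())
post_var = cov[0, 0] - gain * cov[0, 1]

print(post_mean, post_var, cov[0, 0])
```

Because the benchmark measurement is informative about the shared nuclear-data uncertainty, the posterior variance of the application observable is strictly smaller than its prior variance, which is the point of incorporating integral experiments.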
Kawano, T; Weidenmüller, H A
2015-01-01
Using a random-matrix approach and Monte-Carlo simulations, we generate scattering matrices and cross sections for compound-nucleus reactions. In the absence of direct reactions we compare the average cross sections with the analytic solution given by the Gaussian Orthogonal Ensemble (GOE) triple integral, and with predictions of statistical approaches such as the ones due to Moldauer, to Hofmann, Richert, Tepel, and Weidenmüller, and to Kawai, Kerman, and McVoy. We find perfect agreement with the GOE triple integral and display the limits of validity of the latter approaches. We establish a criterion for the width of the energy-averaging interval such that the relative difference between the ensemble-averaged and the energy-averaged scattering matrices lies below a given bound. Direct reactions are simulated in terms of an energy-independent background matrix. In that case, cross sections averaged over the ensemble of Monte-Carlo simulations fully agree with results from the Engelbrecht-Weidenmüller ...
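Drawing a member of the Gaussian Orthogonal Ensemble, as used in such Monte-Carlo studies, takes only a few lines. The sketch below is a generic recipe, not the authors' code: it symmetrizes a Gaussian random matrix and checks that the rescaled eigenvalue spectrum stays within the support of Wigner's semicircle (approximately ±√2 with this normalization):

```python
import numpy as np

rng = np.random.default_rng(1)

def goe_sample(n):
    """Draw one n x n matrix from the Gaussian Orthogonal Ensemble
    by symmetrizing an i.i.d. Gaussian matrix."""
    a = rng.standard_normal((n, n))
    return (a + a.T) / 2.0

# The ensemble-averaged eigenvalue density approaches Wigner's semicircle.
n, trials = 200, 50
eigs = np.concatenate([np.linalg.eigvalsh(goe_sample(n)) for _ in range(trials)])
eigs /= np.sqrt(n)   # rescale: support is then approximately [-sqrt(2), sqrt(2)]
print(eigs.min(), eigs.max())
```

Scattering matrices and cross sections are then built on top of such Hamiltonian samples; the sketch only covers the ensemble-generation step.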
Jiang, F.-J.; Nyfeler, M.; Kaempfer, F.
2009-07-15
Motivated by the possible mechanism for the pinning of the electronic liquid crystal direction in YBa{sub 2}Cu{sub 3}O{sub 6.45} as proposed by Pardini et al. [Phys. Rev. B 78, 024439 (2008)], we use the first-principles Monte Carlo method to study the spin-(1/2) Heisenberg model with antiferromagnetic couplings J{sub 1} and J{sub 2} on the square lattice. In particular, the low-energy constants spin stiffness {rho}{sub s}, staggered magnetization M{sub s}, and spin wave velocity c are determined by fitting the Monte Carlo data to the predictions of magnon chiral perturbation theory. Further, the spin stiffnesses {rho}{sub s1} and {rho}{sub s2} as a function of the ratio J{sub 2}/J{sub 1} of the couplings are investigated in detail. Although we find good agreement between our results and those obtained by the series expansion method in the weakly anisotropic regime, for strong anisotropy we observe discrepancies.
Monte Carlo calculations of electron beam quality conversion factors for several ion chamber types
Muir, B. R.; Rogers, D. W. O.
2014-11-01
Purpose: To provide a comprehensive investigation of electron beam reference dosimetry using Monte Carlo simulations of the response of 10 plane-parallel and 18 cylindrical ion chamber types. Specific emphasis is placed on the determination of the optimal shift of the chambers’ effective point of measurement (EPOM) and beam quality conversion factors. Methods: The EGSnrc system is used for calculations of the absorbed dose to gas in ion chamber models and the absorbed dose to water as a function of depth in a water phantom on which cobalt-60 and several electron beam source models are incident. The optimal EPOM shifts of the ion chambers are determined by comparing calculations of R{sub 50} converted from I{sub 50} (calculated using ion chamber simulations in phantom) to R{sub 50} calculated using simulations of the absorbed dose to water vs depth in water. Beam quality conversion factors are determined as the calculated ratio of the absorbed dose to water to the absorbed dose to air in the ion chamber at the reference depth in a cobalt-60 beam to that in electron beams. Results: For most plane-parallel chambers, the optimal EPOM shift is inside of the active cavity but different from the shift determined with water-equivalent scaling of the front window of the chamber. These optimal shifts for plane-parallel chambers also reduce the scatter of beam quality conversion factors, k{sub Q}, as a function of R{sub 50}. The optimal shift of cylindrical chambers is found to be less than the 0.5 r{sub cav} recommended by current dosimetry protocols. In most cases, the values of the optimal shift are close to 0.3 r{sub cav}. Values of k{sub ecal} are calculated and compared to those from the TG-51 protocol and differences are explained using accurate individual correction factors for a subset of ion chambers investigated. High-precision fits to beam quality conversion factors normalized to unity in a beam with R{sub 50} = 7.5 cm (k{sub Q}{sup ?}) are provided. 
These factors avoid the use of gradient correction factors as used in the TG-51 protocol, although a chamber-dependent optimal shift in the EPOM is required when using plane-parallel chambers while no shift is needed with cylindrical chambers. The sensitivity of these results to parameters used to model the ion chambers is discussed and the uncertainty related to the practical use of these results is evaluated. Conclusions: These results will prove useful as electron beam reference dosimetry protocols are being updated. The analysis of this work indicates that cylindrical ion chambers may be appropriate for use in low-energy electron beams, but measurements are required to characterize their use in these beams.
Statistical Exploration of Electronic Structure of Molecules from Quantum Monte-Carlo Simulations
Prabhat, Mr; Zubarev, Dmitry; Lester, Jr., William A.
2010-12-22
In this report, we present results from analysis of Quantum Monte Carlo (QMC) simulation data with the goal of determining internal structure of a 3N-dimensional phase space of an N-electron molecule. We are interested in mining the simulation data for patterns that might be indicative of the bond rearrangement as molecules change electronic states. We examined simulation output that tracks the positions of two coupled electrons in the singlet and triplet states of an H2 molecule. The electrons trace out a trajectory, which was analyzed with a number of statistical techniques. This project was intended to address the following scientific questions: (1) Do high-dimensional phase spaces characterizing electronic structure of molecules tend to cluster in any natural way? Do we see a change in clustering patterns as we explore different electronic states of the same molecule? (2) Since it is hard to understand the high-dimensional space of trajectories, can we project these trajectories to a lower dimensional subspace to gain a better understanding of patterns? (3) Do trajectories inherently lie in a lower-dimensional manifold? Can we recover that manifold? After extensive statistical analysis, we are now in a better position to respond to these questions. (1) We definitely see clustering patterns, and differences between the H2 and H2tri datasets. These are revealed by the pamk method in a fairly reliable manner and can potentially be used to distinguish bonded and non-bonded systems and get insight into the nature of bonding. (2) Projecting to a lower dimensional subspace ({approx}4-5) using PCA or Kernel PCA reveals interesting patterns in the distribution of scalar values, which can be related to the existing descriptors of electronic structure of molecules. 
Also, these results can be immediately used to develop robust tools for analysis of noisy data obtained during QMC simulations. (3) All dimensionality reduction and estimation techniques that we tried seem to indicate that one needs 4 or 5 components to account for most of the variance in the data; hence this 5D dataset does not necessarily lie on a well-defined, low-dimensional manifold. In terms of specific clustering techniques, K-means was generally useful in exploring the dataset. The partition around medoids (pam) technique produced the most definitive results for our data, showing distinctive patterns for both a sample of the complete data and time-series. The gap statistic with the Tibshirani criterion did not provide any distinction across the two datasets. The gap statistic with DandF criteria, model-based clustering, and hierarchical modeling simply failed to run on our datasets. Thankfully, the vanilla PCA technique was successful in handling our entire dataset. PCA revealed some interesting patterns for the scalar value distribution. Kernel PCA techniques (vanilladot, RBF, polynomial) and MDS failed to run on the entire dataset, or even a significant fraction of the dataset, and we resorted to creating an explicit feature map followed by conventional PCA. Clustering using K-means and PAM in the new basis set seems to produce promising results. Understanding the new basis set in the scientific context of the problem is challenging, and we are currently working to further examine and interpret the results.
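The PCA-plus-clustering pipeline described above can be sketched with NumPy alone. The data here are synthetic stand-ins for the walker trajectories (two artificial "electronic states" forming clusters in a 6-dimensional space); pamk, kernel PCA, and the gap statistic are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for QMC walker trajectories: two "electronic states"
# forming two clusters in a 6-dimensional configuration space.
a = rng.normal(0.0, 0.3, size=(300, 6))
b = rng.normal(1.0, 0.3, size=(300, 6))
x = np.vstack([a, b])

# PCA via SVD of the centered data.
xc = x - x.mean(axis=0)
u, s, vt = np.linalg.svd(xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)   # fraction of variance per component
proj = xc @ vt[:2].T                  # projection onto the first two components

# Minimal k-means (Lloyd's algorithm) in the projected space.
def kmeans(points, centers, iters=50):
    for _ in range(iters):
        labels = ((points[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(len(centers))])
    return labels, centers

# Deterministic init: one seed from each end of the stacked dataset.
labels, centers = kmeans(proj, proj[[0, -1]].copy())
print(explained[:2], np.bincount(labels))
```

For well-separated states the first principal component carries most of the variance and the two clusters are recovered cleanly; for real QMC data the separation is of course far less clear-cut, which is the point of the more elaborate diagnostics above.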
Zhou, X. W., E-mail: xzhou@sandia.gov [Mechanics of Materials Department, Sandia National Laboratories, Livermore, California 94550 (United States); Yang, N. Y. C. [Energy Nanomaterials Department, Sandia National Laboratories, Livermore, California 94550 (United States)
2014-03-14
Electronic properties of semiconductor devices are sensitive to defects such as second phase precipitates, grain sizes, and voids. These defects can evolve over time, especially under oxidation environments, and it is therefore important to understand the resulting aging behavior for reliable application of devices. In this paper, we propose a kinetic Monte Carlo framework capable of simultaneous simulation of the evolution of second phases, precipitates, grain sizes, and voids in complicated systems involving many species including oxygen. This kinetic Monte Carlo model calculates the energy barriers of various events based directly on experimental data. As a first step of our model implementation, we incorporate the second phase formation module in the parallel kinetic Monte Carlo code SPPARKS. Selected aging simulations are performed to examine the formation of second phase precipitates at the electroplated Au/Bi{sub 2}Te{sub 3} interface under oxygen and oxygen-free environments, and the results are compared with the corresponding experiments.
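A rejection-free kinetic Monte Carlo step of the kind used in SPPARKS-style codes can be sketched as follows. The event names and barriers below are placeholders, not the experimentally derived values of the paper, and the rates follow an assumed Arrhenius form with a typical attempt frequency:

```python
import math
import random

random.seed(3)

KB = 8.617e-5    # Boltzmann constant, eV/K
T = 600.0        # temperature, K (illustrative)
NU = 1.0e13      # attempt frequency, 1/s (a commonly assumed prefactor)

# Hypothetical event catalogue: these barriers (eV) are placeholders.
events = {
    "oxygen_adsorption": 0.4,
    "vacancy_hop": 0.7,
    "precipitate_attach": 0.9,
}
rates = {name: NU * math.exp(-e / (KB * T)) for name, e in events.items()}

def kmc_step(rates):
    """One rejection-free (BKL/Gillespie) step: pick an event with
    probability proportional to its rate, advance time by -ln(u)/R."""
    total = sum(rates.values())
    r = random.random() * total
    acc = 0.0
    for name, rate in rates.items():
        acc += rate
        if r <= acc:
            chosen = name
            break
    dt = -math.log(random.random()) / total
    return chosen, dt

t, counts = 0.0, {name: 0 for name in events}
for _ in range(10000):
    name, dt = kmc_step(rates)
    counts[name] += 1
    t += dt
print(counts, t)
```

In a real simulation the event catalogue and rates are rebuilt after every step as the local environment changes; here the catalogue is static to keep the selection and time-advance logic visible.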
SU-D-19A-03: Monte Carlo Investigation of the Mobetron to Perform Modulated Electron Beam Therapy
Emam, I; Eldib, A; Hosini, M; AlSaeed, E; Ma, C
2014-06-01
Purpose: Modulated electron radiotherapy (MERT) has been proposed as a means of delivering conformal dose to shallow tumors while sparing distal structures and surrounding tissues. In intraoperative radiotherapy (IORT) utilizing the Mobetron, an applicator is placed as closely as possible to the suspected cancerous tissues to be treated. In this study we investigate the characteristics of Mobetron electron beams collimated by an in-house prospective electron multileaf collimator (eMLC) and its feasibility for MERT. Methods: An IntraOp Mobetron™ dedicated to performing radiotherapy during surgery was used in the study. It provides several energies (6, 9 and 12 MeV). Dosimetry measurements were performed to obtain percentage depth dose curves (PDD) and profiles for a 10-cm diameter applicator using the PTW MP3/XS 3D-scanning system and the semiflex ion chamber. MCBEAM/MCSIM Monte Carlo codes were used for the treatment head simulation and phantom dose calculation. The design of an electron beam collimation by an eMLC attached to the Mobetron head was also investigated using Monte Carlo simulations. Isodose distributions resulting from eMLC-collimated beams were compared to those collimated using cutouts. The design for our Mobetron eMLC is based on our previous experiences with eMLCs designed for clinical linear accelerators. For the Mobetron the eMLC is attached to the end of a spacer-mounted rectangular applicator at 50 cm SSD. Steel will be used as the leaf material because other materials would be toxic and not suitable for intraoperative applications. Results: Good agreement (within 2%) was achieved between measured and calculated PDD curves and profiles for all available energies. Dose distributions provided by the eMLC showed reasonable agreement (within 3%/1mm) with those obtained by conventional cutouts. Conclusion: Monte Carlo simulations are capable of modeling Mobetron electron beams with reliable accuracy.
An eMLC attached to the Mobetron treatment head will allow better treatment options with those machines.
Faught, A; Davidson, S; Kry, S; Ibbott, G; Followill, D; Fontenot, J; Etzel, C
2014-06-01
Purpose: To develop a comprehensive end-to-end test for Varian's TrueBeam linear accelerator for head and neck IMRT using a custom phantom designed to utilize multiple dosimetry devices. Purpose: To commission a multiple-source Monte Carlo model of Elekta linear accelerator beams of nominal energies 6MV and 10MV. Methods: A three-source Monte Carlo model of Elekta 6 and 10MV therapeutic x-ray beams was developed. Energy spectra of two photon sources corresponding to primary photons created in the target and scattered photons originating in the linear accelerator head were determined by an optimization process that fit the relative fluence of 0.25 MeV energy bins to the product of Fatigue-Life and Fermi functions to match calculated percent depth dose (PDD) data with that measured in a water tank for a 10×10 cm2 field. Off-axis effects were modeled by a 3rd-degree polynomial used to describe the off-axis half-value layer as a function of off-axis angle and by fitting the off-axis fluence to a piecewise linear function to match calculated dose profiles with measured dose profiles for a 40×40 cm2 field. The model was validated by comparing calculated PDDs and dose profiles for field sizes ranging from 3×3 cm2 to 30×30 cm2 to those obtained from measurements. A benchmarking study compared calculated data to measurements for IMRT plans delivered to anthropomorphic phantoms. Results: Along the central axis of the beam, 99.6% and 99.7% of all data passed the 2%/2mm gamma criterion for the 6 and 10MV models, respectively. Dose profiles at depths from dmax through 25 cm agreed with measured data for 99.4% and 99.6% of data tested for the 6 and 10MV models, respectively. A comparison of calculated dose to film measurement in a head and neck phantom showed an average of 85.3% and 90.5% of pixels passing a 3%/2mm gamma criterion for the 6 and 10MV models, respectively.
Conclusion: A Monte Carlo multiple-source model for Elekta 6 and 10MV therapeutic x-ray beams has been developed as a quality assurance tool for clinical trials.
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
Pecchia, M.; D'Auria, F.; Mazzantini, O.
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rods on the neutron flux, in order to validate the RELAP5-3D{sup C}/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 FSAR of Atucha-2. (authors)
Zink, K.; Czarnecki, D.; Voigts-Rhetz, P. von; Looe, H. K.; Harder, D.
2014-11-01
Purpose: The electron fluence inside a parallel-plate ionization chamber positioned in a water phantom and exposed to a clinical electron beam deviates from the unperturbed fluence in water in the absence of the chamber. One reason for the fluence perturbation is the well-known “inscattering effect,” whose physical cause is the lack of electron scattering in the gas-filled cavity. Correction factors determined to correct for this effect have long been recommended. However, more recent Monte Carlo calculations have led to some doubt about the range of validity of these corrections. Therefore, the aim of the present study is to reanalyze the development of the fluence perturbation with depth and to review the function of the guard rings. Methods: Spatially resolved Monte Carlo simulations of the dose profiles within gas-filled cavities with various radii in clinical electron beams have been performed in order to determine the radial variation of the fluence perturbation in a coin-shaped cavity, to study the influences of the radius of the collecting electrode and of the width of the guard ring upon the indicated value of the ionization chamber formed by the cavity, and to investigate the development of the perturbation as a function of the depth in an electron-irradiated phantom. The simulations were performed for a primary electron energy of 6 MeV. Results: The Monte Carlo simulations clearly demonstrated a surprisingly large in- and outward electron transport across the lateral cavity boundary. This results in a strong influence of the depth-dependent development of the electron field in the surrounding medium upon the chamber reading. In the buildup region of the depth-dose curve, the in–out balance of the electron fluence is positive and shows the well-known dose oscillation near the cavity/water boundary. At the depth of the dose maximum the in–out balance is equilibrated, and in the falling part of the depth-dose curve it is negative, as shown here for the first time.
The influences of both the collecting electrode radius and the width of the guard ring reflect the deep radial penetration of the electron transport processes into the gas-filled cavities and the need for appropriate corrections of the chamber reading. New values for these corrections have been established in two forms, one converting the indicated value into the absorbed dose to water in the front plane of the chamber, the other converting it into the absorbed dose to water at the depth of the effective point of measurement of the chamber. In the Appendix, the in–out imbalance of electron transport across the lateral cavity boundary is demonstrated in the approximation of classical small-angle multiple scattering theory. Conclusions: The in–out electron transport imbalance at the lateral boundaries of parallel-plate chambers in electron beams has been studied with Monte Carlo simulation over a range of depths in water, and new correction factors, covering all depths and implementing the effective point of measurement concept, have been developed.
Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C.; Lim, T.-S.; Fann Wunshain
2009-01-05
The number of negatively charged nitrogen-vacancy centers (N-V){sup -} in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single particle level. By taking account of the random dipole orientation of the multiple (N-V){sup -} fluorophores and simulating the probability distribution of their effective numbers (N{sub e}), we found that the actual number (N{sub a}) of the fluorophores is in linear correlation with N{sub e}, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized light, respectively. We determined N{sub a}=8{+-}1 for 28 nm FND particles prepared by 3 MeV proton irradiation.
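The link between actual and effective fluorophore numbers can be illustrated with a small Monte Carlo in the spirit of the paper. In this sketch the detected brightness of each emitter is assumed to scale as cos{sup 2}θ with respect to a linear polarization axis (a simplification of the real photophysics), and the effective number is taken as a participation ratio over the random dipole orientations:

```python
import numpy as np

rng = np.random.default_rng(4)

def effective_number(n_actual, trials=2000):
    """Monte Carlo mean of the effective emitter number for n_actual
    dipoles with isotropic random orientations, assuming brightness
    proportional to cos^2(theta) relative to a linear polarization axis."""
    ne = np.empty(trials)
    for i in range(trials):
        v = rng.standard_normal((n_actual, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)  # isotropic unit vectors
        p = v[:, 0] ** 2          # cos^2 projection on the polarization axis
        ne[i] = p.sum() ** 2 / (p ** 2).sum()          # participation ratio
    return ne.mean()

n_eff = effective_number(8)
print(n_eff)
```

Under these assumptions, eight dipoles yield an effective number in the range of roughly 4 to 5, i.e., smaller than the actual number by a factor broadly consistent with the correction factor of about 1.8 quoted above for linearly polarized light.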
Bznuni, S A; Zhamkochyan, V M; Polanski, A; Sosnin, A N; Khudaverdyan, A H
2001-01-01
Parameters are calculated for a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder reactor (MSBR) core, and a booster reactor analogous to the core of the BN-350 liquid metal cooled fast breeder reactor (LMFBR). It is shown by means of Monte-Carlo modeling that the reactor under study provides safe operation modes (k_{eff}=0.94-0.98), is capable of effectively transmuting radioactive nuclear waste, and reduces by an order of magnitude the requirements on the accelerator beam current. Calculations show that the maximal neutron flux in the thermal zone is 10^{14} cm^{-2}\\cdot s^{-1} and in the fast booster zone is 5.12\\cdot 10^{15} cm^{-2}\\cdot s^{-1} at k_{eff}=0.98 and proton beam current I=2.1 mA.
Nicolas Puech; Serge Mora; Ty Phou; Gregoire Porte; Jacques Jestin; Julian Oberdisse
2010-12-04
The effect of silica nanoparticles on transient microemulsion networks made of microemulsion droplets and telechelic copolymer molecules in water is studied, as a function of droplet size and concentration, amount of copolymer, and nanoparticle volume fraction. The phase diagram is found to be affected, and in particular the percolation threshold characterized by rheology is shifted upon addition of nanoparticles, suggesting participation of the particles in the network. This leads to a peculiar reinforcement behaviour of such microemulsion nanocomposites, the silica influencing both the modulus and the relaxation time. The reinforcement is modelled based on nanoparticles connected to the network via droplet adsorption. Contrast-variation Small Angle Neutron Scattering coupled to a reverse Monte Carlo approach is used to analyse the microstructure. The rather surprising intensity curves are shown to be in good agreement with the adsorption of droplets on the nanoparticle surface.
Vladimir Kovalenko; Vladimir Vechernin
2013-08-29
The magnitude of long-range correlations between observables in two separated rapidity windows, proposed as a signature of the string fusion and percolation phenomenon, is studied in the framework of a non-Glauber Monte Carlo string-parton model based on the picture of elementary collisions of color dipoles. The predictions, obtained with and without string fusion, demonstrate effects of color string fusion on the observables in Pb-Pb collisions at the LHC: a decrease of the n-n correlation coefficient with centrality and negative pt-n correlations, if a sufficiently effective centrality estimator is applied. In the general case it is shown that the values of the n-n and pt-n correlation coefficients strongly depend on the method of collision centrality fixation. In contrast, the predictions obtained for the pt-pt correlation are almost insensitive to the centrality determination method, and the corresponding experimental data would place a strong constraint on the transverse radius of a string.
Sarrut, David; Université Lyon 1; Centre Léon Bérard ; Bardiès, Manuel; Marcatili, Sara; Mauxion, Thibault; Boussion, Nicolas; Freud, Nicolas; Létang, Jean-Michel; Jan, Sébastien; Maigne, Lydia; Perrot, Yann; Pietrzyk, Uwe; Robert, Charlotte; and others
2014-06-15
In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform based on the GEANT4 toolkit for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described, including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. The very practical feature of GATE, making it easy to model both a treatment and an imaging acquisition within the same framework, is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities.
Quantum Monte Carlo Study of the Ground-State Properties of a Fermi Gas in the BCS-BEC Crossover
Giorgini, S.; Astrakharchik, G. E.; Boronat, J.; Casulleras, J.
2006-11-07
The ground-state properties of a two-component Fermi gas with attractive short-range interactions are calculated using the fixed-node diffusion Monte Carlo method. The interaction strength is varied over a wide range by tuning the value of the s-wave scattering length of the two-body potential. We calculate the ground-state energy per particle and we characterize the equation of state of the system. Off-diagonal long-range order is investigated through the asymptotic behavior of the two-body density matrix. The condensate fraction of pairs is calculated in the unitary limit and on both sides of the BCS-BEC crossover.
Structure of Cu64.5Zr35.5 metallic glass by reverse Monte Carlo simulations
Fang, Xikui W. [Ames Laboratory; Huang, Li [Ames Laboratory; Wang, Cai-Zhuang [Ames Laboratory; Ho, Kai-Ming [Ames Laboratory; Ding, Z. J. [University of Science and Technology of China
2014-02-07
Reverse Monte Carlo (RMC) simulations have been widely used to generate three-dimensional (3D) atomistic models for glass systems. To examine the reliability of the method for metallic glass, we use RMC to predict the atomic configurations of a “known” structure from molecular dynamics (MD) simulations, and then compare the structure obtained from the RMC with the target structure from MD. We show that when the structure factors and partial pair correlation functions from the MD simulations are used as inputs for RMC simulations, the 3D atomistic structure of the glass obtained from the RMC gives short- and medium-range order in good agreement with that of the target structure from the MD simulation. These results suggest that the 3D atomistic structure of metallic glass alloys can be reasonably well reproduced by the RMC method with a proper choice of input constraints.
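The core RMC loop, fitting a structure model against target pair-distance data, can be sketched in one dimension. This toy (particles on a line, a histogram chi-squared misfit, Metropolis acceptance) is far simpler than a 3D metallic-glass RMC with structure-factor constraints, but the accept/reject logic is the same:

```python
import numpy as np

rng = np.random.default_rng(5)

L, N = 10.0, 30
bins = np.linspace(0.0, L, 21)

def pair_hist(x):
    """Histogram of all pair distances (a 1D analogue of g(r) data)."""
    d = np.abs(x[:, None] - x[None, :])[np.triu_indices(len(x), 1)]
    return np.histogram(d, bins=bins)[0].astype(float)

target_x = rng.uniform(0, L, N)     # stand-in for the "known" MD structure
target = pair_hist(target_x)

x = rng.uniform(0, L, N)            # initial random configuration
chi2_start = chi2 = np.sum((pair_hist(x) - target) ** 2)
best = chi2
sigma = 5.0                         # chi^2 "temperature" (tunable)

for step in range(20000):
    i = rng.integers(N)
    trial = x.copy()
    trial[i] = np.clip(trial[i] + rng.normal(0.0, 0.3), 0.0, L)
    new_chi2 = np.sum((pair_hist(trial) - target) ** 2)
    # Metropolis acceptance on the chi^2 misfit, as in RMC.
    if new_chi2 <= chi2 or rng.random() < np.exp((chi2 - new_chi2) / sigma):
        x, chi2 = trial, new_chi2
        best = min(best, chi2)

print(chi2_start, best)
```

The misfit drops well below its starting value as the model configuration is driven toward the target pair statistics; the paper's point is that with structure factors plus partial pair correlations as constraints, the recovered configurations also reproduce short- and medium-range order.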
Garain, Sudip K; Chakrabarti, Sandip K
2013-01-01
Low and intermediate frequency quasi-periodic oscillations (QPOs) in black hole candidates are believed to be due to oscillations of the Comptonizing regions in an accretion flow. Assuming that the general structure of an accretion disk is a Two Component Advective Flow (TCAF), we numerically simulate the light curves emitted from an accretion disk for different accretion rates and find how the QPO frequencies vary. We use a standard Keplerian disk residing at the equatorial plane as a source of soft photons. These soft photons, after suffering multiple scattering with the hot electrons of the low angular momentum, sub-Keplerian flow, emerge out as hard radiation. The hydrodynamic and thermal properties of the electron cloud are simulated using a Total Variation Diminishing (TVD) code. The TVD code is then coupled with a radiative transfer code which simulates the energy exchange between the electrons and radiation using a Monte Carlo technique. The resulting localized heating and cooling are also included. We fi...
Monte Carlo Simulation for Elastic Energy Loss of High-Energy Partons in Quark-Gluon Plasma
Jussi Auvinen; Kari J. Eskola; Hannu Holopainen; Thorsten Renk
2011-06-13
We examine the significance of $2 \\rightarrow 2$ partonic collisions as the suppression mechanism of high-energy partons in the strongly interacting medium formed in ultrarelativistic heavy ion collisions. For this purpose, we have developed a Monte Carlo simulation describing the interactions of perturbatively produced, non-eikonally propagating high-energy partons with the quarks and gluons from the expanding QCD medium. The partonic collision rates are computed in leading-order perturbative QCD (pQCD), while three different hydrodynamical scenarios are used to model the medium. We compare our results with the suppression observed in $\\sqrt{s_{NN}}=200$ GeV Au+Au collisions at the BNL-RHIC. We find that the incoherent nature of elastic energy loss is incompatible with the measured data and that the effect of initial-state fluctuations is small.
Horváthová, L; Mitas, L; Štich, I
2014-01-01
We present calculations of electronic and magnetic structures of vanadium-benzene multidecker clusters V$_{n}$Bz$_{n+1}$ ($n$ = 1 - 3) using advanced quantum Monte Carlo methods. These and related systems have been identified as prospective spin filters in spintronic applications, assuming that their ground states are half-metallic ferromagnets. Although we find that magnetic properties of these multideckers are consistent with ferromagnetic coupling, their electronic structures do not appear to be half-metallic as previously assumed. In fact, they are ferromagnetic insulators with large and broadly similar $\\uparrow$-/$\\downarrow$-spin gaps. This makes the potential of these and related materials as spin filtering devices very limited, unless they are further modified or functionalized.
Virgilli, E; Rosati, P; Bonnini, E; Buffagni, E; Ferrari, C; Stephen, J B; Caroli, E; Auricchio, N; Basili, A; Silvestri, S
2015-01-01
We report on results of observation of the focusing effect from the (220) planes of Gallium Arsenide (GaAs) crystals. We have compared the experimental results with simulations of the focusing capability of the GaAs tiles, performed with a Monte Carlo code developed for this purpose. The GaAs tiles were bent using a lapping process developed at the cnr/imem - Parma (Italy) in the framework of the laue project, funded by ASI, dedicated to building a broad band Laue lens prototype for astrophysical applications in the hard X-/soft gamma-ray energy range (80-600 keV). We present and discuss the results obtained from their characterization, mainly in terms of focusing capability. Bent crystals will significantly increase the signal to noise ratio of a telescope based on a Laue lens, consequently leading to an unprecedented enhancement of sensitivity with respect to the present non-focusing instrumentation.
Study of two- and three-meson decay modes of tau-lepton with Monte Carlo generator TAUOLA
Shekhovtsova, Olga
2015-01-01
The study of $\tau$-lepton decays into hadrons has contributed to a better understanding of non-perturbative QCD and light-quark meson spectroscopy, as well as to the search for new physics beyond the Standard Model. The two- and three-meson decay modes, considering only those permitted by the Standard Model, are the predominant decays and together with the one-pion mode compose more than $85\%$ of the hadronic $\tau$-lepton decay width. In this note we review the theoretical results for these modes implemented in the Monte Carlo event generator TAUOLA and present a comparison with the Belle Collaboration data for the two-pion decay mode and the BaBar preliminary data for the three-pion decay mode, as well as for the decay mode into two kaons and one pion.
Basden, Alastair
2015-01-01
The performance of a wide-field adaptive optics system depends on input design parameters. Here we investigate the performance of a multi-conjugate adaptive optics system design for the European Extremely Large Telescope, using an end-to-end Monte-Carlo adaptive optics simulation tool, DASP. We consider parameters such as the number of laser guide stars, sodium layer depth, wavefront sensor pixel scale, number of deformable mirrors, mirror conjugation and actuator pitch. We identify areas where cost savings can be made, and investigate trade-offs between performance and cost. We conclude that a 6 laser guide star system using 3 DMs offers a good compromise between performance and cost.
TU-F-18A-03: Improving Tissue Segmentation for Monte Carlo Dose Calculation Using DECT Data
Di Salvio, A; Bedwani, S; Carrier, J
2014-06-15
Purpose: To develop a new segmentation technique using dual energy CT (DECT) to overcome limitations related to segmentation from a standard Hounsfield unit (HU) to electron density (ED) calibration curve. Both methods are compared with a Monte Carlo analysis of dose distribution. Methods: DECT allows a direct calculation of both ED and effective atomic number (EAN) within a given voxel. The EAN is here defined as a function of the total electron cross-section of a medium. These values can be effectively acquired using a calibrated method from scans at two different energies. A prior stoichiometric calibration on a Gammex RMI phantom allows us to find the parameters to calculate EAN and ED within a voxel. Scans from a Siemens SOMATOM Definition Flash dual source system provided the data for our study. A Monte Carlo analysis compares dose distributions simulated by DOSXYZnrc, considering a head phantom defined by both segmentation techniques. Results: Results from depth dose and dose profile calculations show that materials with different atomic compositions but similar EAN present differences of less than 1%. Therefore, it is possible to define a short list of basis materials from which density can be adapted to imitate the interaction behavior of any tissue. Comparison of the dose distributions on both segmentations shows a difference of 50% in dose in areas surrounding bone at low energy. Conclusion: The presented segmentation technique allows a more accurate medium definition in each voxel, especially in areas of tissue transition. Since the behavior of human tissues is highly sensitive at low energies, this reduces the errors on calculated dose distribution. This method could be further developed to optimize the tissue characterization based on anatomic site.
Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal
2014-05-12
Conventional semi-infinite solutions for extracting blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in estimation of BFI ({alpha}D{sub B}) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of {alpha}D{sub B}. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting {alpha}D{sub B} (errors ...
Pázsit, Imre
2007-01-01
Nuclear Instruments and Methods in Physics Research A 582 (2007) 629-637. Monte Carlo and analytical ... materials in applications such as nuclear nonproliferation, homeland security, and basic physics research ... unfolding, which has a variety of applications, including nuclear nonproliferation and homeland security ...
Danon, Yaron
2011-01-01
Physical Review C 83, 064612 (2011): Advanced Monte Carlo modeling of prompt fission neutrons for thermal and fast neutron-induced fission reactions on {sup 239}Pu, by P. Talou, B. Becker, T. Kawano, M. B... "... owing to the multiple scattering from ambient neutrons and from energy cuts in the detection efficiency ..."
Meirovitch, Hagai
Lower and upper bounds for the absolute free energy by the hypothetical scanning Monte Carlo method. The hypothetical scanning (HS) method is a general approach for calculating the absolute entropy S and free energy F ... to provide the free energy through the analysis of a single configuration. © 2004 American Institute of Physics.
Andrea Bianconi
2008-06-05
In this note I report and discuss the physical scheme and the main approximations used by the event generator code DY_AB. This Monte Carlo code is aimed at preliminary simulation, during the apparatus-planning stage, of Drell-Yan events characterized by azimuthal asymmetries, in experiments with moderate center-of-mass energy $\sqrt{s} \ll 100$ GeV.
Asher, Sanford A.
Melting of colloidal crystals: A Monte Carlo study. James C. Zahorchak, R. Kesavamoorthy, Rob D... Electrostatically stabilized colloidal crystals show phase transitions into liquid and gaslike states as the ionic ... of four colloidal crystals (two fcc crystals and two bcc crystals) which have also been examined ...
Guidoni, Leonardo
Reaction pathways by quantum Monte Carlo: Insight on the torsion barrier of 1,3-butadiene, and the conrotatory ring opening of cyclobutene. Matteo ... to investigate the intramolecular reaction pathways of 1,3-butadiene. The ground state geometries of the three ...
Chibani, Omar; Ma, Charlie C-M
2014-05-15
Purpose: To present a new accelerated Monte Carlo code for CT-based dose calculations in high dose rate (HDR) brachytherapy. The new code (HDRMC) accounts for both tissue and nontissue heterogeneities (applicator and contrast medium). Methods: HDRMC uses a fast ray-tracing technique and detailed physics algorithms to transport photons through a 3D mesh of voxels representing the patient anatomy with applicator and contrast medium included. A precalculated phase space file for the {sup 192}Ir source is used as source term. HDRMC is calibrated to calculate absolute dose for real plans. A postprocessing technique is used to include the exact density and composition of nontissue heterogeneities in the 3D phantom. Dwell positions and angular orientations of the source are reconstructed using data from the treatment planning system (TPS). Structure contours are also imported from the TPS to recalculate dose-volume histograms. Results: HDRMC was first benchmarked against the MCNP5 code for a single source in homogeneous water and for a loaded gynecologic applicator in water. The accuracy of the voxel-based applicator model used in HDRMC was also verified by comparing 3D dose distributions and dose-volume parameters obtained using 1-mm{sup 3} versus 2-mm{sup 3} phantom resolutions. HDRMC can calculate the 3D dose distribution for a typical HDR cervix case with 2-mm resolution in 5 min on a single CPU. Examples of heterogeneity effects for two clinical cases (cervix and esophagus) were demonstrated using HDRMC. The neglect of tissue heterogeneity for the esophageal case leads to overestimates of CTV D90, CTV D100, and spinal cord maximum dose by 3.2%, 3.9%, and 3.6%, respectively. Conclusions: A fast Monte Carlo code for CT-based dose calculations which does not require a prebuilt applicator model is developed for those HDR brachytherapy treatments that use CT-compatible applicators.
Tissue and nontissue heterogeneities should be taken into account in modern HDR brachytherapy planning.
Kolbe, E.; Vasiliev, A.; Zimmermann, M. A. [Laboratory for Reactor Physics and Systems Behaviour, Paul Scherrer Institut, CH 5232 Villigen PSI (Switzerland)
2006-07-01
This study addresses the assessment of standard continuous-energy neutron data libraries using the Monte Carlo radiation transport code MCNPX for light water reactor criticality safety applications based on a suite of low-enriched, thermal, compound uranium benchmarks and represents a continuation of previously performed analysis using the JEF-2.2 and JENDL-3.3 nuclear data libraries. The new work enhancing the previous study includes the application of the ENDF/B-6.8 neutron data library and employs the most recent official release of the code (MCNPX-2.5.0) with an improved S({alpha}, {beta}) thermal neutron scattering treatment. Particular attention is paid to the analysis of the spectrum-related characteristics of the modeled critical experimental configurations to define the range of applicability of the reported estimates of lower tolerance bounds for k{sub eff}. Inspection of trends in k{sub eff} versus the spectrum-related characteristics or design parameters has also been performed. (authors)
G. S. Chang; R. C. Pederson
2005-07-01
Mixed oxide (MOX) test capsules prepared with weapons-derived plutonium have been irradiated to a burnup of 50 GWd/t. The MOX fuel was fabricated at Los Alamos National Laboratory by a master-mix process and has been irradiated in the Advanced Test Reactor (ATR) at the Idaho National Laboratory (INL). Previous withdrawals of the same fuel have occurred at 9, 21, 30, and 40 GWd/t. Oak Ridge National Laboratory (ORNL) manages this test series for the Department of Energy’s Fissile Materials Disposition Program (FMDP). The fuel burnup analyses presented in this study were performed using MCWO, a well-developed tool that couples the Monte Carlo transport code MCNP with the isotope depletion and buildup code ORIGEN-2. MCWO analysis yields time-dependent and neutron-spectrum-dependent minor actinide and Pu concentrations for the ATR small I-irradiation test position. The purpose of this report is to validate both the Weapons-Grade Mixed Oxide (WG-MOX) test assembly model and the new fuel burnup analysis methodology by comparing the computed results against the neutron monitor measurements.
Torres, Javier; Buades, Manuel J.; Almansa, Julio F.; Guerrero, Rafael; Lallena, Antonio M.
2003-01-01
Monte Carlo calculations using the codes PENELOPE and GEANT4 have been performed to characterize the dosimetric parameters of the new 20 mm long catheter based $^{32}$P beta source manufactured by Guidant Corporation. The dose distribution along the transverse axis and the two dimensional dose rate table have been calculated. Also, the dose rate at the reference point, the radial dose function and the anisotropy function were evaluated according to the adapted TG-60 formalism for cylindrical sources. PENELOPE and GEANT4 codes were first verified against previous results corresponding to the old 27 mm Guidant $^{32}$P beta source. The dose rate at the reference point for the unsheathed 27 mm source in water was calculated to be $0.215 \\pm 0.001$ cGy s$^{-1}$ mCi$^{-1}$, for PENELOPE, and $0.2312 \\pm 0.0008$ cGy s$^{-1}$ mCi$^{-1}$, for GEANT4. For the unsheathed 20 mm source these values were $0.2908 \\pm 0.0009$ cGy s$^{-1}$ mCi$^{-1}$ and $0.311 \\pm 0.001$ cGy s$^{-1}$ mCi$^{-1}$, respectively. Also, a compar...
Asadi, S; Vahidian, M; Marghchouei, M; Masoudi, S Farhad
2015-01-01
The aim of the present Monte Carlo study is to evaluate the variation of energy deposition in healthy tissues in the human eye which is irradiated by brachytherapy sources in comparison with the resultant dose increase in the gold nanoparticle (GNP)-loaded choroidal melanoma. The effects of these nanoparticles on normal tissues are compared between {sup 103}Pd and {sup 125}I as two ophthalmic brachytherapy sources. Dose distributions in the tumor and healthy tissues have been taken into account for both mentioned brachytherapy sources. Also, in a certain point of the eye, the ratio of the absorbed dose by the normal tissue in the presence of GNPs to the absorbed dose by the same point in the absence of GNPs has been calculated. In addition, differences observed in the comparison of a simple water phantom and an actual simulated human eye in the presence of GNPs have also been considered in the present work. The results show that the calculated dose enhancement factor in the tumor for {sup 125}I is higher tha...
Weinman, J.P. [Lockheed Martin Corp., Schenectady, NY (United States)
1998-06-01
The purpose of this study is to investigate the eigenvalue sensitivity to new {sup 235}U, hydrogen, and oxygen cross section data sets by comparing RACER Monte Carlo calculations for several thermal and intermediate spectrum critical experiments. The new {sup 235}U library (Version 107) was derived by L. Leal and H. Derrien by fitting differential experimental data for {sup 235}U while constraining the fit to match experimental capture and fission resonance integrals and Maxwellian averaged thermal K1 (v fission minus absorption). The new hydrogen library (Version 45) consists of the ENDF/B-VI release 3 data with a 332.0 mb 2,200 m/s cross section which replaces the value of 332.6 mb in the current library. The new oxygen library (Version 39) is based on a recent evaluation of {sup 16}O by E. Caro. Nineteen Oak Ridge and Rocky Flats thermal solution benchmark critical assemblies that span a range of hydrogen-to-{sup 235}U (H/U) concentrations (2,052 to 27.1) and above-thermal neutron leakage fractions (0.555 to 0.011) were analyzed. In addition, three intermediate spectrum critical assemblies (UH3-UR, UH3-NI, and HISS-HUG) were studied.
Chan, C H
2014-01-01
The Ziff-Gulari-Barshad (ZGB) model is widely used to study the oxidation of carbon monoxide (CO) on a catalyst surface. It exhibits a non-equilibrium, discontinuous phase transition between a reactive and a CO poisoned phase. If one allows a nonzero rate of CO desorption ($k$), the line of phase transitions terminates at a critical point ($k_{c}$). In this work, instead of restricting the CO and atomic oxygen (O) to react only when they are adsorbed in close proximity, we consider a model that allows adsorbed CO and O atoms located far apart on the lattice to react to form carbon dioxide (CO$_{2}$). We employ large-scale Monte Carlo simulations and use the crossing of fourth-order cumulants to study the critical properties of this system. We find that the non-equilibrium critical point changes from the two-dimensional Ising universality class to the mean-field universality class upon introducing even a weak long-range interaction term. This behavior is consistent with that of the \emph{equilibrium} Ising ferr...
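The fourth-order cumulant crossing mentioned in this abstract can be illustrated with a short sketch (this is not the authors' code, and the order-parameter samples below are hypothetical): the Binder cumulant U4 = 1 - <m^4>/(3<m^2>^2) is computed per system size, and the critical point is located where the curves for different sizes cross.

```python
# Sketch: fourth-order (Binder) cumulant from Monte Carlo samples of an
# order parameter m, defined as U4 = 1 - <m^4> / (3 <m^2>^2).

def binder_cumulant(samples):
    """Return U4 for a sequence of order-parameter samples."""
    n = len(samples)
    m2 = sum(m * m for m in samples) / n
    m4 = sum(m ** 4 for m in samples) / n
    return 1.0 - m4 / (3.0 * m2 * m2)

# In a perfectly ordered phase every sample has the same magnitude |m|,
# so U4 -> 2/3; for Gaussian fluctuations about zero it tends to 0.
print(binder_cumulant([0.8, -0.8, 0.8, 0.8]))  # 2/3 for constant |m|
```

Because U4 at the critical point is size-independent to leading order, plotting U4 versus the control parameter for several lattice sizes and reading off the crossing gives an estimate of $k_{c}$ without fitting.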
Wes Armour; Simon Hands; Costas Strouthos
2013-02-07
We formulate a model of N_f=4 flavors of relativistic fermion in 2+1d in the presence of a chemical potential mu coupled to two flavor doublets with opposite sign, akin to isospin chemical potential in QCD. This is argued to be an effective theory for low energy electronic excitations in bilayer graphene, in which an applied voltage between the layers ensures equal populations of particles on one layer and holes on the other. The model is then reformulated on a spacetime lattice using staggered fermions, and in the absence of a sign problem, simulated using an orthodox hybrid Monte Carlo algorithm. With the coupling strength chosen to be close to a quantum critical point believed to exist for N_f
Fabio L. Pedrocchi; N. E. Bonesteel; David P. DiVincenzo
2015-07-03
The Majorana code is an example of a stabilizer code where the quantum information is stored in a system supporting well-separated Majorana Bound States (MBSs). We focus on one-dimensional realizations of the Majorana code, as well as networks of such structures, and investigate their lifetime when coupled to a parity-preserving thermal environment. We apply the Davies prescription, a standard method that describes the basic aspects of a thermal environment, and derive a master equation in the Born-Markov limit. We first focus on a single wire with immobile MBSs and perform error correction to annihilate thermal excitations. In the high-temperature limit, we show both analytically and numerically that the lifetime of the Majorana qubit grows logarithmically with the size of the wire. We then study a trijunction with four MBSs when braiding is executed. We study the occurrence of dangerous error processes that prevent the lifetime of the Majorana code from growing with the size of the trijunction. The origin of the dangerous processes is the braiding itself, which separates pairs of excitations and renders the noise nonlocal; these processes arise from the basic constraints of moving MBSs in 1D structures. We confirm our predictions with Monte Carlo simulations in the low-temperature regime, i.e. the regime of practical relevance. Our results put a restriction on the degree of self-correction of this particular 1D topological quantum computing architecture.
B. Kamala Latha; G. Sai Preeti; K. P. N. Murthy; V. S. S. Sastry
2015-07-30
Equilibrium director structures in two thin hybrid planar films of biaxial nematics are investigated through Markov chain Monte Carlo simulations based on a lattice Hamiltonian model within the London dispersion approximation. While the substrates of the two films induce similar anchoring influences on the long axes of the liquid crystal molecules (viz. planar orientation at one end and perpendicular, or homeotropic, orientation at the other), they differ in their coupling with the minor axes of the molecules. In the Type-A film the substrates do not interact with the minor axes at all (experimentally the more amenable case), while in Type-B, the orientations of the molecular axes at the surface layer are influenced as well by their biaxial coupling with the surface. Both films exhibit the expected bending of the director associated with ordering of the molecular long axes due to surface anchoring. Simulation results indicate that the Type-A film hosts stable and noise-free director structures in the biaxial nematic phase of the LC medium, resulting from dominant ordering of one of the minor axes in the plane of the substrates. The high degree of stable order thus developed could be of practical interest for in-plane switching applications with an external field. The Type-B film, on the other hand, experiences competing interactions among the minor axes, due to incompatible anchoring influences at the bounding substrates, apparently leading to frustration, and hence to noisy equilibrium director structures.
Lin, J. Y. Y. [California Institute of Technology, Pasadena]; Aczel, Adam A [ORNL]; Abernathy, Douglas L [ORNL]; Nagler, Stephen E [ORNL]; Buyers, W. J. L. [National Research Council of Canada]; Granroth, Garrett E [ORNL]
2014-01-01
Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A.A. Aczel et al, Nature Communications 3, 1124 (2012)]. These modes are well described by 3D isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for the nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states (PDOS), and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximately Q-independent background to the spectrum at the oscillator mode positions. Temperature dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T-dependence of the scattering from these modes is strongly influenced by the uranium lattice.
Sarkadi, L
2015-01-01
The three-body dynamics of the ionization of atomic hydrogen by 30 keV antiproton impact has been investigated by calculation of fully differential cross sections (FDCS) using the classical trajectory Monte Carlo (CTMC) method. The results of the calculations are compared with the predictions of quantum mechanical descriptions: the semi-classical time-dependent close-coupling theory, the fully quantal, time-independent close-coupling theory, and the continuum-distorted-wave-eikonal-initial-state model. In the analysis particular emphasis was put on the role played by the nucleus-nucleus (NN) interaction in the ionization process. For low-energy electron ejection CTMC predicts a large NN interaction effect on FDCS, in agreement with the quantum mechanical descriptions. By examining individual particle trajectories it was found that the relative motion between the electron and the nuclei is coupled very weakly with that between the nuclei; consequently the two motions can be treated independently. A simple ...
Y. Ishisaki; Y. Maeda; R. Fujimoto; M. Ozaki; K. Ebisawa; T. Takahashi; Y. Ueda; Y. Ogasaka; A. Ptak; K. Mukai; K. Hamaguchi; M. Hirayama; T. Kotani; H. Kubo; R. Shibata; M. Ebara; A. Furuzawa; R. Iizuka; H. Inoue; H. Mori; S. Okada; Y. Yokoyama; H. Matsumoto; H. Nakajima; H. Yamaguchi; N. Anabuki; N. Tawa; M. Nagai; S. Katsuda; K. Hayashida; A. Bamba; E. D. Miller; K. Sato; N. Y. Yamasaki
2006-10-04
We have developed a framework for the Monte-Carlo simulation of the X-Ray Telescopes (XRT) and the X-ray Imaging Spectrometers (XIS) onboard Suzaku, mainly for the scientific analysis of spatially and spectroscopically complex celestial sources. A photon-by-photon instrumental simulator is built on the ANL platform, which has been successfully used in ASCA data analysis. The simulator has a modular structure, in which the XRT simulation is based on a ray-tracing library, while the XIS simulation utilizes a spectral "Redistribution Matrix File" (RMF), generated separately by other tools. Instrumental characteristics and calibration results, e.g., XRT geometry, reflectivity, mutual alignments, thermal shield transmission, build-up of the contamination on the XIS optical blocking filters (OBF), are incorporated as completely as possible. Most of this information is available in the form of the FITS (Flexible Image Transport System) files in the standard calibration database (CALDB). This simulator can also be utilized to generate an "Ancillary Response File" (ARF), which describes the XRT response and the amount of OBF contamination. The ARF is dependent on the spatial distribution of the celestial target and the photon accumulation region on the detector, as well as observing conditions such as the observation date and satellite attitude. We describe principles of the simulator and the ARF generator, and demonstrate their performance in comparison with in-flight data.
B. Kamala Latha; Regina Jose; K. P. N. Murthy; V. S. S. Sastry
2015-09-13
Investigations of the phase diagram of biaxial liquid crystal systems, through analyses of general Hamiltonian models within the simplifications of mean-field theory (MFT) as well as by computer simulations based on microscopic models, aim to clarify how the underlying molecular-level interactions facilitate spontaneous condensation into a nematic phase with biaxial symmetry. Continuing experimental challenges in realising such a system unambiguously, despite encouraging predictions from MFT, call for more versatile simulational methodologies capable of providing insights into possible hindering barriers within the system, typically gleaned through its free-energy dependences on relevant observables as the system is driven through the transitions. A recent brief report from this group [B. Kamala Latha et al., Phys. Rev. E 89, 050501(R) (2014)], summarizing the outcome of detailed Monte Carlo simulations carried out with an entropic sampling technique, suggested a qualitative modification of the MFT phase diagram as the Hamiltonian is asymptotically driven towards the so-called partly-repulsive regions. It was argued that the degree of (cross) coupling between the uniaxial and biaxial tensor components of neighbouring molecules plays a crucial role in facilitating, or otherwise, a ready condensation of the biaxial phase, suggesting that this could be a plausible factor in explaining the experimental difficulties. In this paper, we elaborate this point further, providing additional evidence from curious variations of free-energy profiles with respect to the relevant orientational order parameters at different temperatures bracketing the phase transitions.
Kuss, M.; Markel, T.; Kramer, W.
2011-01-01
Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service life decrements, and in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis will review distribution transformer load models developed in the IEC 60076 standard, and apply the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy, thus increasing costs to utilities and consumers. A Monte-Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining expected transformer aging rate will be developed, based on the energy needs of plug-in vehicles loading a residential transformer.
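The Monte Carlo sweep this abstract describes can be sketched as follows. This is a simplified illustration, not the IEC 60076 thermal model or the authors' code: the flat base load, charging window, charger power, and transformer rating below are all hypothetical placeholders.

```python
import random

def overload_fraction(n_vehicles, rating_kva, charge_kw=6.6,
                      base_kva=12.0, n_scenarios=100, rng=None):
    """Fraction of random night-charging scenarios whose peak hourly load
    exceeds the transformer rating (hypothetical parameters; a sketch,
    not the IEC 60076 loss-of-life model)."""
    rng = rng or random.Random(0)
    overloads = 0
    for _ in range(n_scenarios):
        load = [base_kva] * 24  # flat hypothetical base load, kVA per hour
        for _ in range(n_vehicles):
            # Each vehicle starts charging in a random hour between 18:00
            # and 01:00 and draws charge_kw for 3 hours.
            start = rng.randrange(18, 26)
            for h in range(start, start + 3):
                load[h % 24] += charge_kw
        if max(load) > rating_kva:
            overloads += 1
    return overloads / n_scenarios

# Sweeping fleet size shows overload risk rising with vehicles per transformer.
for n in (1, 3, 5):
    print(n, overload_fraction(n, rating_kva=25.0))
```

A full study would replace the overload indicator with a thermal aging model, so that each scenario contributes an expected loss-of-life rather than a binary flag.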
Cai, David; Gronbech-Jensen, Niels; Snell, Charles M.; Beardmore, Keith M.
1996-01-01
It is crucial to have a good phenomenological model of electronic stopping power for modeling the physics of ion implantation into crystalline silicon. In the spirit of the Brandt-Kitagawa effective charge theory, we develop a model for the electronic stopping power of an ion, which can be factorized into (i) a globally averaged effective charge taking into account effects of close and distant collisions by target electrons with the ion, and (ii) a local charge density dependent electronic stopping power for a proton. This phenomenological model is implemented into both molecular dynamics and Monte Carlo simulations. There is only one free parameter in the model, namely, the one-electron radius r{sub s0} for unbound electrons. By fine tuning this parameter, it is shown that the model can work successfully for both boron and arsenic implants. We report that the results of the dopant profile simulation for both species are in excellent agreement with the experimental profiles measured by secondary-ion mass spectrometry(...
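The factorization this abstract describes, an effective-charge factor multiplying a proton stopping power, can be sketched schematically. The functional forms and numerical values below are hypothetical placeholders for illustration only, not the authors' fitted model.

```python
import math

def proton_stopping(velocity, electron_density):
    """Hypothetical local proton stopping power S_p(v, rho), arb. units."""
    return electron_density * math.log(1.0 + velocity) / velocity

def ion_stopping(z1, velocity, electron_density, rs0=1.3):
    """Factorized sketch: S_ion = (Z1 * zeta)^2 * S_p, where zeta(v) is an
    assumed effective-charge fraction rising toward 1 at high velocity.
    rs0, the one-electron radius of unbound electrons, is the model's
    single free parameter; 1.3 is a placeholder, not a fitted value."""
    zeta = 1.0 - math.exp(-velocity / (z1 ** (2.0 / 3.0) * rs0))
    return (z1 * zeta) ** 2 * proton_stopping(velocity, electron_density)
```

The point of the factorization is that only the proton stopping power depends on the local charge density along the ion track, so the same tabulated S_p serves every ion species once zeta(v) is known.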
Prasad, Manish; Conforti, Patrick F.; Garrison, Barbara J.
2007-08-28
The coarse grained chemical reaction model is enhanced to build a molecular dynamics (MD) simulation framework with an embedded Monte Carlo (MC) based reaction scheme. The MC scheme utilizes predetermined reaction chemistry, energetics, and rate kinetics of materials to incorporate chemical reactions occurring in a substrate into the MD simulation. The kinetics information is utilized to set the probabilities for the types of reactions to perform based on radical survival times and reaction rates. Implementing a reaction involves changing the reactants species types which alters their interaction potentials and thus produces the required energy change. We discuss the application of this method to study the initiation of ultraviolet laser ablation in poly(methyl methacrylate). The use of this scheme enables the modeling of all possible photoexcitation pathways in the polymer. It also permits a direct study of the role of thermal, mechanical, and chemical processes that can set off ablation. We demonstrate that the role of laser induced heating, thermomechanical stresses, pressure wave formation and relaxation, and thermochemical decomposition of the polymer substrate can be investigated directly by suitably choosing the potential energy and chemical reaction energy landscape. The results highlight the usefulness of such a modeling approach by showing that various processes in polymer ablation are intricately linked leading to the transformation of the substrate and its ejection. The method, in principle, can be utilized to study systems where chemical reactions are expected to play a dominant role or interact strongly with other physical processes.
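The rate-weighted selection of reaction type described above (reaction probabilities set from rate kinetics) can be sketched generically. The rate constants below are hypothetical, and this is the standard kinetic Monte Carlo selection step rather than the authors' implementation:

```python
import random

def choose_reaction(rates, rng=None):
    """Pick a reaction index with probability proportional to its rate
    (the standard kinetic Monte Carlo selection step)."""
    rng = rng or random.Random()
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            return i
    return len(rates) - 1  # guard against floating-point round-off

# Hypothetical rate constants for three competing reaction channels.
rates = [0.1, 1.0, 0.4]
counts = [0, 0, 0]
rng = random.Random(42)
for _ in range(10000):
    counts[choose_reaction(rates, rng)] += 1
print(counts)  # channel 1 dominates, roughly in ratio 1:10:4
```

In an embedded MC scheme like the one described, each selected reaction would then mutate the reactant species types, which changes their interaction potentials and deposits the associated reaction energy into the MD substrate.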
Al-Dweri, Feras M.O.; Almansa, Julio F.; Anguiano, M.; Guerrero, Rafael; Lallena, A. M.
2006-01-01
Monte Carlo calculations using the codes PENELOPE and GEANT4 have been performed to characterize the dosimetric properties of monoenergetic photon point sources in water. The dose rate in water has been calculated for energies of interest in brachytherapy, ranging between 10 keV and 2 MeV. A comparison of the results obtained using the two codes with the available data calculated with other Monte Carlo codes is carried out. A {chi}{sup 2}-like statistical test is proposed for these comparisons. PENELOPE and GEANT4 show a reasonable agreement for all energies analyzed and distances to the source larger than 1 cm. Significant differences are found at distances from the source up to 1 cm. A similar situation occurs between PENELOPE and EGS4.
Kim, Jeongnim [ORNL]; Reboredo, Fernando A [ORNL]
2014-01-01
The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. {\bf 136}, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. {\bf 89}, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the one spanned by the lower energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
Reboredo, Fernando A.; Kim, Jeongnim [Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]
2014-02-21
A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
Andrea Zen; Ye Luo; Sandro Sorella; Leonardo Guidoni
2013-09-02
Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with system size and their efficient parallelization, particularly suited to modern high-performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points both for the accurate description of molecular properties and for the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely: the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice-regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal Power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets.
Klaus M. Pontoppidan; Cornelis P. Dullemond; Ewine F. van Dishoeck; Geoffrey A. Blake; Adwin C. A. Boogert; Neal J. Evans II; Jacqueline E. Kessler-Silacci; Fred Lahuis
2004-11-13
We present 5.2-37.2 micron spectroscopy of the edge-on circumstellar disk CRBR 2422.8-3423 obtained using the InfraRed Spectrograph (IRS) of the Spitzer Space Telescope. The IRS spectrum is combined with ground-based 3-5 micron spectroscopy to obtain a complete inventory of solid state material present along the line of sight toward the source. We model the object with a 2D axisymmetric (effectively 3D) Monte Carlo radiative transfer code. It is found that the model disk, assuming a standard flaring structure, is too warm to contain the very large observed column density of pure CO ice, but is possibly responsible for up to 50% of the water, CO2 and minor ice species. In particular, the 6.85 micron band, tentatively due to NH4+, exhibits a prominent red wing, indicating a significant contribution from warm ice in the disk. It is argued that the pure CO ice is located in the dense core Oph-F in front of the source seen in the submillimeter imaging, with the CO gas in the core highly depleted. The model is used to predict which circumstances are most favourable for direct observations of ices in edge-on circumstellar disks. Ice bands will in general be deepest for inclinations similar to the disk opening angle, i.e. ~70 degrees. Due to the high optical depths of typical disk mid-planes, ice absorption bands will often probe warmer ice located in the upper layers of nearly edge-on disks. The ratios between different ice bands are found to vary by up to an order of magnitude depending on disk inclination due to radiative transfer effects caused by the 2D structure of the disk. Ratios between ice bands of the same species can therefore be used to constrain the location of the ices in a circumstellar disk. [Abstract abridged]
Watanabe, Y; Dahlman, E [University of Minnesota, Minneapolis, MN (United States)
2014-06-01
Purpose: To evaluate an analytic formula for the probability of cell death after a single-fraction dose. Methods: Cancer cells divide endlessly, but radiation causes them to die. Not all cells die immediately after irradiation; instead, they continue dividing for the next few cell cycles before they stop dividing and die. At the end of every cell cycle, a cell decides whether to undertake the mitotic process with a certain probability, Pdiv, which is altered by the radiation. Previously, by using a simple analytic model of radiobiology experiments, we obtained a formula for Pdeath (= 1 − Pdiv). The question is whether the proposed probability can reproduce the well-known survival data of the linear-quadratic (LQ) model. In this study, we evaluated the formula by performing a Monte Carlo simulation of the cell proliferation process. Starting with Ns seed cells, the cell proliferation process was simulated for N generations or until all cells died, and the number of living cells at the end was counted. Assuming that the cell colony survived when more than Nc cells were still alive, the surviving fraction S was estimated. We compared the S-versus-dose (S-D) curve with the LQ model. Results: The results indicated that our formula does not reproduce the experimentally observed S-D curve without selecting appropriate α and α/β. With parameter optimization, there was fair agreement between the MC result and the LQ curve for doses lower than 20 Gy. However, the survival fraction of the MC simulation decreased much faster than the LQ data for doses higher than 20 Gy. Conclusion: This study showed that the previously derived probability of cell death per cell cycle is not sufficiently accurate to replicate common radiobiological experiments. The formula must be modified by considering its cell-cycle dependence and other, as yet unknown, effects.
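The simulation loop described in the Methods can be sketched as below. The per-cycle death probability used here is an illustrative placeholder derived from the LQ form; the paper's actual Pdeath formula is not given in the abstract, and all parameter values are assumptions:

```python
import math
import random

def surviving_fraction(dose, alpha=0.3, beta=0.03, n_seed=1, n_gen=10,
                       n_crit=32, n_trials=2000, rng=random.Random(1)):
    """Monte Carlo sketch of the per-cycle cell-death model: each cycle a
    cell dies with probability p_death(dose), otherwise it divides.  A
    colony 'survives' if more than n_crit cells remain.  p_death is a
    placeholder built from the LQ form, not the paper's formula."""
    p_death = 1.0 - math.exp(-(alpha * dose + beta * dose ** 2) / n_gen)
    survived = 0
    for _ in range(n_trials):
        cells = n_seed
        for _ in range(n_gen):
            # each cell either dies (0 offspring) or divides (2 offspring)
            cells = sum(0 if rng.random() < p_death else 2
                        for _ in range(cells))
            if cells == 0 or cells > n_crit:
                break
        if cells > n_crit:
            survived += 1
    return survived / n_trials
```

Plotting this estimate against dose and comparing with exp(-αD - βD²) reproduces the kind of S-D comparison described in the Results.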
Talamo, A.; Gohar, Y. (Nuclear Engineering Division)
2011-05-12
This study investigates the performance of the YALINA Booster subcritical assembly, located in Belarus, during operation with high (90%), medium (36%), and low (21%) enriched uranium fuels in the assembly's fast zone. The YALINA Booster is a zero-power, subcritical assembly driven by a conventional neutron generator. It was constructed for the purpose of investigating the static and dynamic neutronics properties of accelerator-driven subcritical systems, and of serving as a fast neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor actinides. The first part of this study analyzes the assembly's performance with several fuel types. The MCNPX and MONK Monte Carlo codes were used to determine effective and source neutron multiplication factors, the effective delayed neutron fraction, the prompt neutron lifetime, neutron flux profiles and spectra, and neutron reaction rates produced with three neutron sources: californium, deuterium-deuterium, and deuterium-tritium. In the latter two cases, the external neutron source operates in pulsed mode. The results discussed in the first part of this report show that the use of low enriched fuel in the fast zone of the assembly diminishes neutron multiplication. Therefore, the discussion in the second part of the report focuses on finding alternative fuel loading configurations that enhance neutron multiplication while using low enriched uranium fuel. It was found that arranging the interface absorber between the fast and the thermal zones in a circular rather than a square array is an effective method of operating the YALINA Booster subcritical assembly without downgrading neutron multiplication relative to the original value obtained with the use of the high enriched uranium fuels in the fast zone.
Interpretation of 3D void measurements with Tripoli4.6/JEFF3.1.1 Monte Carlo code
Blaise, P.; Colomba, A.
2012-07-01
The present work details the first analysis of the 3D void phase conducted during the EPICURE/UM17x17/7% mixed UOX/MOX configuration. This configuration is composed of a homogeneous central 17x17 MOX-7% assembly, surrounded by portions of 17x17 UO2 assemblies with guide tubes. The void bubble is modelled by a small waterproof 5x5-fuel-pin parallelepiped box of 11 cm height, placed in the centre of the MOX assembly. This bubble, initially placed at the core mid-plane, is then moved to different axial positions to study the evolution of the axial perturbation in the core. Then, to simulate the growth of this bubble and understand the effects of increased void fraction along the fuel pin, 3 and 5 bubbles were stacked axially, starting from the core mid-plane. The C/E comparison obtained with the Monte Carlo code Tripoli4 for both radial and axial fission rate distributions, and in particular the reproduction of the very steep flux gradients at the void/water interfaces as the bubble is displaced along the z-axis, is very satisfactory. It demonstrates both the capability of the code and its library to reproduce this kind of situation and the very good quality of the experimental results, confirming the UM-17x17 as an excellent experimental benchmark for 3D code validation. This work has been performed within the frame of the V and V program for the future APOLLO3 deterministic code of CEA, starting in 2012, and its V and V benchmarking database. (authors)
Wang, J.; Biasca, R.; Liewer, P.C.
1996-01-01
Although the existence of the critical ionization velocity (CIV) is known from laboratory experiments, no agreement has been reached as to whether CIV exists in the natural space environment. In this paper the authors move towards more realistic models of CIV and present the first fully three-dimensional, electromagnetic particle-in-cell Monte Carlo collision (PIC-MCC) simulations of typical space-based CIV experiments. In their model, the released neutral gas is taken to be a spherical cloud traveling across a magnetized ambient plasma. Simulations are performed for neutral clouds with various sizes and densities. The effects of the cloud parameters on ionization yield, wave energy growth, electron heating, momentum coupling, and the three-dimensional structure of the newly ionized plasma are discussed. The simulations suggest that the quantitative characteristics of momentum transfer among the ion beam, neutral cloud, and plasma waves are the key indicator of whether CIV can occur in space. The missing factors in space-based CIV experiments may be the conditions necessary for a continuous enhancement of the beam ion momentum. For a typical shaped-charge release experiment, favorable CIV conditions may exist only in a very narrow, intermediate spatial region some distance from the release point, due to the effects of the cloud density and size. When CIV does occur, the newly ionized plasma from the cloud forms a very complex structure due to the combined forces from the geomagnetic field, the motion-induced emf, and the polarization. Hence the detection of CIV also critically depends on the sensor location. 32 refs., 8 figs., 2 tabs.
Q. Chang; H. M. Cuppen; E. Herbst
2007-05-24
AIM: We have recently developed a microscopic Monte Carlo approach to study surface chemistry on interstellar grains and the morphology of ice mantles. The method is designed to eliminate the problems inherent in the rate-equation formalism to surface chemistry. Here we report the first use of this method in a chemical model of cold interstellar cloud cores that includes both gas-phase and surface chemistry. The surface chemical network consists of a small number of diffusive reactions that can produce molecular oxygen, water, carbon dioxide, formaldehyde, methanol and assorted radicals. METHOD: The simulation is started by running a gas-phase model including accretion onto grains but no surface chemistry or evaporation. The starting surface consists of either flat or rough olivine. We introduce the surface chemistry of the three species H, O and CO in an iterative manner using our stochastic technique. Under the conditions of the simulation, only atomic hydrogen can evaporate to a significant extent. Although it has little effect on other gas-phase species, the evaporation of atomic hydrogen changes its gas-phase abundance, which in turn changes the flux of atomic hydrogen onto grains. The effect on the surface chemistry is treated until convergence occurs. We neglect all non-thermal desorptive processes. RESULTS: We determine the mantle abundances of assorted molecules as a function of time through 2x10^5 yr. Our method also allows determination of the abundance of each molecule in specific monolayers. The mantle results can be compared with observations of water, carbon dioxide, carbon monoxide, and methanol ices in the sources W33A and Elias 16. Other than a slight underproduction of mantle CO, our results are in very good agreement with observations.
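The stochastic surface chemistry described above is a kinetic Monte Carlo procedure; a generic rejection-free (Gillespie-type) event-selection step, with illustrative event labels that are not taken from the paper, can be sketched as:

```python
import math
import random

def kmc_step(rates, rng=random.Random(0)):
    """One rejection-free kinetic Monte Carlo (Gillespie) step.  `rates`
    maps event labels (illustrative, e.g. 'H hop') to rates in s^-1; the
    event is drawn proportionally to its rate, and the waiting time is
    exponentially distributed with the total rate."""
    total = sum(rates.values())
    r = rng.random() * total
    acc = 0.0
    for event, rate in rates.items():
        acc += rate
        if r < acc:
            chosen = event
            break
    else:
        chosen = event  # floating-point edge case: fall back to last event
    dt = -math.log(1.0 - rng.random()) / total  # exponential waiting time
    return chosen, dt
```

Repeating this step while updating the rate table after each event (hop, reaction, desorption) yields the layer-by-layer mantle growth described in the abstract.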
Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)
2012-07-01
The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPUs), one of the latest parallel computing techniques under development. Porting a regular transport code to the GPU is usually straightforward due to the 'embarrassingly parallel' nature of MC codes. However, the situation is different for eigenvalue calculations, which proceed on a generation-by-generation basis, so thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating-point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of ~2 on a dual-GPU system. The upper limit of device-level parallelism is analyzed, and a possible method to enhance thread-level parallelism is proposed. (authors)
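As an illustration of the generation-by-generation structure that complicates GPU thread coordination, the sketch below runs an analog k-eigenvalue iteration for an infinite homogeneous medium. Real codes resample fission source sites each generation; that is omitted here since positions are irrelevant in an infinite medium, and all parameter values are illustrative:

```python
import random

def estimate_k_inf(p_fission=0.4, nu=2.43, n_per_gen=20000, n_gen=30,
                   n_skip=10, rng=random.Random(7)):
    """Generation-by-generation analog Monte Carlo k-eigenvalue sketch for
    an infinite homogeneous medium: every source neutron is absorbed,
    causes fission with probability p_fission (= Sigma_f / Sigma_a), and
    yields nu neutrons on average.  k_inf is the mean ratio of neutrons
    produced to neutrons started, averaged over the active generations."""
    k_sum = 0.0
    for gen in range(n_gen):
        produced = 0
        for _ in range(n_per_gen):
            if rng.random() < p_fission:
                # sample an integer neutron yield with mean nu (2 or 3 here)
                produced += int(nu) + (1 if rng.random() < nu - int(nu) else 0)
        if gen >= n_skip:  # skip inactive generations, as in k-eff codes
            k_sum += produced / n_per_gen
    return k_sum / (n_gen - n_skip)
```

On a GPU, the inner per-neutron loop is what parallelizes trivially, while the per-generation reduction and source bookkeeping require the explicit thread coordination the paper discusses.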
Su, L.; Du, X.; Liu, T.; Xu, X. G.
2013-07-01
An electron-photon coupled Monte Carlo code, ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high-performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. Electron transport in media was modeled using the class-II condensed history method. The electron energies considered range from a few hundred keV to 30 MeV. Moller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the Continuous Slowing Down Approximation (CSDA). Photon transport was handled using the delta-tracking method. The photoelectric effect, Compton scattering, and pair production were modeled. Voxelised geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6x10^6 electron histories were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a single CPU core. (authors)
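The delta-tracking method mentioned for photon transport can be sketched as follows; `sigma_of_x` and the 1-D geometry are illustrative stand-ins for a voxelized cross-section lookup, not the ARCHER implementation:

```python
import math
import random

def delta_track(sigma_of_x, sigma_max, x0=0.0, rng=random.Random(3)):
    """Woodcock (delta) tracking sketch in 1-D: sample flight distances
    with the majorant cross-section sigma_max, then accept a collision at
    x with probability sigma_of_x(x)/sigma_max; otherwise the collision is
    virtual and the particle keeps flying.  Returns the real-collision
    site."""
    x = x0
    while True:
        x += -math.log(1.0 - rng.random()) / sigma_max  # tentative flight
        if rng.random() < sigma_of_x(x) / sigma_max:    # real vs virtual
            return x
```

The appeal on GPUs is that no geometry boundaries need to be crossed explicitly: the same majorant sampling loop runs for every thread regardless of which voxel the photon is in.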
Drummond, N. D.; Monserrat, Bartomeu; Lloyd-Williams, Jonathan H.; Lopez Rios, P.; Pickard, Chris J.; Needs, R. J.
2015-07-08
result in a real-valued wave function, compared them with the DFT energy obtained using a fine k-point grid, and picked the twist angle which minimized this difference. The twist angles and energies are listed in the file. - / / /Calcs/hydrogen... reasons. - / / /twist/E_SJDMC_dt0.dat: human-readable text file containing the zero-timestep extrapolation of the DMC energy and its error bar for each twist. - / / /twist/hydrogen.castep: CASTEP output files detailing the generation of the DFT orbitals...
Wang, Z; Gao, M
2014-06-01
Purpose: Monte Carlo (MC) simulation plays an important role in the proton pencil beam scanning (PBS) technique. However, MC simulation demands high computing power and is limited to the few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing for the MC simulation of PBS beams. Methods: GATE/GEANT4-based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bit, Amazon EC2). Single-spot integral depth dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of the StarCluster software developed at MIT, a Linux cluster with 2-100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud, where the MC simulation was run. Results: The simulated PBS plan has a field size of 10x10 cm^2, 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirements, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and the worker nodes as spot instances. The hourly cost for the 40-node cluster was $0.63, and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot the PDD and profile, with each job containing 500k events. The simulation completed within 1 hour, and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform for running proton PBS MC simulations. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate PBS MC studies, especially for newly established proton centers or individual researchers.
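The cost and statistics arithmetic behind such a cluster can be sketched as below; the instance rates are illustrative placeholders, not actual EC2 prices:

```python
import math

def cluster_hourly_cost(n_nodes, on_demand_rate, spot_rate):
    """Hourly cost of a StarCluster-style cluster: one on-demand master
    plus (n_nodes - 1) spot-instance workers.  Rates ($/h) are
    illustrative placeholders, not actual EC2 prices."""
    return on_demand_rate + (n_nodes - 1) * spot_rate

def relative_uncertainty(n_events):
    """Leading-order statistical uncertainty of an analog MC tally scales
    as 1/sqrt(N) with the number of simulated events."""
    return 1.0 / math.sqrt(n_events)
```

The 1/sqrt(N) scaling is why the event count, and hence the node count, dominates both the achievable uncertainty and the bill.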
Wang, L; Fourkal, E; Hayes, S; Jin, L; Ma, C
2014-06-01
Purpose: To study the dosimetric differences resulting from using the pencil beam (RT) algorithm instead of Monte Carlo (MC) methods for tumors adjacent to the skull. Methods: We retrospectively calculated the dosimetric differences between the RT and MC algorithms for brain tumors treated with CyberKnife located adjacent to the skull in 18 patients (27 tumors in total). The median tumor size was 0.53 cc (range 0.018 cc to 26.2 cc). The absolute mean distance from the tumor to the skull was 2.11 mm (range -17.0 mm to 9.2 mm). The dosimetric variables examined included the mean, maximum, and minimum doses to the target, the target coverage (TC), and the conformality index. The MC calculation used the same MUs as the RT dose calculation without further normalization, with 1% statistical uncertainty. The differences were analyzed by tumor size and distance from the skull. Results: The TC was generally reduced with the MC calculation (24 out of 27 cases). The average difference in TC between RT and MC was 3.3% (range 0.0% to 23.5%). When the TC was deemed unacceptable, the plans were re-normalized to increase the TC to 99%. This resulted in a 6.9% maximum change in the prescription isodose line. The maximum changes in the mean, maximum, and minimum doses were 5.4%, 7.7%, and 8.4%, respectively, before re-normalization. When the TC was analyzed with regard to target size, the worst coverage occurred with the smallest targets (0.018 cc). When the TC was analyzed with regard to the distance to the skull, there was no correlation between proximity to the skull and the TC difference between the RT and MC plans. Conclusions: For smaller targets (< 4.0 cc), MC should be used to re-evaluate the dose coverage after RT is used for the initial dose calculation, in order to ensure target coverage.
SU-E-T-585: Commissioning of Electron Monte Carlo in Eclipse Treatment Planning System for TrueBeam
Yang, X; Lasio, G; Zhou, J; Lin, M; Yi, B; Guerrero, M
2014-06-01
Purpose: To commission the electron Monte Carlo (eMC) algorithm in the Eclipse Treatment Planning System (TPS) for TrueBeam Linacs, including the evaluation of dose calculation accuracy for small fields and oblique beams and comparison with the existing eMC model for Clinacs. Methods: Electron beam percent-depth-dose curves (PDDs) and profiles with and without applicators, as well as output factors, were measured on two Varian TrueBeam machines. Measured data were compared against the Varian TrueBeam Representative Beam Data (VTBRBD). The selected data set was transferred into Eclipse for beam configuration. The dose calculation accuracy of the eMC was evaluated for open fields, small cut-out fields, and oblique beams at different incident angles. The TrueBeam data were compared to the existing Clinac data and eMC model to evaluate the differences among Linac types. Results: Our measured data indicated that electron beam PDDs from our TrueBeam machines match those from our Varian Clinac machines well, but in-air profiles, cone factors, and open-field output factors are significantly different. The data from our two TrueBeam machines were well represented by the VTBRBD. Variations of TrueBeam PDDs and profiles were within the 2%/2 mm criteria for all energies, and the output factors for fields with and without applicators all agree within 2%. Obliquity factors for two clinically relevant applicator sizes (10x10 and 15x15 cm^2) and three oblique angles (15, 30, and 45 degrees) were measured at the nominal R100, R90, and R80 of each electron beam energy. Comparisons of eMC calculations of obliquity factors and cut-out factors versus measurements will be presented. Conclusion: The eMC algorithm in the Eclipse TPS can be configured using the VTBRBD. Significant differences between TrueBeam and Clinacs were found in in-air profiles and open-field output factors. The accuracy of the eMC algorithm was evaluated for a wide range of cut-out factors and oblique incidence.
Fang, Yuan; Karim, Karim S.; Badano, Aldo
2014-01-15
Purpose: The authors describe the modifications to a previously developed Monte Carlo model of a semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package ARTEMIS, developed by Fang et al. [“Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se,” Med. Phys. 39(1), 308–319 (2012)]. The burst model generates a cloud of electron-hole pairs based on electron velocity, energy deposition, and material parameters, distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). A simple first-hit (FH) and a more detailed but computationally expensive nearest-neighbor (NN) recombination algorithm are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with the Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for the generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/µm, while the recombination fraction decreased for the SSA compared to the SUV model at 30 V/µm. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated against the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature.
For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation of many electron-hole pairs. The SSA model is more sensitive to the effect of the electric field than the SUV model, and the NN and FH recombination algorithms did not significantly affect the simulation results.
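The contrast between the first-hit and nearest-neighbor pairing rules can be sketched in one dimension; the function and its geometry are illustrative, not the ARTEMIS implementation:

```python
def recombine(electrons, holes, radius, nearest=False):
    """Sketch of the two pairing rules compared above, in 1-D for clarity:
    first-hit (FH) pairs each electron with the first hole found within
    `radius`; nearest-neighbor (NN) pairs it with the closest such hole.
    Each hole recombines at most once."""
    holes = list(holes)  # copy so the caller's list is untouched
    pairs = []
    for e in electrons:
        candidates = [(abs(e - h), h) for h in holes if abs(e - h) <= radius]
        if not candidates:
            continue  # this electron escapes recombination
        _, h = min(candidates) if nearest else candidates[0]
        holes.remove(h)
        pairs.append((e, h))
    return pairs
```

NN costs a full distance scan per electron, which is why the abstract calls it more detailed but computationally expensive; the comparable results reported above suggest FH is often sufficient.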
B. M. Abramov; P. N. Alexeev; Yu. A. Borodin; S. A. Bulychjov; I. A. Dukhovskoy; A. P. Krutenkova; V. V. Kulikov; M. A. Martemianov; M. A. Matsyuk; E. N. Turdakina; A. I. Khanov; S. G. Mashnik
2015-02-05
Momentum spectra of hydrogen isotopes have been measured at 3.5 degrees from 12C fragmentation on a Be target. The momentum spectra cover both the region of the fragmentation maximum and the cumulative region. The differential cross sections span five orders of magnitude. The data are compared to the predictions of four Monte Carlo codes: QMD, LAQGSM, BC, and INCL++. There are large differences between the data and the predictions of some models in the high-momentum region. The INCL++ code gives the best, almost perfect, description of the data.
Spadea, Maria Francesca; Verburg, Joost Mathias; Seco, Joao; Baroni, Guido
2014-01-15
Purpose: The aim of the study was to evaluate the dosimetric impact of low-Z and high-Z metallic implants on IMRT plans. Methods: Computed tomography (CT) scans of three patients were analyzed to study effects due to the presence of Titanium (low-Z), and Platinum and Gold (high-Z) inserts. To eliminate artifacts in the CT images, a sinogram-based metal artifact reduction algorithm was applied. IMRT dose calculations were performed on both the uncorrected and corrected images using a commercial planning system (convolution/superposition algorithm) and an in-house Monte Carlo platform. Dose differences between uncorrected and corrected datasets were computed and analyzed using the gamma-index passing rate (Pγ<1), setting 2 mm and 2% as the distance-to-agreement and dose-difference criteria, respectively. Beam-specific depth dose profiles across the metal were also examined. Results: Dose discrepancies between corrected and uncorrected datasets were not significant for the low-Z material. High-Z materials caused under-dosage of 20%–25% in the region surrounding the metal and over-dosage of 10%–15% downstream of the hardware. The gamma-index test yielded Pγ<1 > 99% for all low-Z cases, while for high-Z cases it returned 91% < Pγ<1 < 99%. Analysis of the depth dose curve of a single beam for the low-Z cases revealed that, although the dose attenuation is altered inside the metal, it does not differ downstream of the insert. However, for high-Z metal implants the dose is increased by up to 10%–12% around the insert. In addition, the Monte Carlo method was more sensitive to the presence of metal inserts than the superposition/convolution algorithm. Conclusions: The reduction of metal artifacts in CT images is relevant for high-Z implants. In this case, dose distributions should be calculated using Monte Carlo algorithms, given their superior accuracy in dose modeling in and around the metal.
In addition, the knowledge of the composition of metal inserts improves the accuracy of the Monte Carlo dose calculation significantly.
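The gamma-index evaluation used above (2 mm / 2% criteria) can be sketched for a 1-D dose profile; the global-normalization form below is a common convention and an assumption here, not necessarily the exact variant the authors used:

```python
import math

def gamma_index(x_eval, d_eval, x_ref, d_ref, dta=0.2, dd=0.02):
    """Gamma index of one evaluated dose point against a 1-D reference
    profile.  dta is the distance-to-agreement criterion (same units as x)
    and dd the relative dose-difference criterion; gamma <= 1 means the
    point passes.  Dose differences are normalized to the reference
    maximum (global normalization)."""
    d_max = max(d_ref)
    return min(
        math.sqrt(((x_eval - xr) / dta) ** 2
                  + ((d_eval - dr) / (dd * d_max)) ** 2)
        for xr, dr in zip(x_ref, d_ref)
    )
```

The passing rate Pγ<1 reported above is simply the fraction of evaluated points whose gamma value is below 1.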
Indium-Gallium Segregation in CuIn$_{x}$Ga$_{1-x}$Se$_2$: An ab initio based Monte Carlo Study
Ludwig, Christian D R; Felser, Claudia; Schilling, Tanja; Windeln, Johannes; Kratzer, Peter
2010-01-01
Thin-film solar cells with CuIn$_x$Ga$_{1-x}$Se$_2$ (CIGS) absorbers are still far below their efficiency limit, although lab cells already reach 19.9%. One important aspect is the homogeneity of the alloy. Large-scale simulations combining Monte Carlo and density functional calculations show that two phases coexist in thermal equilibrium below room temperature. Only at higher temperatures does CIGS become an increasingly homogeneous alloy. A larger degree of inhomogeneity for Ga-rich CIGS persists over a wide temperature range, which may contribute to the low observed efficiency of Ga-rich CIGS solar cells.
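The alloy-homogeneity question can be illustrated with a minimal Metropolis lattice model; the real study uses DFT-derived interactions and composition-conserving moves, neither of which is reproduced in this single-site-flip sketch:

```python
import math
import random

def metropolis_sweep(spins, J, T, rng=random.Random(5)):
    """One Metropolis sweep of a 1-D lattice toy model of In/Ga site
    occupation (+1 = In, -1 = Ga) with nearest-neighbour coupling J.
    J > 0 favours like neighbours, i.e. segregated domains, at low
    temperature.  Single-site flips do not conserve composition; swap
    (Kawasaki) moves would be the faithful choice for an alloy."""
    n = len(spins)
    for _ in range(n):
        i = rng.randrange(n)
        # energy change of flipping site i, with periodic boundaries
        dE = 2.0 * J * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i] = -spins[i]
    return spins
```

Sweeping at a series of temperatures and measuring domain sizes mimics, in miniature, the phase-coexistence-versus-homogenization behaviour reported above.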
Journal of Statistical Physics, Vol. 89, Nos. 5/6, 1997 Simulated Annealing Using Hybrid Monte Carlo
Toral, Raúl
of the system. It is known that if a system is heated to a very high temperature T and then it is slowly cooled ... global actualizations via the hybrid Monte Carlo algorithm in their generalized version for the proposal ...
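A minimal sketch of simulated annealing driven by hybrid (Hamiltonian) Monte Carlo updates, for a one-dimensional toy potential; all step sizes and schedules are illustrative, not the paper's settings:

```python
import math
import random

def hmc_step(x, u, grad_u, T, eps=0.05, n_leap=20, rng=random.Random(2)):
    """One hybrid Monte Carlo update at temperature T: draw a Gaussian
    momentum, leapfrog-integrate the dynamics of the potential U/T, then
    Metropolis-accept on the total energy H = U/T + p^2/2."""
    p = rng.gauss(0.0, 1.0)
    x_new, p_new = x, p
    p_new -= 0.5 * eps * grad_u(x_new) / T          # initial half kick
    for step in range(n_leap):
        x_new += eps * p_new                        # drift
        if step < n_leap - 1:
            p_new -= eps * grad_u(x_new) / T        # full kick
    p_new -= 0.5 * eps * grad_u(x_new) / T          # final half kick
    dH = (u(x_new) - u(x)) / T + 0.5 * (p_new ** 2 - p ** 2)
    return x_new if dH <= 0 or rng.random() < math.exp(-dH) else x

def anneal(u, grad_u, x0=3.0, t_hi=2.0, t_lo=0.01, n_stages=60, per_stage=5):
    """Simulated annealing driven by HMC: cool geometrically from t_hi to
    t_lo, doing a few HMC updates per temperature stage."""
    x, T = x0, t_hi
    ratio = (t_lo / t_hi) ** (1.0 / (n_stages - 1))
    for _ in range(n_stages):
        for _ in range(per_stage):
            x = hmc_step(x, u, grad_u, T)
        T *= ratio
    return x
```

The appeal of HMC here is the global move: the whole configuration is updated at once along an energy-conserving trajectory, instead of one variable at a time.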
Mayorga, P. A.; Departamento de Física Atómica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada ; Brualla, L.; Sauerwein, W.; Lallena, A. M.
2014-01-15
Purpose: Retinoblastoma is the most common intraocular malignancy in early childhood. Patients treated with external beam radiotherapy respond very well to the treatment. However, owing to the genotype of children suffering from hereditary retinoblastoma, the risk of secondary radio-induced malignancies is high. The University Hospital of Essen has successfully treated these patients on a daily basis for nearly 30 years using a dedicated “D”-shaped collimator. This collimator, which delivers a highly conformal small radiation field, gives very good results in the control of the primary tumor as well as in preserving visual function, while avoiding the devastating side effect of deformation of the midface bones. The purpose of the present paper is to propose a modified version of the “D”-shaped collimator that reduces the irradiation field even further, with the aim of also reducing the risk of radio-induced secondary malignancies. At the same time, the new dedicated “D”-shaped collimator must be easier to build while producing dose distributions that differ only in field size with respect to those obtained with the collimator currently in use. The aim of the former requirement is to facilitate the adoption of the authors' irradiation technique both at the authors' hospital and at other hospitals; the fulfillment of the latter allows the authors to continue building on the clinical experience gained over more than 30 years. Methods: The Monte Carlo code PENELOPE was used to study the effect that the different structural elements of the dedicated “D”-shaped collimator have on the absorbed dose distribution. To perform this study, the radiation transport through a Varian Clinac 2100 C/D operating at 6 MV was simulated in order to tally phase-space files, which were then used as radiation sources to simulate the considered collimators and the resulting dose distributions.
With the knowledge gained in that study, a new, simpler “D”-shaped collimator is proposed. Results: The proposed collimator delivers a dose distribution which is 2.4 cm wide along the inferior-superior direction of the eyeball. This width is 0.3 cm narrower than that of the dose distribution obtained with the collimator currently in clinical use. The other relevant characteristics of the dose distribution obtained with the new collimator, namely, depth doses at clinically relevant positions, penumbra widths, and the shape of the lateral profiles, are statistically compatible with the results obtained for the collimator currently in use. Conclusions: The smaller field size delivered by the proposed collimator still fully covers the planning target volume with at least 95% of the maximum dose at a depth of 2 cm and provides a safety margin of 0.2 cm, thus ensuring an adequate treatment while reducing the irradiated volume.
Thfoin, I.; Reverdin, C.; Duval, A.; Leboeuf, X.; Lecherbourg, L.; Rossé, B.; Hulin, S.; Batani, D.; Santos, J. J.; Vaisseau, X.; Fourment, C.; Giuffrida, L.; Szabo, C. I.; Bastiani-Ceccotti, S.; Brambrink, E.; Koenig, M.; Nakatsutsumi, M.; Morace, A.
2014-11-15
Transmission crystal spectrometers (TCS) are used on many laser facilities to record hard X-ray spectra. During experiments, the signal recorded on imaging plates is often degraded by background noise. Monte Carlo simulations made with the code GEANT4 show that this background noise is mainly generated by scattering of MeV electrons and very hard X-rays. An experiment carried out at LULI2000 confirmed that placing magnets in front of the diagnostic to bend the electron trajectories significantly reduces this background. The new spectrometer SPECTIX (Spectromètre PETAL à Cristal en TransmIssion X), built for the LMJ/PETAL facility, will include this optimized shielding.
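The shielding principle described above can be checked with a back-of-the-envelope gyroradius estimate; the 1 MeV electron energy and 0.5 T field used below are illustrative assumptions, not values from the paper:

```python
import math

def electron_gyroradius_m(kinetic_mev, b_tesla):
    """Relativistic gyroradius r = p / (eB) for an electron.

    The momentum is obtained from E_tot^2 = (pc)^2 + (m_e c^2)^2 with the
    electron rest energy m_e c^2 = 0.511 MeV.  With pc expressed in eV the
    electron charge cancels and r = pc / (c * B) metres.
    """
    rest = 0.511  # electron rest energy, MeV
    pc_mev = math.sqrt((kinetic_mev + rest) ** 2 - rest ** 2)
    c = 2.998e8  # speed of light, m/s
    return pc_mev * 1e6 / (c * b_tesla)

# A 1 MeV electron in a 0.5 T field is bent on a ~1 cm radius, while the
# X-rays travel straight -- which is why magnets clean up the electron
# background without touching the spectrum being measured.
r = electron_gyroradius_m(1.0, 0.5)
```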
Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.
2000-03-01
The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.
Casey E. Berger; Joaquín E. Drut; William J. Porter
2015-10-10
We present in detail two variants of the lattice Monte Carlo method aimed at tackling systems in external trapping potentials: a uniform-lattice approach with hard-wall boundary conditions, and a non-uniform Gauss-Hermite lattice approach. Using those two methods, we compute the ground-state energy and spatial density profile for systems of N = 4-8 harmonically trapped fermions in one dimension. From the favorable comparison of both energies and density profiles (particularly in regions of low density), we conclude that the trapping potential is properly resolved by the hard-wall basis. Our work paves the way to higher dimensions and finite-temperature analyses, as calculations with the hard-wall basis can be accelerated via fast Fourier transforms; the cost of unaccelerated methods is otherwise prohibitive due to the unfavorable scaling with system size.
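The FFT acceleration mentioned above comes from working in the hard-wall (sine) eigenbasis, where the kinetic operator is diagonal. A minimal sketch of that idea, with the transform written as an explicit O(N²) sum that an FFT-based fast sine transform would replace (the box size and step below are toy values, not the paper's parameters):

```python
import math

def sine_coeffs(psi, L):
    """Expand psi, sampled at the N interior points of a [0, L] hard-wall
    box, in sin(n*pi*x/L) modes.  Written as an O(N^2) sum; a fast
    (FFT-based) sine transform reduces this to O(N log N), which is the
    acceleration referred to in the abstract."""
    N = len(psi)
    dx = L / (N + 1)
    return [2 * dx / L * sum(psi[j] * math.sin(n * math.pi * (j + 1) * dx / L)
                             for j in range(N))
            for n in range(1, N + 1)]

def apply_kinetic(psi, L, tau, mass=1.0, hbar=1.0):
    """One imaginary-time kinetic step exp(-tau*T): mode n of the hard-wall
    basis has kinetic energy T_n = (hbar*n*pi/L)^2 / (2*mass), so it is
    simply damped by exp(-tau*T_n) before transforming back."""
    N = len(psi)
    dx = L / (N + 1)
    c = sine_coeffs(psi, L)
    out = []
    for j in range(N):
        x = (j + 1) * dx
        v = sum(c[n - 1]
                * math.exp(-tau * (hbar * n * math.pi / L) ** 2 / (2 * mass))
                * math.sin(n * math.pi * x / L)
                for n in range(1, N + 1))
        out.append(v)
    return out
```

Repeated application of such steps (interleaved with potential-energy factors) damps excited modes and projects onto the box ground state sin(πx/L); it is this transform pair that FFT-based routines accelerate.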
A. K. Fomin; A. P. Serebrov
2010-05-17
We performed a detailed analysis and a Monte Carlo simulation of the neutron lifetime experiment [S. Arzumanov et al., Phys. Lett. B 483 (2000) 15] because of the strong disagreement, by 5.6 standard deviations, between the results of that experiment and our experiment [A. Serebrov et al., Phys. Lett. B 605 (2005) 72]. We found a few effects which were not taken into account in the experiment [S. Arzumanov et al., Phys. Lett. B 483 (2000) 15]. The possible correction is -5.5 s, with an uncertainty of 2.4 s that stems from limited knowledge of the initial data. We assume that after taking this correction into account, the neutron lifetime result of [S. Arzumanov et al., Phys. Lett. B 483 (2000) 15], 885.4 ± 0.9(stat) ± 0.4(syst) s, could be corrected to 879.9 ± 0.9(stat) ± 2.4(syst) s.
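The corrected value quoted above follows from shifting the central value by the proposed correction and, assuming the correction uncertainty is independent of the original systematic error, combining the two in quadrature:

```python
import math

tau, stat, syst = 885.4, 0.9, 0.4    # published lifetime and errors, s
correction, corr_unc = -5.5, 2.4     # proposed correction and its error, s

tau_corr = tau + correction                       # shifted central value
syst_corr = math.sqrt(syst ** 2 + corr_unc ** 2)  # quadrature sum of errors

print(f"{tau_corr:.1f} +/- {stat}(stat) +/- {syst_corr:.1f}(syst) s")
# -> 879.9 +/- 0.9(stat) +/- 2.4(syst) s
```

The quadrature sum is 2.43 s, which rounds to the 2.4 s quoted in the abstract (the correction uncertainty dominates the original 0.4 s systematic).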
Doma, S B; El-Gamma, F N; Amer, A A
2015-01-01
Using the variational Monte Carlo (VMC) method, we calculate the 1sσ_g state energies, the dissociation energies, and the binding energies of the hydrogen molecule and its molecular ion in the presence of an aligned magnetic field in the regime between 0 a.u. and 10 a.u. The present calculations are based on two types of compact and accurate trial wave functions, which were originally put forward for calculating energies in the absence of a magnetic field. The obtained results are compared with the most recent accurate values. We conclude that the application of the VMC method can be extended successfully to cover the case of molecules under the effect of a magnetic field.
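As an aside, the VMC machinery referred to above can be illustrated on a much simpler problem than H₂ in a field: a 1D harmonic oscillator with a Gaussian trial function. Everything below belongs to this toy model, not to the paper:

```python
import math, random

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=1):
    """Variational Monte Carlo for H = -(1/2) d^2/dx^2 + x^2/2 with the
    trial function psi(x) = exp(-alpha x^2)  (hbar = m = omega = 1).

    Metropolis sampling of |psi|^2, averaging the local energy
    E_L(x) = alpha + x^2 (1/2 - 2 alpha^2).
    """
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance with ratio |psi(x_new)/psi(x)|^2
        if rng.random() < math.exp(-2 * alpha * (x_new ** 2 - x ** 2)):
            x = x_new
        e_sum += alpha + x * x * (0.5 - 2 * alpha * alpha)
    return e_sum / n_steps

# At the variational optimum alpha = 0.5 the local energy is constant,
# so the estimator has zero variance and returns exactly 0.5.
print(vmc_energy(0.5))
# Away from the optimum the estimate scatters around
# <E>(alpha) = alpha/2 + 1/(8 alpha), e.g. 0.5125 at alpha = 0.4.
print(vmc_energy(0.4))
```

Minimizing the estimated energy over the variational parameter is the same optimization the paper carries out over its compact molecular trial functions.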
Astrakharchik, G. E.; Boronat, J.; Casulleras, J.; Kurbakov, I. L.; Lozovik, Yu. E.
2009-05-15
The equation of state of a weakly interacting two-dimensional Bose gas is studied at zero temperature by means of quantum Monte Carlo methods. Going down to densities as low as na² ∼ 10⁻¹⁰⁰ permits us to obtain agreement at the beyond-mean-field level between the predictions of perturbative methods and direct many-body numerical simulation, thus providing an answer to the fundamental question of the equation of state of a two-dimensional dilute Bose gas in the universal regime (i.e., entirely described by the gas parameter na²). We also show that measuring the frequency of a collective breathing oscillation in a trap at very low densities can be used to test the universal equation of state of a two-dimensional Bose gas.
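For orientation, the leading-order universal equation of state being tested here is Schick's classic result for the dilute 2D Bose gas (quoted from the standard literature, not from this abstract):

```latex
\frac{E}{N} \;=\; \frac{2\pi\hbar^{2} n}{m\,\ln(1/na^{2})}
\left[\, 1 + O\!\left(\frac{1}{\ln(1/na^{2})}\right) \right]
```

At na² = 10⁻¹⁰⁰ the expansion parameter is 1/ln(1/na²) = 1/(100 ln 10) ≈ 4 × 10⁻³, which is why such extreme dilutions are needed before the perturbative and Monte Carlo results can be compared cleanly.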
Raychaudhuri, Subhadip
2015-01-01
Death-ligand-mediated apoptotic activation is a mode of programmed cell death that is widely used in cellular and physiological situations. Interest in studying death ligand induced apoptosis has increased due to the promising role of recombinant soluble forms of death ligands (mainly recombinant TRAIL) in anti-cancer therapy. A clear elucidation of how death ligands activate the type 1 and type 2 apoptotic pathways in healthy and cancer cells may help develop better chemotherapeutic strategies. In this work, we use kinetic Monte Carlo simulations to address the problem of the type 1 / type 2 choice in death-ligand-mediated apoptosis of cancer cells. Our study provides insights into the activation of the membrane-proximal death module, which results from a complex interplay between death and decoy receptors. The relative abundance of death and decoy receptors was shown to be a key parameter for activation of the initiator caspases in the membrane module. Increased concentration of death ligands frequently increased the type 1...
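The kinetic Monte Carlo approach used above can be sketched with a minimal Gillespie simulation of reversible ligand-receptor binding; the species and rate constants below are illustrative stand-ins, not the paper's apoptosis model:

```python
import math, random

def gillespie_binding(L, R, C, kon, koff, t_end, seed=0):
    """Gillespie (kinetic Monte Carlo) simulation of L + R <-> C.

    Two reaction channels: binding with propensity kon*L*R and unbinding
    with propensity koff*C.  Waiting times are exponentially distributed
    in the total propensity; which channel fires is chosen in proportion
    to its propensity.
    """
    rng = random.Random(seed)
    t = 0.0
    while True:
        a_on, a_off = kon * L * R, koff * C
        a_tot = a_on + a_off
        if a_tot == 0.0:
            break
        t += -math.log(1.0 - rng.random()) / a_tot  # exponential waiting time
        if t > t_end:
            break
        if rng.random() * a_tot < a_on:
            L, R, C = L - 1, R - 1, C + 1   # ligand binds a free receptor
        else:
            L, R, C = L + 1, R + 1, C - 1   # complex dissociates
    return L, R, C

# 100 ligands, 50 receptors (a decoy-free toy): copy numbers fluctuate,
# but L - R and L + C are conserved by the two channels.
L, R, C = gillespie_binding(L=100, R=50, C=0, kon=0.01, koff=0.1, t_end=50.0)
```

Adding a second, competing receptor species to such a scheme is the natural way to probe the death-versus-decoy competition the abstract describes.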
,
2015-01-01
We present a sophisticated likelihood reconstruction algorithm for shower-image analysis of imaging Cherenkov telescopes. The reconstruction algorithm is based on the comparison of the camera pixel amplitudes with the predictions from a Monte Carlo based model. Shower parameters are determined by maximisation of a likelihood function, performed with a numerical non-linear optimisation technique. A related reconstruction technique has already been developed by the CAT and H.E.S.S. experiments, and provides a more precise direction and energy reconstruction of the photon-induced shower than second-moment analyses of the camera image. Examples are shown of the performance of the analysis on simulated gamma-ray data from the VERITAS array.
Dokania, N; Mathimalar, S; Garai, A; Nanal, V; Pillay, R G; Bhushan, K G
2015-01-01
The neutron flux at low energy (E_n ≤ 15 MeV) resulting from the radioactivity of the rock in the underground cavern of the India-based Neutrino Observatory is estimated using Geant4-based Monte Carlo simulations. The neutron production rate due to the spontaneous fission of U and Th and (α, n) interactions in the rock is determined employing the actual rock composition. It has been demonstrated that the total flux is equivalent to that of a finite-size cylindrical rock element (D = L = 140 cm). The energy-integrated neutron flux thus obtained at the center of the underground tunnel is 2.76 (0.47) × 10⁻⁶ n cm⁻² s⁻¹. The estimated neutron flux is of the same order (∼10⁻⁶ n cm⁻² s⁻¹) as measured in other underground laboratories.
Qin, Jianguo; Liu, Rong; Zhu, Tonghua; Zhang, Xinwei; Ye, Bangjiao
2015-01-01
To overcome the inefficient computing times and unreliable results of a direct MCNP5 calculation, a two-step method is adopted to calculate the energy deposition of prompt gamma-rays in detectors for depleted uranium spherical shells under D-T neutron irradiation. In the first step, the gamma-ray spectrum for energies below 7 MeV is calculated with the MCNP5 code; in the second step, the electron recoil spectrum in a BC501A liquid scintillator detector is simulated with the EGSnrc Monte Carlo code, using the gamma-ray spectrum from the first step as input. The calculated results agree well with experiment in the energy region 0.4-3 MeV for the prompt gamma-ray spectrum and below 4 MeVee for the electron recoil spectrum, validating the reliability of the two-step method.
Hu, Z. M.; Xie, X. F.; Chen, Z. J.; Peng, X. Y.; Du, T. F.; Cui, Z. Q.; Ge, L. J.; Li, T.; Yuan, X.; Zhang, X.; Li, X. Q.; Zhang, G. H.; Chen, J. X.; Fan, T. S., E-mail: tsfan@pku.edu.cn [State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871 (China); Hu, L. Q.; Zhong, G. Q.; Lin, S. Y.; Wan, B. N. [Institute of Plasma Physics, CAS, Hefei 230031 (China); Gorini, G. [Dipartimento di Fisica, Università di Milano-Bicocca, Milano 20126 (Italy); Istituto di Fisica del Plasma “P. Caldirola,” Milano 20126 (Italy)
2014-11-15
To assess the neutron energy spectra and the neutron dose at different positions around the Experimental Advanced Superconducting Tokamak (EAST) device, a Bonner Sphere Spectrometer (BSS) was developed at Peking University, with nine polyethylene spheres in total and an SP9 ³He counter. The response functions of the BSS were calculated with the Monte Carlo codes MCNP and GEANT4 using dedicated models, and good agreement was found between the two codes. A feasibility study was carried out with a simulated neutron energy spectrum around EAST: the simulated “experimental” result for each sphere was obtained by calculating the response with MCNP, using the simulated neutron energy spectrum as the input spectrum. By deconvolution of the “experimental” measurement, the neutron energy spectrum was retrieved and compared with the preset one. Good consistency was found, which gives confidence in the application of the BSS system for dose and spectrum measurements around a fusion device.
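The deconvolution step described above amounts to inverting the folding relation C_i = Σ_j R_ij φ_j for the flux φ given the sphere counts C. A toy illustration with an MLEM-style multiplicative unfolding (a simple stand-in for the unfolding codes actually used with Bonner spheres; the 3×3 response matrix and spectrum are invented):

```python
def fold(R, phi):
    """Counts in each sphere: C_i = sum_j R[i][j] * phi[j]."""
    return [sum(Rij * pj for Rij, pj in zip(row, phi)) for row in R]

def unfold(R, counts, n_iter=2000):
    """MLEM-style multiplicative unfolding starting from a flat guess.

    Each iteration rescales every flux bin by the response-weighted ratio
    of measured to predicted counts; for consistent, noiseless data this
    converges to the exact spectrum."""
    n_bins = len(R[0])
    phi = [1.0] * n_bins
    for _ in range(n_iter):
        est = fold(R, phi)
        for j in range(n_bins):
            num = sum(R[i][j] * counts[i] / est[i] for i in range(len(R)))
            den = sum(R[i][j] for i in range(len(R)))
            phi[j] *= num / den
    return phi

# Toy 3-sphere, 3-energy-bin response matrix (illustrative numbers only).
R = [[0.9, 0.3, 0.1],
     [0.4, 0.8, 0.3],
     [0.1, 0.4, 0.9]]
true_phi = [2.0, 1.0, 0.5]
counts = fold(R, true_phi)   # noiseless "measurement", as in the BSS test
phi = unfold(R, counts)
```

This mirrors the feasibility study above: fold a preset spectrum through the response functions, then check that unfolding the resulting "measurement" recovers it.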
Wang Jianhua; Zhang Hualin [Shanghai Institute of Applied Physics, CAS, Shanghai 201800 (China); Department of Radiation Medicine, Ohio State University, Columbus, Ohio 43210 (United States)
2008-04-15
A recently developed alternative brachytherapy seed, the Cs-1 Rev2 cesium-131, has begun to be used in clinical practice. The dosimetric characteristics of this source in various media, particularly in human tissues, have not been fully evaluated. The aim of this study was to calculate the dosimetric parameters of the Cs-1 Rev2 cesium-131 seed following the recommendations of the AAPM TG-43U1 report [Rivard et al., Med. Phys. 31, 633-674 (2004)] for new brachytherapy sources. Dose rate constants, radial dose functions, and anisotropy functions of the source in water, Virtual Water, and relevant human soft tissues were calculated using MCNP5 Monte Carlo simulations following the TG-43U1 formalism. The results yielded dose rate constants of 1.048, 1.024, 1.041, and 1.044 cGy h⁻¹ U⁻¹ in water, Virtual Water, muscle, and prostate tissue, respectively. The conversion factor for this new source between water and Virtual Water was 1.02; between muscle and water it was 1.006, and between prostate and water 1.004. The authors' calculation of anisotropy functions in a Virtual Water phantom agreed closely with Murphy's measurements [Murphy et al., Med. Phys. 31, 1529-1538 (2004)]. Our calculations of the radial dose function in water and Virtual Water are in good agreement with previous experimental and Monte Carlo studies. The TG-43U1 parameters for clinical applications in water, muscle, and prostate tissue are presented in this work.
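For reference, the TG-43 formalism mentioned above combines the air-kerma strength, a dose rate constant, a geometry factor, a radial dose function, and an anisotropy function. A sketch in the point-source approximation, with flat g and F placeholders in place of the tabulated data (the Λ value is the water result quoted in the abstract; everything else is illustrative):

```python
def tg43_dose_rate(s_k, dose_rate_const, r_cm, g, F, theta_deg=90.0):
    """TG-43 dose rate in the point-source approximation:

        Ddot(r, theta) = S_K * Lambda * (G(r)/G(r0)) * g(r) * F(r, theta)

    with geometry factor G(r) = 1/r^2 and the reference point r0 = 1 cm,
    theta0 = 90 deg.  g and F are caller-supplied interpolants of the
    tabulated radial dose function and anisotropy function.
    """
    r0 = 1.0  # reference distance, cm
    geometry_ratio = (r0 / r_cm) ** 2
    return s_k * dose_rate_const * geometry_ratio * g(r_cm) * F(r_cm, theta_deg)

# Illustrative call: S_K = 1 U, the Lambda = 1.048 cGy/(h U) water value
# reported above, and flat g, F stand-ins (real data is tabulated).
d = tg43_dose_rate(1.0, 1.048, r_cm=2.0,
                   g=lambda r: 1.0, F=lambda r, t: 1.0)
# With flat g and F this reduces to inverse-square: 1.048 / 4 = 0.262 cGy/h.
```

In clinical use the line-source geometry factor and the measured g(r), F(r, θ) tables replace the placeholders; the formalism itself is unchanged.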
Mohammadyari, P; Faghihi, R; Shirazi, M Mosleh; Lotfi, M; Meigooni, A
2014-06-01
Purpose: AccuBoost is the most modern method of breast brachytherapy; it is a boost technique delivered to tissue compressed by a mammography unit. The dose distribution in uncompressed tissue, as well as in compressed tissue, is important and should be characterized. Methods: In this study the mechanical behavior of the breast under mammography loading, the displacement of the breast tissue, and the dose distributions in compressed and uncompressed tissue are investigated. Dosimetry was performed with two methods: Monte Carlo simulation using the MCNP5 code and thermoluminescence dosimeters (TLDs). For the Monte Carlo simulations, dose values on a cubical lattice were calculated using tally F6. The displacement of the breast elements was simulated with a finite element model and calculated using the ABAQUS software, from which the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from MR images of 6 volunteers. Experimental dosimetry was performed by placing thermoluminescence dosimeters in a polyvinyl alcohol breast-equivalent phantom and on the proximal edge of the compression plates toward the chest. Results: The results indicate that the cone applicators deliver more than 95% of the dose to depths of 5 to 17 mm, while the round applicator increases the skin dose. Nodal displacement under gravity and a 60 N compression force, i.e., under mammography compression, amounted to a 43% contraction in the loading direction and a 37% expansion in the orthogonal orientation. Finally, the thermoluminescence dosimeter results are consistent with the MCNP5 calculations in the breast phantom and on the chest skin, with average percentage differences of 13.7±5.7 and 7.7±2.3, respectively. Conclusion: The major advantage of this kind of dosimetry is the ability to calculate 3D dose distributions through finite element modeling. Moreover, polyvinyl alcohol is a reliable breast-tissue-equivalent material for a dosimetric phantom, enabling TLD dosimetry for validation.
Zhai, Pengwang
2009-06-02
radiative transfer equation, which is the equation governing the radiation field in a multiple scattering medium. The impulse-response relation for a plane-parallel scattering medium is studied using our 3D Monte Carlo code. For a collimated light beam...