Frixione, Stefano [INFN, Sezione di Genova, Via Dodecaneso 33, 16146 Genova (Italy)]
2005-10-06T23:59:59.000Z
I review recent progress in the physics of parton shower Monte Carlos, emphasizing the ideas which allow the inclusion of higher-order matrix elements into the framework of event generators.
Monte Carlo event reconstruction implemented with artificial neural networks
Tolley, Emma Elizabeth
2011-01-01T23:59:59.000Z
I implemented event reconstruction of a Monte Carlo simulation using neural networks. The OLYMPUS Collaboration is using a Monte Carlo simulation of the OLYMPUS particle detector to evaluate systematics and reconstruct ...
Status of Monte-Carlo Event Generators
Hoeche, Stefan; /SLAC
2011-08-11T23:59:59.000Z
Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off the initial- and final-state partons and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.
Hard-sphere melting and crystallization with event-chain Monte Carlo
Isobe, Masaharu
2015-01-01T23:59:59.000Z
We simulate crystallization and melting with local Monte Carlo (LMC), event-chain Monte Carlo (ECMC), and with event-driven molecular dynamics (EDMD) in systems with up to one million three-dimensional hard spheres. We illustrate that our implementations of the three algorithms rigorously coincide in their equilibrium properties. We then study nucleation in the NVE ensemble from the fcc crystal into the homogeneous liquid phase and from the liquid into the homogeneous crystal. ECMC and EDMD both approach equilibrium orders of magnitude faster than LMC. ECMC is also notably faster than EDMD, especially for the equilibration into a crystal from a disordered initial condition at high density. ECMC can be trivially implemented for hard-sphere and for soft-sphere potentials, and we suggest possible applications of this algorithm for studying jamming and the physics of glasses, as well as disordered systems.
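The lifting idea behind event-chain Monte Carlo can be sketched in one dimension (hard rods on a ring, not the authors' three-dimensional code): the active rod moves in a fixed direction until it contacts its neighbour, which then becomes the active rod, until a prescribed total chain displacement has been consumed. All names and parameters below are illustrative.

```python
import random

def ecmc_chain(x, sigma, L, ell):
    """One event chain for hard rods of diameter sigma on a ring of
    circumference L (density assumed below jamming).  The active rod
    moves to the right until it hits its neighbour; the motion is then
    lifted to that neighbour, until a total displacement ell is used."""
    n = len(x)
    k = random.randrange(n)                 # random starting rod
    remaining = ell
    while remaining > 0.0:
        nxt = (k + 1) % n
        gap = (x[nxt] - x[k]) % L - sigma   # free distance to next rod
        step = max(0.0, min(gap, remaining))
        x[k] = (x[k] + step) % L
        remaining -= step
        if remaining > 0.0:                 # collision: lift the motion
            k = nxt
    return x
```

Because every chain displaces the system by exactly `ell`, rejection never occurs, which is one reason ECMC equilibrates faster than local Monte Carlo moves.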
Monte Carlo photon benchmark problems
Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.
1991-01-01T23:59:59.000Z
Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems.
Andrea Bianconi
2008-06-05T23:59:59.000Z
In this note I report and discuss the physical scheme and the main approximations used by the event generator code DY_AB. This Monte Carlo code is aimed at preliminary simulation, during the stage of apparatus planning, of Drell-Yan events characterized by azimuthal asymmetries, in experiments with moderate center of mass energy $\sqrt{s} \ll 100$ GeV.
Marcus, Ryan C. [Los Alamos National Laboratory]
2012-07-25T23:59:59.000Z
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo, E-mail: rfantoni@ts.infn.it [Dipartimento di Scienze Molecolari e Nanosistemi, Università Ca’ Foscari Venezia, Calle Larga S. Marta DD2137, I-30123 Venezia (Italy)]; Moroni, Saverio, E-mail: moroni@democritos.it [DEMOCRITOS National Simulation Center, Istituto Officina dei Materiali del CNR and SISSA Scuola Internazionale Superiore di Studi Avanzati, Via Bonomea 265, I-34136 Trieste (Italy)]
2014-09-21T23:59:59.000Z
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
Is Monte Carlo embarrassingly parallel?
Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)]
2012-07-01T23:59:59.000Z
Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
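The per-cycle rendezvous described in this abstract caps the achievable speedup. A toy cost model (all constants illustrative; a synchronization cost growing linearly with the processor count is assumed, not taken from the paper) reproduces the qualitative behaviour of a maximum followed by degradation:

```python
def speedup(p, t_hist, n_hist, n_cycles, t_sync):
    """Toy model of a cycle-based parallel Monte Carlo criticality run:
    each cycle runs n_hist histories (t_hist seconds each) split over p
    processors, then pays a rendezvous cost t_sync * p for collecting
    the fission source and estimating the multiplication factor."""
    serial = n_cycles * n_hist * t_hist
    parallel = n_cycles * (n_hist * t_hist / p + t_sync * p)
    return serial / parallel
```

With `t_hist = 1e-4`, `n_hist = 10000` and `t_sync = 1e-3`, this model peaks near p ≈ 32 and degrades beyond it, mirroring the limitation the abstract reports.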
Monte Carlo calculations of nuclei
Pieper, S.C. [Argonne National Lab., IL (United States). Physics Div.]
1997-10-01T23:59:59.000Z
Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.
Shell model Monte Carlo methods
Koonin, S.E. [California Inst. of Tech., Pasadena, CA (United States). W.K. Kellogg Radiation Lab.]; Dean, D.J. [Oak Ridge National Lab., TN (United States)]
1996-10-01T23:59:59.000Z
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, thermal behavior of γ-soft nuclei, and calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.
Monte Carlo Methods in Quantum Field Theory
I. Montvay
2007-05-30T23:59:59.000Z
In these lecture notes some applications of Monte Carlo integration methods in Quantum Field Theory - in particular in Quantum Chromodynamics - are introduced and discussed.
The MC21 Monte Carlo Transport Code
Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H
2007-01-09T23:59:59.000Z
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.
Correlations in the Monte Carlo Glauber model
Jean-Paul Blaizot; Wojciech Broniowski; Jean-Yves Ollitrault
2014-09-12T23:59:59.000Z
Event-by-event fluctuations of observables are often modeled using the Monte Carlo Glauber model, in which the energy is initially deposited in sources associated with wounded nucleons. In this paper, we analyze in detail the correlations between these sources in proton-nucleus and nucleus-nucleus collisions. There are correlations arising from nucleon-nucleon correlations within each nucleus, and correlations due to the collision mechanism, which we dub twin correlations. We investigate this new phenomenon in detail. At the RHIC and LHC energies, correlations are found to have modest effects on size and eccentricity fluctuations, such that the Glauber model produces to a good approximation a collection of independent sources.
Exponential convergence with adaptive Monte Carlo
Booth, T.E.
1997-11-01T23:59:59.000Z
For over a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. Now, exponential convergence has been empirically demonstrated on a spatially continuous problem.
THE BEGINNING of the MONTE CARLO METHOD
For a whole host of reasons, he had become seriously interested in the thermonuclear ... a preliminary computational model of a thermonuclear reaction for the ENIAC. He felt he could convince ...
The role of Monte Carlo within a diagonalization/Monte Carlo scheme
Dean Lee
2000-10-31T23:59:59.000Z
We review the method of stochastic error correction which eliminates the truncation error associated with any subspace diagonalization. Monte Carlo sampling is used to compute the contribution of the remaining basis vectors not included in the initial diagonalization. The method is part of a new approach to computational quantum physics which combines both diagonalization and Monte Carlo techniques.
Fractured reservoir evaluation using Monte Carlo techniques
Sears, G.F.; Phillips, N.V.
1987-01-01T23:59:59.000Z
Pro forma cash-flow analysis of petroleum ventures usually is considered as a deterministic model. In the last 10 years, Monte Carlo analysis has allowed the introduction of probability distributions of input variables in place of single-valued functions. Reserve determination and rate scheduling in these current Monte Carlo techniques have relied on the volumetric formula, which works well in nonfractured reservoirs. Recent massive drilling in fractured reservoirs has rendered this approach unusable. This paper develops a variation of the Arps rate-cumulative equation as a basic model for the determination of the distribution of original reserves and the decline rates. Continuation of the Monte Carlo technique into net present value analysis and internal rate of return (IRR) is also developed.
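A sketch of the approach described above, using the exponential special case of the Arps rate-cumulative relation, q = qi - D*Np, so original reserves at the economic-limit rate q_el are Np = (qi - q_el)/D. The paper treats the general decline model; the lognormal input distributions and every number below are placeholders, not values from the paper.

```python
import random

def mc_reserves(n_trials, q_el, seed=0):
    """Monte Carlo distribution of recoverable reserves from Arps
    exponential decline.  qi (initial rate) and D (decline rate) are
    drawn from illustrative lognormal distributions; reserves follow
    from the economic-limit rate q_el as Np = (qi - q_el) / D."""
    rng = random.Random(seed)
    reserves = []
    for _ in range(n_trials):
        qi = rng.lognormvariate(mu=6.0, sigma=0.3)   # bbl/day, assumed
        D = rng.lognormvariate(mu=-7.0, sigma=0.4)   # per day, assumed
        if qi > q_el:
            reserves.append((qi - q_el) / D)
    reserves.sort()
    return reserves
```

Percentiles of the sorted output (P10, P50, P90) then feed the net-present-value and rate-of-return stages the abstract mentions.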
Monte Carlo Tools for Jet Quenching
Korinna Zapp
2011-09-07T23:59:59.000Z
A thorough understanding of jet quenching on the basis of multi-particle final states and jet observables requires new theoretical tools. This talk summarises the status and prospects of the theoretical description of jet quenching in terms of Monte Carlo generators.
MONTE CARLO CALCULATIONS OF LR115 DETECTOR RESPONSE TO 222
Yu, K.N.
(4):414–419; 2000. Key words: Monte Carlo; radon progeny; detector, alpha-track; thoron.
Multiple Overlapping Tiles for Contextual Monte Carlo Tree Search
... The use of Monte Carlo simulations to evaluate a situation ... depending on the context. The modification is based on a reward function learned on a tiling of the space of Monte Carlo simulations. The tiling is done by regrouping the Monte Carlo simulations where two ...
John von Neumann Institute for Computing Monte Carlo Protein Folding
Hsu, Hsiao-Ping
Monte Carlo Protein Folding: Simulations of Met-Enkephalin with Solvent-Accessible Area (://www.fz-juelich.de/nic-series/volume20). ... difficulties in applying Monte Carlo methods to protein folding. The solvent-accessible area method, a popular ...
Monte Carlo: in the beginning and some great expectations
Metropolis, N.
1985-01-01T23:59:59.000Z
The central theme will be on the historical setting and origins of the Monte Carlo Method. The scene was post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo Event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the Method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest perhaps is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences.
Random number stride in Monte Carlo calculations
Hendricks, J.S.
1990-01-01T23:59:59.000Z
Monte Carlo radiation transport codes use a sequence of pseudorandom numbers to sample from probability distributions. A common practice is to start each source particle a predetermined number of random numbers up the pseudorandom number sequence. The number of random numbers skipped between source particles is the random number stride, S. Consequently, the jth source particle always starts with the j·Sth random number, providing "correlated sampling" between similar calculations. A new machine-portable random number generator has been written for the Monte Carlo radiation transport code MCNP, giving the user control of the random number stride. First the new MCNP random number generator algorithm is described, and then the effects of varying the stride are presented. 2 refs., 1 fig.
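The stride mechanism can be sketched with a multiplicative linear congruential generator and O(log k) skip-ahead via modular exponentiation. The constants below are those historically associated with MCNP's portable generator (multiplier 5^19, modulus 2^48, default stride 152917), but treat the whole block as an illustrative sketch rather than the code of the report.

```python
M = 2 ** 48            # modulus
G = 5 ** 19            # multiplier (classic multiplicative LCG choice)
STRIDE = 152917        # random numbers skipped between source particles

def advance(seed, k):
    """Jump k steps ahead in the multiplicative LCG in O(log k) time:
    x_{n+k} = G**k * x_n (mod M), using fast modular exponentiation."""
    return (pow(G, k, M) * seed) % M

def particle_seed(seed0, j):
    """Starting seed for the j-th source particle: j * STRIDE numbers up
    the sequence, so each history gets a reproducible stream regardless
    of how many random numbers earlier histories consumed."""
    return advance(seed0, j * STRIDE)
```

Because the jump is computed directly rather than by stepping, restarting particle j in a similar calculation reproduces its stream exactly, which is what enables the correlated sampling the abstract describes.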
STORM in Monte Carlo reactor physics calculations KAUR TUTTELBERG
Haviland, David
Master of Science Thesis on Monte Carlo reactor physics criticality calculations. This is achieved by optimising the number of neutron ... for more efficient Monte Carlo reactor physics calculations, giving results with errors that can ...
Quantum Monte Carlo calculations for light nuclei
Wiringa, R.B.
1998-08-01T23:59:59.000Z
Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A {le} 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 30 different (Jπ, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.
Monte Carlo simulations on Graphics Processing Units
Vadim Demchik; Alexei Strelchenko
2009-03-30T23:59:59.000Z
Implementation of basic local Monte-Carlo algorithms on ATI Graphics Processing Units (GPU) is investigated. The Ising model and pure SU(2) gluodynamics simulations are realized with the Compute Abstraction Layer (CAL) of ATI Stream environment using the Metropolis and the heat-bath algorithms, respectively. We present an analysis of both CAL programming model and the efficiency of the corresponding simulation algorithms on GPU. In particular, the significant performance speed-up of these algorithms in comparison with serial execution is observed.
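As a CPU-side sketch of the Metropolis algorithm for the Ising model mentioned above (plain Python rather than the paper's CAL/GPU code), the checkerboard update below sweeps all "black" sites, then all "white" sites; same-colour sites share no neighbours, which is exactly the independence that makes the update map well onto GPU threads.

```python
import math, random

def metropolis_sweep(spins, L, beta, rng):
    """One Metropolis sweep of the 2D Ising model (J = 1, no field) on
    an L x L torus, visiting sites in checkerboard order."""
    for colour in (0, 1):
        for i in range(L):
            for j in range(L):
                if (i + j) % 2 != colour:
                    continue
                s = spins[i][j]
                nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                      spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
                dE = 2.0 * s * nb          # energy change of flipping s
                if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                    spins[i][j] = -s
    return spins
```

On a GPU each colour class becomes one kernel launch with one thread per site; the serial loops here stand in for that parallel map.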
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E., E-mail: a.turrell09@imperial.ac.uk; Sherlock, M.; Rose, S.J.
2013-09-15T23:59:59.000Z
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
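The initialisation step, drawing particle energies from the Fermi–Dirac distribution, can be sketched with simple rejection sampling. The non-relativistic energy form is assumed, and the function, its parameters, and the grid-based envelope are illustrative choices, not taken from the paper.

```python
import math, random

def sample_fd_energy(mu, T, e_max, rng):
    """Draw a kinetic energy from the non-relativistic Fermi-Dirac
    energy distribution f(E) ~ sqrt(E) / (exp((E - mu)/T) + 1),
    truncated to [0, e_max], by rejection against a flat envelope."""
    f = lambda e: math.sqrt(e) / (math.exp((e - mu) / T) + 1.0)
    # Envelope height: maximum of f on a coarse grid, with a small
    # safety margin in case the true peak falls between grid points.
    f_max = max(f(e_max * (k + 0.5) / 200) for k in range(200)) * 1.01
    while True:
        e = rng.uniform(0.0, e_max)
        if rng.uniform(0.0, f_max) < f(e):
            return e
```

For a strongly degenerate case (mu much larger than T), accepted energies pile up below the chemical potential, with mean near (3/5) mu, as expected for a filled Fermi sphere.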
Monte Carlo Simulations of the Corrosion of Aluminoborosilicate...
Abstract: Aluminum is one of the most common components included ...
Quantum Monte Carlo methods for nuclear physics
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.
2012-01-01T23:59:59.000Z
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
J. Carlson; S. Gandolfi; F. Pederiva; Steven C. Pieper; R. Schiavilla; K. E. Schmidt; R. B. Wiringa
2015-04-29T23:59:59.000Z
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Monte Carlo Tools for charged Higgs boson production
K. Kovarik
2014-12-18T23:59:59.000Z
In this short review we discuss two implementations of the charged Higgs boson production process in association with a top quark in Monte Carlo event generators at next-to-leading order in QCD. We introduce the MC@NLO and POWHEG methods of matching next-to-leading-order matrix elements with parton showers and compare the two methods by analyzing the charged Higgs boson production process in association with a top quark. We briefly discuss the case of a light charged Higgs boson, where associated charged Higgs production interferes with charged Higgs production via $t\bar{t}$ production and the subsequent decay of the top quark.
Adjoint electron-photon transport Monte Carlo calculations with ITS
Lorence, L.J.; Kensek, R.P.; Halbleib, J.A. [Sandia National Labs., Albuquerque, NM (United States); Morel, J.E. [Los Alamos National Lab., NM (United States)
1995-02-01T23:59:59.000Z
A general adjoint coupled electron-photon Monte Carlo code for solving the Boltzmann-Fokker-Planck equation has recently been created. It is a modified version of ITS 3.0, a coupled electron-photon Monte Carlo code that has world-wide distribution. The applicability of the new code to radiation-interaction problems of the type found in space environments is demonstrated.
Special Topics Monte Carlo Methods in Science, Engineering and Business
Shepp, Larry
SYLLABUS: Special Topics in Monte Carlo Methods in Science, Engineering and Business, Fall 2007. Topics include probability and statistics, simple simulation methods, sequential Monte Carlo methods, and Markov chain Monte Carlo. Prerequisite: a first graduate-level mathematical statistics course.
New Monte Carlo schemes for simulating diffusions in discontinuous media
Paris-Sud XI, Université de
New Monte Carlo schemes for simulating diffusions in discontinuous media. Antoine Lejay; Sylvain Maire. April 28, 2012. Abstract: We introduce new Monte Carlo simulation schemes for diffusions in a discontinuous medium divided into subdomains with piecewise constant diffusivity. These schemes ...
New Monte Carlo schemes for simulating diffusions in discontinuous media
Paris-Sud XI, Université de
New Monte Carlo schemes for simulating diffusions in discontinuous media. Antoine Lejay; Sylvain Maire. December 13, 2012. Abstract: We introduce new Monte Carlo simulation schemes for diffusions in a discontinuous medium divided into subdomains with piecewise constant diffusivity. These schemes ...
Monte Carlo Evaluation of Resampling-Based Hypothesis Tests
Boos, Dennis
1998. Monte Carlo estimation of the power of tests that require resampling can be very com... The power at each alternative is estimated from the proportion of rejections, recorded with the indicator function I(A) = 1 if A is true and 0 otherwise; this Monte Carlo estimate will be unbiased for the true power function. The connection to measurement error methods ...
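The idea in this abstract, estimating the power of a resampling-based test by averaging rejection indicators over Monte Carlo replicates, can be sketched as follows. The one-sample bootstrap test of a zero mean and all distributions below are illustrative stand-ins, not the authors' setup.

```python
import random, statistics

def mc_power(n_outer, n_boot, n, shift, alpha, seed=0):
    """Monte Carlo estimate of the power of a bootstrap test of
    H0: mean = 0 against data with true mean `shift`.  Each outer
    replicate draws a dataset, runs the resampling test, and records
    the indicator I(reject); the average of the indicators is an
    unbiased estimate of the power at that alternative."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_outer):
        data = [rng.gauss(shift, 1.0) for _ in range(n)]
        m = statistics.fmean(data)
        # Bootstrap null distribution of the re-centred sample mean.
        count = 0
        for _ in range(n_boot):
            bs = [rng.choice(data) for _ in range(n)]
            if abs(statistics.fmean(bs) - m) >= abs(m):
                count += 1
        p_value = count / n_boot
        rejections += (p_value < alpha)
    return rejections / n_outer
```

The nesting (an inner resampling loop inside every outer replicate) is exactly why the abstract calls such power studies computationally expensive.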
RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS
A. Kersch; W. Moroko; A. Schuster (Siemens). ... the application of Quasi-Monte Carlo to this problem. 1.1 Radiative Heat Transfer Reactors: In the manufacturing ... One of the problems which can be solved by such a simulation is high-accuracy modeling of the radiative heat transfer ...
FREYA-a new Monte Carlo code for improved modeling of fission chains
Hagmann, C A; Randrup, J; Vogt, R L
2012-06-12T23:59:59.000Z
A new simulation capability for modeling of individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events providing correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general-purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses including correlations and the development of SNM detectors with increased sensitivity.
Quantum Ice : a quantum Monte Carlo study
Nic Shannon; Olga Sikora; Frank Pollmann; Karlo Penc; Peter Fulde
2011-12-13T23:59:59.000Z
Ice states, in which frustrated interactions lead to a macroscopic ground-state degeneracy, occur in water ice, in problems of frustrated charge order on the pyrochlore lattice, and in the family of rare-earth magnets collectively known as spin ice. Of particular interest at the moment are "quantum spin ice" materials, where large quantum fluctuations may permit tunnelling between a macroscopic number of different classical ground states. Here we use zero-temperature quantum Monte Carlo simulations to show how such tunnelling can lift the degeneracy of a spin or charge ice, stabilising a unique "quantum ice" ground state --- a quantum liquid with excitations described by the Maxwell action of 3+1-dimensional quantum electrodynamics. We further identify a competing ordered "squiggle" state, and show how both squiggle and quantum ice states might be distinguished in neutron scattering experiments on a spin ice material.
Parametric Learning and Monte Carlo Optimization
Wolpert, David H
2007-01-01T23:59:59.000Z
This paper uncovers and explores the close relationship between Monte Carlo Optimization of a parametrized integral (MCO), Parametric machine-Learning (PL), and `blackbox' or `oracle'-based optimization (BO). We make four contributions. First, we prove that MCO is mathematically identical to a broad class of PL problems. This identity potentially provides a new application domain for all broadly applicable PL techniques: MCO. Second, we introduce immediate sampling, a new version of the Probability Collectives (PC) algorithm for blackbox optimization. Immediate sampling transforms the original BO problem into an MCO problem. Accordingly, by combining these first two contributions, we can apply all PL techniques to BO. In our third contribution we validate this way of improving BO by demonstrating that cross-validation and bagging improve immediate sampling. Finally, conventional MC and MCO procedures ignore the relationship between the sample point locations and the associated values of the integrand; only th...
A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation
Li, Z.; Wang, K. [Dept. of Engineering Physics, Tsinghua Univ., Beijing, 100084 (China)
2012-07-01T23:59:59.000Z
Full core calculations are very useful and important in reactor physics analysis, especially for computing full core power distributions, optimizing refueling strategies and analyzing fuel depletion. To reduce the computing time and accelerate convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward an RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate more complex geometries than repeated structures alone. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and the variance of the calculations. Finally, future research directions are mentioned and discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01T23:59:59.000Z
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
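The fission matrix idea underlying the acceleration can be sketched as a power iteration whose dominant eigenvalue is the k-effective estimate and whose eigenvector is the converged fission source; the 3-region matrix below is made up for illustration and is not from the thesis.

```python
# Hypothetical region-to-region fission matrix: F[i][j] = expected fission
# neutrons born in region i per fission neutron born in region j.
F = [[0.90, 0.30, 0.05],
     [0.30, 0.80, 0.30],
     [0.05, 0.30, 0.90]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, iters=200):
    """Return (k_eff estimate, normalized fission source) by power iteration."""
    v = [1.0] * len(A)
    k = 1.0
    for _ in range(iters):
        w = matvec(A, v)
        k = sum(w) / sum(v)         # neutron production ratio per generation
        s = sum(w)
        v = [x / s for x in w]      # renormalize the fission source
    return k, v

k_eff, source = power_iteration(F)
```

Convergence of this iteration is governed by the dominance ratio (second over first eigenvalue), which is exactly why systems with high dominance ratios converge slowly and benefit from the acceleration methods the thesis develops.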
Monte Carlo model for electron degradation in methane
Bhardwaj, Anil
2015-01-01T23:59:59.000Z
We present a Monte Carlo model for the degradation of 1-10,000 eV electrons in an atmosphere of methane. The electron impact cross sections for CH4 are compiled, and analytical representations of these cross sections are used as input to the model. Yield spectra, which provide information about the number of inelastic events that have taken place in each energy bin, are used to calculate the yield (or population) of various inelastic processes. The numerical yield spectra obtained from the Monte Carlo simulations are represented analytically, thus generating the Analytical Yield Spectra (AYS). The AYS are employed to obtain the mean energy per ion pair and the efficiencies of various inelastic processes. The mean energy per ion pair for neutral CH4 is found to be 26 (27.8) eV at 10 (0.1) keV. Efficiency calculations show that ionization is the dominant process at energies >50 eV, consuming more than 50% of the incident electron energy. Above 25 eV, dissociation has an efficiency of 27%. Below 10 eV, vibrational e...
Sequential Monte Carlo Methods for Protein Folding
Peter Grassberger
2004-08-26T23:59:59.000Z
We describe a class of growth algorithms for finding low energy states of heteropolymers. These polymers form toy models for proteins, and the hope is that similar methods will ultimately be useful for finding native states of real proteins from heuristic or a priori determined force fields. These algorithms share with standard Markov chain Monte Carlo methods that they generate Gibbs-Boltzmann distributions, but they are not based on the strategy that this distribution is obtained as stationary state of a suitably constructed Markov chain. Rather, they are based on growing the polymer by successively adding individual particles, guiding the growth towards configurations with lower energies, and using "population control" to eliminate bad configurations and increase the number of "good ones". This is not done via a breadth-first implementation as in genetic algorithms, but depth-first via recursive backtracking. As seen from various benchmark tests, the resulting algorithms are extremely efficient for lattice models, and are still competitive with other methods for simple off-lattice models.
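A stripped-down instance of such a growth algorithm is plain Rosenbluth sampling of a self-avoiding lattice walk: the chain is grown monomer by monomer, and each configuration carries a weight correcting for the guided growth. The population control and recursive backtracking that the article adds on top are omitted here for brevity.

```python
import random

def grow_chain(n, rng):
    """Grow an n-step self-avoiding walk on the square lattice.
    Returns (sites, Rosenbluth weight); weight 0 if the walk gets trapped."""
    sites = [(0, 0)]
    occupied = {(0, 0)}
    weight = 1.0
    for _ in range(n):
        x, y = sites[-1]
        free = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) not in occupied]
        if not free:
            return sites, 0.0       # dead end: this configuration is killed
        weight *= len(free)         # Rosenbluth weight corrects the guided sampling
        nxt = rng.choice(free)
        sites.append(nxt)
        occupied.add(nxt)
    return sites, weight

rng = random.Random(1)
walks = [grow_chain(20, rng) for _ in range(2000)]
# weighted mean squared end-to-end distance
num = sum(w * (s[-1][0] ** 2 + s[-1][1] ** 2) for s, w in walks)
den = sum(w for _, w in walks)
r2 = num / den
```

For 20-step walks the weighted estimate of the mean squared end-to-end distance should clearly exceed the simple-random-walk value of 20, reflecting the swelling of self-avoiding chains.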
Variance Reduction Techniques for Implicit Monte Carlo Simulations
Landman, Jacob Taylor
2013-09-19T23:59:59.000Z
The Implicit Monte Carlo (IMC) method is widely used for simulating thermal radiative transfer and solving the radiation transport equation. During an IMC run a grid network is constructed and particles are sourced into the problem to simulate...
An Analysis Tool for Flight Dynamics Monte Carlo Simulations
Restrepo, Carolina 1982-
2011-05-20T23:59:59.000Z
and analysis work to understand vehicle operating limits and identify circumstances that lead to mission failure. A Monte Carlo simulation approach that varies a wide range of physical parameters is typically used to generate thousands of test cases...
Enhancements in Continuous-Energy Monte Carlo Capabilities in SCALE
Bekar, Kursat B [ORNL] [ORNL; Celik, Cihangir [ORNL] [ORNL; Wiarda, Dorothea [ORNL] [ORNL; Peplow, Douglas E. [ORNL] [ORNL; Rearden, Bradley T [ORNL] [ORNL; Dunn, Michael E [ORNL] [ORNL
2013-01-01T23:59:59.000Z
Monte Carlo tools in SCALE are commonly used in criticality safety calculations as well as sensitivity and uncertainty analysis, depletion, and criticality alarm system analyses. Recent improvements in the continuous-energy data generated by the AMPX code system and significant advancements in the continuous-energy treatment in the KENO Monte Carlo eigenvalue codes facilitate the use of SCALE Monte Carlo codes to model geometrically complex systems with enhanced solution fidelity. The addition of continuous-energy treatment to the SCALE Monaco code, which can be used with automatic variance reduction in the hybrid MAVRIC sequence, provides significant enhancements, especially for criticality alarm system modeling. This paper describes some of the advancements in continuous-energy Monte Carlo codes within the SCALE code system.
Shift: A Massively Parallel Monte Carlo Radiation Transport Package
Pandya, Tara M [ORNL; Johnson, Seth R [ORNL; Davidson, Gregory G [ORNL; Evans, Thomas M [ORNL; Hamilton, Steven P [ORNL
2015-01-01T23:59:59.000Z
This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.
Monte Carlos of the new generation: status and progress
Frixione, Stefano [INFN, Sezione di Genova, Via Dodecaneso 33, 16146 Genova (Italy)
2005-03-22T23:59:59.000Z
Standard parton shower Monte Carlos are designed to give reliable descriptions of low-pT physics. In the very high-energy regime of modern colliders, this may lead to largely incorrect predictions of the basic reaction processes. This has motivated recent theoretical efforts aimed at improving Monte Carlos through the inclusion of matrix elements computed beyond leading order in QCD. I briefly review the progress made, and discuss bottom production at the Tevatron.
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
Pevey, Ronald E.
2005-09-15T23:59:59.000Z
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
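The basic mislabeling-probability calculation that the paper builds on can be sketched as follows, assuming normally distributed errors. The bias, standard deviations, and USL below are made-up illustrative numbers; the paper's non-zero optimum additionally accounts for how the required safety margin itself grows with the calculational standard deviation, which this sketch does not model.

```python
from math import erf, sqrt

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution function of N(mu, sigma^2)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Illustrative (made-up) benchmarking numbers, not the paper's:
bias, bias_sd, usl = 0.00, 0.005, 0.95

def acceptance_risk(calc_sd, true_k=1.0):
    """P(calculated k-eff falls below the USL) for a configuration that is
    actually critical (true_k = 1): the risk of mislabeling it subcritical."""
    total_sd = sqrt(bias_sd ** 2 + calc_sd ** 2)
    return norm_cdf(usl, mu=true_k + bias, sigma=total_sd)

risks = {s: acceptance_risk(s) for s in (0.0005, 0.005, 0.02)}
```

With a fixed USL, a larger calculational standard deviation widens the distribution of calculated k-effectives and so raises the chance that a truly critical configuration passes the comparison; the interplay with the σ-dependent margin is what produces the paper's optimum.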
Lattice Monte Carlo Simulations of Polymer Melts
Hsiao-Ping Hsu
2015-03-03T23:59:59.000Z
We use Monte Carlo simulations to study polymer melts consisting of fully flexible and moderately stiff chains in the bond fluctuation model at a volume fraction $0.5$. In order to reduce the local density fluctuations, we test a pre-packing process for the preparation of the initial configurations of the polymer melts, before the excluded volume interaction is switched on completely. This process leads to a significantly faster decrease of the number of overlapping monomers on the lattice. This is useful for simulating very large systems, where the statistical properties of the model with a marginally incomplete elimination of excluded volume violations are the same as those of the model with strictly excluded volume. We find that the internal mean square end-to-end distance for moderately stiff chains in a melt can be very well described by a freely rotating chain model with a precise estimate of the bond-bond orientational correlation between two successive bond vectors in equilibrium. The plot of the probability distributions of the reduced end-to-end distance of chains of different stiffness also shows that the data collapse is excellent and is described very well by the Gaussian distribution for ideal chains. However, while our results confirm the systematic deviations from Gaussian statistics in the chain structure factor $S_c(q)$ [minimum in the Kratky plot] found by Wittmer et al. [EPL 77, 56003 (2007)] for fully flexible chains in a melt, we show that for the available chain lengths these deviations are no longer visible when the chain stiffness is included. The mean square bond length and the compressibility estimated from collective structure factors depend slightly on the stiffness of the chains.
Monte Carlo simulation study of scanning Auger electron images
Li, Y. G.; Ding, Z. J. [Department of Physics and Hefei National Laboratory for Physical Sciences at Microscale, University of Science and Technology of China, Hefei, Anhui 230026 (China); Zhang, Z. M. [Department of Astronomy and Applied Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)
2009-07-15T23:59:59.000Z
Simulation of contrast formation in Auger electron imaging of surfaces is helpful for analyzing scanning Auger microscopy/microanalysis (SAM) images. In this work, we have extended our previous Monte Carlo model and the simulation method for calculation of scanning electron microscopy (SEM) images to SAM images of complex structures. The essentials of the simulation method are as follows. (1) We use constructive solid geometry modeling for a sample geometry, which may be complex in elemental distribution as well as in topographical configuration, and a ray-tracing technique in the calculation of electron flight steps that cross the different element zones. The combination of basic objects filled with elements, alloys, or compounds enables the simulation of a variety of sample geometries. (2) Sampled Auger signal electrons with a characteristic energy are generated in the simulation following an inner-shell ionization event, whose description is based on Casnati's inner-shell ionization cross section. This paper discusses in detail the features of simulated SAM images and of line scans for structured samples, i.e., objects embedded in a matrix, under various experimental conditions (object size, location depth, beam energy, and incident angle). Several effects are predicted and explained, such as the contrast reversal for nanoparticles with sizes of 10-60 nm, the contrast enhancement for particles made of different elements and wholly embedded in a matrix, and the artifact contrast due to nearby objects containing different elements. The simulated SAM images are also compared with the simulated SEM images of secondary electrons and of backscattered electrons. The results indicate that Monte Carlo simulation can play an important role in quantitative SAM mapping.
Quantum Monte Carlo Calculations of Light Nuclei Using Chiral Potentials
J. E. Lynn; J. Carlson; E. Epelbaum; S. Gandolfi; A. Gezerlis; A. Schwenk
2014-11-09T23:59:59.000Z
We present the first Green's function Monte Carlo calculations of light nuclei with nuclear interactions derived from chiral effective field theory up to next-to-next-to-leading order. Up to this order, the interactions can be constructed in a local form and are therefore amenable to quantum Monte Carlo calculations. We demonstrate a systematic improvement with each order for the binding energies of $A=3$ and $A=4$ systems. We also carry out the first few-body tests to study perturbative expansions of chiral potentials at different orders, finding that higher-order corrections are more perturbative for softer interactions. Our results confirm the necessity of a three-body force for correct reproduction of experimental binding energies and radii, and pave the way for studying few- and many-nucleon systems using quantum Monte Carlo methods with chiral interactions.
A Multivariate Time Series Method for Monte Carlo Reactor Analysis
Taro Ueki
2008-08-14T23:59:59.000Z
A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed Coarse Mesh Projection Method (CMPM) and can be implemented using the coarse statistical bins for acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit in the signal processing discipline and the neutron multiplication eigenvalue problem in the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional Fission Matrix method is demonstrated for the three space-dimensional modeling of the initial core of a pressurized water reactor.
The Monte Carlo method in quantum field theory
Colin Morningstar
2007-02-20T23:59:59.000Z
This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
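The Metropolis-Hastings updating described above can be sketched for a free real scalar field on a 1D periodic lattice; the lattice size, mass, and proposal width are illustrative choices. For this free field the exact lattice value of the mean squared field is about 0.45, which the chain should reproduce.

```python
import math, random

rng = random.Random(42)
N, m2, step = 10, 1.0, 1.0      # sites, mass squared, proposal width
phi = [0.0] * N

def local_action(i, value):
    """Terms of S = sum_i [ (phi[i+1]-phi[i])^2/2 + m2*phi[i]^2/2 ] touching site i."""
    left, right = phi[(i - 1) % N], phi[(i + 1) % N]
    return 0.5 * ((value - left) ** 2 + (right - value) ** 2) + 0.5 * m2 * value ** 2

acc = 0
samples = []
for sweep in range(20000):
    for i in range(N):
        old = phi[i]
        new = old + rng.uniform(-step, step)
        dS = local_action(i, new) - local_action(i, old)
        if dS <= 0 or rng.random() < math.exp(-dS):   # Metropolis accept/reject
            phi[i] = new
            acc += 1
    if sweep >= 2000:                                 # discard burn-in
        samples.append(sum(x * x for x in phi) / N)

phi2 = sum(samples) / len(samples)                    # estimate of <phi^2>
accept_rate = acc / (20000 * N)
```

The same local-update loop, with the scalar field replaced by gauge links and the action by the QCD action, is the starting point for the lattice computations the lectures describe.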
Guan, Fada 1982-
2012-04-27T23:59:59.000Z
Monte Carlo method has been successfully applied in simulating the particles transport problems. Most of the Monte Carlo simulation tools are static and they can only be used to perform the static simulations for the problems with fixed physics...
Romano, Paul K. (Paul Kollath)
2013-01-01T23:59:59.000Z
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there ...
Monte Carlo Filtering on Lie Groups Alessandro Chiuso 1 and Stefano Soatto 2
Soatto, Stefano
We propose [...] to be consistent with the updated conditional distribution. The algorithm proposed, like other Monte Carlo methods [...]
Pasciak, Alexander Samuel
2007-04-25T23:59:59.000Z
Advancements in parallel and cluster computing have made many complex Monte Carlo simulations possible in the past several years. Unfortunately, cluster computers are large, expensive, and still not fast enough to make the Monte Carlo technique...
Calculating coherent pair production with Monte Carlo methods
Bottcher, C.; Strayer, M.R.
1989-01-01T23:59:59.000Z
We discuss calculations of the coherent electromagnetic pair production in ultra-relativistic hadron collisions. This type of production, in lowest order, is obtained from three diagrams which contain two virtual photons. We discuss simple Monte Carlo methods for evaluating these classes of diagrams without recourse to involved algebraic reduction schemes. 19 refs., 11 figs.
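Plain Monte Carlo evaluation of a multidimensional integral with a statistical error estimate can be sketched as follows; the smooth integrand on the unit 4-cube is a stand-in for a diagram evaluation, not the actual two-photon amplitude.

```python
import math, random

rng = random.Random(7)

def integrand(x):
    # stand-in for a diagram evaluation: a smooth function on the unit hypercube
    return math.exp(-sum(xi * xi for xi in x))

d, n = 4, 100_000
vals = [integrand([rng.random() for _ in range(d)]) for _ in range(n)]
mean = sum(vals) / n                                   # MC estimate of the integral
var = sum((v - mean) ** 2 for v in vals) / (n - 1)
stderr = math.sqrt(var / n)                            # 1-sigma statistical error
```

The exact value here factorizes as $(\int_0^1 e^{-x^2}\,dx)^4 \approx 0.3111$, so the estimate and its error bar can be checked directly; the point of the method is that the same sampling works in dimensions where algebraic reduction is impractical.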
Multiple Overlapping Tiles for Contextual Monte Carlo Tree Search
Paris-Sud XI, Université de
[...] generation of libraries for linear transforms [4] or active learning [8]. The use of Monte Carlo simulations in the MCTS algorithm has been proposed; the principle is to group simulations where two particular actions have been selected by the same player, and then to learn from those simulations. We first present reinforcement learning [...]
ENVIRONMENTAL MODELING: 1 APPLICATIONS: MONTE CARLO SENSITIVITY SIMULATIONS
Dimov, Ivan
Applications: Monte Carlo sensitivity simulations applied to the problem of air-pollution transport (the Danish Eulerian Model), covering the transport of pollutants in a real-life scenario over Europe. First, the developed technique [...]
Nonlocal Monte Carlo algorithms for statistical physics applications
Janke, Wolfhard
[...] magnets to polymers or proteins, to mention only a few classical problems. Quantum statistical problems [...] different theoretical approaches such as field theory or series expansions, and, of course, with experiments.
Auxiliary field Monte Carlo for charged particles A. C. Maggs
Maggs, Anthony
This article describes Monte Carlo algorithms for charged systems using auxiliary fields (doi:10.1063/1.1642587; accepted 20 November 2003). Fast methods for calculating Coulomb interactions are of the greatest importance [...]; [...] is the wrong statistical weight for particles interacting via Coulomb's law.
Selection Criteria Based on Monte Carlo Simulation and Cross Validation
Shang, Junfeng
In the mixed modeling framework, Monte Carlo [...] The Akaike (1973, 1974) information [criterion] [...]
Evolutionary Monte Carlo for protein folding simulations Faming Lianga)
Liang, Faming
Evolutionary Monte Carlo is applied to simulations of protein folding on simple lattice models, and to finding the ground state of a protein. [...] structures in protein folding. The numerical results show that it is drastically superior to other methods.
Thermal Properties of Supercritical Carbon Dioxide by Monte Carlo Simulations
Lisal, Martin
[...] the Joule-Thomson coefficient and speed of sound for carbon dioxide (CO2) in the supercritical region, using the fluctuation method. Keywords: fluctuations; carbon dioxide; 2CLJQ; Joule-Thomson coefficient; speed of sound.
Quantum Monte Carlo calculations of symmetric nuclear matter
Stefano Gandolfi; Francesco Pederiva; Stefano Fantoni; Kevin E. Schmidt
2007-04-13T23:59:59.000Z
We present an accurate numerical study of the equation of state of nuclear matter based on realistic nucleon--nucleon interactions by means of Auxiliary Field Diffusion Monte Carlo (AFDMC) calculations. The AFDMC method samples the spin and isospin degrees of freedom allowing for quantum simulations of large nucleonic systems and can provide quantitative understanding of problems in nuclear structure and astrophysics.
Hybrid Probabilistic Roadmap and Monte Carlo Methods for Biomolecule Conformational Changes
Han, Li
Keywords: conformation space, conformational changes, Monte Carlo, probabilistic roadmaps. In this work, we have developed a hybrid Probabilistic Roadmap and Monte Carlo planner for biomolecule [conformational changes] [...]
Monte Carlo Studies of the CALICE AHCAL Tiles Gaps and Non-uniformities
Felix Sefkow; Angela Lucaci-Timoce
2010-06-18T23:59:59.000Z
The CALICE analog HCAL is a highly granular calorimeter, proposed for the International Linear Collider. It is based on scintillator tiles, read out by silicon photomultipliers (SiPMs). The effects of gaps between the calorimeter tiles, as well as of the non-uniform response of the tiles, on the energy resolution are studied in Monte Carlo events. It is shown that these types of effects do not have a significant influence on the measurement of hadron showers.
Review of Monte Carlo simulations for backgrounds from radioactivity
Selvi, Marco [INFN - Sezione di Bologna (Italy)] [INFN - Sezione di Bologna (Italy)
2013-08-08T23:59:59.000Z
For all experiments dealing with rare event searches (neutrino, dark matter, neutrino-less double-beta decay), the reduction of the radioactive background is one of the most important and difficult tasks. There are basically two types of background, electron recoils and nuclear recoils. The electron recoil background is mostly from gamma rays produced in radioactive decays. The nuclear recoil background is from neutrons from spontaneous fission, (α,n) reactions, and muon-induced interactions (spallation, photo-nuclear and hadronic interactions). The external gammas and neutrons, from muons and from the laboratory environment, can be reduced by operating the detector in deep underground laboratories and by placing active or passive shield materials around the detector. The radioactivity of the detector materials also contributes to the background; to reduce it, a careful screening campaign is mandatory to select highly radio-pure materials. In this review I present the status of current Monte Carlo simulations aimed at estimating and reproducing the background induced by gamma and neutron radioactivity of the materials and the shield of rare event search experiments. For the electromagnetic background a good level of agreement between the data and the MC simulation has been reached by the XENON100 and EDELWEISS experiments, using the GEANT4 toolkit. For the neutron background, a comparison between the yields of neutrons from spontaneous fission and (α,n) reactions obtained with two dedicated codes, SOURCES-4A and the one developed by Mei, Zhang and Hime, shows good overall agreement, with total yields within a factor of 2 of each other. The energy spectra from SOURCES-4A are in general smoother, while those from MZH present sharp peaks. The neutron propagation through various materials has been studied with two MC codes, GEANT4 and MCNPX, showing reasonably good agreement, with discrepancies within 50%.
Efficient, automated Monte Carlo methods for radiation transport
Kong Rong; Ambrose, Martin [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Spanier, Jerome [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Beckman Laser Institute and Medical Clinic, University of California, 1002 Health Science Road E., Irvine, CA 92612 (United States)], E-mail: jspanier@uci.edu
2008-11-20T23:59:59.000Z
Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms, based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. If still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.
Fixed-Node Diffusion Monte Carlo of Lithium Systems
Rasch, Kevin
2015-01-01T23:59:59.000Z
We study lithium systems over a range of sizes, e.g., atomic anion, dimer, metallic cluster, and body-centered cubic crystal, by the diffusion Monte Carlo method. The calculations include both core and valence electrons in order to avoid any possible impact of pseudopotentials. The focus of the study is the fixed-node errors, and for that purpose we test several orbital sets in order to provide the most accurate nodal hypersurfaces. We compare our results to other high-accuracy calculations wherever available, and to experimental results, so as to quantify the fixed-node errors. The results for these Li systems show that fixed-node quantum Monte Carlo achieves remarkably accurate total energies and recovers 97-99% of the correlation energy.
MC++: Parallel, portable, Monte Carlo neutron transport in C++
Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A& M Univ., College Station, TX (United States). Dept. of Nuclear Engineering
1997-02-01T23:59:59.000Z
We have developed an implicit Monte Carlo neutron transport code in C++ using the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on, and is portable to, a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and parallelism. Current capabilities of MC++ are discussed, along with future plans and physics and performance results on many different platforms.
OBJECT KINETIC MONTE CARLO SIMULATIONS OF MICROSTRUCTURE EVOLUTION
Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.
2013-09-30T23:59:59.000Z
The objective is to report the development of the flexible object kinetic Monte Carlo (OKMC) simulation code KSOME (kinetic simulation of microstructure evolution) which can be used to simulate microstructure evolution of complex systems under irradiation. In this report we briefly describe the capabilities of KSOME and present preliminary results for short term annealing of single cascades in tungsten at various primary-knock-on atom (PKA) energies and temperatures.
Regional Monte Carlo solution of elliptic partial differential equations
Booth, T.E.
1981-01-01T23:59:59.000Z
A continuous random walk procedure for solving some elliptic partial differential equations at a single point is generalized to estimate the solution everywhere. The Monte Carlo method described here is exact (except at the boundary) in the sense that the only error is the statistical sampling error that tends to zero as the sample size increases. A method to estimate the error introduced at the boundary is provided so that the boundary error can always be made less than the statistical error.
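A concrete instance of such a continuous random walk is the walk-on-spheres method for Laplace's equation, sketched here for the unit disk with Dirichlet data g = x, whose harmonic extension is u(x, y) = x. As in the abstract, the only systematic error comes from the stopping tolerance at the boundary; everything else is statistical.

```python
import math, random

rng = random.Random(3)

def boundary_value(x, y):
    # Dirichlet data g = x on the unit circle; its harmonic extension is u(x, y) = x
    return x

def walk_on_spheres(x, y, eps=1e-4):
    """One unbiased sample of u(x, y) for Laplace's equation in the unit disk."""
    while True:
        r = 1.0 - math.hypot(x, y)          # distance to the disk boundary
        if r < eps:                          # close enough: project and stop
            s = math.hypot(x, y)
            return boundary_value(x / s, y / s)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)             # jump to the first-exit point of the
        y += r * math.sin(theta)             # largest circle inscribed at (x, y)

n = 20_000
est = sum(walk_on_spheres(0.3, 0.2) for _ in range(n)) / n
```

Averaging many walks started from (0.3, 0.2) should recover u(0.3, 0.2) = 0.3 up to statistical noise; repeating from a grid of starting points estimates the solution everywhere, which is the generalization the abstract describes.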
The hybrid Monte Carlo Algorithm and the chiral transition
Gupta, R.
1987-01-01T23:59:59.000Z
In this talk the author describes tests of the Hybrid Monte Carlo Algorithm for QCD done in collaboration with Greg Kilcup and Stephen Sharpe. We find that the acceptance in the global Metropolis step for staggered fermions can be tuned and kept large without having to make the step size prohibitively small. We present results for the finite-temperature transition on 4^4 and 4x6^3 lattices using this algorithm.
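The algorithm's structure (a molecular-dynamics trajectory followed by a global Metropolis accept/reject on the energy change) can be sketched for a single Gaussian degree of freedom; the step size and trajectory length are illustrative, and the QCD gauge and fermion machinery is of course omitted.

```python
import math, random

rng = random.Random(5)

def U(q):                      # potential: target density is exp(-U) = N(0, 1)
    return 0.5 * q * q

def gradU(q):
    return q

def hmc_step(q, step=0.2, nleap=10):
    """One Hybrid Monte Carlo update: leapfrog trajectory + global Metropolis."""
    p = rng.gauss(0.0, 1.0)                       # refresh the momentum
    q_new, p_new = q, p
    p_new -= 0.5 * step * gradU(q_new)            # leapfrog: initial half-kick
    for _ in range(nleap - 1):
        q_new += step * p_new
        p_new -= step * gradU(q_new)
    q_new += step * p_new
    p_new -= 0.5 * step * gradU(q_new)            # final half-kick
    # global Metropolis step on the change in total energy H = U + p^2/2
    dH = (U(q_new) + 0.5 * p_new ** 2) - (U(q) + 0.5 * p ** 2)
    if dH <= 0 or rng.random() < math.exp(-dH):
        return q_new, True
    return q, False

q, accepts, qs = 0.0, 0, []
for _ in range(20_000):
    q, ok = hmc_step(q)
    accepts += ok
    qs.append(q)
mean = sum(qs) / len(qs)
var = sum((x - mean) ** 2 for x in qs) / len(qs)
accept_rate = accepts / 20_000
```

Because the leapfrog integrator nearly conserves the energy, the acceptance stays high at a moderate step size, which is the tuning property reported in the talk; the chain's sample variance should reproduce the target's unit variance.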
Monte Carlo approach to nuclei and nuclear matter
Fantoni, Stefano [S.I.S.S.A., International School of Advanced Studies, INFN, Sezione di Trieste and INFM, CNR-DEMOCRITOS National Supercomputing Center (Italy); Gandolfi, Stefano; Illarionov, Alexey Yu. [S.I.S.S.A., International School of Advanced Studies, INFN, Sezione di Trieste (Italy); Schmidt, Kevin E. [Department of Physics, Arizona State University (United States); Pederiva, Francesco [Dipartimento di Fisica, University of Trento (Italy); INFM, CNR-DEMOCRITOS National Supercomputing Center (Greece)
2008-10-13T23:59:59.000Z
We report on the most recent applications of the Auxiliary Field Diffusion Monte Carlo (AFDMC) method. The equation of state (EOS) for pure neutron matter in both normal and BCS phase and the superfluid gap in the low-density regime are computed, using a realistic Hamiltonian containing the Argonne AV8' plus Urbana IX three-nucleon interaction. Preliminary results for the EOS of isospin-asymmetric nuclear matter are also presented.
Monte Carlo approach to nuclei and nuclear matter
Stefano Fantoni; Stefano Gandolfi; Alexey Yu. Illarionov; Kevin E. Schmidt; Francesco Pederiva
2008-07-31T23:59:59.000Z
We report on the most recent applications of the Auxiliary Field Diffusion Monte Carlo (AFDMC) method. The equation of state (EOS) for pure neutron matter in both normal and BCS phase and the superfluid gap in the low-density regime are computed, using a realistic Hamiltonian containing the Argonne AV8' plus Urbana IX three-nucleon interaction. Preliminary results for the EOS of isospin-asymmetric nuclear matter are also presented.
Quantum Monte Carlo Calculations of Symmetric Nuclear Matter
Gandolfi, Stefano [Dipartimento di Fisica and INFN, University of Trento, via Sommarive 14, I-38050 Povo, Trento (Italy); Pederiva, Francesco [Dipartimento di Fisica and INFN, University of Trento, via Sommarive 14, I-38050 Povo, Trento (Italy); CNR-DEMOCRITOS National Supercomputing Center, Trieste (Italy); Fantoni, Stefano [Scuola Internazionale Superiore di Studi Avanzati and INFN via Beirut 2/4, 34014 Trieste (Italy); CNR-DEMOCRITOS National Supercomputing Center, Trieste (Italy); Schmidt, Kevin E. [Department of Physics, Arizona State University, Tempe, Arizona (United States)
2007-03-09T23:59:59.000Z
We present an accurate numerical study of the equation of state of nuclear matter based on realistic nucleon-nucleon interactions by means of auxiliary field diffusion Monte Carlo (AFDMC) calculations. The AFDMC method samples the spin and isospin degrees of freedom allowing for quantum simulations of large nucleonic systems and represents an important step forward towards a quantitative understanding of problems in nuclear structure and astrophysics.
A Wigner Monte Carlo approach to density functional theory
Sellier, J.M., E-mail: jeanmichel.sellier@gmail.com; Dimov, I.
2014-08-01T23:59:59.000Z
In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system, from which one obtains the total electron density (Kohn–Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab-initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive understanding of chemical systems in terms of the Wigner formalism, based on the concept of phase-space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines, paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems: a Lithium atom, a Boron atom and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (even though no restriction is imposed on the choice of the initial conditions). We also show a good agreement with standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn–Sham systems provides a reliable computational tool which could, eventually, be applied to more sophisticated problems.
MCViNE -- An object-oriented Monte Carlo neutron ray tracing simulation package
Lin, Jiao Y Y; Granroth, Garrett E; Abernathy, Douglas L; Lumsden, Mark D; Winn, Barry; Aczel, Adam A; Aivazis, Michael; Fultz, Brent
2015-01-01T23:59:59.000Z
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is a versatile Monte Carlo (MC) neutron ray-tracing program that provides researchers with tools for performing computer modeling and simulations that mirror real neutron scattering experiments. By adopting modern software engineering practices such as using composite and visitor design patterns for representing and accessing neutron scatterers, and using recursive algorithms for multiple scattering, MCViNE is flexible enough to handle sophisticated neutron scattering problems including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can take advantage of simulation components in linear-chain-based MC ray tracing packages widely used in instrument design and optimization, as well as NumPy-based components that make prototypes useful and easy to develop. These developments have enabled us to carry out detailed simulations of neutron scatteri...
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Lecture 1: Introduction and Monte Carlo basics, covering some model applications and random number generation. Example application: the probability of a force being outside some specified range, which could be turned into a full finite element analysis on the boundary. (Mike Giles, Oxford, October 25, 2013.)
Improved quantum Monte Carlo calculation of the ground-state energy of the hydrogen molecule
Anderson, James B.
Improved quantum Monte Carlo calculation of the nonrelativistic ground-state energy of the hydrogen molecule, without the use of ... variational energies. The accuracy of the new Monte Carlo energy is approximately equal to that of recent ...
FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation
Hackel, B M; Nielsen Jr., D E; Procassini, R J
2009-02-25T23:59:59.000Z
The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.
Properties of Reactive Oxygen Species by Quantum Monte Carlo
Andrea Zen; Bernhardt L. Trout; Leonardo Guidoni
2014-06-16T23:59:59.000Z
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of Chemistry, Biology and Atmospheric Science. Nevertheless, the electronic structure of such species is a challenge for ab-initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as $N^3-N^4$, where $N$ is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species by first principles.
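As a much simpler illustration of the variational Monte Carlo machinery used in this work (Metropolis sampling of |psi|^2 and averaging of the local energy), consider the hydrogen atom with the trial wave function psi = exp(-alpha*r). This toy example is not the JAGP ansatz of the paper; it is a minimal sketch in which alpha = 1 makes the trial function exact:

```python
import math
import random

def local_energy(r, alpha):
    """Local energy E_L = -alpha**2/2 + (alpha - 1)/r for psi = exp(-alpha*r)."""
    return -0.5 * alpha * alpha + (alpha - 1.0) / r

def vmc_hydrogen(alpha=1.0, steps=20000, step_size=0.5, seed=1):
    """Metropolis sampling of |psi|**2, accumulating the mean local energy."""
    rng = random.Random(seed)
    pos = [0.5, 0.5, 0.5]
    r = math.sqrt(sum(x * x for x in pos))
    total = 0.0
    for _ in range(steps):
        trial = [x + rng.uniform(-step_size, step_size) for x in pos]
        r_new = math.sqrt(sum(x * x for x in trial))
        # accept with probability |psi(new)|^2/|psi(old)|^2 = exp(-2*alpha*(r_new - r))
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r)):
            pos, r = trial, r_new
        total += local_energy(r, alpha)
    return total / steps

# At alpha = 1 the trial function is exact, so E_L = -0.5 hartree at every step.
print(vmc_hydrogen(alpha=1.0))  # -0.5
```

For alpha away from 1 the local energy fluctuates and its average lies above the exact ground state, which is the variational principle the paper's optimization relies on.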
Monte Carlo tests of Orbital-Free Density Functional Theory
D. I. Palade
2014-12-12T23:59:59.000Z
The relationship between the exact kinetic energy density of a quantum system in the framework of Density Functional Theory and the semiclassical functional expression for the same quantity is investigated. The analysis is performed with Monte Carlo simulations of the Kohn-Sham potentials. We find that the semiclassical form represents the statistical expectation value of its exact quantum counterpart. Based on the numerical results, we propose an empirical correction to the existing functional and an associated method to improve the Orbital-Free results.
Adaptively Learning an Importance Function Using Transport Constrained Monte Carlo
Booth, T.E.
1998-06-22T23:59:59.000Z
It is well known that a Monte Carlo estimate can be obtained with zero-variance if an exact importance function for the estimate is known. There are many ways that one might iteratively seek to obtain an ever more exact importance function. This paper describes a method that has obtained ever more exact importance functions that empirically produce an error that is dropping exponentially with computer time. The method described herein constrains the importance function to satisfy the (adjoint) Boltzmann transport equation. This constraint is provided by using the known form of the solution, usually referred to as the Case eigenfunction solution.
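The zero-variance property stated in this abstract can be demonstrated on a toy integral: when the sampling density is exactly proportional to the integrand, every history scores the same value and the variance vanishes. A minimal Python sketch (the integrand and densities are illustrative; the paper's setting is the adjoint Boltzmann transport equation):

```python
import random

def mc_estimate(f, pdf, sampler, n=20000, seed=0):
    """Monte Carlo estimate of the integral of f using samples from density pdf."""
    rng = random.Random(seed)
    return sum(f(x) / pdf(x) for x in (sampler(rng) for _ in range(n))) / n

# Toy integrand on (0, 1]: f(x) = 3x^2, exact integral = 1.
f = lambda x: 3.0 * x * x

# Analog sampling (uniform density): unbiased, but with nonzero variance.
analog = mc_estimate(f, lambda x: 1.0, lambda rng: 1.0 - rng.random())

# Exact importance function: density proportional to f, i.e. pdf(x) = 3x^2,
# sampled by inverse CDF x = u**(1/3).  Every score is f/pdf = 1 exactly.
ideal = mc_estimate(f, lambda x: 3.0 * x * x,
                    lambda rng: (1.0 - rng.random()) ** (1.0 / 3.0))
print(ideal)  # 1.0 -- zero variance: each history scores the integral itself
```

In transport problems the exact importance function is the adjoint flux, which is as hard to obtain as the answer itself; hence the iterative learning scheme described in the abstract.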
Bounded limit for the Monte Carlo point-flux-estimator
Grimesey, R.A.
1981-01-01T23:59:59.000Z
In a Monte Carlo random walk the kernel K(R,E) is used as an expected-value estimator at every collision for the collided flux phi{sub c}(r,E) at the detector point. A limiting value for the kernel is derived from a diffusion approximation for the probability current at a radius R{sub 1} from the detector point. The variance of the collided flux at the detector point is thus bounded using this asymptotic form for K(R,E). The bounded point flux estimator is derived. (WHK)
Monte Carlo beam capture and charge breeding simulation
Kim, J.S.; Liu, C.; Edgell, D.H.; Pardo, R. [FAR-TECH, Inc., 10350 Science Center Drive, San Diego, California 92121 (United States); FAR-TECH, Inc., 10350 Science Center Drive, San Diego, California 92121 (United States) and University of Rochester, Rochester, New York (United States); Argonne National Laboratory, Argonne, Illinois (United States)
2006-03-15T23:59:59.000Z
A full six-dimensional (6D) phase space Monte Carlo beam capture and charge-breeding simulation code examines the beam capture processes of singly charged ion beams injected into an electron cyclotron resonance (ECR) charge breeder, from entry to exit. The code traces injected beam ions in an ECR ion source (ECRIS) plasma, including Coulomb collisions, ionization, and charge exchange. The background ECRIS plasma is modeled within the current framework of the generalized ECR ion source model. A simple sample case of an oxygen background plasma with an injected Ar{sup 1+} ion beam produces lower charge breeding efficiencies than experimentally obtained. Possible reasons for the discrepancies are discussed.
Burnup calculation methodology in the serpent 2 Monte Carlo code
Leppaenen, J. [VTT Technical Research Centre of Finland, P.O.Box 1000, FI-02044 VTT (Finland); Isotalo, A. [Aalto Univ., Dept. of Applied Physics, P.O.Box 14100, FI-00076 AALTO (Finland)
2012-07-01T23:59:59.000Z
This paper presents two topics related to the burnup calculation capabilities in the Serpent 2 Monte Carlo code: advanced time-integration methods and improved memory management, accomplished by the use of different optimization modes. The development of the introduced methods is an important part of re-writing the Serpent source code, carried out for the purpose of extending the burnup calculation capabilities from 2D assembly-level calculations to large 3D reactor-scale problems. The progress is demonstrated by repeating a PWR test case, originally carried out in 2009 for the validation of the newly-implemented burnup calculation routines in Serpent 1. (authors)
Electron scattering in helium for Monte Carlo simulations
Khrabrov, Alexander V.; Kaganovich, Igor D. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)
2012-09-15T23:59:59.000Z
An analytical approximation for differential cross-section of electron scattering on helium atoms is introduced. It is intended for Monte Carlo simulations, which, instead of angular distributions based on experimental data (or on first-principle calculations), usually rely on approximations that are accurate yet numerically efficient. The approximation is based on the screened-Coulomb differential cross-section with energy-dependent screening. For helium, a two-pole approximation of the screening parameter is found to be highly accurate over a wide range of energies.
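The screened-Coulomb angular distribution mentioned here can be sampled by analytic inversion of its cumulative distribution. The paper's refinement, the energy dependence of the screening parameter, is omitted in this illustrative sketch with a fixed screening parameter eta:

```python
import random

def sample_cos_theta(eta, u):
    """Inverse-CDF sample of mu = cos(theta) from the screened-Coulomb
    angular distribution p(mu) proportional to (1 + 2*eta - mu)**-2,
    with u uniform on [0, 1] and eta the screening parameter."""
    return 1.0 + 2.0 * eta - 2.0 * eta * (1.0 + eta) / (u + eta)

# Endpoints map correctly: u = 0 gives mu = -1, u = 1 gives mu = +1.
print(sample_cos_theta(0.5, 0.0), sample_cos_theta(0.5, 1.0))  # -1.0 1.0

# A small screening parameter gives a strongly forward-peaked distribution.
rng = random.Random(0)
mean_mu = sum(sample_cos_theta(0.05, rng.random()) for _ in range(50000)) / 50000
print(mean_mu > 0.5)  # True
```

Making eta a function of the electron energy, as the paper does for helium, only changes the parameter passed in; the sampling formula is unchanged.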
Quantitative Monte Carlo-based holmium-166 SPECT reconstruction
Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de [Department of Radiology and Nuclear Medicine, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands); Viergever, Max A. [Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands)]
2013-11-15T23:59:59.000Z
Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 ({sup 166}Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative {sup 166}Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum.Methods: A fast Monte Carlo (MC) simulator was developed for simulation of {sup 166}Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full {sup 166}Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (A{sup est}) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six {sup 166}Ho RE patients.Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC). 
ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80% (SPECT-ppMC+DSW) to 76%–103% (SPECT-fMC). Furthermore, SPECT-fMC recovered whole-body activities were most accurate (A{sup est} = 1.06 × A − 5.90 MBq, R{sup 2} = 0.97) and SPECT-fMC tumor absorbed doses were significantly higher than with SPECT-DSW (p = 0.031) and SPECT-ppMC+DSW (p = 0.031).Conclusions: The quantitative accuracy of {sup 166}Ho SPECT is improved by Monte Carlo-based modeling of the image degrading factors. Consequently, the proposed reconstruction method enables accurate estimation of the radiation absorbed dose in clinical practice.
Global neutrino parameter estimation using Markov Chain Monte Carlo
Steen Hannestad
2007-10-10T23:59:59.000Z
We present a Markov Chain Monte Carlo global analysis of neutrino parameters using both cosmological and experimental data. Results are presented for the combination of all presently available data from oscillation experiments, cosmology, and neutrinoless double beta decay. In addition we explicitly study the interplay between cosmological, tritium decay and neutrinoless double beta decay data in determining the neutrino mass parameters. We furthermore discuss how the inference of non-neutrino cosmological parameters can benefit from future neutrino mass experiments such as the KATRIN tritium decay experiment or neutrinoless double beta decay experiments.
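At its core, the Markov Chain Monte Carlo machinery behind such global fits is the Metropolis algorithm: propose a step in parameter space and accept it with probability min(1, L_new/L_old). A one-parameter sketch (the Gaussian likelihood below is a stand-in, not the combined neutrino likelihood of the paper):

```python
import math
import random

def metropolis(log_like, x0, step, n, seed=0):
    """Random-walk Metropolis sampler for a one-parameter posterior."""
    rng = random.Random(seed)
    x, ll = x0, log_like(x0)
    chain = []
    for _ in range(n):
        x_new = x + rng.gauss(0.0, step)
        ll_new = log_like(x_new)
        # accept with probability min(1, L_new/L_old)
        if math.log(1.0 - rng.random()) < ll_new - ll:
            x, ll = x_new, ll_new
        chain.append(x)
    return chain

# Stand-in likelihood: a Gaussian "measurement" 0.3 +/- 0.1 of one parameter.
log_like = lambda m: -0.5 * ((m - 0.3) / 0.1) ** 2
chain = metropolis(log_like, x0=0.0, step=0.2, n=50000)
burn_in = chain[5000:]
mean = sum(burn_in) / len(burn_in)
print(mean)  # should be close to the maximum-likelihood value 0.3
```

A global analysis simply replaces the scalar parameter with the full vector of neutrino and cosmological parameters, and the toy likelihood with the product of experimental likelihoods.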
Computational radiology and imaging with the MCNP Monte Carlo code
Estes, G.P.; Taylor, W.M.
1995-05-01T23:59:59.000Z
MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
Continuous-Estimator Representation for Monte Carlo Criticality Diagnostics
Kiedrowski, Brian C. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory
2012-06-18T23:59:59.000Z
An alternate means of computing diagnostics for Monte Carlo criticality calculations is proposed. Overlapping spherical regions or estimators are placed covering the fissile material with a minimum center-to-center separation of the 'fission distance', which is defined herein, and a radius that is some multiple thereof. Fission neutron production is recorded based upon a weighted average of proximities to centers for all the spherical estimators. These scores are used to compute the Shannon entropy, and shown to reproduce the value, to within an additive constant, determined from a well-placed mesh by a user. The spherical estimators are also used to assess statistical coverage.
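The Shannon entropy used here as a convergence diagnostic is computed from the fractions p_i of fission-source sites scored in each region, H = -sum(p_i * log2(p_i)), whether the regions are mesh cells or the paper's spherical estimators. A minimal sketch with illustrative scores:

```python
import math

def shannon_entropy(scores):
    """H = -sum(p_i * log2(p_i)) over regions with a nonzero score."""
    total = sum(scores)
    return -sum((s / total) * math.log2(s / total) for s in scores if s > 0)

# A source spread uniformly over 8 regions maximizes H at log2(8) = 3 bits;
# a source concentrated in a single region has zero entropy.
print(shannon_entropy([1, 1, 1, 1, 1, 1, 1, 1]))  # 3.0
print(shannon_entropy([8, 0, 0, 0, 0, 0, 0, 0]))  # -0.0 (i.e. zero)
```

In a criticality calculation this quantity is tracked cycle by cycle; a plateau in H signals that the fission source distribution has converged.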
Thermoelectric transport perpendicular to thin-film heterostructures calculated using the Monte Carlo technique
The Monte Carlo technique is used to calculate electrical as well as thermoelectric transport properties ... ballistic thermionic transport and fully diffusive thermoelectric transport is also described. DOI: 10
Four-quark energies in SU(2) lattice Monte Carlo using a tetrahedral geometry
A. M. Green; J. Lukkarinen; P. Pennanen; C. Michael; S. Furui
1994-12-05T23:59:59.000Z
This contribution -- a continuation of earlier work -- reports on recent developments in the calculation and understanding of 4-quark energies generated using lattice Monte Carlo techniques.
Monte Carlo simulation of quantum Zeno effect in the brain
Danko Georgiev
2014-12-11T23:59:59.000Z
Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
Package for the Interactive Analysis of Line Emission: Markov-Chain and Monte Carlo Methods
... methods in the Package for Interactive Analysis of Line Emission (PINTofALE), which is a collection ... to determine errors in spectral line parameters, and use Markov-Chain Monte Carlo methods to construct ... using a known DEM. Monte Carlo and MCMC methods have attained increasing popularity in a diverse ...
On Filtering the Noise from the Random Parameters in Monte Carlo Rendering
Sen, Pradeep
PRADEEP SEN and SOHEIL DARABI, UNM Advanced Graphics Lab. Monte Carlo (MC) rendering systems can produce spectacular images from a small number of input samples. To do this, we treat the rendering system as a black box ...
Path Integral Monte Carlo and Density Functional Molecular Dynamics Simulations of Hot, Dense Helium
Militzer, Burkhard
Path integral Monte Carlo (PIMC) and density functional molecular dynamics (DFT-MD) are applied to study hot, dense helium ... excitation mechanisms that determine their behavior at high temperature. The helium atom has two ionization ...
Hybrid Probabilistic RoadMap -Monte Carlo Motion Planning for Closed Chain Systems with
Han, Li
In this paper we propose a hybrid Probabilistic RoadMap - Monte Carlo (PRM-MC) motion planner ... connect a large number of robot configurations in order to build a roadmap that reflects the properties ...
An Energy Localization Principle and its Application to Fast Kinetic Monte Carlo Simulation of Heteroepitaxial Growth
Schulze, Tim
Simulation of heteroepitaxial growth using kinetic Monte Carlo (KMC) is often based on rates determined by differences in elastic energy between two configurations ...
Kemner, Ken
Tuning Green's Function Monte Carlo for Mira. Steven C. Pieper, Physics Division, Argonne National Laboratory. Partners in crime: Ralph Butler (Middle Tennessee State), Joseph Carlson (Los Alamos), Stefano ... for comparisons of models to data. Quantum Monte Carlo has made much progress for A ≤ 12. Nuclei go up to A = 238.
A new quasi-Monte Carlo technique based on nonnegative least squares and approximate Fekete points
De Marchi, Stefano
Claudia Bittante, Stefano De Marchi, Alvise Sommariva (University of Padova, Department of ...). ... the quasi-Monte Carlo method. The method, simple in its formulation, becomes computationally inefficient ...
BAYESIAN INFERENCE FOR MODELS OF TRANSCRIPTIONAL REGULATION USING MARKOV CHAIN MONTE CARLO SAMPLING
Opper, Manfred
In this contribution we present a Markov chain Monte Carlo (MCMC) sampler which infers the TF activity based on a model ... Transcription of genes is controlled by proteins which can bind to particular base-sequences of DNA ...
Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data
Zourari, K.; Peppa, V.; Papagiannis, P., E-mail: ppapagi@phys.uoa.gr [Medical Physics Laboratory, Medical School, University of Athens, 75 Mikras Asias, 11527 Athens (Greece); Ballester, Facundo [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain)]; Siebert, Frank-André [Clinic of Radiotherapy, University Hospital of Schleswig-Holstein, Campus Kiel 24105 (Germany)]
2014-04-15T23:59:59.000Z
Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20–1090 keV, through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thickness of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20–1090 keV, in 10 keV intervals). The three parameter empirical model introduced by Archer et al. [“Diagnostic x-ray shielding design based on an empirical model of photon attenuation,” Health Phys. 44, 507–517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. [“Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities,” Med. Phys. 34, 1398–1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. A good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed. 
Conclusions: The data of this work allow for the accurate calculation of structural shielding thickness, taking into account the spectral variation with shield thickness, and broad beam conditions, in a realistic geometry. The simplicity of calculations also obviates the need for the use of crude transmission data estimates such as the half and tenth value layer indices. Although this study was primarily designed for brachytherapy, results might also be useful for radiology and nuclear medicine facility design, provided broad beam conditions apply.
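The Archer et al. three-parameter model referenced above describes broad-beam transmission through a thickness x as B(x) = ((1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha)^(-1/gamma), and the superposition step weights monoenergetic transmission curves by the source spectrum. A sketch of that superposition (the two-line spectrum and the alpha, beta, gamma values below are illustrative, not fitted data):

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Archer three-parameter broad-beam transmission model:
    B(x) = ((1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha) ** (-1/gamma)."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

def spectrum_transmission(x, spectrum, params):
    """Superpose monoenergetic transmission curves, weighted by the spectrum."""
    total = sum(w for _, w in spectrum)
    return sum((w / total) * archer_transmission(x, *params[e]) for e, w in spectrum)

# Illustrative two-line spectrum (energy in keV, relative intensity) and
# made-up (alpha, beta, gamma) parameters per energy -- not fitted data.
spectrum = [(100, 0.7), (300, 0.3)]
params = {100: (2.0, 5.0, 1.0), 300: (0.5, 1.0, 1.0)}

print(round(spectrum_transmission(0.0, spectrum, params), 9))  # 1.0 at zero thickness
```

Because the harder spectral component penetrates further, the superposed curve is not a single exponential, which is exactly the spectral-variation effect the paper accounts for.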
SKIRT: the design of a suite of input models for Monte Carlo radiative transfer simulations
Baes, Maarten
2015-01-01T23:59:59.000Z
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can...
Koh, Wonshill
2013-02-22T23:59:59.000Z
The light propagation in highly scattering turbid media composed of particles with different size distributions is studied using a Monte Carlo simulation model implemented in Standard C. The Monte Carlo method has been widely utilized to study...
Monte Carlo Simulation Tool Installation and Operation Guide
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02T23:59:59.000Z
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document, and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
Atomistic Kinetic Monte Carlo Simulations of Polycrystalline Copper Electrodeposition
Treeratanaphitak, Tanyakarn; Abukhdeir, Nasser Mohieddin
2014-01-01T23:59:59.000Z
A high-fidelity kinetic Monte Carlo (KMC) simulation method (T. Treeratanaphitak, M. Pritzker, N. M. Abukhdeir, Electrochim. Acta 121 (2014) 407--414) using the semi-empirical multi-body embedded-atom method (EAM) potential has been extended to model polycrystalline metal electrodeposition. The presented KMC-EAM method enables true three-dimensional atomistic simulations of electrodeposition over experimentally relevant timescales. Simulations using KMC-EAM are performed over a range of overpotentials to predict the effect on deposit texture evolution. Results show strong agreement with past experimental results both with respect to deposition rates on various copper surfaces and roughness-time power law behaviour. It is found that roughness scales with time $\\propto t^\\beta$ where $\\beta=0.62 \\pm 0.12$, which is in good agreement with past experimental results. Furthermore, the simulations provide insights into sub-surface deposit morphologies which are not directly accessible from experimental measurements.
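The event selection at the heart of any kinetic Monte Carlo simulation is the rejection-free (n-fold way/Gillespie) step: pick an event with probability proportional to its rate and advance the clock by an exponentially distributed increment. A generic sketch (the rates are placeholders, not the EAM-derived barriers of the paper):

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free KMC step: choose event i with probability
    rate_i / total, advance time by dt = -ln(u) / total."""
    total = sum(rates)
    threshold = rng.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if threshold < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total
    return i, dt

# Illustrative event rates (e.g. terrace hop, edge attachment, detachment).
rates = [10.0, 1.0, 0.1]
rng = random.Random(42)
counts, t = [0, 0, 0], 0.0
for _ in range(20000):
    event, dt = kmc_step(rates, rng)
    counts[event] += 1
    t += dt
print(counts[0] > counts[1] > counts[2])  # True: selection follows the rates
```

Because the time increment is tied to the total rate rather than a fixed step, KMC reaches the experimentally relevant timescales mentioned in the abstract that molecular dynamics cannot.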
Peelle's pertinent puzzle using the Monte Carlo technique
Kawano, Toshihiko [Los Alamos National Laboratory; Talou, Patrick [Los Alamos National Laboratory; Burr, Thomas [Los Alamos National Laboratory; Pan, Feng [Los Alamos National Laboratory
2009-01-01T23:59:59.000Z
We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form, in order to assess the impact of the distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 {+-} 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct, if the common error is additive and proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
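The puzzle itself fits in a few lines. In the usual PPP illustration, two measurements of the same quantity, 1.5 and 1.0, each carry a 10% independent error plus a fully correlated 20% error; when the errors are taken proportional to the measured values, generalized least squares returns the counter-intuitive 0.88 quoted above. A sketch of that computation (the numbers are the standard textbook PPP values, assumed here rather than taken from the paper):

```python
def gls_mean(y, cov):
    """Generalized least-squares estimate of a common mean for two measurements:
    mu = (1^T V^-1 y) / (1^T V^-1 1), written out for the 2x2 case."""
    (a, b), (_, d) = cov
    # the determinant of V cancels between numerator and denominator
    num = (d - b) * y[0] + (a - b) * y[1]
    den = a + d - 2.0 * b
    return num / den

y = [1.5, 1.0]
# 10% independent error plus a fully correlated 20% error, both taken
# relative to the *measured* values (the additive interpretation).
cov = [[0.15**2 + 0.30**2, 0.30 * 0.20],
       [0.30 * 0.20,       0.10**2 + 0.20**2]]
print(round(gls_mean(y, cov), 2))  # 0.88
```

The answer lies below both measurements precisely because the strong positive correlation makes a common downward shift the most probable explanation, which is the point the abstract makes about the additive component being negative.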
The neutron instrument Monte Carlo library MCLIB: Recent developments
Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.; Thelliez, T.G.
1998-12-31T23:59:59.000Z
A brief review is given of the developments since the ICANS-XIII meeting made in the neutron instrument design codes using the Monte Carlo library MCLIB. Much of the effort has been to assure that the library and the executing code MC_RUN connect efficiently with the World Wide Web application MC-WEB as part of the Los Alamos Neutron Instrument Simulation Package (NISP). Since one of the most important features of MCLIB is its open structure and capability to incorporate any possible neutron transport or scattering algorithm, this document describes the current procedure that would be used by an outside user to add a feature to MCLIB. Details of the calling sequence of the core subroutine OPERATE are discussed, and questions of style are considered and additional guidelines given. Suggestions for standardization are solicited, as well as code for new algorithms.
Hybrid Monte Carlo simulation on the graphene hexagonal lattice
Richard Brower; Claudio Rebbi; David Schaich
2012-04-24T23:59:59.000Z
One of the many remarkable properties of graphene is that in the low-energy limit the dynamics of its electrons can be effectively described by the massless Dirac equation. This has prompted investigations of graphene based on lattice simulations of a system of 2-dimensional fermions on a square staggered lattice. We demonstrate here how to construct the path integral for graphene working directly on the graphene hexagonal lattice. For the nearest-neighbor tight-binding model with a long-range Coulomb interaction between the electrons, this leads to a hybrid Monte Carlo algorithm with no sign problem. The only approximation is the discretization of the Euclidean time, so as we extrapolate to the time continuum limit, the exact tight-binding solution may be found numerically to arbitrary precision on a finite hexagonal lattice. The potential of this approach is tested on a single hexagonal cell.
RMC - A Monte Carlo code for reactor physics analysis
Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, A.; Yu, J.; Sun, J.; Fan, X.; Yu, G. [Department of Engineering Physics, Tsinghua University, Liuqing Building, Beijing, 100084 (China)
2013-07-01T23:59:59.000Z
A new Monte Carlo neutron transport code, RMC, has been developed by the Department of Engineering Physics, Tsinghua University, Beijing, as a tool for reactor physics analysis on high-performance computing platforms. To meet the requirements of reactor analysis, RMC now provides criticality calculation, fixed-source calculation, burnup calculation, and kinetics simulation. Techniques for geometry treatment, a new burnup algorithm, source convergence acceleration, massive tallies, parallel calculation, and temperature-dependent cross-section processing have been developed and implemented in RMC to improve efficiency. Validation results for criticality calculation, burnup calculation, source convergence acceleration, tally performance, and parallel performance presented in this paper demonstrate the capability of RMC to handle reactor analysis problems with good performance. (authors)
Monte Carlo reactor calculation with substantially reduced number of cycles
Lee, M. J.; Joo, H. G. [Seoul National Univ., 599 Gwanak-ro, Gwanak-gu, Seoul, 151-744 (Korea, Republic of); Lee, D. [Ulsan National Inst. of Science and Technology, UNIST-gil 50, Eonyang-eup, Ulju-gun, Ulsan, 689-798 (Korea, Republic of); Smith, K. [Massachusetts Inst. of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139-4307 (United States)
2012-07-01T23:59:59.000Z
A new Monte Carlo (MC) eigenvalue calculation scheme that substantially reduces the number of cycles is introduced with the aid of the coarse mesh finite difference (CMFD) formulation. First, it is confirmed in terms of pin power errors that using extremely many particles, resulting in short active cycles, is beneficial even in the conventional MC scheme, although the operations wasted in inactive cycles cannot be reduced by using more particles. A CMFD-assisted MC scheme is then introduced to reduce the number of inactive cycles, and the fast convergence behavior and reduced inter-cycle effect of the CMFD-assisted MC calculation are investigated in detail. As a practical means of providing a good initial fission source distribution, an assembly-based few-group condensation and homogenization scheme is introduced, and it is shown that efficient MC eigenvalue calculations with fewer than 20 total cycles (including inactive cycles) are possible for large power reactor problems. (authors)
Velocity renormalization in graphene from lattice Monte Carlo
Joaquín E. Drut; Timo A. Lähde
2014-03-26T23:59:59.000Z
We compute the Fermi velocity of the Dirac quasiparticles in clean graphene at the charge neutrality point for strong Coulomb coupling alpha_g. We perform a Lattice Monte Carlo calculation within the low-energy Dirac theory, which includes an instantaneous, long-range Coulomb interaction. We find a renormalized Fermi velocity v_FR > v_F, where v_F = c/300. Our results are consistent with a momentum-independent v_FR which increases approximately linearly with alpha_g, although a logarithmic running with momentum cannot be excluded at present. At the predicted critical coupling alpha_gc for the semimetal-insulator transition due to excitonic pair formation, we find v_FR/v_F = 3.3, which we discuss in light of experimental findings for v_FR/v_F at the charge neutrality point in ultra-clean suspended graphene.
Quality assurance for the ALICE Monte Carlo procedure
M. Ajaz; Seforo Mohlalisi; Peter Hristov; Jean Pierre Revol
2009-04-10T23:59:59.000Z
We implement the existing macro $ALICE_ROOT/STEER/CheckESD.C, which is run after reconstruction to compute the physics efficiency, as a task that runs in a PROOF framework such as CAF. The task is implemented in a C++ class called AliAnalysisTaskCheckESD, which inherits from the AliAnalysisTaskSE base class. The function of AliAnalysisTaskCheckESD is to compute the ratio of the number of reconstructed particles to the number of particles generated by the Monte Carlo generator. The class was successfully implemented: it was used during the production for first physics and permitted the discovery of several problems (missing tracks in the MUON arm reconstruction, low efficiency in the PHOS detector, etc.). The code is committed to the SVN repository and will become a standard tool for quality assurance.
Normality of Monte Carlo criticality eigenfunction decomposition coefficients
Toth, B. E.; Martin, W. R. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States); Griesheimer, D. P. [Bechtel Bettis, Inc., P.O. Box 79, West Mifflin, PA 15122 (United States)
2013-07-01T23:59:59.000Z
A proof is presented, which shows that after a single Monte Carlo (MC) neutron transport power method iteration without normalization, the coefficients of an eigenfunction decomposition of the fission source density are normally distributed when using analog or implicit capture MC. Using a Pearson correlation coefficient test, the proof is corroborated by results from a uniform slab reactor problem, and those results also suggest that the coefficients are normally distributed with normalization. The proof and numerical test results support the application of earlier work on the convergence of eigenfunctions under stochastic operators. Knowledge of the Gaussian shape of decomposition coefficients allows researchers to determine an appropriate level of confidence in the distribution of fission sites taken from a MC simulation. This knowledge of the shape of the probability distributions of decomposition coefficients encourages the creation of new predictive convergence diagnostics. (authors)
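The central observation, that a decomposition coefficient after one unnormalized iteration is a sum of independent per-history contributions and is therefore approximately Gaussian by the central limit theorem, can be illustrated with a toy calculation. The uniform source and the sine mode below are illustrative assumptions, not the paper's slab problem:

```python
import math
import random

# Toy analogue of a decomposition coefficient: a sum over independent
# source-site samples x_i of an eigenfunction-like mode f(x) = sin(pi*x)
# on a uniform unit "slab". By the CLT such sums are nearly Gaussian.
random.seed(4)

def coefficient(histories=400):
    return sum(math.sin(math.pi * random.random()) for _ in range(histories))

coeffs = [coefficient() for _ in range(2000)]
mean = sum(coeffs) / len(coeffs)
var = sum((c - mean) ** 2 for c in coeffs) / len(coeffs)
# Sample skewness and excess kurtosis: both vanish for a Gaussian.
skew = sum((c - mean) ** 3 for c in coeffs) / (len(coeffs) * var ** 1.5)
kurt = sum((c - mean) ** 4 for c in coeffs) / (len(coeffs) * var ** 2) - 3.0
print(round(skew, 2), round(kurt, 2))  # both near 0 for a Gaussian
```

A formal test, as in the paper, would replace the moment check with a Pearson correlation coefficient test against the normal distribution.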
Single temperature for Monte Carlo optimization on complex landscapes
Tolkunov, Denis
2012-01-01T23:59:59.000Z
We propose a new strategy for Monte Carlo (MC) optimization on rugged multidimensional landscapes. The strategy is based on querying the statistical properties of the landscape in order to find the temperature at which the mean first-passage time across the current region of the landscape is minimized. Thus, in contrast to other algorithms such as simulated annealing (SA), we explicitly match the temperature schedule to the statistics of landscape irregularities. In cases where these statistics are approximately the same over the entire landscape, or where non-local moves couple distant parts of the landscape, single-temperature MC will outperform any other MC algorithm with the same move set. We also find that in strongly anisotropic Coulomb spin glass and traveling salesman problems, the only relevant statistics (which we use to assign a single MC temperature) are those of irregularities in low-energy funnels. Our results may explain why protein folding in nature is efficient at room temperature.
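The core of the strategy, fixed-temperature Metropolis sampling on a rugged landscape, can be sketched as follows. The landscape, move set, and temperature below are all illustrative choices; the paper's contribution is precisely the rule for choosing the temperature from landscape statistics, which this sketch hard-codes:

```python
import math
import random

def energy(x):
    # A rugged 1-D landscape: a shallow parabola decorated with
    # sinusoidal wells, giving many local minima.
    return 0.1 * x * x + math.sin(5.0 * x)

def metropolis(T, steps=20000, seed=1):
    """Fixed-temperature Metropolis MC; records the best energy seen.
    A single, well-chosen T lets the walker hop between local wells
    instead of freezing in the first one it finds."""
    rng = random.Random(seed)
    x = 4.0                      # start far from the global minimum
    best = energy(x)
    for _ in range(steps):
        x_new = x + rng.uniform(-0.5, 0.5)
        dE = energy(x_new) - energy(x)
        if dE <= 0.0 or rng.random() < math.exp(-dE / T):
            x = x_new
        best = min(best, energy(x))
    return best

best = metropolis(T=0.5)
print(best)  # approaches the global minimum energy (about -0.99)
```

With T too low the walker would stay trapped near the starting well; with T too high it would rarely sit in any minimum. Matching T to the barrier statistics, as the paper proposes, picks the productive middle ground.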
Strain in the mesoscale kinetic Monte Carlo model for sintering
Bjørk, R; Tikare, V; Olevsky, E; Pryds, N
2014-01-01T23:59:59.000Z
Shrinkage strains measured from microstructural simulations using the mesoscale kinetic Monte Carlo (kMC) model for solid-state sintering are discussed. This model represents the microstructure using digitized discrete sites that are either grain or pore sites. The algorithm used to simulate densification by vacancy annihilation removes an isolated pore site at a grain boundary and collapses a column of sites extending from the vacancy to the surface of the sintering compact, through the center of mass of the nearest grain. Using this algorithm, the existing published kMC models are shown to produce anisotropic strains for homogeneous powder compacts with aspect ratios different from unity. It is shown that the line direction biases shrinkage strains in proportion to the compact dimension aspect ratios. A new algorithm that corrects this bias in the strains is proposed; the direction for collapsing the column is determined by choosing a random sample face and subsequently a random point on that face as the end point for...
Monte Carlo solution of a semi-discrete transport equation
Urbatsch, T.J.; Morel, J.E.; Gulick, J.C.
1999-09-01T23:59:59.000Z
The authors present the S∞ method, a hybrid neutron transport method in which Monte Carlo particles traverse discrete space. The goal of any deterministic/stochastic hybrid method is to couple selected strengths of each approach in the hope of producing a better method. The S∞ method has the features of the lumped, linear-discontinuous (LLD) spatial discretization, yet it has no ray effects because of the continuous angular variable. The authors derive the S∞ method for the steady-state, monoenergetic transport equation in one-dimensional slab geometry with isotropic scattering and an isotropic internal source. They demonstrate the viability of the S∞ method by comparing their results favorably to analytic and deterministic results.
Quantum Monte Carlo study of inhomogeneous neutron matter
Stefano Gandolfi
2012-08-31T23:59:59.000Z
We present an ab-initio study of neutron drops. We use Quantum Monte Carlo techniques to calculate the energy of systems of up to 54 neutrons in different external potentials, and we compare the results with Skyrme forces. We also calculate the rms radii and radial densities, and we find that a readjustment of the gradient term in Skyrme functionals is needed in order to reproduce the properties of these systems given by the ab-initio calculation. By using the ab-initio results for neutron drops in closed- and open-shell configurations, we suggest how to improve Skyrme forces when dealing with systems with large isospin asymmetries, such as neutron-rich nuclei.
Monte Carlo modeling of spallation targets containing uranium and americium
Yury Malyshkin; Igor Pshenichnov; Igor Mishustin; Walter Greiner
2014-05-02T23:59:59.000Z
Neutron production and transport in spallation targets made of uranium and americium are studied with MCADS (Monte Carlo model for Accelerator Driven Systems), a Geant4-based code. Good agreement of MCADS results with experimental data on neutron- and proton-induced reactions on $^{241}$Am and $^{243}$Am nuclei allows us to use this model for simulations with extended Am targets. It is demonstrated that the MCADS model can be used to calculate the critical mass for $^{233,235}$U, $^{237}$Np, $^{239}$Pu and $^{241}$Am. Several geometry options and material compositions (U, U+Am, Am, Am$_2$O$_3$) are considered for spallation targets to be used in Accelerator Driven Systems. All considered options operate as deeply subcritical targets with a neutron multiplication factor of $k \sim 0.5$. It is found that more than 4 kg of Am can be burned in one spallation target during the first year of operation.
Enhanced physics design with hexagonal repeated structure tools using Monte Carlo methods
Carter, L L; Lan, J S; Schwarz, R A
1991-01-01T23:59:59.000Z
This report discusses proposed new missions for the Fast Flux Test Facility (FFTF) reactor which involve the use of target assemblies containing local hydrogenous moderation within this otherwise fast reactor. Parametric physics design studies with Monte Carlo methods are routinely utilized to analyze the rapidly changing neutron spectrum. An extensive utilization of the hexagonal lattice-within-lattice capabilities of the Monte Carlo Neutron Photon (MCNP) continuous-energy Monte Carlo computer code is applied here to solving such problems. Simpler examples that use the lattice capability to describe fuel pins within a "brute force" description of the hexagonal assemblies are also given.
Franke, B. C. [Sandia National Laboratories, Albuquerque, NM 87185 (United States); Prinja, A. K. [Department of Chemical and Nuclear Engineering, University of New Mexico, Albuquerque, NM 87131 (United States)
2013-07-01T23:59:59.000Z
The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)
Coupled Deterministic-Monte Carlo Transport for Radiation Portal Modeling
Smith, Leon E.; Miller, Erin A.; Wittman, Richard S.; Shaver, Mark W.
2008-01-14T23:59:59.000Z
Radiation portal monitors are being deployed, both domestically and internationally, to detect illicit movement of radiological materials concealed in cargo. Evaluation of the current and next generations of these radiation portal monitor (RPM) technologies is an ongoing process. 'Injection studies' that superimpose, computationally, the signature from threat materials onto empirical vehicle profiles collected at ports of entry, are often a component of the RPM evaluation process. However, measurement of realistic threat devices can be both expensive and time-consuming. Radiation transport methods that can predict the response of radiation detection sensors with high fidelity, and do so rapidly enough to allow the modeling of many different threat-source configurations, are a cornerstone of reliable evaluation results. Monte Carlo methods have been the primary tool of the detection community for these kinds of calculations, in no small part because they are particularly effective for calculating pulse-height spectra in gamma-ray spectrometers. However, computational times for problems with a high degree of scattering and absorption can be extremely long. Deterministic codes that discretize the transport in space, angle, and energy offer potential advantages in computational efficiency for these same kinds of problems, but the pulse-height calculations needed to predict gamma-ray spectrometer response are not readily accessible. These complementary strengths for radiation detection scenarios suggest that coupling Monte Carlo and deterministic methods could be beneficial in terms of computational efficiency. Pacific Northwest National Laboratory and its collaborators are developing a RAdiation Detection Scenario Analysis Toolbox (RADSAT) founded on this coupling approach. The deterministic core of RADSAT is Attila, a three-dimensional, tetrahedral-mesh code originally developed by Los Alamos National Laboratory, and since expanded and refined by Transpire, Inc. [1]. 
MCNP5 is used to calculate sensor pulse-height tallies. RADSAT methods, including adaptive, problem-specific energy-group creation, ray-effect mitigation strategies and the porting of deterministic angular flux to MCNP for individual particle creation are described in [2][3][4]. This paper discusses the application of RADSAT to the modeling of gamma-ray spectrometers in RPMs.
A Monte Carlo Study of Multiplicity Fluctuations in Pb-Pb Collisions at LHC Energies
Ramni Gupta
2015-01-15T23:59:59.000Z
With large volumes of data available from the LHC, it has become possible to study multiplicity distributions for the various possible behaviours of multiparticle production in relativistic heavy-ion collisions, where a system of dense and hot partons is created. In this context it is important and interesting to check how well Monte Carlo generators can describe the behaviour of multiparticle production processes. One such possible behaviour is self-similarity in particle production, which can be studied through intermittency and further through chaoticity/erraticity in heavy-ion collisions. We analyse the behaviour of the erraticity index in central Pb-Pb collisions at a centre-of-mass energy of 2.76 TeV per nucleon pair using the AMPT Monte Carlo event generator, following the recent proposal by R.C. Hwa and C.B. Yang concerning local multiplicity fluctuations as a signature of critical hadronization in heavy-ion collisions. We report the values of the erraticity index for two versions of the model with default settings and their dependence on the size of the phase-space region. The results presented here may serve as a reference sample for experimental data from heavy-ion collisions at these energies.
Monte Carlo Sampling of Negative-temperature Plasma States
John A. Krommes; Sharadini Rath
2002-07-19T23:59:59.000Z
A Monte Carlo procedure is used to generate N-particle configurations compatible with two-temperature canonical equilibria in two dimensions, with particular attention to nonlinear plasma gyrokinetics. An unusual feature of the problem is the importance of a nontrivial probability density function R0(Φ), the probability of realizing a set Φ of Fourier amplitudes associated with an ensemble of uniformly distributed, independent particles. This quantity arises because the equilibrium distribution is specified in terms of Φ, whereas the sampling procedure naturally produces particle states Γ; Φ and Γ are related via a gyrokinetic Poisson equation that is highly nonlinear in its dependence on Γ. Expansion and asymptotic methods are used to calculate R0(Φ) analytically; excellent agreement is found between the large-N asymptotic result and a direct numerical calculation. The algorithm is tested by successfully generating a variety of states of both positive and negative temperature, including ones in which either the longest- or shortest-wavelength modes are excited to relatively very large amplitudes.
Global variance reduction for Monte Carlo reactor physics calculations
Zhang, Q.; Abdel-Khalik, H. S. [Department of Nuclear Engineering, North Carolina State University, P.O. Box 7909, Raleigh, NC 27695-7909 (United States)
2013-07-01T23:59:59.000Z
Over the past few decades, hybrid Monte-Carlo-Deterministic (MC-DT) techniques have mostly focused on shielding applications, i.e., problems featuring a limited number of responses. This paper focuses on the application of a new hybrid MC-DT technique, the SUBSPACE method, to reactor analysis calculations. The SUBSPACE method is designed to overcome the lack of efficiency that hampers the application of MC methods in routine assembly-level analysis calculations, where one typically needs to execute the flux solver on the order of 10^3-10^5 times. It attains high computational efficiency for reactor analysis applications by identifying and capitalizing on the existing correlations between responses of interest. This paper places particular emphasis on using the SUBSPACE method to prepare homogenized few-group cross-section sets on the assembly level for subsequent use in full-core diffusion calculations. A BWR assembly model is employed to calculate homogenized few-group cross sections for different burnup steps. It is found that the SUBSPACE method achieves a significant speedup over the state-of-the-art FW-CADIS method. While this speedup alone is not sufficient to render the MC method competitive with the DT method, we believe this work is a major step toward leveraging the accuracy of MC calculations for assembly calculations. (authors)
Monte Carlo simulations for generic granite repository studies
Chu, Shaoping [Los Alamos National Laboratory; Lee, Joon H [SNL; Wang, Yifeng [SNL
2010-12-08T23:59:59.000Z
In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of near-field and far-field submodels for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated-systems (GoldSim) near-field model. Integrated Monte Carlo runs with the combined near- and far-field transport models were performed, and parameter sensitivities were evaluated for the combined system. In addition, a subset of radionuclides potentially important to repository performance was identified and evaluated in a series of model runs. The analyses were conducted for different waste inventory scenarios and for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for long-term disposal of high-level radioactive waste in a granite repository.
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL
2008-01-01T23:59:59.000Z
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model streamflow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
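A toy version of BMA training is easy to write down. The sketch below assumes a two-member ensemble with synthetic forecasts, a shared predictive variance, and a plain random-walk Metropolis sampler standing in for DREAM; none of these choices come from the paper:

```python
import math
import random

# Toy setup (illustrative, not the paper's data): two ensemble members
# forecast a quantity y; member 1 is sharper but biased, member 2 noisier.
random.seed(0)
truth = [random.gauss(0.0, 1.0) for _ in range(200)]
fcst = [(y + random.gauss(0.3, 0.5), y + random.gauss(-0.2, 1.0)) for y in truth]

def log_lik(w, sigma):
    """BMA log-likelihood: predictive is w*N(f1, s^2) + (1-w)*N(f2, s^2)."""
    norm = sigma * math.sqrt(2.0 * math.pi)
    ll = 0.0
    for y, (f1, f2) in zip(truth, fcst):
        p1 = math.exp(-((y - f1) ** 2) / (2.0 * sigma ** 2)) / norm
        p2 = math.exp(-((y - f2) ** 2) / (2.0 * sigma ** 2)) / norm
        ll += math.log(w * p1 + (1.0 - w) * p2 + 1e-300)
    return ll

# Random-walk Metropolis over (w, sigma): a minimal stand-in for DREAM.
w, sigma = 0.5, 1.0
cur = log_lik(w, sigma)
best = (w, sigma, cur)
for _ in range(3000):
    w2 = min(max(w + random.gauss(0.0, 0.05), 1e-3), 1.0 - 1e-3)
    s2 = max(sigma + random.gauss(0.0, 0.05), 1e-2)
    prop = log_lik(w2, s2)
    if prop - cur > math.log(random.random() + 1e-300):
        w, sigma, cur = w2, s2, prop
        if cur > best[2]:
            best = (w, sigma, cur)
print(best[0], best[1])  # weight on member 1 should dominate
```

EM would maximize the same likelihood via responsibilities; the MCMC route additionally yields the posterior spread of the weights and variance, which is the advantage the abstract highlights for DREAM.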
Monte Carlo Simulations of Cosmic Rays Hadronic Interactions
Aguayo Navarrete, Estanislao; Orrell, John L.; Kouzes, Richard T.
2011-04-01T23:59:59.000Z
This document describes the construction and results of the MaCoR software tool, developed to model the hadronic interactions of cosmic rays with different geometries of materials. The ubiquity of cosmic radiation in the environment results in the activation of stable isotopes, referred to as cosmogenic activation. The objective is to use this application in conjunction with a model of the MAJORANA DEMONSTRATOR components, from extraction to deployment, to evaluate the cosmogenic activation of such components before and after deployment. Cosmic ray showers include several types of particles with a wide range of energies (MeV to GeV). It is infeasible to compute an exact result with a deterministic algorithm for this problem; Monte Carlo simulation is a more suitable approach to modeling cosmic ray hadronic interactions. To validate the results generated by the application, a test comparing experimental muon flux measurements with those predicted by the application is presented. The experimental and simulated results deviate by 3%.
A review of Monte Carlo simulations of polymers with PERM
Hsiao-Ping Hsu; Peter Grassberger
2011-07-06T23:59:59.000Z
In this review, we describe applications of the pruned-enriched Rosenbluth method (PERM), a sequential Monte Carlo algorithm with resampling, to various problems in polymer physics. PERM produces samples according to any prescribed weight distribution by growing configurations step by step with controlled bias and correcting "bad" configurations by "population control". The latter is implemented, in contrast to other population-based algorithms such as genetic algorithms, by depth-first recursion, which avoids storing all members of the population in computer memory at the same time. The problems we discuss all concern single polymers (with one exception), but under various conditions: homopolymers in good solvents and at the $\Theta$ point, semi-stiff polymers, polymers in confining geometries, stretched polymers undergoing a forced globule-linear transition, star polymers, bottle brushes, lattice animals as a model for randomly branched polymers, DNA melting, and finally, as the only system at low temperatures, lattice heteropolymers as simple models for protein folding. PERM is for some of these problems the method of choice, but it can also fail. We discuss how to recognize when a result is reliable, and we also discuss some types of bias that can be crucial in guiding the growth in the right directions.
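The growth, pruning, and enrichment steps can be sketched for 2-D self-avoiding walks as follows. This is an illustrative toy with fixed thresholds of 2x and 0.5x the running mean weight at each length; production PERM uses adaptive thresholds together with the depth-first recursion described in the review:

```python
import random

# Minimal PERM sketch for 2-D self-avoiding walks (illustrative only).
random.seed(2)
N = 20                      # target number of occupied lattice sites
samples = []                # Rosenbluth weights of completed chains
w_sum = [0.0] * (N + 1)     # running weight statistics per chain length
w_cnt = [0] * (N + 1)

def grow(walk, weight):
    n = len(walk)
    w_sum[n] += weight
    w_cnt[n] += 1
    if n == N:
        samples.append(weight)
        return
    avg = w_sum[n] / w_cnt[n]        # mean weight seen at this length
    if weight > 2.0 * avg:           # enrich: clone, halving the weight
        branches, weight = 2, weight / 2.0
    elif weight < 0.5 * avg:         # prune: kill half, double survivors
        if random.random() < 0.5:
            return
        branches, weight = 1, 2.0 * weight
    else:
        branches = 1
    x, y = walk[-1]
    free = [s for s in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if s not in walk]
    if not free:                     # dead end: chain is discarded
        return
    for _ in range(branches):
        grow(walk + [random.choice(free)], weight * len(free))

for _ in range(300):
    grow([(0, 0)], 1.0)
print(len(samples))  # number of completed length-N chains
```

Because clones halve their weight and pruning survivors double theirs, the population-control moves leave all weighted averages unbiased, which is the key property of PERM.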
Hyperon Puzzle: Hints from Quantum Monte Carlo Calculations
Diego Lonardoni; Alessandro Lovato; Stefano Gandolfi; Francesco Pederiva
2015-02-27T23:59:59.000Z
The onset of hyperons in the core of neutron stars and the consequent softening of the equation of state have been questioned for a long time. Controversial theoretical predictions and recent astrophysical observations of neutron stars are the grounds for the so-called hyperon puzzle. We calculate the equation of state and the neutron star mass-radius relation of an infinite system of neutrons and $\Lambda$ particles by using the auxiliary field diffusion Monte Carlo algorithm. We find that the three-body hyperon-nucleon interaction plays a fundamental role in the softening of the equation of state and in the consequent reduction of the predicted maximum mass. We have considered two different models of the three-body force that successfully describe the binding energy of medium-mass hypernuclei. Our results indicate that they give dramatically different predictions for the maximum mass of neutron stars, not necessarily incompatible with the recent observation of very massive neutron stars. We conclude that stronger constraints on the hyperon-neutron force are necessary in order to properly assess the role of hyperons in neutron stars.
Monte Carlo simulations of lattice models for single polymer systems
Hsu, Hsiao-Ping, E-mail: hsu@mpip-mainz.mpg.de [Max-Planck-Institut für Polymerforschung, Ackermannweg 10, D-55128 Mainz (Germany)
2014-10-28T23:59:59.000Z
Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to chain lengths of N ≈ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √10, we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains with stiffness controlled by a bending potential. The persistence lengths of the chains, extracted from the orientational correlations, are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed, and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
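Two of the reference models mentioned above, the ordinary random walk (RW) and the non-reversible random walk (NRRW) without excluded volume, differ in a simple, exactly known way: on the simple cubic lattice the mean-square end-to-end distance satisfies <R^2> = N for the RW, while for the NRRW successive steps have correlation c = 1/5, giving <R^2>/N → (1+c)/(1-c) = 1.5. A short simulation (an illustrative check, not taken from the paper) reproduces both limits:

```python
import random

# Mean-square end-to-end distance of RW and NRRW chains on the simple
# cubic lattice. The NRRW forbids immediate reversal of the last step.
random.seed(3)
STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
OPPOSITE = {s: (-s[0], -s[1], -s[2]) for s in STEPS}

def end_to_end_sq(n, non_reversible):
    x = y = z = 0
    last = None
    for _ in range(n):
        if non_reversible and last is not None:
            choices = [s for s in STEPS if s != OPPOSITE[last]]
        else:
            choices = STEPS
        s = random.choice(choices)
        x, y, z = x + s[0], y + s[1], z + s[2]
        last = s
    return x * x + y * y + z * z

n, trials = 100, 2000
r2_rw = sum(end_to_end_sq(n, False) for _ in range(trials)) / trials
r2_nr = sum(end_to_end_sq(n, True) for _ in range(trials)) / trials
print(r2_rw / n, r2_nr / n)  # near 1.0 and near 1.5 respectively
```

The self-avoiding walks studied in the paper behave differently again, with <R^2> ~ N^(2ν) and ν ≈ 0.588 in three dimensions.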
APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula
Hwang, M.; Bae, S.; Chung, B. D. [Korea Atomic Energy Research Inst., 150 Dukjin-dong, Yuseong-gu, Daejeon (Korea, Republic of)
2012-07-01T23:59:59.000Z
An analysis of the uncertainty quantification for the PWR LBLOCA by Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined from the PIRT results of the BEMUSE project. The Monte Carlo method shows that the 95th-percentile PCT value can be obtained reliably with a 95% confidence level using Wilks' formula. However, the extra margin given by Wilks' formula over the true 95th-percentile PCT from the Monte Carlo method was rather large: even using the 3rd-order formula, the calculated value is nearly 100 K above the true value. It is shown that, with ever-increasing computational capability, the Monte Carlo method is feasible for nuclear power plant safety analysis within a realistic time frame. (authors)
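Wilks' formula itself is a one-line binomial computation. The sketch below finds the smallest number of code runs N such that the k-th largest result bounds the 95th percentile with 95% one-sided confidence, reproducing the standard sample sizes 59, 93, and 124 for orders 1 to 3:

```python
from math import comb

def wilks_n(order, quantile=0.95, confidence=0.95):
    """Smallest N for which the `order`-th largest of N runs bounds the
    `quantile` with the given one-sided confidence (Wilks' formula)."""
    n = order
    while True:
        # Probability that fewer than `order` samples exceed the true
        # quantile, i.e. that the tolerance bound fails.
        p_fail = sum(comb(n, j) * (1 - quantile) ** j * quantile ** (n - j)
                     for j in range(order))
        if 1.0 - p_fail >= confidence:
            return n
        n += 1

print([wilks_n(k) for k in (1, 2, 3)])  # -> [59, 93, 124]
```

The paper's point is that these small tolerance-based samples carry a sizable conservatism compared with directly estimating the 95th percentile from a large Monte Carlo sample.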
Maruyama, Shigeo
Adsorption characteristics of alkanes onto carbon nanotube bundles: Grand Canonical Monte Carlo alkane adsorption and separation. Rather than remaining isolated, however, nanotubes tend to bundle together, and the adsorption properties of such bundles and subsequent potential for practical alkane
Annealing contour Monte Carlo algorithm for structure optimization in an off-lattice protein model
Liang, Faming
. For example, the HP model [1] treats each amino acid as a point particle and restricts the model to fold of the energy landscape, so it is an excellent tool for Monte Carlo optimization. The ACMC algorithm is an accel
ATLAS Monte Carlo production Run-1 experience and readiness for Run-2 challenges
Chapman, John Derek; The ATLAS collaboration; Garcia Navarro, Jose Enrique; Gwenlan, Claire; Mehlhase, Sascha; Tsulaia, Vakhtang; Vaniachine, Alexandre; Zhong, Jiahang; Pacheco Pages, Andres
2015-01-01T23:59:59.000Z
In this presentation we will review the ATLAS Monte Carlo production setup, including the different production steps involved in full and fast detector simulation. A report on the Monte Carlo production campaigns during Run-1 and Long Shutdown 1 (LS1), and on the status of production for Run-2, will be presented. The presentation will include details of various performance aspects, and important improvements in the workflow and software will be highlighted. Besides standard Monte Carlo production for data analyses at 7 and 8 TeV, the production accommodates various specialised activities. These range from extended Monte Carlo validation and Geant4 validation to pileup simulation using zero-bias data and production for various upgrade studies. The challenges of these activities will be discussed.
Physics-based Predictive Time Propagation Method for Monte Carlo Coupled Depletion Simulations
Johns, Jesse Merlin
2014-12-18T23:59:59.000Z
Monte Carlo techniques for numerical simulation have humble beginnings in the Manhattan Project. They were developed to rein in the intractable problems of nuclear implosion hydrodynamics, thermonuclear reactions, and computing neutron fluxes and core...
Efficient scene simulation for robust monte carlo localization using an RGB-D camera
Fallon, Maurice Francis
2013-05-14T23:59:59.000Z
This paper presents Kinect Monte Carlo Localization (KMCL), a new method for localization in three dimensional indoor environments using RGB-D cameras, such as the Microsoft Kinect. The approach makes use of a low fidelity ...
MARKOV CHAIN MONTE CARLO FOR AUTOMATED TRACKING OF GENEALOGY IN MICROSCOPY VIDEOS
KATHLEEN CHAMPION
…of the nuclei in the images and their genealogies. Evan Tice '09 has already developed some code that aims…
Combining Strategies for Parallel Stochastic Approximation Monte Carlo Algorithm of Big Data
Lin, Fang-Yu
2014-10-15T23:59:59.000Z
…of iterations and is prone to getting trapped in local optima. On the other hand, the Stochastic Approximation Monte Carlo (SAMC) algorithm, well developed in both theory and applications, can avoid getting trapped in local optima and produce more…
Walsh, Jonathan A. (Jonathan Alan)
2014-01-01T23:59:59.000Z
This thesis presents the development and analysis of computational methods for efficiently accessing and utilizing nuclear data in Monte Carlo neutron transport code simulations. Using the OpenMC code, profiling studies ...
Improvements and applications of the Uniform Fission Site method in Monte Carlo
Hunter, Jessica Lynn
2014-01-01T23:59:59.000Z
Monte Carlo methods for reactor analysis have been in development with the eventual goal of full-core analysis. To attain results with reasonable uncertainties, large computational resources are needed. Variance reduction ...
Monte Carlo and thermal hydraulic coupling using low-order nonlinear diffusion acceleration
Herman, Bryan R. (Bryan Robert)
2014-01-01T23:59:59.000Z
Monte Carlo (MC) methods for reactor analysis are most often employed as a benchmark tool for other transport and diffusion methods. In this work, we identify and resolve a few of the issues associated with using MC as a ...
Serdar Elhatisari; Dean Lee
2014-12-01T23:59:59.000Z
We present lattice Monte Carlo calculations of fermion-dimer scattering in the limit of zero-range interactions using the adiabatic projection method. The adiabatic projection method uses a set of initial cluster states and Euclidean time projection to give a systematically improvable description of the low-lying scattering cluster states in a finite volume. We use Lüscher's finite-volume relations to determine the s-wave, p-wave, and d-wave phase shifts. For comparison, we also compute exact lattice results using Lanczos iteration and continuum results using the Skorniakov-Ter-Martirosian equation. For our Monte Carlo calculations we use a new lattice algorithm called impurity lattice Monte Carlo. This algorithm can be viewed as a hybrid technique which incorporates elements of both worldline and auxiliary-field Monte Carlo simulations.
Protein folding and phylogenetic tree reconstruction using stochastic approximation Monte Carlo
Cheon, Sooyoung
2007-09-17T23:59:59.000Z
folding problems. The numerical results indicate that it outperforms simulated annealing and conventional Monte Carlo algorithms as a stochastic optimization algorithm. We also propose one method for the use of secondary structures in protein folding...
Washington at Seattle, University of - Department of Physics, Electroweak Interaction Research Group
Nuclear Structure and Reactions (Quantum Monte Carlo, Lanczos Methods, Density Functional Methods). Seminar: Stefano Gandolfi, "Ab… systems: nuclei and the unitary Fermi gas", Thursday, June 9, 10:00 am.
Xu, Sheng, S.M. Massachusetts Institute of Technology
2013-01-01T23:59:59.000Z
In order to use Monte Carlo methods for reactor simulations beyond benchmark activities, the traditional way of preparing and using nuclear cross sections needs to be changed, since large datasets of cross sections at many ...
Laporte, Claude Y.
Software Process Improvement 98, Monte Carlo, December 1998. "Development and Integration Issues about Software Engineering, Systems Engineering and Project Management Processes", Claude Y. Laporte. …software engineering, systems engineering, supporting processes and project management process over…
Moffitt, John Russell
1972-01-01T23:59:59.000Z
SEMIANALYTIC MONTE CARLO CALCULATION OF REFLECTED AND TRANSMITTED RADIANCE IN A PLANE PARALLEL ATMOSPHERE. A Thesis by JOHN RUSSELL MOFFITT, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of MASTER OF SCIENCE, August 1972. Major Subject: Physics. Approved as to style and content by: (Cha…
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29T23:59:59.000Z
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
Ibrahim, Ahmad M [ORNL]; Peplow, Douglas E. [ORNL]; Peterson, Joshua L [ORNL]; Grove, Robert E [ORNL]
2013-01-01T23:59:59.000Z
The rigorous 2-step (R2S) method uses three-dimensional Monte Carlo transport simulations to calculate the shutdown dose rate (SDDR) in fusion reactors. Accurate full-scale R2S calculations are impractical in fusion reactors because they require calculating space- and energy-dependent neutron fluxes everywhere inside the reactor. The use of global Monte Carlo variance reduction techniques was suggested for accelerating the neutron transport calculation of the R2S method. The prohibitive computational costs of these approaches, which increase with the problem size and amount of shielding materials, inhibit their use in accurate full-scale neutronics analyses of fusion reactors. This paper describes a novel hybrid Monte Carlo/deterministic technique that uses the Consistent Adjoint Driven Importance Sampling (CADIS) methodology but focuses on multi-step shielding calculations. The Multi-Step CADIS (MS-CADIS) method speeds up the Monte Carlo neutron calculation of the R2S method using an importance function that represents the importance of the neutrons to the final SDDR. Using a simplified example, preliminary results showed that the use of MS-CADIS enhanced the efficiency of the neutron Monte Carlo simulation of an SDDR calculation by a factor of 550 compared to standard global variance reduction techniques, and that the increase over analog Monte Carlo is more than a factor of 10,000.
Utility of Monte Carlo Modelling for Holdup Measurements.
Belian, Anthony P.; Russo, P. A. (Phyllis A.); Weier, Dennis R. (Dennis Ray),
2005-01-01T23:59:59.000Z
Non-destructive assay (NDA) measurements performed to locate and quantify holdup in the Oak Ridge K-25 enrichment cascade used neutron totals counting and low-resolution gamma-ray spectroscopy. This facility housed the gaseous diffusion process for enrichment of uranium, in the form of UF₆ gas, from ≈20% to 93%. The ²³⁵U inventory in K-25 is all holdup. These buildings have been slated for decontamination and decommissioning. The NDA measurements establish the inventory quantities and will be used to assure criticality safety and meet criteria for waste analysis and transportation. The tendency to err on the side of conservatism for the sake of criticality safety in specifying total NDA uncertainty argues, in the interests of safety and costs, for obtaining the best possible value of uncertainty at the conservative confidence level for each item of process equipment. Variable deposit distribution is a complex systematic effect (i.e., determined by multiple independent variables) on the portable NDA results for very large and bulk converters that contributes greatly to the total uncertainty for holdup in converters measured by gamma or neutron NDA methods. Because the magnitudes of complex systematic effects are difficult to estimate, computational tools are important for evaluating those that are large. Motivated by very large discrepancies between gamma and neutron measurements of high-mass converters, with gamma results tending to dominate, the Monte Carlo code MCNP has been used to determine the systematic effects of deposit distribution on gamma and neutron results for ²³⁵U holdup mass in converters.
This paper details the numerical methodology used to evaluate large systematic effects unique to each measurement type, validates the methodology by comparison with measurements, and discusses how modeling tools can supplement the calibration of instruments used for holdup measurements by providing realistic values at well-defined confidence levels for dominating systematic effects.
The ATLAS collaboration
2015-01-01T23:59:59.000Z
This note summarizes some of the latest Monte Carlo generator studies using ttbar events in ATLAS. Variations of the h_damp parameter and PDFs in the Powheg+Pythia8 setup are compared to ATLAS measurements of ttbar production. In addition, Powheg+Pythia6, Powheg+Herwig++ and Sherpa MEPS@NLO are compared to the same measurements.
Paris-Sud XI, Université de
Goal Tree Success Tree - Dynamic Master Logic Diagram and Monte Carlo Simulation for the Safety…
…produced by an earthquake and its aftershocks (the external events) on a nuclear power plant (the critical plant) embedded in the connected power and water distribution, and transportation networks which support…
Final Report: 06-LW-013, Nuclear Physics the Monte Carlo Way
Ormand, W E
2009-03-01T23:59:59.000Z
This document reports the progress and accomplishments achieved in FY2006-2007 with LDRD funding under proposal 06-LW-013, 'Nuclear Physics the Monte Carlo Way', funded through the Lab Wide LDRD competition at Lawrence Livermore National Laboratory. The project was a theoretical study exploring a novel approach to a persistent problem in Monte Carlo treatments of quantum many-body systems, one of the most important problems being addressed in theoretical physics today. The goal was to implement a solution to the notorious 'sign problem', which, if successful, would permit, for the first time, exact solutions to quantum many-body systems that cannot be addressed with other methods. Instead of traditional methods based on matrix diagonalization, this proposal focused on a Monte Carlo method. The principal difficulty with Monte Carlo methods is the so-called 'sign problem', which is endemic to Monte Carlo approaches to the quantum many-body problem and is the principal reason that they have not been completely successful in the past. Here, we outline our research on the 'shifted-contour method' applied to the Auxiliary Field Monte Carlo (AFMC) method.
A Fano cavity test for Monte Carlo proton transport algorithms
Sterpin, Edmond, E-mail: esterpin@yahoo.fr [Université catholique de Louvain, Center of Molecular Imaging, Radiotherapy and Oncology, Institut de Recherche Experimentale et Clinique, Avenue Hippocrate 54, 1200 Brussels (Belgium)]; Sorriaux, Jefferson; Souris, Kevin [Université catholique de Louvain, Center of Molecular Imaging, Radiotherapy and Oncology, Institut de Recherche Experimentale et Clinique, Avenue Hippocrate 54, 1200 Brussels, Belgium and Université catholique de Louvain, ICTEAM institute, Chemin du cyclotron 6, 1348 Louvain-la-Neuve (Belgium)]; Vynckier, Stefaan [Université catholique de Louvain, Center of Molecular Imaging, Radiotherapy and Oncology, Institut de Recherche Experimentale et Clinique, Avenue Hippocrate 54, 1200 Brussels, Belgium and Département de Radiothérapie, Cliniques Universitaires Saint-Luc, Avenue Hippocrate 54, 1200 Brussels (Belgium)]; Bouchard, Hugo [Département de radio-oncologie, Centre hospitalier de l'Université de Montréal (CHUM), 1560 Sherbrooke est, Montréal, Québec H2L 4M1 (Canada)]
2014-01-15T23:59:59.000Z
Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified by performing self-consistency tests, i.e., the so-called "Fano cavity test." The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross-sections are uniform. Such tests have not been performed yet for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy of E_0 and a mass macroscopic cross section of Σ/ρ are transported, having the ability to generate protons with kinetic energy E_0 and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal ΣE_0/ρ, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm² parallel virtual field and a cavity (2 × 2 × 0.2 cm³ in size) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, using larger step sizes.
For PENH, the difference is attributed to the random-hinge method, which introduces artificial energy straggling if the step size is not small enough. Conclusions: Using conservative user-defined simulation parameters, both PENH and Geant4 pass the Fano cavity test for proton transport. Our methodology is applicable to any kind of charged particle, provided that the considered MC code is able to track the charged particle considered.
Del Moral, Pierre
Monte Carlo Methods and Stochastic Processes, Pierre Del Moral - Stefano De Marco. Monte Carlo methods and stochastic processes: from the linear to the nonlinear (E. Gobet). We consider a system…
Del Moral, Pierre
Monte Carlo Methods and Stochastic Processes, Pierre Del Moral - Stefano De Marco. …to redo one of the oldest Monte Carlo simulation experiments, proposed in 1733 … the needle touches the edge of a slat. 1. Monte Carlo method: verify numerically that the probability…
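The 1733 experiment referred to here is Buffon's needle: a needle of length L dropped on a floor of parallel slats of width d crosses an edge with probability 2L/(πd) when L ≤ d. A minimal sketch of the numerical verification suggested by the exercise (parameter choices are illustrative):

```python
import math
import random

def buffon_crossing_probability(n_drops, needle_len=1.0, slat_width=2.0, seed=1733):
    """Estimate the probability that a randomly dropped needle crosses a slat edge."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n_drops):
        # Distance from the needle's center to the nearest edge, and its angle.
        x = rng.uniform(0.0, slat_width / 2.0)
        theta = rng.uniform(0.0, math.pi / 2.0)
        if x <= (needle_len / 2.0) * math.sin(theta):
            crossings += 1
    return crossings / n_drops

# Invert p = 2L/(pi*d) to recover a Monte Carlo estimate of pi.
p = buffon_crossing_probability(200_000)
pi_estimate = 2 * 1.0 / (2.0 * p)
```

Inverting the crossing probability turns the needle experiment into one of the earliest Monte Carlo estimators of π, which is presumably the point of the exercise sheet.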
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
Arampatzis, Georgios, E-mail: garab@math.uoc.gr [Department of Applied Mathematics, University of Crete (Greece); Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)]; Katsoulakis, Markos A., E-mail: markos@math.umass.edu [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)]
2014-03-28T23:59:59.000Z
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach.
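The variance problem with independent finite-difference samples, and the gain from coupling the perturbed and unperturbed processes, can already be seen in a toy example using the Common Random Number baseline that the paper compares against (the model and its parameters are illustrative, not from the paper):

```python
import math
import random

def fd_sensitivity(theta=1.0, h=1e-2, n=20_000, coupled=True, seed=42):
    """Finite-difference estimate of d/dtheta E[X] for X ~ Exp(rate=theta),
    sampled by inverse transform. With `coupled`, the perturbed and
    unperturbed samples share the same uniforms (Common Random Numbers)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        u_base = rng.random()
        u_pert = u_base if coupled else rng.random()
        x_base = -math.log(u_base) / theta
        x_pert = -math.log(u_pert) / (theta + h)
        samples.append((x_pert - x_base) / h)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, var

m_crn, v_crn = fd_sensitivity(coupled=True)    # close to the exact -1/theta^2
m_ind, v_ind = fd_sensitivity(coupled=False)   # same target, enormous variance
```

With independent samples the estimator variance blows up like 1/h²; sharing the randomness cancels almost all of it, which is the effect the paper's goal-oriented couplings push further for spatial KMC.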
We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
Farlotti, M. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Ecole Polytechnique, Palaiseau, F 91128 (France); Larsen, E. W. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States)
2013-07-01T23:59:59.000Z
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
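Once Monte Carlo tallies have filled in the fission matrix, the criticality calculation described above reduces to an ordinary power iteration for the dominant eigenpair. A sketch on a synthetic 3-region matrix (illustrative numbers, not the paper's 8-assembly pool):

```python
def power_iteration(F, tol=1e-10, max_iter=10_000):
    """Dominant eigenvalue (k) and eigenvector (fission source shape) of a
    fission matrix F, where F[i][j] is the expected number of fission
    neutrons born in region i per fission neutron starting in region j."""
    n = len(F)
    source = [1.0 / n] * n      # flat initial fission source guess
    k = 1.0
    for _ in range(max_iter):
        new = [sum(F[i][j] * source[j] for j in range(n)) for i in range(n)]
        k_new = sum(new) / sum(source)      # neutron production ratio per cycle
        total = sum(new)
        source = [x / total for x in new]   # renormalize the fission source
        if abs(k_new - k) < tol:
            return k_new, source
        k = k_new
    return k, source

# Hypothetical 3-region pool with nearest-neighbor coupling between regions.
F = [[0.5, 0.2, 0.0],
     [0.2, 0.5, 0.2],
     [0.0, 0.2, 0.5]]
k_eff, shape = power_iteration(F)
```

Because the matrix is small, the eigenproblem is cheap regardless of how slowly the underlying transport source converges, which is the advantage the paper exploits for strongly absorbing storage-pool geometries.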
Lipid droplets fusion in adipocyte differentiated 3T3-L1 cells: A Monte Carlo simulation
Boschi, Federico, E-mail: federico.boschi@univr.it [Department of Neurological and Movement Sciences, University of Verona, Strada Le Grazie 8, 37134 Verona (Italy); Department of Computer Science, University of Verona, Strada Le Grazie 15, 37134 Verona (Italy); Rizzatti, Vanni; Zamboni, Mauro [Department of Medicine, Geriatric Section, University of Verona, Piazzale Stefani 1, 37126 Verona (Italy); Sbarbati, Andrea [Department of Neurological and Movement Sciences, University of Verona, Strada Le Grazie 8, 37134 Verona (Italy)
2014-02-15T23:59:59.000Z
Several widespread human diseases like obesity, type 2 diabetes, hepatic steatosis, atherosclerosis and other metabolic pathologies are related to the excessive accumulation of lipids in cells. Lipids accumulate in spherical cellular inclusions called lipid droplets (LDs), whose sizes in adipocytes range from a fraction of a micrometer to one hundred micrometers. It has been suggested that LDs can grow in size due to a fusion process by which a larger LD is obtained with spherical shape and volume equal to the sum of the progenitors' volumes. In this study, the size distribution of two populations of LDs was analyzed in immature and mature (5-days differentiated) 3T3-L1 adipocytes (first and second populations, respectively) after Oil Red O staining. A Monte Carlo simulation of interaction between LDs has been developed in order to quantify the size distribution and the number of fusion events needed to obtain the distribution of the second population size starting from the first one. Four models are presented here based on different kinds of interaction: a surface-weighted interaction (R2 Model), a volume-weighted interaction (R3 Model), a random interaction (Random Model) and an interaction related to the place where the LDs are born (Nearest Model). The last two models mimic quite well the behavior found in the experimental data. This work represents a first step in developing numerical simulations of the LD growth process. Due to the complex phenomena involving LDs (absorption, growth through additional neutral lipid deposition in existing droplets, de novo formation and catabolism), the study focuses on the fusion process. The results suggest that, to obtain the observed size distribution, a number of fusion events comparable with the number of LDs themselves is needed. Moreover, the MC approach proves to be a powerful tool for investigating the LD growth process. Highlights: • We evaluated the role of the fusion process in the synthesis of the lipid droplets. • We compared the size distribution of the lipid droplets in immature and mature cells. • We used the Monte Carlo simulation approach, simulating 10 thousand fusion events. • Four different interaction models between the lipid droplets were tested. • The model that best mimics the experimental measures was selected.
General purpose dynamic Monte Carlo with continuous energy for transient analysis
Sjenitzer, B. L.; Hoogenboom, J. E. [Delft Univ. of Technology, Dept. of Radiation, Radionuclide and Reactors, Mekelweg 15, 2629JB Delft (Netherlands)
2012-07-01T23:59:59.000Z
For safety assessments transient analysis is an important tool. It can predict maximum temperatures during regular reactor operation or during an accident scenario. Despite the fact that this kind of analysis is very important, the state of the art still uses rather crude methods, like diffusion theory and point-kinetics. For reference calculations it is preferable to use the Monte Carlo method. In this paper the dynamic Monte Carlo method is implemented in the general purpose Monte Carlo code Tripoli4. Also, the method is extended for use with continuous energy. The first results of Dynamic Tripoli demonstrate that this kind of calculation is indeed accurate and the results are achieved in a reasonable amount of time. With the method implemented in Tripoli it is now possible to do an exact transient calculation in arbitrary geometry. (authors)
Revised methods for few-group cross sections generation in the Serpent Monte Carlo code
Fridman, E. [Reactor Safety Div., Helmholz-Zentrum Dresden-Rossendorf, POB 51 01 19, Dresden, 01314 (Germany); Leppaenen, J. [VTT Technical Research Centre of Finland, POB 1000, FI-02044 VTT (Finland)
2012-07-01T23:59:59.000Z
This paper presents new calculation methods, recently implemented in the Serpent Monte Carlo code, and related to the production of homogenized few-group constants for deterministic 3D core analysis. The new methods fall under three topics: 1) Improved treatment of neutron-multiplying scattering reactions, 2) Group constant generation in reflectors and other non-fissile regions and 3) Homogenization in leakage-corrected criticality spectrum. The methodology is demonstrated by a numerical example, comparing a deterministic nodal diffusion calculation using Serpent-generated cross sections to a reference full-core Monte Carlo simulation. It is concluded that the new methodology improves the results of the deterministic calculation, and paves the way for Monte Carlo based group constant generation. (authors)
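The core operation behind homogenized few-group constant generation is flux-weighted condensation: a coarse-group cross section is the flux-weighted average of the fine-group values, Σ_G = (Σ_{g∈G} σ_g φ_g) / (Σ_{g∈G} φ_g), so that coarse-group reaction rates are preserved. A minimal sketch with hypothetical numbers (not Serpent's implementation):

```python
def collapse_xs(sigma_fine, flux_fine, group_edges):
    """Flux-weighted condensation of fine-group cross sections into few
    groups: sigma_G = sum(sigma_g * phi_g) / sum(phi_g) over fine groups
    g belonging to coarse group G."""
    few_group = []
    for lo, hi in zip(group_edges[:-1], group_edges[1:]):
        reaction_rate = sum(sigma_fine[g] * flux_fine[g] for g in range(lo, hi))
        total_flux = sum(flux_fine[g] for g in range(lo, hi))
        few_group.append(reaction_rate / total_flux)
    return few_group

# Hypothetical 6-group data condensed to a 2-group structure.
sigma = [1.0, 1.2, 1.5, 2.0, 3.0, 5.0]   # fine-group cross sections (1/cm)
flux  = [2.0, 1.0, 1.0, 1.0, 0.5, 0.5]   # fine-group scalar flux weights
two_group = collapse_xs(sigma, flux, [0, 3, 6])
```

The paper's contribution lies in how the weighting spectrum is obtained (e.g., the leakage-corrected criticality spectrum) and in treating non-fissile regions, not in the condensation formula itself.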
A Proposal for a Standard Interface Between Monte Carlo Tools And One-Loop Programs
Binoth, T.; /Edinburgh U.; Boudjema, F.; /Annecy, LAPP; Dissertori, G.; Lazopoulos, A.; /Zurich, ETH; Denner, A.; /PSI, Villigen; Dittmaier, S.; /Freiburg U.; Frederix, R.; Greiner, N.; Hoeche, Stefan; /Zurich U.; Giele, W.; Skands, P.; Winter, J.; /Fermilab; Gleisberg, T.; /SLAC; Archibald, J.; Heinrich, G.; Krauss, F.; Maitre, D.; /Durham U., IPPP; Huber, M.; /Munich, Max Planck Inst.; Huston, J.; /Michigan State U.; Kauer, N.; /Royal Holloway, U. of London; Maltoni, F.; /Louvain U., CP3 /Milan Bicocca U. /INFN, Turin /Turin U. /Granada U., Theor. Phys. Astrophys. /CERN /NIKHEF, Amsterdam /Heidelberg U. /Oxford U., Theor. Phys.
2011-11-11T23:59:59.000Z
Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently been undergoing enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarises the discussion of the next-to-leading order multi-leg (NLM) working group on this issue which has been taking place during the workshop on Physics at TeV Colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs.
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
Frambati, S.; Frignani, M. [Ansaldo Nucleare S.p.A., Corso F.M. Perrone 25, 1616 Genova (Italy)
2012-07-01T23:59:59.000Z
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)
Data decomposition of Monte Carlo particle transport simulations via tally servers
Romano, Paul K., E-mail: paul.k.romano@gmail.com [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)]; Siegel, Andrew R., E-mail: siegala@mcs.anl.gov [Argonne National Laboratory, Theory and Computing Sciences, 9700 S Cass Ave., Argonne, IL 60439 (United States)]; Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)]; Smith, Kord, E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)]
2013-11-01T23:59:59.000Z
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, William P. (Tracy, CA); Hartmann-Siantar, Christine L. (San Ramon, CA); Rathkopf, James A. (Livermore, CA)
1999-01-01T23:59:59.000Z
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo.
Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
2009-10-01T23:59:59.000Z
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
Search for New Heavy Higgs Boson in B-L model at the LHC using Monte Carlo Simulation
Hesham Mansour; Nady Bakhet
2013-04-24T23:59:59.000Z
The aim of this work is to search for a new heavy Higgs boson in the B-L extension of the Standard Model at the LHC, using data produced by Monte Carlo event generators from simulated proton-proton collisions at different center-of-mass energies to identify new Higgs boson signatures. We also study the production and decay channels of the Higgs boson in this model and its interactions with the model's other new particles, namely the new massive neutral gauge boson and the new right-handed heavy neutrinos.
Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24T23:59:59.000Z
This project will accumulate high-precision fluxes throughout reactor geometry on a non- orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. 
Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well-suited to coupling with the unstructured meshes that are used in other physics simulations.
Calculating kinetics parameters and reactivity changes with continuous-energy Monte Carlo
Kiedrowski, Brian C [Los Alamos National Laboratory]; Brown, Forrest B [Los Alamos National Laboratory]; Wilson, Paul [Univ. Wisconsin]
2009-01-01T23:59:59.000Z
The iterated fission probability interpretation of the adjoint flux forms the basis for a method to perform adjoint weighting of tally scores in continuous-energy Monte Carlo k-eigenvalue calculations. Applying this approach, adjoint-weighted tallies are developed for two applications: calculating point reactor kinetics parameters and estimating changes in reactivity from perturbations. Calculations are performed in the widely-used production code, MCNP, and the results of both applications are compared with discrete ordinates calculations, experimental measurements, and other Monte Carlo calculations.
Pseudo-random number generators for Monte Carlo simulations on Graphics Processing Units
Vadim Demchik
2010-03-09T23:59:59.000Z
Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPUs). The performance of the implemented generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX, and Mersenne Twister (MT19937)) on CPU and GPU is discussed. The obtained speed-up factors reach hundreds relative to the CPU. The RANLUX generator is found to be the most appropriate for GPU-based Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is presented.
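One of the generators named above, Marsaglia's XOR128 xor-shift, is simple enough to sketch in full. The following plain-Python version (CPU-only; the default seeds are Marsaglia's) illustrates the recurrence that a GPU kernel would implement per thread:

```python
# Marsaglia's xor128 (XOR-shift) pseudo-random number generator, in plain
# Python. Each call advances a 128-bit state via shift-and-xor operations.
class Xor128:
    def __init__(self, x=123456789, y=362436069, z=521288629, w=88675123):
        # Any nonzero state works; these defaults are Marsaglia's.
        self.x, self.y, self.z, self.w = x, y, z, w

    def next_u32(self):
        """Return the next 32-bit unsigned integer in the sequence."""
        t = (self.x ^ (self.x << 11)) & 0xFFFFFFFF
        self.x, self.y, self.z = self.y, self.z, self.w
        self.w = ((self.w ^ (self.w >> 19)) ^ (t ^ (t >> 8))) & 0xFFFFFFFF
        return self.w

    def uniform(self):
        """Uniform float in [0, 1)."""
        return self.next_u32() / 2**32

rng = Xor128()
samples = [rng.uniform() for _ in range(10000)]
print(sum(samples) / len(samples))  # sample mean, close to 0.5
```

On a GPU, each thread would carry its own four-word state, which is why xor-shift generators are attractive there: tiny state, no tables, and a long period (2^128 - 1).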
A Monte Carlo synthetic-acceleration method for solving the thermal radiation diffusion equation
Evans, Thomas M., E-mail: evanstm@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831 (United States); Mosher, Scott W., E-mail: moshersw@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831 (United States); Slattery, Stuart R., E-mail: sslattery@wisc.edu [University of Wisconsin–Madison, 1500 Engineering Dr., Madison, WI 53716 (United States); Hamilton, Steven P., E-mail: hamiltonsp@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831 (United States)
2014-02-01T23:59:59.000Z
We present a novel synthetic-acceleration-based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three spatial dimensions. The algorithm performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple material problem. Our results show that our Monte Carlo method is an effective solver for sparse matrix systems. For solutions converged to the same tolerance, it performs competitively with deterministic methods including preconditioned conjugate gradient and GMRES. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.
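The underlying idea of using Monte Carlo as a linear solver can be illustrated independently of the paper: for x = Hx + b with the spectral radius of H below one, random walks sample terms of the Neumann series x = Σ_k H^k b. The toy sketch below (the matrix, walk counts, and uniform transition kernel are all assumptions, not the authors' algorithm) shows the basic estimator:

```python
import numpy as np

def mc_solve(H, b, n_walks=20000, max_steps=30, seed=1):
    """Estimate x solving x = H x + b (spectral radius of H < 1) by sampling
    terms of the Neumann series x = sum_k H^k b with uniform random walks."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        total = 0.0
        for _ in range(n_walks):
            state, weight, acc = i, 1.0, b[i]
            for _ in range(max_steps):
                nxt = rng.integers(n)         # uniform transition kernel
                weight *= H[state, nxt] * n   # correct for sampling prob. 1/n
                state = nxt
                acc += weight * b[state]
                if abs(weight) < 1e-12:       # walk contributes nothing further
                    break
            total += acc
        x[i] = total / n_walks
    return x

H = np.array([[0.1, 0.2], [0.2, 0.1]])
b = np.array([1.0, 1.0])
x_mc = mc_solve(H, b)
x_exact = np.linalg.solve(np.eye(2) - H, b)   # exactly [10/7, 10/7]
```

Synthetic acceleration, in the paper's sense, wraps such stochastic sweeps inside a deterministic correction step; the sketch shows only the stochastic kernel.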
Matching NLO QCD with parton shower in Monte Carlo scheme - the KrkNLO method
S. Jadach; W. Placzek; S. Sapeta; A. Siodmok; M. Skrzypek
2015-05-11T23:59:59.000Z
A new method of including the complete NLO QCD corrections to hard processes in the LO parton-shower Monte Carlo (PSMC) is presented. This method, called KrkNLO, requires the use of parton distribution functions in a dedicated Monte Carlo factorization scheme, which is also discussed in this paper. In the future, it may simplify the introduction of NNLO corrections to hard processes and NLO corrections to PSMC. Details of the method and numerical examples of its practical implementation, as well as comparisons with other calculations such as MCFM, MC@NLO and POWHEG for single $Z/\gamma^*$-boson production at the LHC, are presented.
Monte Carlo simulations of the HP model (the "Ising model" of protein folding)
Li, Ying Wai; Landau, David P; 10.1016/j.cpc.2010.12.049
2011-01-01T23:59:59.000Z
Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.
An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis
William R. Martin; John C. Lee
2009-12-30T23:59:59.000Z
Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.
Chung, Kiwhan
1996-01-01T23:59:59.000Z
While the Monte Carlo method has been prevalent in nuclear engineering, it has yet to fully blossom in the study of solute transport in porous media. By using an etched-glass micromodel, an attempt is made to apply the Monte Carlo method...
Wu, Zhigang
Quantum Monte Carlo calculations of the energy-level alignment at hybrid interfaces: Role of many-body effects. Published 29 May 2009. An approach is presented for obtaining a highly accurate description of the energy-level alignment at hybrid interfaces, using quantum Monte Carlo calculations to include many-body effects.
Anderson, James B.
Direct Monte Carlo simulation of chemical reaction systems: Internal energy transfer and an energy-dependent unimolecular reaction ... a direct Monte Carlo simulation of an energy-dependent termolecular reaction system of the type A + B ... simulation of a unimolecular reaction with an energy-dependent rate constant k3 and with explicit treatment ...
Del Moral , Pierre
Monte Carlo methods and stochastic processes. Pierre Del Moral, Stefano De Marco ... the multilevel Monte Carlo method. The Black-Scholes stochastic differential equation ...
Subramanian, Venkat
Kinetic Monte Carlo Simulation of Surface Heterogeneity in Graphite Anodes for Lithium-Ion Batteries: Passive Layer Formation. Ravi N. Methekar, Paul W. C. Northrop, Kejia Chen, Richard D. Braatz ... fade, and cycle life of Li-ion secondary batteries. In this paper, kinetic Monte Carlo (KMC) simulation ...
Comparison of Monte-Carlo and Einstein methods in the light-gas interactions
Jacques Moret-Bailly
2010-01-18T23:59:59.000Z
To study the propagation of light in nebulae, many astrophysicists use a Monte-Carlo computation that does not take interference into account. Replacing this flawed method with Einstein-coefficient theory gives, in one example, a theoretical spectrum much closer to the observed one.
Dose distribution close to metal implants in Gamma Knife Radiosurgery: A Monte Carlo study
Yu, K.N.
A detachable coil (GDC) system was used to localize and obliterate the aneurysm. Soft platinum coils were ... METHODOLOGY: The Monte Carlo system employed is PRESTA (Parameter Reduced Electron-Step Transport Algorithm) ... be predicted correctly by the present treatment planning system, GammaPlan, because the calculations ...
SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON
Goluoglu, Sedat [ORNL]; Bekar, Kursat B [ORNL]; Wiarda, Dorothea [ORNL]
2012-01-01T23:59:59.000Z
The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.
Collective enhancement of nuclear state densities by the shell model Monte Carlo approach
C. Özen; Y. Alhassid; H. Nakada
2015-01-22T23:59:59.000Z
The shell model Monte Carlo (SMMC) approach allows for the microscopic calculation of statistical and collective properties of heavy nuclei using the framework of the configuration-interaction shell model in very large model spaces. We present recent applications of the SMMC method to the calculation of state densities and their collective enhancement factors in rare-earth nuclei.
Monte-Carlo-Type Techniques for Processing Interval Uncertainty, and Their Geophysical and
Ward, Karen
Monte-Carlo-Type Techniques for Processing Interval Uncertainty, and Their Geophysical ... (contact: vladik@cs.utep.edu). Abstract: To determine the geophysical structure of a region, we measure ... are independently normally distributed. Problem: the resulting accuracies are not in line with geophysical intuition.
The S_N/Monte Carlo response matrix hybrid method
Filippone, W.L.; Alcouffe, R.E.
1987-01-01T23:59:59.000Z
A hybrid method has been developed to iteratively couple S_N and Monte Carlo regions of the same problem. This technique avoids many of the restrictions and limitations of previous attempts at such coupling and results in a general and relatively efficient method. We demonstrate the method with some simple examples.
Path Integral Monte Carlo Simulation of the Low-Density Hydrogen Plasma B. Militzer y
Militzer, Burkhard
Path Integral Monte Carlo Simulation of the Low-Density Hydrogen Plasma. B. Militzer, Lawrence ... to calculate the equilibrium properties of hydrogen in the density and temperature range of 9.83 x 10^4 ... surface. We calculate the equation of state and compare with other models for hydrogen valid ...
Explicit estimation of higher order modes in fission source distribution of Monte-Carlo calculation
Yamamoto, A.; Sakata, K.; Endo, T. [Nagoya University, Department of Materials, Physics and Energy Engineering, Furo-cho, Chikusa-ku, Nagoya, 464-8603 (Japan)
2013-07-01T23:59:59.000Z
Magnitudes of higher order modes in the fission source distribution of a multi-group Monte-Carlo calculation are estimated using the orthogonality of forward and adjoint fission source distributions. The capability to calculate forward and adjoint fission source distributions for the fundamental and higher order modes is implemented in the AEGIS code, a two-dimensional transport code based on the method of characteristics. Using the AEGIS results, the magnitudes of the first through fifth higher order modes in the fission source distribution obtained by the multi-group Monte-Carlo code GMVP are estimated. The present study makes two contributions: (1) establishment of a surrogate model that represents the convergence of the fission source distribution while accounting for the inherent statistical noise of higher order modes in Monte-Carlo calculations, and (2) independent confirmation of the estimated dominance ratio in a Monte-Carlo calculation. The surrogate model would contribute to studies of inter-cycle correlation and to estimating a sufficient number of inactive/active cycles. (authors)
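The orthogonality property exploited above can be demonstrated numerically on a stand-in operator. In this hedged sketch (the matrix and source vector are made up; this is not the AEGIS/GMVP code), mode amplitudes follow from biorthogonality of forward (right) and adjoint (left) eigenvectors:

```python
import numpy as np

# Build a non-symmetric "fission operator" with known real eigenvalues 3, 2, 1
# via a similarity transform, so forward and adjoint modes differ.
rng = np.random.default_rng(0)
V = rng.normal(size=(3, 3))
A = V @ np.diag([3.0, 2.0, 1.0]) @ np.linalg.inv(V)

evals_r, R = np.linalg.eig(A)     # forward modes (columns of R)
evals_l, L = np.linalg.eig(A.T)   # adjoint modes (columns of L)
R = R[:, np.argsort(-evals_r)]    # sort both mode sets consistently
L = L[:, np.argsort(-evals_l)]

S = np.array([1.0, 0.7, 0.3])     # a fission-source-like vector
# Amplitude of mode n: a_n = <phi_n^adj, S> / <phi_n^adj, phi_n>
a = np.array([(L[:, n] @ S) / (L[:, n] @ R[:, n]) for n in range(3)])

print(np.allclose(R @ a, S))      # reconstruction recovers S -> True
```

The ratio of the second amplitude's mode eigenvalue to the first is the dominance ratio; in a Monte Carlo context the same projection is applied to the binned fission source at each cycle.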
Comparison of the Monte Carlo adjoint-weighted and differential operator perturbation methods
Kiedrowski, Brian C [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory
2010-01-01T23:59:59.000Z
Two perturbation theory methodologies are implemented for k-eigenvalue calculations in the continuous-energy Monte Carlo code, MCNP6. A comparison of the accuracy of these techniques, the differential operator and adjoint-weighted methods, is performed numerically and analytically. Typically, the adjoint-weighted method shows better performance over a larger range; however, there are exceptions.
Monte Carlo Simulation of Electrodeposition of Copper: A Multistep Free Energy Calculation
Subramanian, Venkat
Monte Carlo Simulation of Electrodeposition of Copper: A Multistep Free Energy Calculation ... is carried out to evaluate the stepwise free energy change in the process of electrochemical copper ... the number of species (CuCl2, CuSO4, or Cu, as the case may be) and in turn the free energy. The effect ...
Simulations of polycrystalline CVD diamond film growth using a simplified Monte Carlo model
Bristol, University of
Simulations of polycrystalline CVD diamond film growth using a simplified Monte Carlo model. Available online 6 November 2009. Keywords: CVD diamond growth; Modelling; Nucleation; Nanodiamond. A simple ... of a diamond (100) surface. The model considers adsorption, etching/desorption, lattice incorporation ...
Study of CANDU Thorium-based Fuel Cycles by Deterministic and Monte Carlo Methods
Paris-Sud XI, Université de
Study of CANDU Thorium-based Fuel Cycles by Deterministic and Monte Carlo Methods. A. Nuttin ... There is a renewal of interest in self-sustainable thorium fuel cycles applied to various concepts such as molten ... here, with a shorter-term view, to re-evaluate the economic competitiveness of once-through thorium ...
Sources of Traffic Demand Variability and Use of Monte Carlo for Network Capacity Planning
Cortes, Corinna
... to deal with rightfully angry business and finance teams: physical resources start depreciating the moment ... the sources of traffic demand variability, and dive into Monte-Carlo methodology as an efficient way ... Keywords: throughput; traffic; concurrency; availability; node-and-link model; fast-time simulation; agent ...
Quantum Monte Carlo calculations of electronic excitation energies: the case of the singlet n
Paris-Sud XI, UniversitÃ© de
... transition in acrolein. Julien Toulouse, Michel Caffarel, Peter Reinhardt, Philip E. Hoggan, and C. J. ... State-of-the-art quantum Monte Carlo calculations of the singlet n (CO) vertical excitation energy in the acrolein molecule ... without reoptimization of the determinantal part of the wave function. ...
A Methodological Comparison of Monte Carlo Simulation and Epoch-Era Analysis for
de Weck, Olivier L.
... techniques, morphological analysis, scenario planning; semi-quantitative methods (can be used to initialize ...); Probabilistic Risk Assessment (PRA), Fault Tree Analysis (FTA), Hazards Analysis (HA), failure modes and effects ... A Methodological Comparison of Monte Carlo Simulation and Epoch-Era Analysis for Tradespace ...
Instabilities in Molecular Dynamics Integrators used in Hybrid Monte Carlo Simulations
B. Joo; UKQCD Collaboration
2001-10-11T23:59:59.000Z
We discuss an instability in the leapfrog integration algorithm widely used in current Hybrid Monte Carlo (HMC) simulations of lattice QCD. We first exhibit the instability in the simple harmonic oscillator (SHO) system, where it is manifest, and then demonstrate it in HMC simulations of lattice QCD with dynamical Wilson-Clover fermions, discussing implications for future simulations of lattice QCD.
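The SHO instability referred to above is easy to reproduce: the leapfrog update for H = (p^2 + x^2)/2 is stable only for step sizes dt < 2 and grows exponentially beyond that. A minimal sketch (unit mass and frequency assumed):

```python
# Leapfrog (velocity Verlet) integration of the simple harmonic oscillator
# H = (p^2 + x^2)/2. The scheme is symplectic and stable for dt < 2; for
# dt > 2 the linear update map has an eigenvalue of magnitude > 1.
def leapfrog(x, p, dt, n_steps):
    for _ in range(n_steps):
        p -= 0.5 * dt * x    # half-step momentum kick (force = -x)
        x += dt * p          # full-step position drift
        p -= 0.5 * dt * x    # half-step momentum kick
    return x, p

xs, ps = leapfrog(1.0, 0.0, dt=0.1, n_steps=1000)
print(abs(0.5 * (xs**2 + ps**2) - 0.5) < 0.01)  # energy nearly conserved -> True

xu, pu = leapfrog(1.0, 0.0, dt=2.1, n_steps=100)
print(abs(xu) > 1e6)                            # trajectory has exploded -> True
```

The threshold follows from the one-step update matrix, whose trace is 2 - dt^2; for |trace| > 2 (i.e. dt > 2 here) its eigenvalues leave the unit circle, which is the single-mode picture of the instability the paper traces through HMC.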
Use of single scatter electron monte carlo transport for medical radiation sciences
Svatos, Michelle M. (Oakland, CA)
2001-01-01T23:59:59.000Z
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
Monte Carlo Posterior Integration in GARCH Models. Peter Müller and Andy Pole
West, Mike
Monte Carlo Posterior Integration in GARCH Models. Peter Müller and Andy Pole. ... along both lines to apply to the analysis of GARCH (generalized autoregressive conditional heteroskedasticity) models ... GARCH models in Bollerslev (1986). There are now over 300 papers in the mainstream statistics ...
Supertrack Monte Carlo variance reduction experience for non-Boltzmann tallies
Estes, G.P.; Booth, T.E.
1995-02-01T23:59:59.000Z
This paper applies a recently developed variance reduction technique to first-principles calculations of photon detector responses. This technique makes possible the direct comparison of pulse-height calculations with measurements, without the need for unfolding techniques. Comparisons between several experiments and the calculations demonstrate the utility of the supertrack Monte Carlo technique for reproducing and interpreting experimental count-rate spectra.
Use of the GATE Monte Carlo package for dosimetry
Paris-Sud XI, UniversitÃ© de
Use of the GATE Monte Carlo package for dosimetry applications. D. Visvikis ... Angeles, USA. Abstract: One of the roles for MC simulation studies is in the area of dosimetry. A number of different codes dedicated to dosimetry applications are available and widely used today, such as MCNP ...
Monte Carlo Simulation of Radiation in Gases with a Narrow-Band Model
Dufresne, Jean-Louis
... France; now at the Institute of Energy and Power Plant Technology, TH Darmstadt, 64287 Darmstadt. Monte Carlo Simulation of Radiation in Gases with a Narrow-Band Model and a Net ... is used for simulation of radiative heat transfer in non-gray gases. The proposed procedure is based ...
Sequential Monte Carlo for Simultaneous Passive Device-Free Tracking and Sensor Localization Using
Rabbat, Michael
Sequential Monte Carlo for Simultaneous Passive Device-Free Tracking and Sensor Localization ... Beijing Univ. Posts & Telecom., Beijing, China (menad@bupt.edu.cn). Abstract: This paper presents and evaluates a method for simultaneously tracking a target while localizing the sensor nodes of a passive ...
Green's function Monte Carlo calculation for the ground state of helium trimers
Cabral, F.; Kalos, M.H.
1981-02-01T23:59:59.000Z
The ground state energy of weakly bound boson trimers interacting via Lennard-Jones (12,6) pair potentials is calculated using a Monte Carlo Green's Function Method. Threshold coupling constants for self binding are obtained by extrapolation to zero binding.
Combining Monte Carlo Simulations and Options to Manage the Risk of Real
Boyer, Edmond
... of real estate portfolio valuations can be improved through the simultaneous use of Monte Carlo simulations and options theory. Our method considers the options embedded in Continental European lease ... are more reliable than those usually computed by the traditional discounted cash flow method. Moreover ...
First-row hydrides: Dissociation and ground state energies using quantum Monte Carlo
Anderson, James B.
First-row hydrides: Dissociation and ground state energies using quantum Monte Carlo. Arne Lu..., Pennsylvania 16802. Received 20 May 1996; accepted 24 July 1996. Accurate ground state energies ... comparable FN-DQMC method. The residual energy, the nodal error due to the error in the nodal structure ...
A Combined Density Functional and Monte Carlo Study of Polycarbonate
R. O. Jones and P. Ballone
A Combined Density Functional and Monte Carlo Study of Polycarbonate ... and reactivity for organic systems closely related to bisphenol-A-polycarbonate (BPA-PC). The results provide a detailed description of polymers, using bisphenol A polycarbonate (BPA-PC) as an example ...
K-effective of the world and other concerns for Monte Carlo eigenvalue calculations
Brown, Forrest B [Los Alamos National Laboratory
2010-01-01T23:59:59.000Z
Monte Carlo methods have been used to compute k_eff and the fundamental mode eigenfunction of critical systems since the 1950s. Despite the sophistication of today's Monte Carlo codes for representing realistic geometry and physics interactions, correct results can be obtained in criticality problems only if users pay attention to source convergence in the Monte Carlo iterations and to running a sufficient number of neutron histories to adequately sample all significant regions of the problem. Recommended best practices for criticality calculations are reviewed and applied to several practical problems for nuclear reactors and criticality safety, including the 'K-effective of the World' problem. Numerical results illustrate the concerns about convergence and bias. The general conclusion is that with today's high-performance computers, improved understanding of the theory, new tools for diagnosing convergence (e.g., Shannon entropy of the fission distribution), and clear practical guidance for performing calculations, practitioners will have a greater degree of confidence than ever of obtaining correct results for Monte Carlo criticality calculations.
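The Shannon-entropy diagnostic mentioned above reduces to a one-line formula over the binned fission source, H = -Σ p_i log2 p_i. A minimal sketch (the bin counts are made up):

```python
import math

# Shannon entropy of a binned fission source. As the source distribution
# converges over cycles, this scalar settles to a steady value, which is
# the convergence diagnostic.
def shannon_entropy(counts):
    total = sum(counts)
    return sum(-(c / total) * math.log2(c / total) for c in counts if c)

# A uniform source over 8 mesh bins has the maximum entropy log2(8) = 3 bits.
print(shannon_entropy([100] * 8))       # -> 3.0
# A source piled into one bin has zero entropy.
print(shannon_entropy([800, 0, 0, 0]))  # -> 0.0
```

In practice one plots this entropy cycle by cycle and discards (as inactive) all cycles before it plateaus.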
Monte Carlo Simulation of Alzheimer's Disease in the United States: 2010-2060
Feres, Renato
Monte Carlo Simulation of Alzheimer's Disease in the United States: 2010-2060. Michael Blech ... concerns facing the United States over the next 50 years. This progressive disease is currently the sixth ... on the United States population, and second, the simulation models both prevalence and mortality. Both ...
Sequential Monte Carlo in Model Comparison: Example in Cellular Dynamics in Systems Biology
Richardson, David
Mukherjee, L. You, M. West. Published in: JSM Proceedings, Bayesian Statistical Science. Alexandria, VA: American Statistical Association (2009): 1274-1287. Abstract: Sequential Monte Carlo analysis of time series ... statistical model assessment is really just beginning in this new field. Single-cell time series data ...
A new approach to Monte Carlo simulations in statistical physics: Wang-Landau sampling
Holzwarth, Natalie
A new approach to Monte Carlo simulations in statistical physics: Wang-Landau sampling. D. P. Landau ... for doing simulations in classical statistical physics in a different way. Instead of sampling ... it to models exhibiting first-order or second-order phase transitions. (c) 2004 American Association of Physics Teachers.
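The flat-histogram idea behind Wang-Landau sampling can be illustrated on a toy system with an exactly known density of states: the total E of two six-sided dice, where g(E) = 1,2,3,4,5,6,5,4,3,2,1 for E = 2..12. This simplified sketch (fixed sweeps per refinement level, no flatness check, all parameters made up) recovers the ratio g(7)/g(2), which is exactly 6:

```python
import math
import random

# Wang-Landau random walk in "energy" E = d1 + d2 for two dice. We accumulate
# log g(E) directly; moves are accepted with probability min(1, g(E)/g(E_new)),
# which drives the walk toward rarely visited energies (flat histogram).
random.seed(42)
log_g = {e: 0.0 for e in range(2, 13)}
d = [1, 1]
e = sum(d)
log_f = 1.0                        # modification factor, halved each level
while log_f > 1e-5:
    for _ in range(20000):
        i = random.randrange(2)            # pick one die
        new_face = random.randint(1, 6)    # propose re-rolling it
        e_new = e - d[i] + new_face
        dlg = log_g[e] - log_g[e_new]
        if dlg >= 0 or random.random() < math.exp(dlg):
            d[i], e = new_face, e_new
        log_g[e] += log_f                  # update visited state's estimate
    log_f /= 2.0                           # refine

ratio = math.exp(log_g[7] - log_g[2])      # estimate of g(7)/g(2); exact is 6
```

Since Wang-Landau determines g only up to an overall constant, ratios of densities are the meaningful output; the production method adds a histogram-flatness test before each halving of log f, omitted here for brevity.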
Performance Characteristics of Cathode Materials for Lithium-Ion Batteries: A Monte Carlo Strategy
Subramanian, Venkat
Performance Characteristics of Cathode Materials for Lithium-Ion Batteries: A Monte Carlo Strategy ... to study the performance of cathode materials in lithium-ion batteries. The methodology takes into account ... Published September 26, 2008. Lithium-ion batteries are state-of-the-art power sources for portable ...
A Scalable Parallel Monte Carlo Method for Free Energy Simulations of Molecular Systems
Chan, Derek Y C
A Scalable Parallel Monte Carlo Method for Free Energy Simulations of Molecular Systems. MALEK ... for problems where the energy dominates the entropy. An example is parallel tempering, in which simulations ... the free energy of the system as a direct output of the simulation. Traditional Metropolis MC samples phase ...
Optical Monte Carlo modeling of a true port wine stain anatomy
Barton, Jennifer K.
Optical Monte Carlo modeling of a true port wine stain anatomy. Jennifer Kehlet Barton, T. Joshua ... capable of accommodating an arbitrarily complex geometry was used to determine the energy deposition in a true port wine stain ... nm. At both wavelengths, the greatest energy deposition occurred in the superficial blood vessels ...
Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study
Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study. Received 11 January 2006; published 22 February 2006. Proton computed tomography (pCT) has been explored ... pCT has several potential advantages in medical applications. Its favorable dose ...
A Positive-Weight Next-to-Leading-Order Monte Carlo for Heavy Flavour Hadroproduction
Stefano Frixione; Paolo Nason; Giovanni Ridolfi
2007-09-22T23:59:59.000Z
We present a next-to-leading order calculation of heavy flavour production in hadronic collisions that can be interfaced to shower Monte Carlo programs. The calculation is performed in the context of the POWHEG method. It is suitable for the computation of charm, bottom and top hadroproduction. In the case of top production, spin correlations in the decay products are taken into account.
Monte Carlo simulation of electron transport in degenerate and inhomogeneous semiconductors
Monte Carlo simulation of electron transport in degenerate and inhomogeneous semiconductors (Mona ...), with carrier concentrations up to 10^20 cm^-3. Degenerate semiconductors are important for thermoelectric and thermionic ... transport in degenerate semiconductor-based structures. If the electron wavelength is smaller than ...
University of Washington at Seattle - Department of Physics, Electroweak Interaction Research Group
Monte Carlo Calculations of the Intrinsic Detector Backgrounds for the Karlsruhe Tritium Neutrino Experiment. Michelle L. Leber. Chair of the Supervisory Committee: Professor John F. Wilkerson, Physics. The Karlsruhe Tritium Neutrino Experiment (KATRIN) ...
Monte Carlo simulation methodology of the ghost interface theory for the planar surface tension
Attard, Phil
October 2003. A novel ``ghost interface'' expression is given for the surface tension of a planar liquid interface between coexisting phases. Results generated from the ghost interface theory for the surface tension are presented.
Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem
Du, X.; Liu, T.; Ji, W.; Xu, X. G. [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Brown, F. B. [Monte Carlo Codes Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2013-07-01T23:59:59.000Z
Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', meaning that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely affects performance. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER (Accelerated Radiation-transport Computations in Heterogeneous Environments), was used for this study. ARCHER facilitates the development and testing of an MC code based on the vectorized MC algorithm implemented on GPUs using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER_GPU code was designed to solve a neutron eigenvalue problem and was tested on an NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances warp execution efficiency, the overall simulation speed is ten times slower than the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large number of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed. (authors)
Smart, Simon Daniel
2014-02-04T23:59:59.000Z
The use of spin-pure and non-orthogonal Hilbert spaces in Full Configuration Interaction Quantum Monte-Carlo. Simon Smart, Trinity College. This dissertation is submitted for the degree of Doctor of Philosophy at the University of Cambridge, December 2013. Abstract: Full Configuration Interaction Quantum Monte-Carlo (FCIQMC) allows ...
Takahiro Mizusaki; Noritaka Shimizu
2012-01-27T23:59:59.000Z
We propose a new variational Monte Carlo (VMC) method with an energy variance extrapolation for large-scale shell-model calculations. This variational Monte Carlo is a stochastic optimization method with a projected correlated condensed pair state as a trial wave function, and is formulated with the M-scheme representation of projection operators, the Pfaffian and Markov-chain Monte Carlo (MCMC). Using this method, we can stochastically calculate approximate yrast energies and electromagnetic transition strengths. Furthermore, by combining this VMC method with energy variance extrapolation, we can estimate exact shell-model energies.
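The extrapolation step combines energies from trial states of decreasing variance: in the simplest (linear) version one fits E ≈ E0 + k·σ² and reads off the zero-variance intercept E0 as the estimate of the exact energy. A minimal sketch with a hypothetical data interface:

```python
def variance_extrapolate(variances, energies):
    """Linear energy-variance extrapolation: least-squares fit of
    E ~ E0 + k*var over (variance, energy) pairs from successively
    better trial wave functions; returns the zero-variance intercept E0.
    (The data interface is a hypothetical sketch, not the paper's code.)"""
    n = len(variances)
    mean_v = sum(variances) / n
    mean_e = sum(energies) / n
    slope = sum((v - mean_v) * (e - mean_e)
                for v, e in zip(variances, energies))
    slope /= sum((v - mean_v) ** 2 for v in variances)
    return mean_e - slope * mean_v     # extrapolate the fit to var = 0
```

For exactly linear data, e.g. E = -10 + 2·σ², the intercept is recovered exactly; in practice the quality of the linear fit itself indicates how far the trial states are from the exact eigenstate.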
Monte Carlo Simulations of Macho Parallaxes From a Satellite
Thomas Boutreux; Andrew Gould
1995-07-25T23:59:59.000Z
Three ongoing microlensing experiments have found more candidate events than expected from the known stars. These experiments measure only one parameter of the massive compact halo objects (machos), the magnification time scale of the events. More information is required to understand the nature of the machos. A satellite experiment has been proposed to measure their projected transverse speed $\tilde{v} = v/(1-z)$, where $v$ is the macho transverse speed and $z$ its distance divided by the distance of the source. Measurement of $\tilde{v}$ would determine whether the machos are in the Galactic disk, the Galactic halo, or the Large Magellanic Cloud (LMC). We simulate events observed toward the LMC by the Earth and by a satellite in an Earth-like heliocentric orbit. To leading order, such an experiment determines $\tilde{v}$ up to a twofold degeneracy. More precise measurements break the degeneracy. We show that with photometric precisions of 3% to 4% and approximately 1 observation per day, $\tilde{v}$ can be measured with a maximum error of 20% for 70% to 90% of events similar to the ones reported by the EROS and MACHO collaborations. The projected transverse velocity is known with the same maximum error for 60% to 75% of these events. This 20% maximum error is not a $1\sigma$ error but is mostly due to the degeneracy between two possible solutions, each one being localized to much better than 20%. These results are obtained with an Earth-satellite separation of 1 AU, and improve with a larger separation.
The energy injection and losses in the Monte Carlo simulations of a diffusive shock
Wang, Xin
2011-01-01T23:59:59.000Z
Although diffusive shock acceleration (DSA) can be simulated with several well-established models, the assumed injection rate from the thermal particles to the superthermal population remains a contentious problem. In self-consistent Monte Carlo simulations, by contrast, the particle injection rate is intrinsically defined by the prescribed scattering law rather than by an assumed injection function. We examine the correlation of the energy injection with the prescribed multiple-scattering angular distributions. According to the Rankine-Hugoniot conditions, the energy injection and losses in the simulation system directly determine the slope of the shock energy spectrum. The energy injection and energy loss functions are obtained from simulations performed with a multiple scattering law in the dynamical Monte Carlo model. As a result, the case applying an anisotropic scattering law produces a small energy injection and large energy losses, leading to a s...
Calculating alpha Eigenvalues in a Continuous-Energy Infinite Medium with Monte Carlo
Betzler, Benjamin R. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory
2012-09-04T23:59:59.000Z
The α eigenvalue has implications for time-dependent problems where the system is sub- or supercritical. We present methods and results from calculating the α-eigenvalue spectrum for a continuous-energy infinite medium with a simplified Monte Carlo transport code. We formulate the α-eigenvalue problem, detail the Monte Carlo code physics, and provide verification and results. We have a method for calculating the α-eigenvalue spectrum in a continuous-energy infinite medium. The continuous-time Markov process described by the transition rate matrix provides a way of obtaining the α-eigenvalue spectrum and kinetic modes. These are useful for approximating the time dependence of the system.
Study of nuclear pairing with Configuration-Space Monte-Carlo approach
Lingle, Mark
2015-01-01T23:59:59.000Z
Pairing correlations in nuclei play a decisive role in determining nuclear drip-lines, binding energies, and many collective properties. In this work a new Configuration-Space Monte-Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in Krylov subspace, resulting in the Monte-Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are discussed; the absence of the fermionic sign problem, probabilistic interpretation of quantum-mechanical amplitudes, and ability to handle truly large-scale problems with defined precision and error control, are noteworthy merits of CSMC. The features of our CSMC approach are shown using models and realistic examples. Special attention is given to difficult limits: situations with non-constant pairing strengths, cases with nearly degenerate excited states, limits when pairing correlations in finite systems are weak, and pr...
Rao-Blackwellised Interacting Markov Chain Monte Carlo for Electromagnetic Scattering Inversion
Giraud, François
2012-01-01T23:59:59.000Z
The following electromagnetism (EM) inverse problem is addressed: estimating the local radioelectric properties of the materials covering an object from global EM scattering measurements at various incidences and wave frequencies. This large-scale, ill-posed inverse problem is explored through intensive exploitation of an efficient 2D Maxwell solver distributed on High Performance Computing (HPC) machines. Applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel, on which Bayesian inference can be performed. Considering the radioelectric properties as a dynamic stochastic process evolving as a function of frequency, it is shown how advanced Markov Chain Monte Carlo methods, called Sequential Monte Carlo (SMC) or interacting particles, can provide estimates of the EM properties of each material, and their associated uncertainties.
M. A. Novotny; Shannon M. Wheeler
2002-11-02T23:59:59.000Z
We present the Monte Carlo with Absorbing Markov Chains (MCAMC) method for extremely long kinetic Monte Carlo simulations. The MCAMC algorithm does not modify the system dynamics. It is extremely useful for models with discrete state spaces when low-temperature simulations are desired. To illustrate the strengths and limitations of this algorithm we introduce a simple model involving random walkers on an energy landscape. This simple model has some of the characteristics of protein folding and could also be experimentally realizable in domain motion in nanoscale magnets. We find that even the simplest MCAMC algorithm can speed up calculations by many orders of magnitude. More complicated MCAMC simulations can gain further increases in speed by orders of magnitude.
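The core trick can be shown in its simplest, one-transient-state form (the rejection-free, n-fold-way limit of the MCAMC family): instead of simulating a long run of rejected low-temperature moves one at a time, the dwell time in the current state is drawn in a single shot from its geometric distribution. The function below is a minimal sketch under that assumption, not the paper's multi-state algorithm:

```python
import math
import random

def mcamc_dwell(p_stay, rng):
    """Sample how many Monte Carlo steps the system spends in its current
    state before an accepted move, without simulating each rejected step.
    If a single step leaves the state with probability 1 - p_stay, the
    dwell time is geometric and can be drawn in one shot as
    m = ceil(ln(u) / ln(p_stay)) for u uniform in (0, 1]."""
    if p_stay <= 0.0:
        return 1                        # every attempt is accepted
    u = 1.0 - rng.random()              # uniform in (0, 1]
    return max(1, math.ceil(math.log(u) / math.log(p_stay)))
```

At p_stay = 0.99 each call replaces, on average, about 100 rejected attempts with one draw, which is where the orders-of-magnitude low-temperature speedup comes from; the dynamics are unchanged because the sampled dwell time has exactly the distribution the naive simulation would produce.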
Monte Carlo simulation to investigate the formation of molecular hydrogen and its deuterated forms
Sahu, DIpen; Majumdar, Liton; Chakrabarti, Sandip K
2015-01-01T23:59:59.000Z
$H_2$ is the most abundant interstellar species. Its deuterated forms ($HD$ and $D_2$) are also significantly abundant. The huge abundances of these molecules can be explained by considering the chemistry occurring on interstellar dust. Because of its simplicity, the rate equation method is widely used to study the formation of grain-surface species. However, since the recombination efficiency for the formation of any surface species depends heavily on various physical and chemical parameters, a Monte Carlo method is best suited to capture the randomness of the processes. We perform Monte Carlo simulations to study the formation of $H_2$, $HD$ and $D_2$ on interstellar ices. Adsorption energies of surface species are the key inputs for the formation of any species on interstellar dust, but the binding energies of deuterated species are not yet known with certainty. A zero-point energy difference exists between hydrogenated and deuterated species which should be considered while modeling the chemistry on the ...
A user-friendly, graphical interface for the Monte Carlo neutron optics code MCLIB
Thelliez, T.; Daemen, L.; Hjelm, R.P. [Los Alamos National Lab., NM (United States); Seeger, P.A. [Seeger (Phil A.), Los Alamos, NM (United States)
1995-12-01T23:59:59.000Z
The authors describe a prototype of a new user interface for the Monte Carlo neutron optics simulation program MCLIB. At this point in its development the interface allows the user to define an instrument as a set of predefined instrument elements. The user can specify the intrinsic parameters of each element, its position and orientation. The interface then writes output to the MCLIB package and starts the simulation. The present prototype is an early development stage of a comprehensive Monte Carlo simulations package that will serve as a tool for the design, optimization and assessment of performance of new neutron scattering instruments. It will be an important tool for understanding the efficacy of new source designs in meeting the needs of these instruments.
Yasuda, Shugo
2015-01-01T23:59:59.000Z
A Monte Carlo simulation for chemotactic bacteria is developed on the basis of kinetic modeling, i.e., the Boltzmann transport equation, and applied to the one-dimensional traveling population wave in a micro channel. In this method, the Monte Carlo part, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method that solves the macroscopic transport of the chemical cues in the field. The simulation method successfully reproduces the traveling population wave of bacteria that was observed experimentally. The microscopic dynamics of the bacteria, e.g., the velocity autocorrelation function and velocity distribution function, are also investigated. It is found that the bacteria which form the traveling population wave create quasi-periodic motions as well as a migratory movement along with the traveling population wave. Simulations are also performed with varying sensitivity and modulation parameters in the response function of the bacteria. It is found th...
Using Markov chain Monte Carlo methods for estimating parameters with gravitational radiation data
Nelson Christensen; Renate Meyer
2001-02-05T23:59:59.000Z
We present a Bayesian approach to the problem of determining parameters for coalescing binary systems observed with laser interferometric detectors. By applying a Markov Chain Monte Carlo (MCMC) algorithm, specifically the Gibbs sampler, we demonstrate the potential that MCMC techniques may hold for the computation of posterior distributions of parameters of the binary system that created the gravitational radiation signal. We describe the use of the Gibbs sampler method, and present examples whereby signals are detected and analyzed from within noisy data.
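The Gibbs-sampler mechanics, drawing each parameter in turn from its full conditional distribution given the others, can be illustrated on a toy two-parameter target. The zero-mean bivariate normal below stands in for the binary-inspiral posterior and is an assumption of this sketch:

```python
import random

def gibbs_bivariate_normal(rho, n_samples, burn=500, seed=0):
    """Gibbs sampler for a zero-mean, unit-variance bivariate normal
    with correlation rho: alternately draw from the full conditionals
    x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2).
    (The target density is a stand-in, not the paper's posterior.)"""
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5      # conditional standard deviation
    x = y = 0.0
    out = []
    for i in range(n_samples + burn):
        x = rng.gauss(rho * y, sd)     # draw x from p(x | y)
        y = rng.gauss(rho * x, sd)     # draw y from p(y | x)
        if i >= burn:
            out.append((x, y))
    return out
```

Because each conditional draw is exact, no accept/reject step is needed; the chain's stationary distribution is the joint target, and marginal histograms of the retained samples estimate the posterior of each parameter.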
Imaginary time correlations and the phaseless auxiliary field quantum Monte Carlo
Motta, M.; Galli, D. E.; Vitali, E. [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, 20133 Milano (Italy)]; Moroni, S. [IOM-CNR DEMOCRITOS National Simulation Center and SISSA, via Bonomea 265, 34136 Trieste (Italy)]
2014-01-14T23:59:59.000Z
The phaseless Auxiliary Field Quantum Monte Carlo (AFQMC) method provides a well established approximation scheme for accurate calculations of ground state energies of many-fermion systems. Here we address the possibility of calculating imaginary time correlation functions with the phaseless AFQMC. We give a detailed description of the technique and test the quality of the results for static properties and imaginary time correlation functions against exact values for small systems.
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
Hall, Clifford [Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States); School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States)]; Ji, Weixiao [Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States)]; Blaisten-Barojas, Estela, E-mail: blaisten@gmu.edu [Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States); School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States)]
2014-02-01T23:59:59.000Z
We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. -- Highlights: •We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet. •The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation. •Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. •The testbed involves a polymeric system of oligopyrroles in the condensed phase. •The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.
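The MMC kernel that the paper maps onto the GPU reduces, in serial form, to the classic Metropolis loop: propose a trial displacement, accept with probability min(1, exp(-βΔE)). The single-coordinate harmonic-well example below is an illustrative assumption, not the paper's oligopyrrole engine:

```python
import math
import random

def metropolis(n_steps, beta=1.0, step=1.0, seed=0):
    """Serial core of the Metropolis Monte Carlo (MMC) algorithm for one
    coordinate in a harmonic well E(x) = x^2/2: propose a uniform trial
    displacement, accept with probability min(1, exp(-beta*dE)).
    (Toy energy model; the paper's engine treats interacting chains.)"""
    rng = random.Random(seed)
    x, e = 0.0, 0.0
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        e_new = 0.5 * x_new * x_new
        de = e_new - e
        if de <= 0.0 or rng.random() < math.exp(-beta * de):
            x, e = x_new, e_new        # accept the trial move
        samples.append(x)              # a rejected move re-counts the old state
    return samples
```

The stationary distribution is the Boltzmann density exp(-βx²/2), so the sample variance converges to 1/β; the GPU version in the paper runs many such update kernels on device memory to avoid host-device data transfer at runtime.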
Sima, Octavian [Physics Department, University of Bucharest, Bucharest-Magurele, P.O. Box MG-11, RO-077125 (Romania)]
2008-08-14T23:59:59.000Z
A comprehensive calibration of gamma-ray spectrometers cannot be obtained on a purely experimental basis. Problems like self-attenuation effects, coincidence-summing effects and non-uniform source distributions (resulting e.g. from neutron self-shielding in NAA) can be efficiently solved by Monte Carlo simulation. The application of the GESPECOR code to these problems is presented and the associated uncertainty is discussed.
The role of diagonalization within a diagonalization/Monte Carlo scheme
Dean Lee
2000-10-31T23:59:59.000Z
We discuss a method called quasi-sparse eigenvector diagonalization which finds the most important basis vectors of the low energy eigenstates of a quantum Hamiltonian. It can operate using any basis, either orthogonal or non-orthogonal, and any sparse Hamiltonian, either Hermitian, non-Hermitian, finite-dimensional, or infinite-dimensional. The method is part of a new computational approach which combines both diagonalization and Monte Carlo techniques.
Radiative transfer in the earth's atmosphere-ocean system using Monte Carlo techniques
Bradley, Paul Andrew
1987-01-01T23:59:59.000Z
Contents (excerpt): ... Transfer Problem; Monte Carlo Method; Assumptions of the Model; Photon Pathlength Emulation Techniques; Sampling Scattering Functions: Angles and Probabilities; Emulation of an Interface; Computing the Radiance by Statistical Estimation; Determination of Direction Cosines After Scattering; Flux Estimation into Detectors; Determination of a New Scattering Point; Photon Trajectories; Direct Flux and Radiance From the Ocean Bottom; Accounting for Multiple Orders of Scattering With the Bottom; Computation ...
Biggs, P.J. (Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston (United States))
1991-10-01T23:59:59.000Z
Shielding calculations of door thicknesses for megavoltage radiotherapy facilities with mazes are generally straightforward. To simplify the calculations, the standard formalism adopts several approximations relating to the average beam path, scattering coefficients, and the mean energy of the spectrum of scattered radiation. To test the accuracy of these calculations, the Monte Carlo program ITS was applied to this problem by determining the dose and energy spectrum of the radiation at the door for 4- and 10-MV bremsstrahlung beams incident on a phantom at isocenter. This was performed for two mazes, one termed 'standard' and the other a shorter maze where the primary beam is incident on the wall adjacent to the door. The peak of the photon-energy spectrum at the door was found to be the same for both types of maze, independent of primary beam energy and also, in the case of the conventional maze, of the primary beam orientation. The spectrum was harder for the short maze and for 10 MV vs. 4 MV. The required thickness of the lead door for the short-maze configuration was 1.5 cm for 10 MV and 1.2 cm for 4 MV, vs. less than approximately 1 mm for a conventional maze. For the conventional maze, the Monte Carlo calculation predicts the dose at the door to be lower than given by NCRP 49 and NCRP 51 by about a factor of 2 at 4 MV but the same at 10 MV. For the short maze, the Monte Carlo calculation predicts the dose to be a factor of 3 lower for 4 MV and about a factor of 1.5 lower for 10 MV. Experimental results support the Monte Carlo findings for the short maze.
Application of diffusion Monte Carlo to materials dominated by van der Waals interactions
Benali, Anouar [Argonne National Laboratory (ANL)]; Shulenburger, Luke [Sandia National Laboratory (SNL)]; Romero, Nichols [Argonne National Laboratory (ANL)]; Kim, Jeongnim [ORNL]; Von Lilienfeld, Anatole [University of Basel]
2014-01-01T23:59:59.000Z
Van der Waals forces are notoriously difficult to account for from first principles. We perform extensive calculations to assess the usefulness and validity of diffusion quantum Monte Carlo when applied to van der Waals forces. We present results for noble gas solids and clusters, archetypal van der Waals dominated assemblies, as well as a relevant pi-pi stacking supramolecular complex: DNA with the intercalating anti-cancer drug Ellipticine.
A unified Monte Carlo approach to fast neutron cross section data evaluation.
Smith, D.; Nuclear Engineering Division
2008-03-03T23:59:59.000Z
A unified Monte Carlo (UMC) approach to fast neutron cross section data evaluation that incorporates both model-calculated and experimental information is described. The method is based on applications of Bayes Theorem and the Principle of Maximum Entropy as well as on fundamental definitions from probability theory. This report describes the formalism, discusses various practical considerations, and examines a few numerical examples in some detail.
Hybrid Monte Carlo with Wilson Dirac operator on the Fermi GPU
Chakrabarty, Abhijit; Majumdar, Pushan
2012-01-01T23:59:59.000Z
In this article we present our implementation of a Hybrid Monte Carlo algorithm for Lattice Gauge Theory using two degenerate flavours of Wilson-Dirac fermions on a Fermi GPU. We find that using registers instead of global memory speeds up the code by almost an order of magnitude. To map the array variables to scalars, so that the compiler puts them in the registers, we use code generators. Our final program is more than 10 times faster than a generic single CPU.
Hiatt, Matthew Torgerson
2009-06-02T23:59:59.000Z
This thesis describes a tool called TXSAMC (Transport Cross Sections from Applied Monte Carlo) that produces shielded and homogenized multigroup cross sections for small fast reactor systems. The motivation for this tool comes from a desire...
Stanley, H. Eugene
Liquid-Liquid Phase Transition in Confined Water: A Monte Carlo Study. Martin Meyer and H. Eugene Stanley, Center for Polymer Studies and Department of Physics, Boston University, Boston, Massachusetts.
Monte Carlo depletion calculations using VESTA 2.1 new features and perspectives
Haeck, W.; Cochet, B.; Aguiar, L. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay-aux-Roses Cedex (France)
2012-07-01T23:59:59.000Z
VESTA is a Monte Carlo depletion interface code that is currently under development at IRSN. With VESTA, the emphasis lies on both accuracy and performance, so that the code will be capable of providing accurate and complete answers in an acceptable amount of time compared to other Monte Carlo depletion codes. From its inception, VESTA has been intended to be a generic interface code, ultimately capable of using any Monte Carlo code or depletion module and of being tailored to the user's needs. A new version of the code (version 2.1.x) will be released in 2012. The most important additions to the code are a burnup-dependent isomeric branching ratio treatment, to improve the prediction of metastable nuclides such as $^{242m}$Am, and the integration of the PHOENIX point depletion module (also developed at IRSN) to overcome some of the limitations of the ORIGEN 2.2 module. Extracting and visualising the basic results, as well as calculating physical quantities or other data that can be derived from the basic output provided by VESTA, will be the task of the AURORA depletion analysis tool, which will be released at the same time as VESTA 2.1.x. The experimental validation database was also extended for this new version and now contains a total of 35 samples with chemical assay data and 34 assembly decay heat measurements. (authors)
Nonequilibrium candidate Monte Carlo: A new tool for efficient equilibrium simulation
Nilmeier, Jerome P.; Crooks, Gavin E.; Minh, David D. L.; Chodera, John D.
2011-11-08T23:59:59.000Z
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
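The work-based acceptance rule the abstract describes can be sketched for a toy one-dimensional target: a control parameter is driven out and back along a time-symmetric protocol, protocol work is accumulated from the parameter switches only, and the whole trajectory is accepted with min(1, exp(-βw)). The harmonic potential, the 0 → 1 → 0 protocol, and all parameters below are assumptions of this sketch, not the paper's systems:

```python
import math
import random

def ncmc_move(x0, beta, rng, n_switch=10, n_relax=2, step=0.5):
    """One nonequilibrium candidate Monte Carlo move for the toy target
    exp(-beta*U(x, 0)) with U(x, lam) = 0.5*(x - lam)**2. The well center
    lam is driven 0 -> 1 -> 0; the protocol work w accumulates only the
    energy jumps caused by the lam switches; the candidate is accepted
    with min(1, exp(-beta*w)). (Toy potential/protocol are assumptions.)"""
    U = lambda x, lam: 0.5 * (x - lam) ** 2

    def relax(x, lam):                  # Metropolis propagation at fixed lam
        for _ in range(n_relax):
            x_try = x + rng.uniform(-step, step)
            de = U(x_try, lam) - U(x, lam)
            if de <= 0.0 or rng.random() < math.exp(-beta * de):
                x = x_try
        return x

    up = [i / n_switch for i in range(1, n_switch + 1)]
    lams = up + up[-2::-1] + [0.0]      # palindromic protocol 0 -> 1 -> 0
    x = relax(x0, 0.0)                  # initial relaxation keeps the
    lam, w = 0.0, 0.0                   # protocol time-symmetric
    for lam_new in lams:
        w += U(x, lam_new) - U(x, lam)  # work done by switching lam
        lam = lam_new
        x = relax(x, lam)
    if w <= 0.0 or rng.random() < math.exp(-beta * w):
        return x                        # candidate accepted
    return x0                           # rejected: restore the old state
```

Because the propagation kernels obey detailed balance at each fixed λ and the protocol is its own reverse, accepting with the exponentiated negative protocol work preserves the equilibrium distribution at λ = 0, while the driven excursion produces large, decorrelating displacements.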
Spatial homogenization of thermal feedback regions in Monte Carlo reactor calculations
Hanna, B. R.; Gill, D. F.; Griesheimer, D. P. [Bettis Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P.O. Box 79, West Mifflin, PA 15122 (United States)]
2012-07-01T23:59:59.000Z
An integrated thermal-hydraulic feedback module has previously been developed for the Monte Carlo transport solver, MC21. The module incorporates a flexible input format that allows the user to describe heat transfer and coolant flow paths within the geometric model at any level of spatial detail desired. The effect that varying levels of spatial homogenization of the thermal regions have on the accuracy of the Monte Carlo simulations is examined in this study. Six thermal feedback mappings are constructed from the same geometric model of the Calvert Cliffs core. The spatial homogenization of the thermal regions is varied, giving each scheme a different level of detail, and the adequacy of the spatial homogenization is determined based on the eigenvalue produced by each Monte Carlo calculation. The purpose of these numerical experiments is to determine the level of detail necessary to accurately capture the thermal feedback effect on reactivity. Several different core models are considered: axial-flow only, axial and lateral flow, asymmetry due to control rod insertion, and fuel heating (temperature-dependent cross sections). The thermal results generated by the MC21 thermal feedback module are consistent with expectations. Based upon the numerical experiments conducted, it is concluded that the amount of spatial detail necessary to accurately capture the feedback effect on reactivity is relatively small. Homogenization at the assembly level for the Calvert Cliffs PWR model results in a power defect similar to that calculated with individual pin-cells modeled as explicit thermal regions. (authors)
Monte Carlo Study of Patchy Nanostructures Self-Assembled from a Single Multiblock Chain
Jakub Krajniak; Michal Banaszak
2014-10-15T23:59:59.000Z
We present a lattice Monte Carlo simulation for a multiblock copolymer chain of length N=240 and microarchitecture $(10-10)_{12}$. The simulation was performed using the Monte Carlo method with the Metropolis algorithm. We measured the average energy, heat capacity, mean squared radius of gyration, and the histogram of the cluster count distribution. These quantities were investigated as a function of temperature and of the incompatibility between segments, quantified by the parameter $\omega$. We determined the temperature of the coil-globule transition and constructed the phase diagram, which exhibits a variety of patchy nanostructures. The presented results are in qualitative agreement with those of the off-lattice Monte Carlo method reported earlier, with a significant exception for small incompatibilities $\omega$ and low temperatures, where 3-cluster patchy nanostructures are observed in contrast to the 2-cluster structures observed for the off-lattice $(10-10)_{12}$ chain. We attribute this difference to the considerable stiffness of lattice chains in comparison to off-lattice chains.
Monte Carlo Studies of Identified Two-particle Correlations in p-p and Pb-Pb Collisions
G. Bencedi; G. G. Barnaföldi; L. Molnar
2014-03-21T23:59:59.000Z
Azimuthal particle correlations have been extensively studied in the past at various collider energies in p-p, p-A, and A-A collisions. Hadron-correlation measurements in heavy-ion collisions have mainly focused on studies of collective (flow) effects at low-$p_T$ and parton energy loss via jet quenching in the high-$p_T$ regime. This was usually done without event-by-event particle identification. In this paper, we present two-particle correlations with identified trigger hadrons and identified associated hadrons at mid-rapidity in Monte Carlo generated events. The primary purpose of this study was to investigate the effect of quantum number conservation and the flavour balance during parton fragmentation and hadronization. The simulated p-p events were generated with PYTHIA 6.4 with the Perugia-0 tune at $\\sqrt{s}=7$ TeV. HIJING was used to generate $0-10\\%$ central Pb-Pb events at $\\sqrt{s_{\\rm NN}}=2.76$ TeV. We found that the extracted identified associated hadron spectra for charged pion, kaon, and proton show identified trigger-hadron dependent splitting. Moreover, the identified trigger-hadron dependent correlation functions vary in different $p_T$ bins, which may show the presence of collective/nuclear effects.
A Monte-Carlo Method without Grid to Compute the Exchange Coefficient in the Double Porosity Model
Boyer, Edmond
Classification: 76S05 (65C05 76M35). Published in Monte Carlo Methods and Applications 8(2), 129-147 (2002).
The ATLAS Fast Monte Carlo Production Chain Project
Jansky, Roland Wolfgang; The ATLAS collaboration
2015-01-01T23:59:59.000Z
During the last years ATLAS has successfully deployed a new integrated simulation framework (ISF) which allows a flexible mixture of full and fast detector simulation techniques within the processing of one event. With the ISF, the simulation execution speed could be increased by up to a factor of 100, making the subsequent digitisation and reconstruction steps the dominant contributions to the CPU cost of MC production. The slowest components of both digitisation and reconstruction lie within the Inner Detector: in digitisation because of the complex signal modelling needed to emulate the detector readout, and in reconstruction because of the combinatorial nature of the problem to be solved. Alternative fast approaches have been developed for these components: for the silicon-based detectors, a simpler geometrical clustering approach has been deployed, replacing the charge drift emulation in the standard digitisation modules, and achieves very high accuracy in describing the standard output. For the Inner Detector tra...
Radiation doses in cone-beam breast computed tomography: A Monte Carlo simulation study
Yi Ying; Lai, Chao-Jen; Han Tao; Zhong Yuncheng; Shen Youtao; Liu Xinming; Ge Shuaiping; You Zhicheng; Wang Tianpeng; Shaw, Chris C. [Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)
2011-02-15T23:59:59.000Z
Purpose: In this article, we describe a method to estimate the spatial dose variation, average dose and mean glandular dose (MGD) for a real breast using Monte Carlo simulation based on cone beam breast computed tomography (CBBCT) images. We present and discuss the dose estimation results for 19 mastectomy breast specimens, 4 homogeneous breast models, 6 ellipsoidal phantoms, and 6 cylindrical phantoms. Methods: To validate the Monte Carlo method for dose estimation in CBBCT, we compared the Monte Carlo dose estimates with the thermoluminescent dosimeter measurements at various radial positions in two polycarbonate cylinders (11- and 15-cm in diameter). Cone-beam computed tomography (CBCT) images of 19 mastectomy breast specimens, obtained with a bench-top experimental scanner, were segmented and used to construct 19 structured breast models. Monte Carlo simulation of CBBCT with these models was performed and used to estimate the point doses, average doses, and mean glandular doses for unit open air exposure at the iso-center. Mass based glandularity values were computed and used to investigate their effects on the average doses as well as the mean glandular doses. Average doses for 4 homogeneous breast models were estimated and compared to those of the corresponding structured breast models to investigate the effect of tissue structures. Average doses for ellipsoidal and cylindrical digital phantoms of identical diameter and height were also estimated for various glandularity values and compared with those for the structured breast models. Results: The absorbed dose maps for structured breast models show that doses in the glandular tissue were higher than those in the nearby adipose tissue. Estimated average doses for the homogeneous breast models were almost identical to those for the structured breast models (p=1). 
Normalized average doses estimated for the ellipsoidal phantoms were similar to those for the structured breast models (root mean square (rms) percentage difference=1.7%; p=0.01), whereas those for the cylindrical phantoms were significantly lower (rms percentage difference=7.7%; p<0.01). Normalized MGDs were found to decrease with increasing glandularity. Conclusions: Our results indicate that it is sufficient to use homogeneous breast models derived from CBCT generated structured breast models to estimate the average dose. This investigation also shows that ellipsoidal digital phantoms of similar dimensions (diameter and height) and glandularity to actual breasts may be used to represent a real breast to estimate the average breast dose with Monte Carlo simulation. We have also successfully demonstrated the use of structured breast models to estimate the true MGDs and shown that the normalized MGDs decreased with the glandularity as previously reported by other researchers for CBBCT or mammography.
Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes
Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.
2002-09-11T23:59:59.000Z
The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall cell model is an equivalent water cylinder for all three codes, but with different internal scoring geometries: hollow cylinders for PENELOPE and MCNP, and spheres for PITS. A cylindrical cell geometry with hollow-cylinder scoring volumes was initially selected for PENELOPE and MCNP because it better represents the actual shape and dimensions of a cell and improves computer-time efficiency compared to spherical internal volumes. Some of the energy-transfer points that constitute a radiation track may fall in the space between spheres and would therefore lie outside the spherical scoring volumes. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computing time of this code compared with event-by-event Monte Carlo codes such as PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation even with such small geometries and low energies, far below the regime for which the code was designed. Further work (initiated in summer 2002) is still needed, however, to create a user code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes, and microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.
Charged-Particle Thermonuclear Reaction Rates: I. Monte Carlo Method and Statistical Distributions
Richard Longland; Christian Iliadis; Art Champagne; Joe Newton; Claudio Ugalde; Alain Coc; Ryan Fitzgerald
2010-04-23T23:59:59.000Z
A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates reported in the literature up to now have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions, the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended "classical" rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless "minimum" (or "lower limit") and "maximum" (or "upper limit") reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters $\mu$ and $\sigma$. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this series (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this series (Paper III). In the fourth paper of this series (Paper IV) we compare our new reaction rates to previous results.
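The sampling scheme described (draw each input quantity from its probability density, recompute the rate, and report the 0.16/0.50/0.84 quantiles of the resulting distribution) can be sketched in a few lines. This toy version uses hypothetical inputs and a simple product of lognormal factors standing in for the actual rate formula; the median/factor-uncertainty parameterization is likewise an assumption for illustration.

```python
import math
import random

def lognormal_sample(rng, median, factor_uncertainty):
    """Sample a lognormal quantity given its median and a factor
    uncertainty f = exp(sigma) (hypothetical convention)."""
    mu = math.log(median)
    sigma = math.log(factor_uncertainty)
    return math.exp(rng.gauss(mu, sigma))

def monte_carlo_rate(rng, inputs, n_samples=20000):
    """Toy total rate: a product of independent lognormal inputs
    (stand-ins for, e.g., resonance strengths). Returns the rate at the
    0.16, 0.50 and 0.84 quantiles of the sampled distribution."""
    rates = []
    for _ in range(n_samples):
        rate = 1.0
        for median, f in inputs:
            rate *= lognormal_sample(rng, median, f)
        rates.append(rate)
    rates.sort()
    q = lambda p: rates[int(p * n_samples)]
    return q(0.16), q(0.50), q(0.84)

rng = random.Random(42)
low, med, high = monte_carlo_rate(rng, [(2.0, 1.3), (5.0, 1.2)])
# A product of lognormals is lognormal, with median 2 * 5 = 10.
```

The reported "low" and "high" rates are then genuine quantiles of the output distribution, which is the statistical meaning the paper argues the classical minimum/maximum rates lack.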
Monte Carlo calculations of the physical properties of RDX, {beta}-HMX, and TATB
Sewell, T.D.
1997-09-01T23:59:59.000Z
Atomistic Monte Carlo simulations in the NpT ensemble are used to calculate the physical properties of crystalline RDX, {beta}-HMX, and TATB. Among the issues being considered are the effects of various treatments of the intermolecular potential, inclusion of intramolecular flexibility, and simulation size dependence of the results. Calculations of the density, lattice energy, and lattice parameters are made over a wide range of pressures; thereby allowing for predictions of the bulk and linear coefficients of isothermal expansion of the crystals. Comparison with experiment is made where possible.
SIM-RIBRAS: A Monte-Carlo simulation package for RIBRAS system
Leistenschneider, E.; Lepine-Szily, A.; Lichtenthaeler, R. [Departamento de Fisica Nuclear, Instituto de Fisica, Universidade de Sao Paulo (Brazil)
2013-05-06T23:59:59.000Z
SIM-RIBRAS is a ROOT-based Monte-Carlo simulation tool designed to help RIBRAS users with experiment planning and with enhancing and characterizing the experimental setup. It is divided into two main programs: CineRIBRAS, which addresses beam kinematics, and SolFocus, which addresses beam optics. SIM-RIBRAS replaces other methods and programs used in the past, providing more complete and accurate results while requiring much less manual labour. Moreover, users can easily modify the code to meet the specific requirements of an experiment.
Monte-Carlo study of the phase transition in the AA-stacked bilayer graphene
A. A. Nikolaev; M. V. Ulybyshev
2014-12-04T23:59:59.000Z
The tight-binding model of AA-stacked bilayer graphene with screened electron-electron interactions has been studied using Hybrid Monte Carlo simulations on the original double-layer hexagonal lattice. The instantaneous screened Coulomb potential is taken into account via a Hubbard-Stratonovich transformation. G-type antiferromagnetic ordering has been studied, and a phase transition with spontaneous generation of a mass gap has been observed. The dependence of the antiferromagnetic condensate on the on-site electron-electron interaction is examined.
Temperature-extrapolation method for Implicit Monte Carlo - Radiation hydrodynamics calculations
McClarren, R. G. [Department of Nuclear Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77802 (United States); Urbatsch, T. J. [XTD-5: Air Force Systems, Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, NM 87545 (United States)]
2013-07-01T23:59:59.000Z
We present a method for implementing temperature extrapolation in Implicit Monte Carlo solutions to radiation hydrodynamics problems. The method is based on a BDF-2 type integration to estimate the change in material temperature over a time step. We present results for radiation-only problems in an infinite medium and for a 2-D Cartesian hohlraum problem. Additionally, radiation hydrodynamics simulations are presented for an RZ hohlraum problem and a related 3-D problem. Our results indicate that improvements in noise and general behavior are possible. We present considerations for future investigations and implementations. (authors)
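The BDF-2 type integration mentioned above can be illustrated on a linear test problem. This is a minimal sketch for a toy cooling equation dy/dt = -k*y, not the coupled IMC material-energy update; the function name and parameters are hypothetical.

```python
import math

def bdf2_linear_decay(y0, k, dt, n_steps):
    """Integrate dy/dt = -k*y with the second-order backward
    differentiation formula (BDF-2): 3*y_new - 4*y + y_old = -2*dt*k*y_new.
    For a linear right-hand side the implicit solve reduces to
    y_new = (4*y - y_old) / (3 + 2*dt*k). The first step is
    bootstrapped with backward Euler."""
    y_old = y0
    y = y0 / (1.0 + dt * k)  # backward Euler start-up step
    for _ in range(n_steps - 1):
        y_old, y = y, (4.0 * y - y_old) / (3.0 + 2.0 * dt * k)
    return y

# 100 steps of dt = 0.01 reach t = 1; compare with the exact exp(-1).
approx = bdf2_linear_decay(y0=1.0, k=1.0, dt=0.01, n_steps=100)
exact = math.exp(-1.0)
```

The scheme's second-order accuracy is what makes it attractive for extrapolating the material temperature over a time step.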
Alan M. Watson; William J. Henney
2001-08-30T23:59:59.000Z
We describe an efficient Monte Carlo algorithm for a restricted class of scattering problems in radiation transfer. This class includes many astrophysically interesting problems, including the scattering of ultraviolet and visible light by grains. The algorithm correctly accounts for multiply-scattered light. We describe the algorithm, present a number of important optimizations, and explicitly show how the algorithm can be used to estimate quantities such as the emergent and mean intensity. We present two test cases, examine the importance of the optimizations, and show that this algorithm can be usefully applied to optically thin problems, a regime sometimes considered limited to explicit single-scattering plus attenuation approximations.
A new approach to hot particle dosimetry using a Monte Carlo transport code
Busche, Donna Marie
1989-01-01T23:59:59.000Z
Ci-hrs. This value assumes a threshold dose of 2000 rads to an area of 0.1 cm², at a depth of 100 µm (NCRP 1988). The purpose of this research was to evaluate the current methods used in industry to assess the doses from hot particles. A Monte Carlo electron... radioactivity being released from the site. Frisking, portal monitors, and step-off pads are important health physics (HP) areas and should involve overview and supervision. IDENTIFICATION To properly assess the dose from these hot particles, the source strength, type...
A VAX version of the coupled Monte Carlo transport codes HETC and MORSE-CGA
Sanna, R.S.
1990-12-01T23:59:59.000Z
The three-dimensional Monte Carlo transport codes, HETC and MORSE-CGA, are distributed by the Radiation Shielding Information Center at Oak Ridge National Laboratory. These codes, written for IBM-3033 computers, have been installed on the Environmental Measurements Laboratory's VAX/11-750 computer for operation in a coupled mode to study the transport of neutrons over the energy range from thermal to several GeV. This report is a guide to their use on the VAX/11-750 computer. 26 refs., 6 figs., 14 tabs.
Perera, Meewanage Dilina N [ORNL; Li, Ying Wai [ORNL; Eisenbach, Markus [ORNL; Vogel, Thomas [Los Alamos National Laboratory (LANL); Landau, David P [University of Georgia, Athens, GA
2015-01-01T23:59:59.000Z
We describe the study of the thermodynamics of materials using replica-exchange Wang-Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang-Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first-principles calculations. We demonstrate that our framework leads to a significant speedup without compromising accuracy or precision, and facilitates the study of much larger systems than is possible with its serial counterpart.
Monte Carlo Generators for Studies of the 3D Structure of the Nucleon
Avagyan, Harut A. [JLAB
2015-01-01T23:59:59.000Z
Extraction of the transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires the development of a self-consistent analysis framework that accounts for evolution effects and allows control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte-Carlo generators accounting for TMD evolution effects and for spin-orbit and quark-gluon correlations will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.
Probability of initiation and extinction in the Mercury Monte Carlo code
McKinley, M. S.; Brantley, P. S. [Lawrence Livermore National Laboratory, 7000 East Ave., Livermore, CA 94551 (United States)
2013-07-01T23:59:59.000Z
A Monte Carlo method for computing the probability of initiation has previously been implemented in Mercury. Recently, a new method based on the probability of extinction has been implemented as well. The methods share similarities, from counting progeny to cycling in time, but they also differ in aspects such as population control and statistical uncertainty reporting. The two methods agree very well for several test problems. Since each method has advantages and disadvantages, we currently recommend that both methods be used to compute the probability of criticality. (authors)
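The probability-of-extinction idea (follow a chain of progeny until it either dies out or clearly establishes itself) can be illustrated with a simple Galton-Watson branching process. The offspring distribution and thresholds below are hypothetical toy numbers, not Mercury's physics.

```python
import random

def chain_dies_out(rng, offspring_probs, max_generations=60, cap=1000):
    """Follow one fission chain started by a single neutron; return True
    if it goes extinct. offspring_probs[k] is the probability that a
    neutron yields k next-generation neutrons (toy numbers)."""
    ks = range(len(offspring_probs))
    population = 1
    for _ in range(max_generations):
        if population == 0:
            return True
        if population > cap:  # treat a large chain as established
            return False
        # Sample the number of progeny for every neutron this generation.
        population = sum(rng.choices(ks, weights=offspring_probs,
                                     k=population))
    return population == 0

rng = random.Random(1)
# Supercritical toy chain: 0 neutrons w.p. 0.3, 2 neutrons w.p. 0.7.
probs = [0.3, 0.0, 0.7]
trials = 5000
p_ext = sum(chain_dies_out(rng, probs) for _ in range(trials)) / trials
# Analytically, the extinction probability q solves q = 0.3 + 0.7*q**2,
# whose smaller root is 3/7.
```

The complementary quantity 1 - p_ext is the probability of initiation, which is why the two Mercury methods can be cross-checked against each other.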
Nauchi, Y.; Kameyama, T. [Central Research Inst., Electric Power Industry, 2-11-1 Iwado-Kita, Komae-shi, Tokyo 201-8511 (Japan)
2006-07-01T23:59:59.000Z
A new method is proposed to estimate the effective fraction of delayed neutrons emitted from precursors categorized into 6 groups by decay constant. Instead of the adjoint flux $\Phi^*$, the expected number of fission neutrons in subsequent generations, M, is applied as a weight function [1]. The introduction of M enables calculation of the fraction with the continuous-energy Monte Carlo method. An algorithm for the calculation of the fraction is established and implemented in the MCNP-5 code. The method is verified using reactor period data obtained in reactivity measurements. (authors)
Finite-temperature quantum Monte Carlo study of the one-dimensional polarized Fermi gas
Wolak, M. J. [Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); Rousseau, V. G. [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Miniatura, C. [Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); INLN, Universite de Nice-Sophia Antipolis, CNRS, 1361 route des Lucioles, F-06560 Valbonne (France); Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); Gremaud, B. [Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); Laboratoire Kastler Brossel, UPMC-Paris 6, ENS, CNRS, 4 Place Jussieu, F-75005 Paris (France); Scalettar, R. T. [Physics Department, University of California, Davis, California 95616 (United States); Batrouni, G. G. [Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); INLN, Universite de Nice-Sophia Antipolis, CNRS, 1361 route des Lucioles, F-06560 Valbonne (France)
2010-07-15T23:59:59.000Z
Quantum Monte Carlo (QMC) techniques are used to provide an approximation-free investigation of the phases of the one-dimensional attractive Hubbard Hamiltonian in the presence of population imbalance. The temperature at which the ''Fulde-Ferrell-Larkin-Ovchinnikov'' (FFLO) phase is destroyed by thermal fluctuations is determined as a function of the polarization. It is shown that the presence of a confining potential does not dramatically alter the FFLO regime and that recent experiments on trapped atomic gases likely lie just within the stable temperature range.
Monte-Carlo Simulation of Exclusive Channels in e+e- Annihilation at Low Energy
D. Anipko; S. Eidelman; A. Pak
2003-12-25T23:59:59.000Z
A software package for Monte-Carlo simulation of exclusive e+e- annihilation channels, written in C++ for Linux/Solaris platforms, has been developed. It incorporates matrix elements for several mechanisms of multipion production in a model of sequential two- and three-body resonance decays. Possible charge states of intermediate and final particles are accounted for automatically under the assumption of isospin conservation. Interference effects can be taken into account. The package structure allows adding new matrix elements written in a gauge-invariant form.
A Monte Carlo study of the distribution of parameter estimators in a dual exponential decay model
Garcia, Raul
1969-01-01T23:59:59.000Z
of an estimate of the reliability of the parameter estimates calculated. In 1965, Bell and Garcia [2] developed a computer program which permits a solution of the parameters without the time-consuming effort of manual calculations. The same year, Rossing [3...A MONTE CARLO STUDY OF THE DISTRIBUTION OF PARAMETER ESTIMATORS IN A DUAL EXPONENTIAL DECAY MODEL A Thesis by RAUL GARCIA Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree...
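A Monte Carlo study of estimator distributions of this kind proceeds generically: simulate many synthetic "experiments" from the dual exponential decay model and collect the estimate from each. The toy version below uses the closed-form sample-mean lifetime estimator rather than the thesis's nonlinear fit, and all parameter values are hypothetical.

```python
import random
import statistics

def sample_dual_decay(rng, a, tau1, tau2):
    """Draw one decay time from a two-component exponential mixture:
    fraction a decays with lifetime tau1, the rest with tau2."""
    tau = tau1 if rng.random() < a else tau2
    return rng.expovariate(1.0 / tau)

def estimator_distribution(rng, a, tau1, tau2, n_events, n_replicates):
    """Monte Carlo study of the sample-mean lifetime estimator:
    repeat the 'experiment' many times and collect the estimates."""
    estimates = []
    for _ in range(n_replicates):
        times = [sample_dual_decay(rng, a, tau1, tau2)
                 for _ in range(n_events)]
        estimates.append(sum(times) / n_events)
    return estimates

rng = random.Random(3)
est = estimator_distribution(rng, a=0.5, tau1=1.0, tau2=5.0,
                             n_events=200, n_replicates=500)
center = statistics.mean(est)   # should sit near the true mean lifetime
spread = statistics.stdev(est)  # empirical reliability of the estimator
# True mean lifetime is 0.5*1 + 0.5*5 = 3.
```

The histogram of `est` is exactly the "distribution of parameter estimators" the study characterizes; its spread quantifies the reliability that manual calculations could not provide.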
Kinetic lattice Monte Carlo simulations of interdiffusion in strained silicon germanium alloys
Chen, Renyu; Dunham, Scott T.
2010-03-03T23:59:59.000Z
Point-defect-mediated diffusion processes are investigated in strained SiGe alloys using the kinetic lattice Monte Carlo (KLMC) simulation technique. The KLMC simulator incorporates an augmented lattice domain and includes defect structures, atomistic hopping mechanisms, and the stress dependence of transition rates obtained from density functional theory calculations. Vacancy-mediated interdiffusion in strained SiGe alloys is analyzed, and the stress effect caused by the induced strain of germanium is quantified separately from that due to germanium-vacancy binding. The results indicate that both effects have a substantial impact on interdiffusion. © 2010 American Vacuum Society.
Thermonuclear reaction rate of $^{18}$Ne($\alpha$,$p$)$^{21}$Na from Monte-Carlo calculations
P. Mohr; R. Longland; C. Iliadis
2014-12-14T23:59:59.000Z
The $^{18}$Ne($\\alpha$,$p$)$^{21}$Na reaction impacts the break-out from the hot CNO-cycles to the $rp$-process in type I X-ray bursts. We present a revised thermonuclear reaction rate, which is based on the latest experimental data. The new rate is derived from Monte-Carlo calculations, taking into account the uncertainties of all nuclear physics input quantities. In addition, we present the reaction rate uncertainty and probability density versus temperature. Our results are also consistent with estimates obtained using different indirect approaches.
An Evaluation of Monte Carlo Simulations of Neutron Multiplicity Measurements of Plutonium Metal
Mattingly, John [North Carolina State University; Miller, Eric [University of Michigan; Solomon, Clell J. Jr. [Los Alamos National Laboratory; Dennis, Ben [University of Michigan; Meldrum, Amy [University of Michigan; Clarke, Shaun [University of Michigan; Pozzi, Sara [University of Michigan
2012-06-21T23:59:59.000Z
In January 2009, Sandia National Laboratories conducted neutron multiplicity measurements of a polyethylene-reflected plutonium metal sphere. Over the past 3 years, those experiments have been collaboratively analyzed using Monte Carlo simulations conducted by the University of Michigan (UM), Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and North Carolina State University (NCSU). Monte Carlo simulations of the experiments consistently overpredict the mean and variance of the measured neutron multiplicity distribution. This paper presents a sensitivity study conducted to evaluate the potential sources of the observed errors. MCNPX-PoliMi simulations of plutonium neutron multiplicity measurements exhibited systematic over-prediction of the neutron multiplicity distribution, and the over-prediction tended to increase with increasing multiplication. MCNPX-PoliMi had previously been validated only against very low multiplication benchmarks. We conducted sensitivity studies to try to identify the cause(s) of the simulation errors and eliminated all of the potential causes we identified except for the Pu-239 $\bar{\nu}$. A very small change (-1.1%) in the Pu-239 $\bar{\nu}$ dramatically improved the accuracy of the MCNPX-PoliMi simulation for all 6 measurements. This observation is consistent with the trend observed in the bias exhibited by the MCNPX-PoliMi simulations: a very small error in $\bar{\nu}$ is 'magnified' by increasing multiplication. We applied a scalar adjustment to the Pu-239 $\bar{\nu}$ (independent of neutron energy); an energy-dependent adjustment is probably more appropriate.
A Deterministic-Monte Carlo Hybrid Method for Time-Dependent Neutron Transport Problems
Justin Pounders; Farzad Rahnema
2001-10-01T23:59:59.000Z
A new deterministic-Monte Carlo hybrid solution technique is derived for the time-dependent transport equation. This approach is based on dividing the time domain into a number of coarse intervals and expanding the transport solution in a series of polynomials within each interval. The solution within each interval can be represented in terms of arbitrary source terms by using precomputed response functions. In the current work, the time-dependent response function computations are performed with the Monte Carlo method, while the global time-step march is performed deterministically. This work extends previous work by coupling the time-dependent expansions to space- and angle-dependent expansions to fully characterize the 1D transport response/solution. More generally, this approach represents an incremental extension of the steady-state coarse-mesh transport method based on global-local decompositions of large neutron transport problems. A homogeneous slab is discussed as an example of the new developments.
The Proton Therapy Nozzles at Samsung Medical Center: A Monte Carlo Simulation Study using TOPAS
Chung, Kwangzoo; Kim, Dae-Hyun; Ahn, Sunghwan; Han, Youngyih
2015-01-01T23:59:59.000Z
To expedite the commissioning process of the proton therapy system at Samsung Medical Center (SMC), we have developed a Monte Carlo simulation model of the proton therapy nozzles using TOPAS. The SMC proton therapy center has two gantry rooms with different types of nozzles: a multi-purpose nozzle and a dedicated scanning nozzle. Each nozzle has been modeled in detail following the geometry information provided by the manufacturer, Sumitomo Heavy Industries, Ltd. For this purpose, novel features of TOPAS, such as the time feature and the ridge filter class, have been used, and appropriate physics models for proton nozzle simulation were defined. Dosimetric properties such as the percent depth dose curve, spread-out Bragg peak (SOBP), and beam spot size have been simulated and verified against measured beam data. Beyond the Monte Carlo nozzle modeling, we have developed an interface between TOPAS and the treatment planning system (TPS), RayStation. An exported RT plan from the TPS has been interpreted by th...
Bias-Variance Techniques for Monte Carlo Optimization: Cross-validation for the CE Method
Rajnarayan, Dev
2008-01-01T23:59:59.000Z
In this paper, we examine the CE method in the broad context of Monte Carlo Optimization (MCO) and Parametric Learning (PL), a type of machine learning. A well-known overarching principle used to improve the performance of many PL algorithms is the bias-variance tradeoff. This tradeoff has been used to improve PL algorithms ranging from Monte Carlo estimation of integrals, to linear estimation, to general statistical estimation. Moreover, as described previously, MCO is very closely related to PL. Owing to this similarity, the bias-variance tradeoff affects MCO performance just as it does PL performance. In this article, we exploit the bias-variance tradeoff to enhance the performance of MCO algorithms. We use cross-validation, a technique based on the bias-variance tradeoff, to significantly improve the performance of the Cross-Entropy (CE) method, an MCO algorithm. In previous work we have confirmed that other PL techniques improve the performance of other MCO algorithms. We conclude that ...
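The baseline Cross-Entropy method the paper builds on can be stated compactly: sample candidates from a parametric distribution, keep an elite fraction, and refit the distribution to the elites. Below is a minimal 1-D sketch with a Gaussian sampling distribution and a quadratic test objective; it is the plain CE method, not the paper's cross-validated variant, and all names and parameter values are illustrative.

```python
import random
import statistics

def cross_entropy_minimize(f, mu0, sigma0, n_samples=100,
                           elite_frac=0.2, n_iters=40, seed=0):
    """Minimal Cross-Entropy method for a 1-D objective: sample from
    N(mu, sigma), keep the elite (lowest-f) fraction, and refit the
    Gaussian to the elites. Returns the final mean as the estimate."""
    rng = random.Random(seed)
    mu, sigma = mu0, sigma0
    n_elite = max(2, int(elite_frac * n_samples))
    for _ in range(n_iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        xs.sort(key=f)                      # best candidates first
        elites = xs[:n_elite]
        mu = statistics.mean(elites)        # refit the sampling density
        sigma = max(statistics.stdev(elites), 1e-6)
    return mu

# Minimize (x - 2)^2; the CE mean should converge near x = 2.
best = cross_entropy_minimize(lambda x: (x - 2.0) ** 2,
                              mu0=-5.0, sigma0=3.0)
```

The refitting step is a statistical estimation problem, which is precisely where the paper's bias-variance machinery (e.g. cross-validating the fit) enters.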
Monte Carlo uncertainty reliability and isotope production calculations for a fast reactor
Miles, T.L.
1992-01-01T23:59:59.000Z
Statistical uncertainties in Monte Carlo calculations are typically determined by the first and second moments of the tally. For certain types of calculations, there is concern that the uncertainty estimate is significantly non-conservative. This is typically seen in reactor eigenvalue problems, where the uncertainty estimate is aggravated by the generation-to-generation fission source. It has been speculated that optimization of the random walk, through biasing techniques, may increase the non-conservative nature of the uncertainty estimate. A series of calculations are documented here which quantify the reliability of the Monte Carlo Neutron and Photon (MCNP) mean and uncertainty estimates by comparing these estimates to the true mean. These calculations were made with a liquid metal fast reactor model, but every effort was made to isolate the statistical nature of the uncertainty estimates so that the analysis of the reliability of the MCNP estimates should be relevant for small thermal reactors as well. Also, preliminary reactor physics calculations for two different special isotope production test assemblies for irradiation in the Fast Flux Test Facility (FFTF) were performed using MCNP and are documented here. The effect of a yttrium-hydride moderator used to tailor the neutron flux incident on the targets, maximizing isotope production for different designs in different locations within the reactor, is discussed. These calculations also demonstrate the useful application of MCNP in design iterations by utilizing many of the code's features.
Energy density matrix formalism for interacting quantum systems: a quantum Monte Carlo study
Krogel, Jaron T [ORNL] [ORNL; Kim, Jeongnim [ORNL] [ORNL; Reboredo, Fernando A [ORNL] [ORNL
2014-01-01T23:59:59.000Z
We develop an energy density matrix that parallels the one-body reduced density matrix (1RDM) for many-body quantum systems. Just as the density matrix gives access to the number density and occupation numbers, the energy density matrix yields the energy density and orbital occupation energies. The eigenvectors of the matrix provide a natural orbital partitioning of the energy density, while the eigenvalues comprise a single-particle energy spectrum obeying a total energy sum rule. For mean-field systems the energy density matrix recovers the exact spectrum. When correlation becomes important, the occupation energies resemble quasiparticle energies in some respects. We explore the occupation energy spectrum for the finite 3D homogeneous electron gas in the metallic regime and an isolated oxygen atom with ground state quantum Monte Carlo techniques implemented in the QMCPACK simulation code. The occupation energy spectrum for the homogeneous electron gas can be described by an effective mass below the Fermi level. Above the Fermi level, evanescent behavior in the occupation energies is observed, in similar fashion to the occupation numbers of the 1RDM. A direct comparison with total energy differences demonstrates a quantitative connection between the occupation energies and electron addition and removal energies for the electron gas. For the oxygen atom, the association between the ground state occupation energies and particle addition and removal energies becomes only qualitative. The energy density matrix provides a new avenue for describing energetics with quantum Monte Carlo methods, which have traditionally been limited to total energies.
Using Monte Carlo analyses in uptake models for evaluating risks to ecological receptors
Hayse, J.W.; Hlohowskyj, I. [Argonne National Lab., IL (United States). Environmental Assessment Div.
1995-12-31T23:59:59.000Z
A deterministic modeling approach was used to evaluate risks to wildlife receptors at a contaminated site in Maryland. Models to predict daily doses of contaminants to ecological receptors used single point estimates for media contaminant concentrations and for ecological exposure factors. Predicted doses exceeding contaminant- and species-specific dose values were considered to be indicative of adverse risk, and the model results are being used to develop and evaluate remedial alternatives for the site. Risk estimates based on the deterministic approach predicted daily contaminant doses exceeding acceptable dose levels for more than half of the modeled receptors. Ecological risks were also evaluated using a stochastic approach. In this approach the input parameters that most greatly affected the deterministic model outcome were identified using sensitivity analyses. Statistical distributions were assigned to these parameters, and Monte Carlo simulations of the models were conducted to generate probability density functions of contaminant doses. The resulting probability density functions were then used to quantify the probability that contaminant uptake would exceed the acceptable dose values. Models using Monte Carlo analyses identified only a low probability of exceeding the acceptable dose level for most of the contaminants and receptors. The differences in the risks predicted using the deterministic and stochastic models would likely result in the selection of different remediation goals and actions for the same area of contamination. Given the different interpretations that could result from these two modeling approaches, the authors recommend that both techniques be considered for estimating risks to ecological receptors.
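The stochastic approach described above can be sketched in a few lines: assign distributions to the sensitive inputs, propagate them through the dose equation by sampling, and read the exceedance probability off the resulting distribution. The dose model and every distribution below are hypothetical placeholders, not the site's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical input distributions (names and values illustrative only):
conc = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n)   # soil conc., mg/kg
intake = rng.triangular(0.05, 0.10, 0.20, size=n)           # food intake, kg/day
bw = rng.normal(10.0, 1.0, size=n)                          # body weight, kg

# Simplified uptake model: daily dose in mg/kg-day.
dose = conc * intake / bw

acceptable = 0.1                       # hypothetical toxicity reference value
p_exceed = (dose > acceptable).mean()  # probability of exceeding the TRV
print(f"P(dose > acceptable) = {p_exceed:.3f}")
```

This is the quantity the abstract contrasts with the deterministic result: a probability of exceedance rather than a single pass/fail dose comparison.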
Massively parallel Monte Carlo for many-particle simulations on GPUs
Anderson, Joshua A.; Jankowski, Eric [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Grubb, Thomas L. [Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Engel, Michael [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Glotzer, Sharon C., E-mail: sglotzer@umich.edu [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]
2013-12-01T23:59:59.000Z
Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
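As a point of reference for the parallel method, a minimal serial hard-disk Monte Carlo looks as follows: displacement proposals are symmetric, and a move is accepted if and only if it creates no overlap, which satisfies detailed balance for hard particles. System size and move parameters are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

# Minimal serial hard-disk MC in a periodic box (illustrative baseline).
L, sigma, n_side = 10.0, 1.0, 6   # box length, disk diameter, lattice side
pos = (np.stack(np.meshgrid(np.arange(n_side), np.arange(n_side)), -1)
         .reshape(-1, 2) + 0.5) * (L / n_side)   # non-overlapping start
N = len(pos)

def overlaps(i, trial):
    """True if disk i placed at `trial` overlaps any other disk."""
    d = pos - trial
    d -= L * np.round(d / L)          # periodic minimum-image convention
    r2 = (d ** 2).sum(axis=1)
    r2[i] = np.inf                    # ignore self
    return (r2 < sigma ** 2).any()

accepted = 0
for _ in range(2000):
    i = rng.integers(N)
    trial = (pos[i] + rng.uniform(-0.2, 0.2, 2)) % L  # symmetric proposal
    if not overlaps(i, trial):        # Metropolis for hard disks:
        pos[i] = trial                # accept iff no overlap
        accepted += 1
print("acceptance fraction:", accepted / 2000)
```

The GPU method in the paper parallelizes exactly this kind of sweep while preserving the detailed-balance property the serial algorithm has by construction.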
Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII
McKinney, Gregg W [Los Alamos National Laboratory
2012-07-17T23:59:59.000Z
Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.
Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas
Bobylev, A.V., E-mail: alexander.bobylev@kau.se [Department of Mathematics, Karlstad University, SE-65188 Karlstad (Sweden); Potapenko, I.F., E-mail: firena@yandex.ru [Keldysh Institute for Applied Mathematics, RAS, 125047 Moscow (Russian Federation)
2013-08-01T23:59:59.000Z
Highlights: •A general approach to Monte Carlo methods for multicomponent plasmas is proposed. •We show numerical tests for the two-component (electrons and ions) case. •An optimal choice of parameters for speeding up the computations is discussed. •A rigorous estimate of the error of approximation is proved. -- Abstract: A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of Landau–Fokker–Planck equations by Boltzmann equations of quasi-Maxwellian kind. This means that the total collision frequency for the corresponding Boltzmann equation does not depend on the velocities, which makes the simulation process very simple, since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes the well-known methods of Takizuka and Abe (1977) [12] and Nanbu (1997) as particular cases, and generalizes the approach of Bobylev and Nanbu (2000). The numerical scheme of this paper is simpler than the schemes by Takizuka and Abe [12] and by Nanbu. We derive it for the general case of multicomponent plasmas and show some numerical tests for the two-component (electrons and ions) case. An optimal choice of parameters for speeding up the computations is also discussed. It is also proved that the order of approximation is not worse than O(√ε), where ε is a parameter of approximation equivalent to the time step Δt in earlier methods. A similar estimate is obtained for the methods of Takizuka and Abe and Nanbu.
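The key simplification, a velocity-independent collision frequency, means collision pairs can be drawn uniformly at random. A toy version of one collision cycle (isotropic scattering for equal masses, which is a simplification of the actual Coulomb kernel) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(3)

# Equal-mass particles with Maxwellian-like initial velocities (arbitrary units).
N = 1000
v = rng.normal(size=(N, 3))
p0, e0 = v.sum(axis=0), (v ** 2).sum()   # initial momentum and kinetic energy

for _ in range(500):
    # Pairs chosen arbitrarily: any two distinct particles may collide.
    i, j = rng.choice(N, size=2, replace=False)
    g = v[i] - v[j]                                   # relative velocity
    u = rng.normal(size=3)
    g_new = np.linalg.norm(g) * u / np.linalg.norm(u)  # rotate g, keep |g|
    vc = 0.5 * (v[i] + v[j])                          # center-of-mass velocity
    v[i], v[j] = vc + 0.5 * g_new, vc - 0.5 * g_new   # post-collision velocities

# Momentum and energy are conserved exactly by each pair collision.
print("momentum drift:", np.abs(v.sum(axis=0) - p0).max())
print("energy drift:", abs((v ** 2).sum() - e0))
```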
Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K. [Department of Diagnostic and Interventional Imaging, University of Texas Medical School at Houston, Houston, Texas 77030 (United States); Brateman, Libby F. [Department of Radiology, University of Florida, Gainesville, Florida 32610 (United States)
2014-11-01T23:59:59.000Z
Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28–0.51, with absolute differences averaging ∼0.01. 
The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the physical measurements published previously and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of a scatter-correction software program can be simplified by ignoring the variations in density among breast tissues.
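The MRE extraction from ring tallies amounts to fitting an exponential point-spread function to radial data. A minimal stand-in, assuming the simplified form PSF(r) ∝ exp(-r/MRE) and synthetic noisy data (not the paper's actual PSF equation or measurements):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic ring-tally data: exponential fall-off with 2% multiplicative noise.
r = np.linspace(0.5, 10, 20)          # ring radii (arbitrary units)
mre_true = 2.5                        # "true" mean radial extent
psf = np.exp(-r / mre_true) * rng.normal(1.0, 0.02, r.size)

# Log-linear least squares: ln PSF = -r/MRE + const.
slope, _ = np.polyfit(r, np.log(psf), 1)
mre_fit = -1.0 / slope
print(f"fitted MRE = {mre_fit:.2f}")
```

The abstract's observation that nongrid data prefer a biexponential would show up here as curvature in the log-linear residuals.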
Nakano, Y., E-mail: nakano.yuuji@c.mbox.nagoya-u.ac.jp; Yamazaki, A.; Watanabe, K.; Uritani, A. [Graduate School of Engineering, Nagoya University, Nagoya 464-8603 (Japan); Ogawa, K.; Isobe, M. [National Institute for Fusion Science, Toki-city, GIFU 509-5292 (Japan)
2014-11-15T23:59:59.000Z
Neutron monitoring is important for managing the safety of fusion experiment facilities, because neutrons are generated in fusion reactions. Monte Carlo simulations play an important role in evaluating the influence of neutron scattering from various structures and in correcting differences between deuterium plasma experiments and in situ calibration experiments. We evaluated these influences based on differences between the two experiments at the Large Helical Device using the Monte Carlo simulation code MCNP5. The difference between the two experiments in the absolute detection efficiency of the fission chamber between the O-ports is estimated to be the largest among all monitors. We additionally evaluated correction coefficients for some neutron monitors.
Levin, Yan
Surface tension of an electrolyte-air interface: a Monte Carlo study. J. Phys.: Condens. Matter 24 (2012) 284115 (5pp), doi:10.1088/0953-8984/24/28/284115. A method is presented for calculating the surface tension of an electrolyte-air interface using Monte Carlo (MC) simulations.
Experimental Study and Monte Carlo Modeling of Calcium Borosilicate Glasses Leaching
Arab, Mehdi; Cailleteau, Celine; Angeli, Frederic [CEA/DTCD/SECM/Laboratoire d'etudes du Comportement a Long Terme, CEA Centre Valrho, BP 17171, Bagnols-sur-ceze, 30207 (France); Devreux, Francois [Laboratoire de Physique de la Matiere Condensee, CNRS and Ecole Polytechnique, Palaiseau Cedex, 91128 (France)
2007-07-01T23:59:59.000Z
During aqueous alteration of glass, an alteration layer appears on the glass surface. The properties of this alteration layer are of great importance for understanding and predicting the long-term behavior of high-level radioactive waste glasses. Numerical modeling can be very useful for understanding the impact of the glass composition on its aqueous reactivity and long-term properties, but it is quite difficult to model these complex glasses. In order to identify the effect of the calcium content on glass alteration, seven oxide glass compositions (57SiO{sub 2} 17B{sub 2}O{sub 3} (22-x)Na{sub 2}O xCaO 4ZrO{sub 2}; 0 < x < 11) were investigated and a Monte Carlo model was developed to describe their leaching behavior. The specimens were altered at constant temperature (T = 90 deg. C) at a glass-surface-area-to-solution-volume (SA/V) ratio of 15 cm{sup -1} in a buffered solution (pH 9.2). Under these conditions all the variations observed in the leaching behavior are attributable to composition effects. Increasing the calcium content in the glass appears to be responsible for a sharp drop in the final leached boron fraction. In parallel with this experimental work, a Monte Carlo model was developed to investigate the effect of calcium content on the leaching behavior, especially in the initial stage of alteration. Monte Carlo simulations performed with this model are in good agreement with the experimental results. The dependence of the alteration rate on the calcium content can be described by a quadratic function: fitting the simulated points gives a minimum alteration rate at about 7.7 mol% calcium. This value is consistent with the figure of 8.2 mol% obtained from the experimental work. The model was also used to investigate the role of calcium in the glass structure, and it showed that calcium acts preferentially as a network modifier rather than as a charge compensator in this type of glass. (authors)
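Locating the minimum of the quadratic rate-versus-calcium relation is a one-line vertex calculation once the fit is done; the sketch below uses made-up rate data constructed to have its minimum at 7.7 mol%, purely to show the procedure:

```python
import numpy as np

# Synthetic "simulated" alteration rates with a quadratic dependence on CaO.
x = np.array([0, 2, 4, 6, 8, 10, 11], dtype=float)   # mol% CaO
rate = 0.02 * (x - 7.7) ** 2 + 0.5                   # invented rate data

a, b, c = np.polyfit(x, rate, 2)   # quadratic fit: rate = a x^2 + b x + c
x_min = -b / (2 * a)               # vertex of the parabola = minimum rate
print(f"minimum alteration rate at {x_min:.1f} mol% CaO")
```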
Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel; Fisher, Ryan; Tien, Chris; Simon, Steven L.; Bouville, Andre; Bolch, Wesley E. [Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institute of Health, Bethesda, Maryland 20852 (United States); Department of Nuclear Engineering, Kyung Hee University, Yongin 446-701 (Korea, Republic of); Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, Florida 32611 (United States); Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institute of Health, Bethesda, Maryland 20852 (United States); Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, Florida 32611 (United States)
2011-03-15T23:59:59.000Z
Purpose: To develop a computed tomography (CT) organ dose estimation method designed to readily provide organ doses in a reference adult male and female for different scan ranges, and to investigate the degree to which existing commercial programs can reasonably match organ doses defined in these more anatomically realistic adult hybrid phantoms. Methods: The x-ray fan beam in the SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX 2.6. The simulated CT scanner model was validated through comparison with experimentally measured lateral free-in-air dose profiles and computed tomography dose index (CTDI) values. The reference adult male and female hybrid phantoms were coupled with the established CT scanner model following arm removal to simulate clinical head and other body region scans. A set of organ dose matrices were calculated for a series of consecutive axial scans ranging from the top of the head to the bottom of the phantoms with a beam thickness of 10 mm and tube potentials of 80, 100, and 120 kVp. The organ doses for head, chest, and abdomen/pelvis examinations were calculated based on the organ dose matrices and compared to those obtained from two commercial programs, CT-EXPO and CTDOSIMETRY. Organ dose calculations were repeated for an adult stylized phantom by using the same simulation method used for the adult hybrid phantom. Results: Comparisons of both lateral free-in-air dose profiles and CTDI values through experimental measurement with the Monte Carlo simulations showed good agreement to within 9%. Organ doses for head, chest, and abdomen/pelvis scans reported in the commercial programs exceeded those from the Monte Carlo calculations in both the hybrid and stylized phantoms in this study, sometimes by orders of magnitude. 
Conclusions: The organ dose estimation method and dose matrices established in this study readily provides organ doses for a reference adult male and female for different CT scan ranges and technical parameters. Organ doses from existing commercial programs do not reasonably match organ doses calculated for the hybrid phantoms due to differences in phantom anatomy, as well as differences in organ dose scaling parameters. The organ dose matrices developed in this study will be extended to cover different technical parameters, CT scanner models, and various age groups.
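The dose-matrix idea reduces organ dose estimation to summing precomputed per-slice contributions over the scan range. A toy sketch with invented numbers (rows are consecutive axial slices, columns are organs; none of these values come from the study):

```python
import numpy as np

# Hypothetical per-slice organ dose matrix (mGy per unit tube current-time).
# Columns: [brain, lung, stomach]; rows: axial slices from head to abdomen.
dose_matrix = np.array([
    [0.8, 0.1, 0.0],   # slice 0 (head)
    [0.7, 0.2, 0.0],   # slice 1
    [0.1, 0.9, 0.3],   # slice 2 (chest)
    [0.0, 0.8, 0.6],   # slice 3
    [0.0, 0.2, 0.9],   # slice 4 (abdomen)
])

# A "chest scan" is just a contiguous range of slices; organ doses are the
# column sums over that range.
chest_scan = slice(2, 4)
organ_doses = dose_matrix[chest_scan].sum(axis=0)
print("brain, lung, stomach doses:", organ_doses)
```

Precomputing the matrix once is what lets the method serve arbitrary scan ranges without rerunning the transport simulation.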
Mergers of galaxies in clusters: Monte Carlo simulation of mass and angular momentum distribution
D. S. Krivitsky; V. M. Kontorovich
1997-03-04T23:59:59.000Z
A Monte Carlo simulation of mergers in clusters of galaxies is carried out. An ``explosive'' character of the merging process (an analog of a phase transition), suggested earlier by Cavaliere et al. (1991) and Kontorovich et al. (1992), is confirmed. In particular, a giant object similar to a cD-galaxy is formed in a comparatively short time as a result of mergers. The mass and angular momentum distribution function for galaxies is calculated. An intermediate asymptotics of the mass function is close to a power law with the exponent $\alpha \approx 2$. It may correspond to recent observational data for the steep faint end of the luminosity function. The angular momentum distribution formed by mergers is close to Gaussian, the rms dimensionless angular momentum $S/(GM^3R)^{1/2}$ being approximately independent of mass, which is in accordance with observational data.
A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A and M Univ., College Station, TX (United States)]|[Los Alamos National Lab., NM (United States)
1997-05-01T23:59:59.000Z
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and {alpha}-eigenvalues and is portable to, and runs in parallel on, a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute {alpha}-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
A Monte-Carlo method for ex-core neutron response
Gamino, R.G.; Ward, J.T.; Hughes, J.C. [Lockheed Martin Corp., Schenectady, NY (United States)
1997-10-01T23:59:59.000Z
A Monte Carlo neutron transport kernel capability, primarily for ex-core neutron response, is described. The capability consists of the generation of a set of response kernels, which represent the neutron transport from the core to a specific ex-core volume. This is accomplished by tagging individual neutron histories at their initial source sites and tracking them throughout the problem geometry, tallying those that interact in the geometric regions of interest. These transport kernels can subsequently be combined with any number of core power distributions to determine detector response for a variety of reactor conditions. Thus, the transport kernels are analogous to an integrated adjoint response. Examples of pressure vessel response and ex-core neutron detector response are provided to illustrate the method.
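Once the kernels are tallied, evaluating the detector response for a new core power distribution is a single inner product, which is what makes the kernels reusable across cases. All numbers below are invented for illustration:

```python
import numpy as np

# Hypothetical response kernel: detector response per unit fission power
# in each of four core regions (tallied once by the Monte Carlo run).
kernel = np.array([3e-9, 1e-9, 4e-10, 1e-10])

# Two hypothetical core power distributions (fractions summing to 1).
power_bol = np.array([0.30, 0.30, 0.25, 0.15])   # beginning of life
power_eol = np.array([0.20, 0.25, 0.30, 0.25])   # end of life

# One dot product per case -- no new transport calculation needed.
for name, p in [("BOL", power_bol), ("EOL", power_eol)]:
    print(name, "detector response:", kernel @ p)
```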
MC++: A parallel, portable, Monte Carlo neutron transport code in C++
Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A & M Univ., College Station, TX (United States)
1997-03-01T23:59:59.000Z
MC++ is an implicit multi-group Monte Carlo neutron transport code written in C++ and based on the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, SMPs, and clusters of UNIX workstations. MC++ is being developed to provide transport capabilities to the Accelerated Strategic Computing Initiative (ASCI). It is also intended to form the basis of the first transport physics framework (TPF), which is a C++ class library containing appropriate abstractions, objects, and methods for the particle transport problem. The transport problem is briefly described, as well as the current status and algorithms in MC++ for solving the transport equation. The alpha version of the POOMA class library is also discussed, along with the implementation of the transport solution algorithms using POOMA. Finally, a simple test problem is defined and performance and physics results from this problem are discussed on a variety of platforms.
Validation of GEANT4 Monte Carlo Models with a Highly Granular Scintillator-Steel Hadron Calorimeter
C. Adloff; J. Blaha; J. -J. Blaising; C. Drancourt; A. Espargilière; R. Gaglione; N. Geffroy; Y. Karyotakis; J. Prast; G. Vouters; K. Francis; J. Repond; J. Schlereth; J. Smith; L. Xia; E. Baldolemar; J. Li; S. T. Park; M. Sosebee; A. P. White; J. Yu; T. Buanes; G. Eigen; Y. Mikami; N. K. Watson; G. Mavromanolakis; M. A. Thomson; D. R. Ward; W. Yan; D. Benchekroun; A. Hoummada; Y. Khoulaki; J. Apostolakis; A. Dotti; G. Folger; V. Ivantchenko; V. Uzhinskiy; M. Benyamna; C. Cârloganu; F. Fehr; P. Gay; S. Manen; L. Royer; G. C. Blazey; A. Dyshkant; J. G. R. Lima; V. Zutshi; J. -Y. Hostachy; L. Morin; U. Cornett; D. David; G. Falley; K. Gadow; P. Göttlicher; C. Günter; B. Hermberg; S. Karstensen; F. Krivan; A. -I. Lucaci-Timoce; S. Lu; B. Lutz; S. Morozov; V. Morgunov; M. Reinecke; F. Sefkow; P. Smirnov; M. Terwort; A. Vargas-Trevino; N. Feege; E. Garutti; I. Marchesinik; M. Ramilli; P. Eckert; T. Harion; A. Kaplan; H. -Ch. Schultz-Coulon; W. Shen; R. Stamen; B. Bilki; E. Norbeck; Y. Onel; G. W. Wilson; K. Kawagoe; P. D. Dauncey; A. -M. Magnan; V. Bartsch; M. Wing; F. Salvatore; E. Calvo Alamillo; M. -C. Fouz; J. Puerta-Pelayo; B. Bobchenko; M. Chadeeva; M. Danilov; A. Epifantsev; O. Markin; R. Mizuk; E. Novikov; V. Popov; V. Rusinov; E. Tarkovsky; N. Kirikova; V. Kozlov; P. Smirnov; Y. Soloviev; P. Buzhan; A. Ilyin; V. Kantserov; V. Kaplin; A. Karakash; E. Popova; V. Tikhomirov; C. Kiesling; K. Seidel; F. Simon; C. Soldner; M. Szalay; M. Tesar; L. Weuste; M. S. Amjad; J. Bonis; S. Callier; S. Conforti di Lorenzo; P. Cornebise; Ph. Doublet; F. Dulucq; J. Fleury; T. Frisson; N. van der Kolk; H. Li; G. Martin-Chassard; F. Richard; Ch. de la Taille; R. Pöschl; L. Raux; J. Rouëné; N. Seguin-Moreau; M. Anduze; V. Boudry; J-C. Brient; D. Jeans; P. Mora de Freitas; G. Musat; M. Reinhard; M. Ruan; H. Videau; B. Bulanek; J. Zacek; J. Cvach; P. Gallus; M. Havranek; M. Janata; J. Kvasnicka; D. Lednicky; M. Marcisovsky; I. Polak; J. Popule; L. Tomasek; M. Tomasek; P. Ruzicka; P. 
Sicho; J. Smolik; V. Vrba; J. Zalesak; B. Belhorma; H. Ghazlane; T. Takeshita; S. Uozumi; M. Götze; O. Hartbrich; J. Sauer; S. Weber; C. Zeitnitz
2014-06-15T23:59:59.000Z
Calorimeters with a high granularity are a fundamental requirement of the Particle Flow paradigm. This paper focuses on the prototype of a hadron calorimeter with analog readout, consisting of thirty-eight scintillator layers alternating with steel absorber planes. The scintillator plates are finely segmented into tiles individually read out via Silicon Photomultipliers. The presented results are based on data collected with pion beams in the energy range from 8 GeV to 100 GeV. The fine segmentation of the sensitive layers and the high sampling frequency allow for an excellent reconstruction of the spatial development of hadronic showers. A comparison between data and Monte Carlo simulations is presented, concerning both the longitudinal and lateral development of hadronic showers and the global response of the calorimeter. The performance of several GEANT4 physics lists with respect to these observables is evaluated.
Kinetic Monte Carlo Simulation of Electrodeposition using the Embedded-Atom Method
Treeratanaphitak, Tanyakarn; Abukhdeir, Nasser Mohieddin
2013-01-01T23:59:59.000Z
A kinetic Monte Carlo (KMC) method is presented to simulate the electrodeposition of a metal on a single crystal surface of the same metal under galvanostatic conditions. This method utilizes the multi-body embedded-atom method (EAM) potential to characterize the interactions of metal atoms and adatoms. The KMC method accounts for deposition and surface diffusion processes including hopping, atom exchange and step-edge atom exchange. Steady-state deposition configurations obtained using the KMC method are validated by comparison with the structures obtained through the use of molecular dynamics (MD) simulations to relax KMC constraints. The results of this work support the use of the proposed KMC method to simulate electrodeposition processes at length (microns) and time (seconds) scales that are not feasible using other methods.
Monte Carlo and Renormalization Group Effective Potentials in Scalar Field Theories
J. R. Shepard; V. Dmitrašinović; J. A. McNeil
1994-12-29T23:59:59.000Z
We study constraint effective potentials for various strongly interacting $\phi^4$ theories. Renormalization group (RG) equations for these quantities are discussed and a heuristic development of a commonly used RG approximation is presented which stresses the relationships among the loop expansion, the Schwinger-Dyson method and the renormalization group approach. We extend the standard RG treatment to account explicitly for finite lattice effects. Constraint effective potentials are then evaluated using Monte Carlo (MC) techniques and careful comparisons are made with RG calculations. Explicit treatment of finite lattice effects is found to be essential in achieving quantitative agreement with the MC effective potentials. Excellent agreement is demonstrated for $d=3$ and $d=4$, O(1) and O(2) cases in both symmetric and broken phases.
Bianco, Federica B; Oh, Seung Man; Fierroz, David; Liu, Yuqian; Kewley, Lisa; Graur, Or
2015-01-01T23:59:59.000Z
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity scales, based on the original IDL code of Kewley & Dopita (2002) with updates from Kewley & Ellison (2008), and expanded to include more recently developed scales. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo (MC) sampling, better characterizes the statistical reddening-corrected oxygen abundance confidence region. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 13 metallicity scales simultaneously, as well as for E(B-V), and estimates their median values and their 66% confidence regions.
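The MC sampling strategy is ordinary uncertainty propagation: perturb each measured flux within its error, recompute the diagnostic, and take percentiles. A stripped-down sketch for a single line ratio (fluxes and errors invented; pyMCZ itself applies this to full metallicity diagnostics):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical measured fluxes with Gaussian errors (arbitrary units).
oiii = rng.normal(5.0, 0.5, n)    # [O III] flux and its uncertainty
hbeta = rng.normal(2.0, 0.1, n)   # H-beta flux and its uncertainty

# The synthetic distribution of the derived quantity (here a simple ratio).
ratio = oiii / hbeta
lo, med, hi = np.percentile(ratio, [17, 50, 83])   # 66% confidence region
print(f"ratio = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f})")
```

Taking the 17th-83rd percentile span reproduces the 66% confidence region quoted in the abstract.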
Application analysis of Monte Carlo to estimate the capacity of geothermal resources in Lawu Mount
Supriyadi, E-mail: supriyadi-uno@yahoo.co.nz [Physics, Faculty of Mathematics and Natural Sciences, University of Jember, Jl. Kalimantan Kampus Bumi Tegal Boto, Jember 68181 (Indonesia); Srigutomo, Wahyu [Complex system and earth physics, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Munandar, Arif [Kelompok Program Penelitian Panas Bumi, PSDG, Badan Geologi, Kementrian ESDM, Jl. Soekarno Hatta No. 444 Bandung 40254 (Indonesia)
2014-03-24T23:59:59.000Z
Monte Carlo analysis has been applied to the calculation of geothermal resource capacity based on the volumetric method issued by Standar Nasional Indonesia (SNI). A deterministic formula is converted into a stochastic formula to take into account the uncertainties in the input parameters. The method yields a probability distribution for the potential power stored beneath the Lawu Mount geothermal area. For 10,000 iterations, the capacity of geothermal resources is in the range of 139.30-218.24 MWe, with the most likely value being 177.77 MWe. The risk of the resource capacity exceeding 196.19 MWe is less than 10%. The power density of the prospect area covering 17 km{sup 2} is 9.41 MWe/km{sup 2} with 80% probability.
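A stochastic volumetric estimate of this kind replaces each point value with a distribution and reads capacities off the percentiles of the sampled product. The sketch below lumps the SNI volumetric formula into a simplified product with invented distributions and lumped units, so the numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 10_000   # iterations, as in the abstract

# Hypothetical input distributions (lumped, illustrative units):
area = rng.uniform(15, 19, n)                        # prospect area, km^2
thickness = rng.uniform(1.5, 2.5, n)                 # reservoir thickness, km
recoverable = rng.triangular(0.05, 0.10, 0.15, n)    # lumped recovery factor
heat_density = rng.normal(60, 8, n)                  # lumped MWe per km^3

# Stochastic volumetric formula: every draw gives one capacity estimate.
power = area * thickness * heat_density * recoverable   # MWe
p10, p50, p90 = np.percentile(power, [10, 50, 90])
print(f"P10={p10:.0f}  P50={p50:.0f}  P90={p90:.0f} MWe")
```

Statements like "risk of exceeding 196.19 MWe is less than 10%" correspond directly to a percentile of this sampled distribution.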
A new time quantifiable Monte Carlo method in simulating magnetization reversal process
X. Z. Cheng; M. B. A. Jalil; H. K. Lee; Y. Okabe
2005-04-14T23:59:59.000Z
We propose a new time quantifiable Monte Carlo (MC) method to simulate the thermally induced magnetization reversal for an isolated single domain particle system. The MC method involves the determination of the density of states, and the use of the master equation for time evolution. We derive an analytical factor to convert MC steps into real time intervals. Unlike a previous time quantified MC method, our method is readily scalable to arbitrarily long time scales, and can be repeated for different temperatures with minimal computational effort. Based on the conversion factor, we are able to make a direct comparison between the results obtained from MC and Langevin dynamics methods, and find excellent agreement between them. An analytical formula for the magnetization reversal time is also derived, which agrees very well with both numerical Langevin and time-quantified MC results, over a large temperature range and for parallel and oblique easy axis orientations.
Resonating Valence Bond Quantum Monte Carlo: Application to the ozone molecule
Sam Azadi; Ranber Singh; Thomas D. Kühne
2015-02-24T23:59:59.000Z
We study the potential energy surface of the ozone molecule by means of Quantum Monte Carlo simulations based on the resonating valence bond concept. The trial wave function consists of an antisymmetrized geminal power arranged in a single determinant, which is multiplied by a Jastrow correlation factor. Whereas the determinantal part incorporates static correlation effects, the augmented real-space correlation factor accounts for the dynamic electron correlation. The accuracy of this approach is demonstrated by computing the potential energy surface for the ozone molecule along three vibrational modes: symmetric, asymmetric and scissoring. We find that the employed wave function provides a detailed description of this rather strongly correlated multi-reference system, in quantitative agreement with experiment.
Monte Carlo procedure for protein folding in lattice model. Conformational rigidity
Olivier Collet
1999-07-19T23:59:59.000Z
A rigorous Monte Carlo method for protein folding simulation on a lattice model is introduced. We show that a parameter which can be seen as the rigidity of the conformations has to be introduced in order to satisfy the detailed balance condition. Its properties are discussed and its role during the folding process is elucidated. The method is applied to small chains on a two-dimensional lattice. A Bortz-Kalos-Lebowitz-type algorithm, which allows one to study the kinetics of the chains at very low temperature, is implemented within the presented method. We show that the coefficients of the Arrhenius law are in good agreement with the value of the main potential barrier of the system.
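The Bortz-Kalos-Lebowitz (n-fold way) scheme referenced above is rejection-free: pick the next event with probability proportional to its rate, then advance the clock by an exponential waiting time. A generic sketch with invented rates (not the folding model's move set):

```python
import math
import random

def bkl_step(rates, rng):
    """One rejection-free Bortz-Kalos-Lebowitz step: select an event
    with probability proportional to its rate, then draw the waiting
    time from an exponential distribution with the total rate."""
    total = sum(rates)
    r = rng.random() * total
    chosen = len(rates) - 1          # guard against float round-off
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total  # Exp(total) waiting time
    return chosen, dt

rng = random.Random(0)
rates = [1e-6, 2e-6, 1.0]   # two very slow events, one fast (made up)
counts = [0, 0, 0]
t = 0.0
for _ in range(5000):
    i, dt = bkl_step(rates, rng)
    counts[i] += 1
    t += dt
# every step fires an event, even when acceptance rates are vanishing;
# this is what makes very-low-temperature kinetics tractable
```

A plain Metropolis walk at these rates would reject almost every move; here each iteration advances the system, and the physical time elapsed is carried by `dt`.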
Davis JE, Eddy MJ, Sutton TM, Altomari TJ
2007-03-01T23:59:59.000Z
Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state-of-the-art in solid modeling representation uses a boundary representation format in which geometry and topology are used to form three-dimensional boundaries of the solid. The geometry representation used in these systems is cubic B-spline curves and surfaces--a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation.
Quantum Monte Carlo calculation of the equation of state of neutron matter
Gandolfi, S.; Illarionov, A. Yu.; Schmidt, K. E.; Pederiva, F.; Fantoni, S. [International School for Advanced Studies, SISSA Via Beirut 2/4 I-34014 Trieste (Italy) and INFN, Sezione di Trieste, Trieste (Italy); Department of Physics, Arizona State University, Tempe, Arizona 85287 (United States); Dipartimento di Fisica dell'Universita di Trento, via Sommarive 14, I-38050 Povo, Trento (Italy) and INFN, Gruppo Collegato di Trento, Trento (Italy); International School for Advanced Studies, SISSA Via Beirut 2/4 I-34014 Trieste (Italy); INFN, Sezione di Trieste, Trieste, Italy and INFM DEMOCRITOS National Simulation Center, Via Beirut 2/4 I-34014 Trieste (Italy)
2009-05-15T23:59:59.000Z
We calculated the equation of state of neutron matter at zero temperature by means of the auxiliary field diffusion Monte Carlo (AFDMC) method combined with a fixed-phase approximation. The calculation of the energy was carried out by simulating up to 114 neutrons in a periodic box. Special attention was given to reducing finite-size effects in the energy evaluation by adding to the interaction the effect of the truncation of the simulation box, and by performing several simulations using different numbers of neutrons. The finite-size effects due to kinetic energy were also checked by employing twist-averaged boundary conditions. We considered a realistic nuclear Hamiltonian containing modern two- and three-body interactions of the Argonne and Urbana family. The equation of state can be used to compare and calibrate other many-body calculations and to predict properties of neutron stars.
MaGe - a Geant4-based Monte Carlo framework for low-background experiments
Yuen-Dat Chan; Jason A. Detwiler; Reyco Henning; Victor M. Gehman; Rob A. Johnson; David V. Jordan; Kareem Kazkaz; Markus Knapp; Kevin Kroninger; Daniel Lenz; Jing Liu; Xiang Liu; Michael G. Marino; Akbar Mokhtarani; Luciano Pandola; Alexis G. Schubert; Claudia Tomei
2008-02-06T23:59:59.000Z
A Monte Carlo framework, MaGe, has been developed based on the Geant4 simulation toolkit. Its purpose is to simulate physics processes in low-energy and low-background radiation detectors, specifically for the Majorana and Gerda $^{76}$Ge neutrinoless double-beta decay experiments. This jointly-developed tool is also used to verify the simulation of physics processes relevant to other low-background experiments in Geant4. The MaGe framework contains simulations of prototype experiments and test stands, and is easily extended to incorporate new geometries and configurations while still using the same verified physics processes, tunings, and code framework. This reduces duplication of efforts and improves the robustness of and confidence in the simulation output.
Validation of the Monte Carlo Criticality Program KENO V.a for highly-enriched uranium systems
Knight, J.R.
1984-11-01T23:59:59.000Z
A series of calculations based on critical experiments have been performed using the KENO V.a Monte Carlo Criticality Program for the purpose of validating KENO V.a for use in evaluating Y-12 Plant criticality problems. The experiments were reflected and unreflected systems of single units and arrays containing highly enriched uranium metal or uranium compounds. Various geometrical shapes were used in the experiments. The SCALE control module CSAS25 with the 27-group ENDF/B-4 cross-section library was used to perform the calculations. Some of the experiments were also calculated using the 16-group Hansen-Roach Library. Results are presented in a series of tables and discussed. Results show that the criteria established for the safe application of the KENO IV program may also be used for KENO V.a results.
A bottom collider vertex detector design, Monte-Carlo simulation and analysis package
Lebrun, P.
1990-10-01T23:59:59.000Z
A detailed simulation of the BCD vertex detector is underway. Specifications and global design issues are briefly reviewed. The BCD design, based on double-sided strip detectors, is described in more detail. The GEANT3-based Monte Carlo program and the analysis package used to estimate detector performance are discussed in detail. The current status of the expected resolution and signal-to-noise ratio for the "golden" CP-violating mode B_d → π⁺π⁻ is presented. These calculations have been done at FNAL energy (√s = 2.0 TeV). Emphasis is placed on design issues, analysis techniques, and related software rather than physics potential. 20 refs., 46 figs.
Phase transition in liquid crystal elastomer - a Monte Carlo study employing non-Boltzmann sampling
D. Jayasri; N. Satyavathi; V. S. S. Sastry; K. P. N. Murthy
2006-11-08T23:59:59.000Z
We investigate the isotropic-nematic transition in liquid crystal elastomers employing non-Boltzmann Monte Carlo techniques. We consider a lattice model of a liquid crystal elastomer and the Selinger-Jeon-Ratna Hamiltonian, which accounts for homogeneous/inhomogeneous interactions among liquid crystalline units, the interaction of local nematics with global strain, and interactions with inhomogeneous external fields and stress. We find that when the local director is coupled strongly to the global strain the transition is strongly first order; the transition softens when the coupling becomes weaker. The transition temperature also decreases with decreasing coupling strength. In addition, we find that the nematic order scales nonlinearly with global strain, especially for strong coupling and at low temperatures.
Ab-initio molecular dynamics simulation of liquid water by Quantum Monte Carlo
Andrea Zen; Ye Luo; Guglielmo Mazzola; Leonardo Guidoni; Sandro Sorella
2015-04-21T23:59:59.000Z
Although liquid water is ubiquitous in the chemical reactions at the roots of life and climate on Earth, the prediction of its properties by high-level ab initio molecular dynamics simulations still represents a formidable task for quantum chemistry. In this article we present a room-temperature simulation of liquid water based on the potential energy surface obtained from a many-body wave function through quantum Monte Carlo (QMC) methods. The simulated properties are in good agreement with recent neutron scattering and X-ray experiments, particularly concerning the position of the oxygen-oxygen peak in the radial distribution function, at variance with previous Density Functional Theory attempts. Given the excellent performance of QMC on large-scale supercomputers, this work opens new perspectives for predictive and reliable ab initio simulations of complex chemical systems.
Monte-Carlo study of quasiparticle dispersion relation in monolayer graphene
P. V. Buividovich
2013-01-07T23:59:59.000Z
The density of electronic one-particle states in monolayer graphene is studied by performing Hybrid Monte Carlo simulations of the tight-binding model for electrons on the pi orbitals of the carbon atoms which make up the graphene lattice. The density of states is approximated as the derivative of the number of particles with respect to the chemical potential at sufficiently small temperature. Simulations are performed in the partially quenched approximation, in which virtual particles and holes have zero chemical potential. It is found that the Van Hove singularity becomes much sharper than in the free tight-binding model. Simulation results also suggest that the Fermi velocity increases with interaction strength up to the transition to the phase with spontaneously broken chiral symmetry.
Introduction to Computational Physics and Monte Carlo Simulations of Matrix Field Theory
Ydri, Badis
2015-01-01T23:59:59.000Z
This book is divided into two parts. In the first part we give an elementary introduction to computational physics consisting of 21 simulations, which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories, which involve finite-dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces, and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of t...
From hypernuclei to the Inner Core of Neutron Stars: A Quantum Monte Carlo Study
Diego Lonardoni; Francesco Pederiva; Stefano Gandolfi
2014-08-19T23:59:59.000Z
Auxiliary Field Diffusion Monte Carlo (AFDMC) calculations have been employed to revise the interaction between $\Lambda$-hyperons and nucleons in hypernuclei. The scheme used to describe the interaction, inspired by the phenomenological Argonne-Urbana forces, is the $\Lambda N+\Lambda NN$ potential first introduced by Bodmer, Usmani et al. Within this framework, we performed calculations on light and medium-mass hypernuclei in order to assess the extent of the repulsive contribution of the three-body part. By tuning this contribution to reproduce the $\Lambda$ separation energy in $^5_\Lambda$He and $^{17}_{~\Lambda}$O, experimental findings are reproduced over a wide range of masses. Calculations have then been extended to $\Lambda$-neutron matter in order to derive an analog of the symmetry energy to be used in determining the equation of state of matter under the typical conditions found in the inner core of neutron stars.
The Auxiliary Field Diffusion Monte Carlo Method for Nuclear Physics and Nuclear Astrophysics
Stefano Gandolfi
2007-12-09T23:59:59.000Z
In this thesis, I discuss the use of the Auxiliary Field Diffusion Monte Carlo method to compute the ground state of nuclear Hamiltonians, and I show several applications to interesting problems in both nuclear physics and nuclear astrophysics. In particular, the AFDMC algorithm is applied to the study of several nuclear systems, both finite nuclei and infinite matter. Results for the ground state of nuclei ($^4$He, $^8$He, $^{16}$O and $^{40}$Ca), neutron drops (with 8 and 20 neutrons) and neutron-rich nuclei (isotopes of oxygen and calcium) are discussed, and the equations of state of nuclear and neutron matter are calculated and compared with other many-body calculations. The $^1S_0$ superfluid phase of neutron matter in the low-density regime was also studied.
Auxiliary-field quantum Monte Carlo method for strongly paired fermions
Carlson, J.; Gandolfi, Stefano [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Schmidt, Kevin E. [Department of Physics, Arizona State University, Tempe, Arizona 85287 (United States); Zhang, Shiwei [Department of Physics, College of William and Mary, Williamsburg, Virginia 23187 (United States)
2011-12-15T23:59:59.000Z
We solve the zero-temperature unitary Fermi gas problem by incorporating a BCS importance function into the auxiliary-field quantum Monte Carlo method. We demonstrate that this method does not suffer from a sign problem and that it increases the efficiency of standard techniques by many orders of magnitude for strongly paired fermions. We calculate the ground-state energies exactly for unpolarized systems with up to 66 particles on lattices of up to 27³ sites, obtaining an accurate result for the universal parameter ξ. We also obtain results for interactions with different effective ranges and find that the energy is consistent with a universal linear dependence on the product of the Fermi momentum and the effective range. This method will have many applications in superfluid cold atom systems and in both electronic and nuclear structure where pairing is important.
Bose, Tushar Kanti
2015-01-01T23:59:59.000Z
The realization of a spontaneous macroscopic ferroelectric order in fluids of anisotropic mesogens is a topic of both fundamental and technological interest. Recently, we demonstrated that a system of dipolar achiral disklike ellipsoids can exhibit long-sought ferroelectric liquid crystalline phases of dipolar origin. In the present work, extensive off-lattice Monte Carlo simulations are used to investigate the phase behavior of the system under the influence of electrostatic boundary conditions that restrict any global polarization. We find that the system develops strongly ferroelectric slablike domains periodically arranged in an antiferroelectric fashion. Exploring the phase behavior at different dipole strengths, we find the existence of ferroelectric nematic and ferroelectric columnar order inside the domains. For higher dipole strengths, a biaxial phase is also obtained with a similar periodic array of ferroelectric slabs of antiparallel polarizations. We have studied the depolarizing effects by...
Finite-Temperature Pairing Gap of a Unitary Fermi Gas by Quantum Monte Carlo Calculations
Magierski, Piotr; Wlazlowski, Gabriel [Faculty of Physics, Warsaw University of Technology, ulica Koszykowa 75, 00-662 Warsaw (Poland); Bulgac, Aurel; Drut, Joaquin E. [Department of Physics, University of Washington, Seattle, Washington 98195-1560 (United States)
2009-11-20T23:59:59.000Z
We calculate the one-body temperature Green's (Matsubara) function of the unitary Fermi gas via quantum Monte Carlo, and extract the spectral weight function A(p,ω) using the methods of maximum entropy and singular value decomposition. From A(p,ω) we determine the quasiparticle spectrum, which can be accurately parametrized by three functions of temperature: an effective mass m*, a mean-field potential U, and a gap Δ. Below the critical temperature T_c = 0.15 ε_F the results for m*, U, and Δ can be accurately reproduced using an independent quasiparticle model. We find evidence of a pseudogap in the fermionic excitation spectrum for temperatures up to T* ≈ 0.20 ε_F > T_c.
Quantum Monte Carlo study of dilute neutron matter at finite temperatures
Wlazlowski, Gabriel; Magierski, Piotr [Faculty of Physics, Warsaw University of Technology, Ulica Koszykowa 75, PL-00-662 Warsaw (Poland)
2011-01-15T23:59:59.000Z
We report results of fully nonperturbative, path integral Monte Carlo calculations for dilute neutron matter. The neutron-neutron interaction in the s channel is parameterized by the scattering length and the effective range. We calculate the energy and the chemical potential as a function of temperature at density ρ = 0.003 fm⁻³. The critical temperature T_c for the superfluid-normal phase transition is estimated from the finite-size scaling of the condensate fraction. At low temperatures we extract the spectral weight function A(p,ω) from the imaginary-time propagator using the methods of maximum entropy and singular value decomposition. We determine the quasiparticle spectrum, which can be accurately parameterized by three parameters: an effective mass m*, a mean-field potential U, and a gap Δ. Large values of Δ/T_c indicate that the system is not a BCS-type superfluid at low temperatures.
Monte Carlo study of the CO-poisoning dynamics in a model for the catalytic oxidation of CO
Marro, Joaquín
The poisoning dynamics of the Ziff-Gulari-Barshad model [Phys. Rev. Lett. 56, 2553 (1986)] are studied for a monomer absorbing state and close to the coexistence point. Analysis of the average poisoning time allows us...
Einstein, Theodore L.
Monte Carlo study of the honeycomb structure of anthraquinone, Physical Review B 83, 245414 (2011). ...model, we demonstrate a mechanism for the spontaneous formation of the honeycomb structure of anthraquinone. Pawin et al. observed the spontaneous formation of honeycomb structures of anthraquinone (AQ...
Monte Carlo Simulations of Small Sulfuric Acid-Water Clusters S. M. Kathmann,* and B. N. Hale,*
Hale, Barbara N.
...-to-liquid nucleation to acid rain formation and ozone depletion mechanisms. Doyle's early work predicted... (In final form: August 7, 2001.) Effective atom-atom potentials are developed for binary sulfuric acid...
Bendele, Travis Henry
2013-02-22T23:59:59.000Z
A honeycomb probe was designed to measure the optical properties of biological tissues using the single Monte Carlo method. The ongoing project is intended to be a multi-wavelength, real-time, in-vivo technique to detect breast cancer. Preliminary...
Boas, David
Optics Letters, Vol. 26, No. 17, p. 1335 (September 1, 2001): Perturbation Monte Carlo methods to solve... with respect to perturbations in background tissue optical properties. We then feed this derivative information to a nonlinear optimization algorithm to determine the optical properties of the tissue heterogeneity under...
Lutzoni, François M.
Bayes or Bootstrap? A Simulation Study Comparing the Performance of Bayesian Markov Chain Monte Carlo Sampling and Bootstrapping in Assessing Phylogenetic Confidence. Michael E. Alfaro, Stefan Zoller. ...of confidence and the most commonly used confidence measure in phylogenetics, the nonparametric bootstrap...
A Fast and Accurate Monte Carlo EAS Simulation Scheme (30th International Cosmic Ray Conference). ...a full and quasi-full MC simulation with an energy threshold of particles of 500 keV for primary energy by the user). Apart from thinning, a number of papers treat techniques to simulate ultra-high-energy...
Quantum Monte Carlo Study of the Optical and Diffusive Properties of the Vacancy Defect in Diamond
Kent, Paul
...associated with radiation damage. It is also very interesting scientifically, with a wide variety of physical... The best-known optical transition, GR1 at 1.673 eV [5], long associated with the neutral vacancy, cannot...
Zhang, Pengfei; Wang, Qiang, E-mail: q.wang@colostate.edu [Department of Chemical and Biological Engineering, Colorado State University, Fort Collins, Colorado 80523-1370 (United States)] [Department of Chemical and Biological Engineering, Colorado State University, Fort Collins, Colorado 80523-1370 (United States)
2014-01-28T23:59:59.000Z
Using fast lattice Monte Carlo (FLMC) simulations [Q. Wang, Soft Matter 5, 4564 (2009)] and the corresponding lattice self-consistent field (LSCF) calculations, we studied a model system of grafted homopolymers, in both the brush and mushroom regimes, in an explicit solvent compressed by an impenetrable surface. Direct comparisons between FLMC and LSCF results, both of which are based on the same Hamiltonian (thus without any parameter-fitting between them), unambiguously and quantitatively reveal the fluctuations/correlations neglected by the latter. We studied both the structure (including the canonical-ensemble averages of the height and the mean-square end-to-end distances of grafted polymers) and thermodynamics (including the ensemble-averaged reduced energy density and the related internal energy per chain, the differences in the Helmholtz free energy and entropy per chain from the uncompressed state, and the pressure due to compression) of the system. In particular, we generalized the method for calculating pressure in lattice Monte Carlo simulations proposed by Dickman [J. Chem. Phys. 87, 2246 (1987)], and combined it with the Wang-Landau–Optimized Ensemble sampling [S. Trebst, D. A. Huse, and M. Troyer, Phys. Rev. E 70, 046701 (2004)] to efficiently and accurately calculate the free energy difference and the pressure due to compression. While we mainly examined the effects of the degree of compression, the distance between the nearest-neighbor grafting points, the reduced number of chains grafted at each grafting point, and the system fluctuations/correlations in an athermal solvent, the θ-solvent is also considered in some cases.
Taylor, Michael, E-mail: michael.taylor@rmit.edu.au [School of Applied Sciences, College of Science, Engineering and Health, RMIT University, Melbourne, Victoria (Australia); Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Dunn, Leon; Kron, Tomas; Height, Felicity; Franich, Rick [School of Applied Sciences, College of Science, Engineering and Health, RMIT University, Melbourne, Victoria (Australia); Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia)
2012-04-01T23:59:59.000Z
Prediction of dose distributions in close proximity to interfaces is difficult. In the context of radiotherapy of lung tumors, this may affect the minimum dose received by lesions and is particularly important when prescribing dose to covering isodoses. The objective of this work is to quantify underdosage in key regions around a hypothetical target using Monte Carlo dose calculation methods, and to develop a factor for clinical estimation of such underdosage. A systematic set of calculations is undertaken using 2 Monte Carlo radiation transport codes (EGSnrc and GEANT4). Discrepancies in dose are determined for a number of parameters, including beam energy, tumor size, field size, and distance from chest wall. Calculations were performed for 1-mm³ regions at proximal, distal, and lateral aspects of a spherical tumor, determined for a 6-MV and a 15-MV photon beam. The simulations indicate regions of tumor underdose at the tumor-lung interface. Results are presented as ratios of the dose at key peripheral regions to the dose at the center of the tumor, a point at which the treatment planning system (TPS) predicts the dose more reliably. Comparison with TPS data (pencil-beam convolution) indicates such underdosage would not have been predicted accurately in the clinic. We define a dose reduction factor (DRF) as the average of the dose in the periphery in the 6 cardinal directions divided by the central dose in the target, the mean of which is 0.97 and 0.95 for a 6-MV and 15-MV beam, respectively. The DRF can assist clinicians in the estimation of the magnitude of potential discrepancies between prescribed and delivered dose distributions as a function of tumor size and location. Calculation for a systematic set of 'generic' tumors allows application to many classes of patient case, and is particularly useful for interpreting clinical trial data.
SU-E-T-344: Validation and Clinical Experience of Eclipse Electron Monte Carlo Algorithm (EMC)
Pokharel, S [21st Century Oncology, Fort Myers, FL (United States); Rana, S [Procure Proton Therapy Center, Oklahoma City, OK (United States)
2014-06-01T23:59:59.000Z
Purpose: The purpose of this study is to validate the Eclipse Electron Monte Carlo (EMC) algorithm for routine clinical use. Methods: The PTW inhomogeneity phantom (T40037), with different combinations of heterogeneous slabs, was CT-scanned with a Philips Brilliance 16-slice scanner. The phantom contains blocks of Rando Alderson materials mimicking lung, polystyrene (tissue), PTFE (bone) and PMMA. The phantom has a 30×30×2.5 cm base plate with 2 cm recesses to insert inhomogeneities. The detector systems used in this study are diodes, TLDs and Gafchromic EBT2 films. The diodes and TLDs were included in the CT scans. The CT sets were transferred to the Eclipse treatment planning system. Several plans were created with the Eclipse Monte Carlo (EMC) algorithm 11.0.21. Measurements were carried out on a Varian TrueBeam machine for energies from 6-22 MeV. Results: The measured and calculated doses agreed very well for tissue-like media. The agreement was reasonable in the presence of lung inhomogeneity. The point-dose agreement was within 3.5% and the Gamma passing rate at 3%/3mm was greater than 93% except for 6 MeV (85%). The disagreement can reach as high as 10% in the presence of bone inhomogeneity. This is because Eclipse reports dose to the medium, as opposed to dose to water as in conventional calculation engines. Conclusion: Care must be taken when using the Varian Eclipse EMC algorithm for routine clinical dose calculation. The algorithm does not report dose to water, on which most clinical experience is based; rather, it reports dose to medium directly. In the presence of an inhomogeneity such as bone, the dose discrepancy can be as high as 10% or even more, depending on the location of the normalization point or volume. As radiation oncology is an empirical science, care must be taken before using EMC-reported monitor units for clinical use.
Sunny, E. E.; Martin, W. R. [University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor MI 48109 (United States)
2013-07-01T23:59:59.000Z
Current Monte Carlo codes use one of three models for neutron scattering in the epithermal energy range: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. The free gas scattering model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei, where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that using the free gas scattering model in the vicinity of the resonances in the lower epithermal range can under-predict resonance absorption due to the up-scattering phenomenon. Existing methods all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame. In this paper, we present a new sampling methodology that (1) accounts for the energy-dependent scattering cross sections in the collision analysis and (2) acts in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on the fly during the random walk of the neutron. Results of criticality studies on fuel-pin and fuel-assembly calculations using these methods agreed closely with results using the reference Doppler-broadened rejection correction (DBRC) scheme. (authors)
Implementation of the probability table method in a continuous-energy Monte Carlo code system
Sutton, T.M.; Brown, F.B. [Lockheed Martin Corp., Schenectady, NY (United States)
1998-10-01T23:59:59.000Z
RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.
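At a collision, using a probability table amounts to inverting a small cumulative distribution over cross-section "bands" at the nearest tabulated energy. A minimal sketch with invented band data (not ENDF/B values, and not RACER's actual table layout):

```python
import bisect
import random

def sample_band_xs(bands, rng):
    """Sample a cross section from one probability-table entry:
    `bands` is a list of (band_probability, cross_section) pairs."""
    cdf, xs, acc = [], [], 0.0
    for p, sigma in bands:
        acc += p
        cdf.append(acc)
        xs.append(sigma)
    return xs[bisect.bisect_left(cdf, rng.random() * acc)]

rng = random.Random(7)
# three bands at a single URR energy point: (probability, sigma in barns)
bands = [(0.2, 5.0), (0.5, 20.0), (0.3, 300.0)]
mean = sum(sample_band_xs(bands, rng) for _ in range(20000)) / 20000
# the dilute-average would be 0.2*5 + 0.5*20 + 0.3*300 = 101 barns;
# the table additionally preserves fluctuations about that mean
```

This is the essential difference from the smooth dilute-average treatment: both reproduce the mean cross section, but only the table reproduces the resonance-driven spread that matters for self-shielding.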
Helton, J.C.; Shiver, A.W.
1994-10-01T23:59:59.000Z
A Monte Carlo procedure for the construction of complementary cumulative distribution functions (CCDFs) for comparison with the US Environmental Protection Agency (EPA) release limits for radioactive waste disposal (40 CFR 191, Subpart B) is described and illustrated with results from a recent performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP). The Monte Carlo procedure produces CCDF estimates similar to those obtained with stratified sampling in several recent PAs for the WIPP. The advantages of the Monte Carlo procedure over stratified sampling include increased resolution in the calculation of probabilities for complex scenarios involving drilling intrusions and better use of the necessarily limited number of mechanistic calculations that underlie CCDF construction.
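The CCDF estimate underlying such comparisons is straightforward to sketch (illustrative code, not the WIPP PA implementation): for each threshold, it is the fraction of sampled normalized releases that exceed it:

```python
import numpy as np

def empirical_ccdf(samples, thresholds):
    """CCDF estimate: fraction of sampled releases exceeding each threshold."""
    samples = np.asarray(samples, dtype=float)
    return np.array([(samples > t).mean() for t in thresholds])
```

Comparison with a regulatory limit then amounts to checking that this curve lies below the limit curve at the prescribed probability levels.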
Ramos-Mendez, Jose [Benemerita Universidad Autonoma de Puebla, 18 Sur and San Claudio Avenue, Puebla, Puebla 72750 (Mexico); Perl, Joseph [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025 (United States); Faddegon, Bruce [Department of Radiation Oncology, University of California at San Francisco, California 94143 (United States); Schuemann, Jan; Paganetti, Harald [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)
2013-04-15T23:59:59.000Z
Purpose: To present the implementation and validation of a geometrical based variance reduction technique for the calculation of phase space data for proton therapy dose calculation. Methods: The treatment heads at the Francis H Burr Proton Therapy Center were modeled with a new Monte Carlo tool (TOPAS based on Geant4). For variance reduction purposes, two particle-splitting planes were implemented. First, the particles were split upstream of the second scatterer or at the second ionization chamber. Then, particles reaching another plane immediately upstream of the field specific aperture were split again. In each case, particles were split by a factor of 8. At the second ionization chamber and at the latter plane, the cylindrical symmetry of the proton beam was exploited to position the split particles at randomly spaced locations rotated around the beam axis. Phase space data in IAEA format were recorded at the treatment head exit and the computational efficiency was calculated. Depth-dose curves and beam profiles were analyzed. Dose distributions were compared for a voxelized water phantom for different treatment fields for both the reference and optimized simulations. In addition, dose in two patients was simulated with and without particle splitting to compare the efficiency and accuracy of the technique. Results: A normalized computational efficiency gain of a factor of 10-20.3 was reached for phase space calculations for the different treatment head options simulated. Depth-dose curves and beam profiles were in reasonable agreement with the simulation done without splitting: within 1% for depth-dose with an average difference of (0.2 {+-} 0.4)%, 1 standard deviation, and a 0.3% statistical uncertainty of the simulations in the high dose region; 1.6% for planar fluence with an average difference of (0.4 {+-} 0.5)% and a statistical uncertainty of 0.3% in the high fluence region. 
The percentage differences between dose distributions in water for simulations done with and without particle splitting were within the accepted clinical tolerance of 2%, with a 0.4% statistical uncertainty. For the two patient geometries considered, head and prostate, the efficiency gains were 20.9 and 14.7, respectively, with 98.9% and 99.7% of voxels, respectively, having gamma indices lower than unity using 2% and 2 mm criteria. Conclusions: The authors have implemented an efficient variance reduction technique with significant speed improvements for proton Monte Carlo simulations. The method can be transferred to other codes and other treatment heads.
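The splitting step described in the Methods can be sketched as follows. This is a simplified illustration under the stated cylindrical-symmetry assumption; the names and the flat particle tuple are ours, not TOPAS's:

```python
import math
import random

def split_with_rotation(x, y, z, ux, uy, uz, weight, n_split=8):
    """Split a particle into n_split copies, each carrying weight/n_split,
    rotating position and direction by a random azimuth about the z (beam)
    axis. Valid only where the beam is cylindrically symmetric."""
    copies = []
    for _ in range(n_split):
        phi = 2.0 * math.pi * random.random()
        c, s = math.cos(phi), math.sin(phi)
        copies.append((c * x - s * y, s * x + c * y, z,
                       c * ux - s * uy, s * ux + c * uy, uz,
                       weight / n_split))
    return copies
```

The rotation decorrelates the daughters so they explore different regions of the downstream geometry, while the total statistical weight is conserved.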
Bennett, C.M. [Los Alamos National Lab., NM (United States). Theoretical Div.]|[Oklahoma State Univ., Stillwater, OK (United States). Dept. of Chemistry; Sewell, T.D. [Los Alamos National Lab., NM (United States). Theoretical Div.
1998-12-31T23:59:59.000Z
Isothermal-isobaric Monte Carlo calculations are used in conjunction with an expression that relates the elastic stiffness tensor to the mean-square fluctuations of the strain tensor to obtain first-principles predictions of the Young's moduli, shear moduli, and Poisson's ratios for room-temperature crystalline RDX. The results are based on numerical data obtained during previously reported calculations of the hydrostatic compression of RDX over the pressure domain 0 GPa {le} p {le} 4 GPa. Although there are no experimental data available for comparison, the predicted values of the engineering coefficients are in accord with general expectations for brittle molecular crystals. The calculations reported here are preliminary: more extensive Monte Carlo realizations are needed to yield well-converged predictions; these are underway for RDX and {beta}-HMX.
Shulenburger, Luke; Desjarlais, M P
2015-01-01T23:59:59.000Z
Motivated by the disagreement between recent diffusion Monte Carlo calculations and experiments on the phase transition pressure between the ambient and beta-Sn phases of silicon, we present a study of the HCP to BCC phase transition in beryllium. This lighter element provides an opportunity for directly testing many of the approximations required for calculations on silicon and may suggest a path towards increasing the practical accuracy of diffusion Monte Carlo calculations of solids in general. We demonstrate that the single largest approximation in these calculations is the pseudopotential approximation. After removing it we find excellent agreement with experiment for the ambient HCP phase, and results similar to careful density functional theory calculations for the phase transition pressure.
Müller, Florian, E-mail: florian.mueller@sam.math.ethz.ch; Jenny, Patrick, E-mail: jenny@ifd.mavt.ethz.ch; Meyer, Daniel W., E-mail: meyerda@ethz.ch
2013-10-01T23:59:59.000Z
Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has been shown to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations, and hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
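The MLMC estimator has a compact generic form: a telescoping sum of level corrections, each estimated from coupled coarse/fine solves. This sketch assumes a user-supplied `sampler` and is not the streamline solver of the study:

```python
import numpy as np

def mlmc_estimate(sampler, n_samples):
    """Multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].

    sampler(l, n) must return paired arrays (coarse, fine) of the quantity
    of interest on levels l-1 and l (coarse is ignored for l = 0), computed
    from the SAME random inputs so the correction has small variance.
    """
    est = 0.0
    for l, n in enumerate(n_samples):
        coarse, fine = sampler(l, n)
        if l == 0:
            est += np.mean(fine)          # base level: plain MC mean
        else:
            est += np.mean(fine - coarse)  # correction term for level l
    return est
```

The speedup comes from taking many cheap samples on coarse levels and only a few expensive ones on fine levels, since the corrections shrink as the levels are refined.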
C. E. Berger; E. R. Anderson; J. E. Drut
2014-10-29T23:59:59.000Z
We determine the ground-state energy and Tan's contact of attractively interacting few-fermion systems in a one-dimensional harmonic trap, for a range of couplings and particle numbers. To this end, we implement a new lattice Monte Carlo approach based on a non-uniform discretization of space, defined via Gauss-Hermite quadrature points and weights. This particular coordinate basis is natural for systems in harmonic traps, and it yields a position-dependent coupling and a corresponding non-uniform Hubbard-Stratonovich transformation. The resulting path integral is performed with hybrid Monte Carlo as a proof of principle for calculations at finite temperature and in higher dimensions.
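The nonuniform grid itself comes straight from standard quadrature routines; a minimal NumPy sketch (not the authors' code) of the points and weights, with a sanity check on a Gaussian moment:

```python
import numpy as np

# Nodes and weights of 8-point Gauss-Hermite quadrature: in the scheme
# described above, lattice sites sit at the nodes x_i and spatial sums
# carry the weights w_i, so integrals against exp(-x^2) are exact for
# polynomials up to degree 2n - 1 = 15.
x, w = np.polynomial.hermite.hermgauss(8)

# Sanity check: the integral of x^2 exp(-x^2) over the real line is sqrt(pi)/2.
second_moment = np.sum(w * x**2)
```

Note how the nodes cluster near the origin, matching the density profile of a harmonically trapped system.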
Monte Carlo analysis of a monolithic interconnected module with a back surface reflector
Ballinger, C.T.; Charache, G.W. [Lockheed Martin Corp., Schenectady, NY (United States); Murray, C.S. [Bettis Atomic Power Lab., West Mifflin, PA (United States)
1998-10-01T23:59:59.000Z
Recently, the photon Monte Carlo code RACER-X was modified to include wavelength-dependent absorption coefficients and indices of refraction. This work was done to make the code applicable to a wider range of problems. These new features make RACER-X useful for analyzing devices like monolithic interconnected modules (MIMs), which have etched surface features and incorporate a back surface reflector (BSR) for spectral control. A series of calculations were performed on various MIM structures to determine the impact that surface features and component reflectivities have on spectral utilization. The traditional concern of cavity photonics is replaced with intra-cell photonics in the MIM design. As in the cavity photonic problems previously discussed, small changes in optical properties and/or geometry can lead to large changes in spectral utilization. The calculations show that seemingly innocuous surface features (e.g., trenches and grid lines) can significantly reduce the spectral utilization due to the non-normal incident photon flux. Photons that enter the device through a trench edge are refracted onto a trajectory from which they cannot escape. This reduces the number of reflected below-bandgap photons that return to the radiator and thus reduces the spectral utilization. In addition, trenches expose a lateral conduction layer in this particular series of calculations, which increases the absorption of above-bandgap photons in inactive material.
Tian, Zhen; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-01-01T23:59:59.000Z
Monte Carlo (MC) simulation is considered the most accurate method for radiation dose calculations. The accuracy of a source model for a linear accelerator is critical for the overall dose calculation accuracy. In this paper, we present an analytical source model that we recently developed for GPU-based MC dose calculations. A key concept called the phase-space-ring (PSR) was proposed: a PSR contains a group of particles that are of the same type and close in energy and in radial distance to the center of the phase-space plane. The model parameterizes probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. For a primary photon PSR, the particle direction is assumed to be from the beam spot; a finite spot size is modeled with a 2D Gaussian distribution. For a scattered photon PSR, multiple Gaussian components were used to model the particle direction. The direction distribution of an electron PSR was also modeled as a 2D Gaussian distribution.
A kinetic Monte Carlo method for the simulation of massive phase transformations
Bos, C.; Sommer, F.; Mittemeijer, E.J
2004-07-12T23:59:59.000Z
A multi-lattice kinetic Monte Carlo method has been developed for the atomistic simulation of massive phase transformations. Besides sites on the crystal lattices of the parent and product phases, randomly placed sites are incorporated as possible positions. These random sites allow the atoms to take favourable intermediate positions, essential for a realistic description of transformation interfaces. The transformation from fcc to bcc starting from a flat interface with the fcc(1 1 1)//bcc(1 1 0) and fcc[1 1 1-bar]//bcc[0 0 1-bar] orientation in a single-component system has been simulated. Growth occurs in two different modes depending on the chosen values of the bond energies. For larger fcc-bcc energy differences, continuous growth is observed with a rough transformation front. For smaller energy differences, plane-by-plane growth is observed. In this growth mode, two-dimensional nucleation is required in the next fcc plane after completion of the transformation of the previous fcc plane.
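For reference, one step of a generic rejection-free kinetic Monte Carlo scheme picks an event with probability proportional to its rate and advances the clock by an exponential increment; the multi-lattice bookkeeping of the actual method is much richer than this sketch:

```python
import math
import random

def kmc_step(rates):
    """One rejection-free kinetic Monte Carlo step.

    rates: per-event rates (e.g. from bond-energy differences via an
    Arrhenius form). Returns (chosen event index, time increment).
    """
    total = sum(rates)
    r = random.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    # Waiting time is exponential with mean 1/total; the 1 - u trick
    # keeps the log argument strictly positive.
    dt = -math.log(1.0 - random.random()) / total
    return i, dt
```

Events with larger rates, such as jumps lowering the fcc-bcc bond energy, are chosen proportionally more often, which is what drives the growth modes described above.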
Fractal space-times under the microscope: A Renormalization Group view on Monte Carlo data
Martin Reuter; Frank Saueressig
2011-10-24T23:59:59.000Z
The emergence of fractal features in the microscopic structure of space-time is a common theme in many approaches to quantum gravity. In this work we carry out a detailed renormalization group study of the spectral dimension $d_s$ and walk dimension $d_w$ associated with the effective space-times of asymptotically safe Quantum Einstein Gravity (QEG). We discover three scaling regimes where these generalized dimensions are approximately constant for an extended range of length scales: a classical regime where $d_s = d, d_w = 2$, a semi-classical regime where $d_s = 2d/(2+d), d_w = 2+d$, and the UV-fixed point regime where $d_s = d/2, d_w = 4$. On the length scales covered by three-dimensional Monte Carlo simulations, the resulting spectral dimension is shown to be in very good agreement with the data. This comparison also provides a natural explanation for the apparent puzzle between the short distance behavior of the spectral dimension reported from Causal Dynamical Triangulations (CDT), Euclidean Dynamical Triangulations (EDT), and Asymptotic Safety.
von Wittenau, A; Aufderheide, M B; Henderson, G L
2010-05-07T23:59:59.000Z
Given the cost and lead-times involved in high-energy proton radiography, it is prudent to model proposed radiographic experiments to see if the images predicted would return useful information. We recently modified our raytracing transmission radiography modeling code HADES to perform simplified Monte Carlo simulations of the transport of protons in a proton radiography beamline. Beamline objects include the initial diffuser, vacuum magnetic fields, windows, angle-selecting collimators, and objects described as distorted 2D (planar or cylindrical) meshes or as distorted 3D hexahedral meshes. We present an overview of the algorithms used for the modeling and code timings for simulations through typical 2D and 3D meshes. We next calculate expected changes in image blur as scattering materials are placed upstream and downstream of a resolution test object (a 3 mm thick sheet of tantalum, into which 0.4 mm wide slits have been cut), and as the current supplied to the focusing magnets is varied. We compare and contrast the resulting simulations with the results of measurements obtained at the 800 MeV Los Alamos LANSCE Line-C proton radiography facility.
Byun, H. S.; Pirbadian, S.; Nakano, Aiichiro; Shi, Liang; El-Naggar, Mohamed Y.
2014-09-05T23:59:59.000Z
Microorganisms overcome the considerable hurdle of respiring extracellular solid substrates by deploying large multiheme cytochrome complexes that form 20 nanometer conduits to traffic electrons through the periplasm and across the cellular outer membrane. Here we report the first kinetic Monte Carlo simulations and single-molecule scanning tunneling microscopy (STM) measurements of the Shewanella oneidensis MR-1 outer membrane decaheme cytochrome MtrF, which can perform the final electron transfer step from cells to minerals and microbial fuel cell anodes. We find that the calculated electron transport rate through MtrF is consistent with previously reported in vitro measurements of the Shewanella Mtr complex, as well as in vivo respiration rates on electrode surfaces assuming a reasonable (experimentally verified) coverage of cytochromes on the cell surface. The simulations also reveal a rich phase diagram in the overall electron occupation density of the hemes as a function of electron injection and ejection rates. Single molecule tunneling spectroscopy confirms MtrF's ability to mediate electron transport between an STM tip and an underlying Au(111) surface, but at rates higher than expected from previously calculated heme-heme electron transfer rates for solvated molecules.
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y. [Korea Advanced Institute of Science and Technology - KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 305-701 (Korea, Republic of); Yun, S. [Korea Atomic Energy Research Institute - KAERI, 989-111 Daedeok-daero, Yuseong-gu, Daejeon, 305-353 (Korea, Republic of)
2013-07-01T23:59:59.000Z
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5, to which a high-order Legendre scattering cross section data generation module was added. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and both were applied in the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross section data were used. A hybrid MCNP5/DIF3D calculation was used to analyze the core model: the cross section data were generated with MCNP5, and the k{sub eff} and core power distribution were calculated with the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k{sub eff}, the 9-group MCNP5/DIF3D analysis has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
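The fine-to-few-group condensation implied here follows the standard flux-weighting rule, sigma_G = sum_g phi_g sigma_g / sum_g phi_g over the fine groups g in G; a schematic version with illustrative names (not the MCNP5 tally module):

```python
import numpy as np

def collapse_xs(sigma_fine, flux_fine, group_bounds):
    """Flux-weighted collapse of fine-group cross sections to few groups.

    sigma_fine:   fine-group cross sections.
    flux_fine:    fine-group scalar fluxes (the weighting spectrum).
    group_bounds: list of (lo, hi) index ranges defining each few group.
    """
    sigma_few = []
    for lo, hi in group_bounds:
        phi = flux_fine[lo:hi]
        # Preserve the reaction rate: sum(phi*sigma) / sum(phi).
        sigma_few.append(np.sum(phi * sigma_fine[lo:hi]) / np.sum(phi))
    return np.array(sigma_few)
```

A Monte Carlo collision estimator effectively tallies the numerator (reaction rate) and denominator (flux) directly, so the collapsed data inherit the continuous-energy self-shielding of the MC solution.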
Abdel-Khalik, Hany S.; Gardner, Robin; Mattingly, John; Sood, Avneet
2014-05-20T23:59:59.000Z
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10-10 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained here, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
MONTE CARLO SIMULATIONS OF THE PHOTOSPHERIC EMISSION IN GAMMA-RAY BURSTS
Begue, D.; Siutsou, I. A.; Vereshchagin, G. V. [University of Roma ''Sapienza'', I-00185, p.le A. Moro 5, Rome (Italy)
2013-04-20T23:59:59.000Z
We studied the decoupling of photons from ultra-relativistic spherically symmetric outflows expanding with constant velocity by means of Monte Carlo simulations. For outflows with finite widths we confirm the existence of two regimes: photon-thick and photon-thin, introduced recently by Ruffini et al. (RSV). The probability density function of the last scattering of photons is shown to be very different in these two cases. We also obtained spectra as well as light curves. In the photon-thick case, the time-integrated spectrum is much broader than the Planck function and its shape is well described by the fuzzy photosphere approximation introduced by RSV. In the photon-thin case, we confirm the crucial role of photon diffusion, hence the probability density of decoupling has a maximum near the diffusion radius well below the photosphere. The time-integrated spectrum of the photon-thin case has a Band shape that is produced when the outflow is optically thick and its peak is formed at the diffusion radius.
MONTE CARLO SIMULATIONS OF NONLINEAR PARTICLE ACCELERATION IN PARALLEL TRANS-RELATIVISTIC SHOCKS
Ellison, Donald C.; Warren, Donald C. [Physics Department, North Carolina State University, Box 8202, Raleigh, NC 27695 (United States); Bykov, Andrei M., E-mail: don_ellison@ncsu.edu, E-mail: ambykov@yahoo.com [Ioffe Institute for Physics and Technology, 194021 St. Petersburg (Russian Federation)
2013-10-10T23:59:59.000Z
We present results from a Monte Carlo simulation of a parallel collisionless shock undergoing particle acceleration. Our simulation, which contains parameterized scattering and a particular thermal leakage injection model, calculates the feedback between accelerated particles ahead of the shock, which influence the shock precursor and 'smooth' the shock, and thermal particle injection. We show that there is a transition between nonrelativistic shocks, where the acceleration efficiency can be extremely high and the nonlinear compression ratio can be substantially greater than the Rankine-Hugoniot value, and fully relativistic shocks, where diffusive shock acceleration is less efficient and the compression ratio remains at the Rankine-Hugoniot value. This transition occurs in the trans-relativistic regime and, for the particular parameters we use, occurs around a shock Lorentz factor {gamma}{sub 0} = 1.5. We also find that nonlinear shock smoothing dramatically reduces the acceleration efficiency presumed to occur with large-angle scattering in ultra-relativistic shocks. Our ability to seamlessly treat the transition from ultra-relativistic to trans-relativistic to nonrelativistic shocks may be important for evolving relativistic systems, such as gamma-ray bursts and Type Ibc supernovae. We expect a substantial evolution of shock-accelerated spectra during this transition, from soft early on to much harder when the blast-wave shock becomes nonrelativistic.
Intra-Globular Structures in Multiblock Copolymer Chains from a Monte Carlo Simulation
Krzysztof Lewandowski; Michal Banaszak
2014-10-16T23:59:59.000Z
Multiblock copolymer chains in implicit nonselective solvents are studied by a Monte Carlo method that employs a parallel tempering algorithm. Chains consisting of 120 $A$ and 120 $B$ monomers, arranged in three distinct microarchitectures: $(10-10)_{12}$, $(6-6)_{20}$, and $(3-3)_{40}$, collapse to globular states upon cooling, as expected. By varying both the reduced temperature $T^*$ and the compatibility between monomers $\omega$, numerous intra-globular structures are obtained: diclusters (handshake, spiral, torus with a core, etc.), triclusters, and $n$-clusters with $n>3$ (lamellar and other), which are reminiscent of the block copolymer nanophases for spherically confined geometries. Phase diagrams for various chains in the $(T^*, \omega)$-space are mapped. The structure factor $S(k)$, for a selected microarchitecture and $\omega$, is calculated. Since $S(k)$ can be measured in scattering experiments, it can be used to relate the simulation results to experiment. Self-assembly in these systems is interpreted in terms of competition between minimization of the interfacial area separating different types of monomers and minimization of contacts between chain and solvent. Finally, the relevance of this model to protein folding is addressed.
Hsiao-Ping Hsu; Bernd A. Berg; Peter Grassberger
2004-08-26T23:59:59.000Z
Treating the ambient water realistically is one of the main difficulties in applying Monte Carlo methods to protein folding. The solvent-accessible area method, a popular method for treating water implicitly, is investigated by means of Metropolis simulations of the brain peptide Met-Enkephalin. For the phenomenological energy function ECEPP/2, nine atomic solvation parameter (ASP) sets proposed by previous authors are studied. The simulations are compared with each other, with simulations with a distance-dependent electrostatic permittivity $\epsilon(r)$, and with vacuum simulations ($\epsilon = 2$). Parallel tempering and a recently proposed biased Metropolis technique are employed and their performances are evaluated. The measured observables include energy and dihedral probability densities (pds), integrated autocorrelation times, and acceptance rates. Two of the ASP sets turn out to be unsuitable for these simulations. For all other sets, selected configurations are minimized in search of the global energy minima. Unique minima are found for the vacuum and the $\epsilon(r)$ system, but for none of the ASP models. Other observables show a remarkable dependence on the ASPs. In particular, autocorrelation times vary dramatically with the ASP parameters. Three ASP sets have much smaller autocorrelations at 300 K than the vacuum simulations, opening the possibility that simulations can be vastly sped up by judiciously choosing details of the force field.
Saha, Krishnendu [Ohio Medical Physics Consulting, Dublin, Ohio 43017 (United States); Straus, Kenneth J.; Glick, Stephen J. [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States); Chen, Yu. [Department of Radiation Oncology, Columbia University, New York, New York 10032 (United States)
2014-08-28T23:59:59.000Z
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
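The computational payoff of polar voxels is that each angular block of the system matrix is circulant, so its action reduces to FFTs. A minimal sketch of the underlying identity (illustrative, not the GATE-based reconstruction code):

```python
import numpy as np

def circulant_matvec(first_col, x):
    """Multiply a circulant matrix, defined by its first column, by x in
    O(n log n): a circulant C is diagonalized by the DFT, so
    C x = ifft(fft(c) * fft(x))."""
    return np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)))
```

Because of this identity, only one column per block of the system matrix needs to be computed and stored, which is exactly the storage-reduction strategy enabled by the scanner's rotational symmetry.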
Äkäslompolo, Simppa; Tardini, Giovanni; Kurki-Suonio, Taina
2015-01-01T23:59:59.000Z
The activation probe is a robust tool to measure the flux of fusion products from a magnetically confined plasma. A carefully chosen solid sample is exposed to the flux, and the impinging ions transmute the material, making it radioactive. Ultra-low-level gamma-ray spectroscopy is used post mortem to measure the activity and, thus, the number of fusion products. This contribution presents the numerical analysis of the first measurement in the ASDEX Upgrade tokamak, which was also the first experiment to measure a single discharge. The ASCOT suite of codes was used to perform adjoint/reverse Monte-Carlo calculations of the fusion products. The analysis facilitated, for the first time, a comparison of numerical and experimental values for an absolutely calibrated flux. The results agree to within 40%, which can be considered remarkable given that not all features of the plasma can be accounted for in the simulations. Also an alternative probe orientation was studied. The results suggest that a better optimized...
ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01T23:59:59.000Z
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user-friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
Harrisson, G.; Marleau, G. [Inst. of Nuclear Engineering, Ecole Polytechnique de Montreal (Canada)
2012-07-01T23:59:59.000Z
The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use for this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give results most consistent with those of SERPENT. (authors)
Collapse transitions in thermosensitive multi-block copolymers: A Monte Carlo study
Rissanou, Anastassia N., E-mail: rissanou@tem.uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, GR-71003 Heraklion Crete, Greece and Archimedes Center for Analysis, Modeling and Computation, University of Crete, P.O. Box 2208, GR-71003 Heraklion Crete (Greece); Tzeli, Despoina S. [Department of Materials Science and Technology, University of Crete, GR-71003 Heraklion Crete (Greece); Anastasiadis, Spiros H. [Department of Chemistry, University of Crete, P.O. Box 2208, 710 03 Heraklion Crete (Greece); Institute of Electronic Structure and Laser, Foundation for Research and Technology-Hellas, GR-71110 Heraklion Crete (Greece); Bitsanis, Ioannis A. [Institute of Electronic Structure and Laser, Foundation for Research and Technology-Hellas, GR-71110 Heraklion Crete (Greece)
2014-05-28T23:59:59.000Z
Monte Carlo simulations are performed on a simple cubic lattice to investigate the behavior of a single linear multiblock copolymer chain of various lengths N. The chain, of type (A{sub n}B{sub n}){sub m}, consists of alternating A and B blocks, where A is solvophilic and B is solvophobic and N = 2nm. The conformations are classified into five cases of globule formation by the solvophobic blocks of the chain. The dependence of globule characteristics on the molecular weight and on the number of blocks which participate in their formation is examined. The focus is on relatively high molecular weight blocks (i.e., N in the range of 500–5000 units) and very different energetic conditions for the two blocks (a very good, almost athermal, solvent for A and a bad solvent for B). A rich phase behavior is observed as a result of the alternating architecture of the multiblock copolymer chain. We trust that thermodynamic equilibrium has been reached for chains of N up to 2000 units; however, for longer chains kinetic entrapments are observed. The comparison among equivalent globules consisting of different numbers of B-blocks shows that the more solvophobic blocks constitute the globule, the larger its radius of gyration and the looser its structure. Comparisons between globules formed by the solvophobic blocks of the multiblock copolymer chain and their homopolymer analogs highlight the important role of the solvophilic A-blocks.
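The globule comparisons above are phrased in terms of the radius of gyration; for reference, the standard estimator over monomer coordinates is (a generic sketch, not the authors' code):

```python
import numpy as np

def radius_of_gyration(coords):
    """Rg = sqrt(<|r_i - r_cm|^2>), averaged over the monomers of a globule."""
    coords = np.asarray(coords, dtype=float)
    rcm = coords.mean(axis=0)                       # center of mass
    return np.sqrt(np.mean(np.sum((coords - rcm) ** 2, axis=1)))
```

Applied separately to the B-monomers of each globule, this is the quantity whose growth with the number of constituent blocks signals the looser packing noted above.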
Monte Carlo simulation of the data acquisition chain of scintillation detectors
Binda, F.; Ericsson, G.; Hellesen, C.; Hjalmarsson, A.; Eriksson, J.; Skiba, M.; Conroy, S.; Weiszflog, M. [Uppsala University, Department of Physics and Astronomy, Division of Applied Nuclear Physics, 75120 Uppsala (Sweden)
2014-08-21T23:59:59.000Z
The performance of a detector can be strongly affected by the instrumentation used to acquire its data. The ability to anticipate how the acquisition chain will affect the signal can help in finding the best solution among different set-ups. In this work we developed a Monte Carlo code that simulates the effect of the various components of a digital data acquisition (DAQ) system applied to scintillation detectors. The components included in the model are: the scintillator, the photomultiplier tube (PMT), the signal cable, and the digitizer. We benchmarked the code against real data acquired with an NE213 scintillator, comparing simulated and real signal pulses induced by gamma-ray interactions. We then studied the dependence of the energy resolution of a pulse height spectrum (PHS) on the sampling frequency and the bit resolution of the digitizer. We found that increasing the sampling frequency and the bit resolution beyond certain values improves the performance of the system only marginally. The method can be applied to the study of various detector systems relevant for nuclear techniques, such as fusion diagnostics.
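The digitizer stage of such a chain is straightforward to sketch. The pulse shape and parameter values below are assumptions for illustration (a generic difference-of-exponentials pulse, not the NE213/PMT model of the paper); the sampling and quantization steps are the part being demonstrated.

```python
import math

def pmt_pulse(t, amplitude=1.0, rise=4e-9, fall=30e-9):
    """Toy anode pulse: difference of exponentials (assumed shape)."""
    if t < 0:
        return 0.0
    return amplitude * (math.exp(-t / fall) - math.exp(-t / rise))

def digitize(signal, duration, f_s, bits, v_range=2.0):
    """Sample `signal` at f_s samples/s and quantize each sample to
    `bits` bits over [0, v_range], as an ideal ADC would."""
    n_codes = 2 ** bits - 1
    lsb = v_range / n_codes
    samples = []
    for k in range(int(duration * f_s)):
        v = signal(k / f_s)
        code = max(0, min(n_codes, round(v / lsb)))  # clip and round to LSB
        samples.append(code * lsb)
    return samples
```

Sweeping `f_s` and `bits` and recomputing a pulse-height spectrum from the digitized traces reproduces, in spirit, the saturation effect reported above: beyond some point, finer sampling and quantization no longer change the recovered pulse heights.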
Thermodynamics and quark susceptibilities: a Monte-Carlo approach to the PNJL model
M. Cristoforetti; T. Hell; B. Klein; W. Weise
2010-02-11T23:59:59.000Z
The Monte-Carlo method is applied to the Polyakov-loop extended Nambu--Jona-Lasinio (PNJL) model. This leads beyond the saddle-point approximation in a mean-field calculation and introduces fluctuations around the mean fields. We study the impact of fluctuations on the thermodynamics of the model, both in the case of pure gauge theory and including two quark flavors. In the two-flavor case, we calculate the second-order Taylor expansion coefficients of the thermodynamic grand canonical partition function with respect to the quark chemical potential and present a comparison with extrapolations from lattice QCD. We show that the introduction of fluctuations produces only small changes in the behavior of the order parameters for chiral symmetry restoration and the deconfinement transition. On the other hand, we find that fluctuations are necessary in order to reproduce lattice data for the flavor non-diagonal quark susceptibilities. Of particular importance are pion fields, the contribution of which is strictly zero in the saddle point approximation.
Monte Carlo and Analytical Calculation of Lateral Deflection of Proton Beams in Homogeneous Targets
Pazianotto, Mauricio T.; Inocente, Guilherme F.; Silva, Danilo Anacleto A. d; Hormaza, Joel M. [Departamento de Fisica e Biofisica-Instituto de Biociencias, Universidade Estadual Paulista 'Julio de Mesquita Filho'-Botucatu-SP, Brasil and Distrito de Rubiao Junior s/no 18608-000 Botucatu, SP (Brazil)
2010-05-21T23:59:59.000Z
Proton radiation therapy is a precise form of radiation therapy, but avoiding damage to critical normal tissues and preventing geographical tumor misses require accurate knowledge of the dose delivered to the patient, and verification of the patient's position demands a precise imaging technique. In proton therapy facilities, X-ray computed tomography (xCT) is the preferred technique for patient treatment planning. This situation has been changing with the development of proton accelerators for health care and the increase in the number of treated patients; in fact, protons could be more efficient than xCT for this task. One essential difficulty in pCT image reconstruction systems comes from the scattering of the protons inside the target due to numerous small-angle deflections by nuclear Coulomb fields. The purpose of this study is the comparison of an analytical formulation for the determination of beam lateral deflection, based on Moliere's theory and Rutherford scattering, with Monte Carlo calculations by the SRIM 2008 and MCNPX codes.
Feasibility Study of Neutron Dose for Real Time Image Guided Proton Therapy: A Monte Carlo Study
Kim, Jin Sung; Kim, Daehyun; Shin, EunHyuk; Chung, Kwangzoo; Cho, Sungkoo; Ahn, Sung Hwan; Ju, Sanggyu; Chung, Yoonsun; Jung, Sang Hoon; Han, Youngyih
2015-01-01T23:59:59.000Z
Two full rotating gantries with different nozzles (a multipurpose nozzle with MLC and a dedicated scanning nozzle), together with a conventional cyclotron system, are installed and under commissioning for various proton treatment options at Samsung Medical Center in Korea. The purpose of this study is to investigate the neutron dose equivalent per therapeutic dose, H/D, to x-ray imaging equipment under various treatment conditions with Monte Carlo simulation. First, we investigated H/D with various configurations of the beam line devices (scattering, scanning, multi-leaf collimator, aperture, compensator) at the isocenter and at 20, 40, and 60 cm distance from the isocenter, and compared with other research groups. Next, we investigated the neutron dose at the x-ray equipment used for real-time imaging under various treatment conditions. Our investigation showed 0.07 ~ 0.19 mSv/Gy at the x-ray imaging equipment according to the various treatment options and, interestingly, a 50% neutron dose reduction effect of the flat panel detector was observed due to multi- lea...
Chatterjee, Abhijit [Los Alamos National Laboratory; Voter, Arthur [Los Alamos National Laboratory
2009-01-01T23:59:59.000Z
We develop a variation of the temperature accelerated dynamics (TAD) method, called the p-TAD method, that efficiently generates an on-the-fly kinetic Monte Carlo (KMC) process catalog with control over the accuracy of the catalog. It is assumed that transition state theory is valid. The p-TAD method guarantees that processes relevant at the timescales of interest to the simulation are present in the catalog with a chosen confidence. A confidence measure associated with the process catalog is derived. The dynamics is then studied using the process catalog with the KMC method. Effective accuracy of a p-TAD calculation is derived when a KMC catalog is reused for conditions different from those the catalog was originally generated for. Different KMC catalog generation strategies that exploit the features of the p-TAD method and ensure higher accuracy and/or computational efficiency are presented. The accuracy and the computational requirements of the p-TAD method are assessed. Comparisons to the original TAD method are made. As an example, we study dynamics in sub-monolayer Ag/Cu(110) at the time scale of seconds using the p-TAD method. It is demonstrated that the p-TAD method overcomes several challenges plaguing the conventional KMC method.
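A rejection-free KMC step over any such process catalog reduces to rate-proportional selection plus an exponentially distributed time increment. This is the standard BKL/Gillespie scheme, sketched here generically (it is not the p-TAD bookkeeping itself, only the KMC step that consumes a catalog like the one the method generates):

```python
import math
import random

def kmc_step(catalog, rng):
    """One rejection-free KMC step. `catalog` maps process name -> rate.
    Returns (chosen process, time increment)."""
    total = sum(catalog.values())
    r = rng.random() * total
    acc = 0.0
    for name, rate in catalog.items():
        acc += rate
        if r < acc:
            break  # `name` is the selected process
    # residence time is exponentially distributed with mean 1/total
    dt = -math.log(rng.random()) / total
    return name, dt
```

Each step picks a process with probability proportional to its rate and advances the clock, so processes missing from the catalog are simply never executed — which is why p-TAD's confidence measure on catalog completeness matters.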
Response function of the BGO and NaI(Tl) detectors using Monte Carlo simulations
Orion, I.; Wielopolski, L.
2001-01-31T23:59:59.000Z
The high efficiency of BGO detectors makes them very attractive candidates to replace NaI(Tl) detectors, which are widely used in studies of body composition. In this work, the response functions of the BGO and NaI(Tl) detectors were determined at 0.662, 4.4, and 10.0 MeV using three different Monte Carlo codes: EGS4, MCNP, and PHOTON. These codes differ in their input files and transport calculations, and were used to verify the internal consistency of the setup and of the input data. The energy range of 0.662 to 10 MeV was chosen to cover energies of interest in body composition studies. The superior efficiency of the BGO detectors has to be weighed against their inferior resolution and their higher price compared to NaI detectors. Because the price of the BGO detectors strongly depends on the size of the crystal, its optimization is an important component in the design of the entire system.
Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons
Mei, S., E-mail: smei4@wisc.edu; Knezevic, I., E-mail: knezevic@engr.wisc.edu [Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Maurer, L. N. [Department of Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Aksamija, Z. [Department of Electrical and Computer Engineering, University of Massachusetts-Amherst, Amherst, Massachusetts 01003 (United States)
2014-10-28T23:59:59.000Z
We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
A high-fidelity Monte Carlo evaluation of CANDU-6 safety parameters
Kim, Y.; Hartanto, D. [Korea Advanced Inst. of Science and Technology KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 305-701 (Korea, Republic of)
2012-07-01T23:59:59.000Z
Important safety parameters such as the fuel temperature coefficient (FTC) and the power coefficient of reactivity (PCR) of the CANDU-6 (CANada Deuterium Uranium) reactor have been evaluated by using a modified MCNPX code. For accurate analysis of these parameters, the DBRC (Doppler Broadening Rejection Correction) scheme was implemented in MCNPX in order to account for the thermal motion of the heavy uranium nucleus in neutron-U scattering reactions. In this work, a standard fuel lattice has been modeled, the fuel is depleted by using MCNPX, and the FTC value is evaluated for several burnup points including the mid-burnup representing a near-equilibrium core. The Doppler effect has been evaluated by using several cross section libraries such as ENDF/B-VI, ENDF/B-VII, JEFF, and JENDL. The PCR value is also evaluated at mid-burnup conditions to characterize the safety features of the equilibrium CANDU-6 reactor. To improve the reliability of the Monte Carlo calculations, a huge number of neutron histories is considered in this work, and the standard deviation of the k-inf values is only 0.5{approx}1 pcm. It has been found that the FTC is significantly enhanced by accounting for the Doppler broadening of the scattering resonances, and the PCR is clearly improved. (authors)
Noncovalent Interactions by Quantum Monte Carlo: A Speedup by a Smart Basis Set Reduction
Dubecký, Matúš
2015-01-01T23:59:59.000Z
The fixed-node diffusion Monte Carlo (FN-DMC) method provides a promising alternative to the commonly used coupled-cluster (CC) methods in the domain of benchmark noncovalent interaction energy calculations. This is mainly due to the low-order polynomial CPU-cost scaling of FN-DMC and a favorable FN error cancellation, leading to benchmark interaction energies accurate to 0.1 kcal/mol. While it is empirically accepted that FN-DMC results depend weakly on the one-particle basis sets used to expand the guiding functions, the limits of this assumption remain elusive. Our recent work indicates that augmented triple zeta basis sets are sufficient to achieve a benchmark level of 0.1 kcal/mol. Here we report on the possibility of significantly truncating the one-particle basis sets without any visible bias in the overall accuracy of the final FN-DMC energy differences. The approach is tested on a set of seven small noncovalent closed-shell complexes including a water dimer. The reported findings enable cheaper high-quali...
Introduction to Computational Physics and Monte Carlo Simulations of Matrix Field Theory
Badis Ydri
2015-06-05T23:59:59.000Z
This book is divided into two parts. In the first part we give an elementary introduction to computational physics, consisting of 21 simulations which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories, which involve finite dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of the first part is added at the end of the book.
Tushar Kanti Bose; Jayashree Saha
2015-03-06T23:59:59.000Z
The realization of a spontaneous macroscopic ferroelectric order in fluids of anisotropic mesogens is a topic of both fundamental and technological interest. Recently, we demonstrated that a system of dipolar achiral disklike ellipsoids can exhibit long-sought ferroelectric liquid crystalline phases of dipolar origin. In the present work, extensive off-lattice Monte Carlo simulations are used to investigate the phase behavior of the system under the influence of electrostatic boundary conditions that restrict any global polarization. We find that the system develops strongly ferroelectric slablike domains periodically arranged in an antiferroelectric fashion. Exploring the phase behavior at different dipole strengths, we find the existence of ferroelectric nematic and ferroelectric columnar order inside the domains. For higher dipole strengths, a biaxial phase is also obtained with a similar periodic array of ferroelectric slabs of antiparallel polarizations. We have studied the depolarizing effects by using both the Ewald summation and the spherical cut-off techniques. We present and compare the results of the two different approaches to treating the depolarizing effects in this anisotropic system. It is explicitly shown that the domain size increases with the system size as a result of considering a longer range of dipolar interactions. The system exhibits pronounced system size effects for stronger dipolar interactions. The results provide strong evidence for the novel understanding that dipolar interactions are indeed sufficient to produce long range ferroelectric order in anisotropic fluids.
Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study
Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.; Satogata, Todd J.; Williams, David C.; Schulte, Reinhard W. [Departments of Radiology, Computer Science, and Physics and Astronomy, State University of New York at Stony Brook, Stony Brook, New York 11794 (United States); Department of Physics, Brookhaven National Laboratory, Upton, New York 11973 (United States); Santa Cruz Institute for Particle Physics, University of California at Santa Cruz, Santa Cruz, California 95064 (United States); Department of Radiation Medicine, Loma Linda University Medical Center, Loma Linda, California 92354 (United States)
2006-03-15T23:59:59.000Z
Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm{sup -1}] to the curved CSP and MLP path estimates (5 lp cm{sup -1}). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.
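The ART algorithm used in such studies is the classic Kaczmarz projection: given a system A x = b (one row per proton history, built from whichever path estimate is chosen), each row in turn projects the current image estimate onto its hyperplane. A generic sketch (variable names and the relaxation parameter are assumptions for illustration):

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=50, relax=0.5, x0=None):
    """Algebraic reconstruction technique (Kaczmarz iterations):
    x <- x + relax * (b_i - a_i . x) / ||a_i||^2 * a_i, cycling over rows."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    x = np.zeros(A.shape[1]) if x0 is None else np.array(x0, float)
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x
```

In the pCT setting the rows of A encode how much of each pixel a proton's estimated path (SLP, CSP, or MLP) traverses, which is why better path estimates translate directly into better resolution for the same update rule.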
Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams
Vandervoort, Eric J., E-mail: evandervoort@toh.on.ca; Cygler, Joanna E. [Department of Medical Physics, The Ottawa Hospital Cancer Centre, The University of Ottawa, Ottawa, Ontario K1H 8L6 (Canada); The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5 (Canada); Department of Physics, Carleton University, Ottawa, Ontario K1S 5B6 (Canada)]; Tchistiakova, Ekaterina [Department of Medical Physics, The Ottawa Hospital Cancer Centre, The University of Ottawa, Ottawa, Ontario K1H 8L6 (Canada); Department of Medical Biophysics, University of Toronto, Ontario M5G 2M9 (Canada); Heart and Stroke Foundation Centre for Stroke Recovery, Sunnybrook Research Institute, University of Toronto, Ontario M4N 3M5 (Canada)]; La Russa, Daniel J. [Department of Medical Physics, The Ottawa Hospital Cancer Centre, The University of Ottawa, Ottawa, Ontario K1H 8L6 (Canada) and The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5 (Canada)]
2014-02-15T23:59:59.000Z
Purpose: In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Methods: Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measurements and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Results: Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators, and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 × 5 cm{sup 2}. Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (the minimum number of measurements that pass a 2%/2 mm 2D gamma index criterion for any applicator or energy was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a three-dimensional 3%/2 mm γ-criterion) provided that the steep dose gradient in the depth direction is considered. Conclusions: Clinically acceptable agreement (at the 2%/2 mm level) between the measurements and calculated data is obtained for this dose calculation algorithm for measurements in water. Radiochromic film is a useful tool to evaluate the accuracy of electron MC treatment planning systems in heterogeneous media.
Lattice Monte Carlo calculations for unitary fermions in a harmonic trap
Michael G. Endres; David B. Kaplan; Jong-Wan Lee; Amy N. Nicholson
2011-11-03T23:59:59.000Z
We present a new lattice Monte Carlo approach developed for studying large numbers of strongly interacting nonrelativistic fermions, and apply it to a dilute gas of unitary fermions confined to a harmonic trap. Our lattice action is highly improved, with sources of discretization and finite volume errors systematically removed; we are able to demonstrate the expected volume scaling of energy levels of two and three untrapped fermions, and to reproduce the high precision calculations published previously for the ground state energies for N = 3 unitary fermions in a box (to within our 0.3% uncertainty), and for N = 3, . . ., 6 unitary fermions in a harmonic trap (to within our ~ 1% uncertainty). We use this action to determine the ground state energies of up to 70 unpolarized fermions trapped in a harmonic potential on a lattice as large as 64^3 x 72; our approach avoids the use of importance sampling or calculation of a fermion determinant and employs a novel statistical method for estimating observables, allowing us to generate ensembles as large as 10^8 while requiring only relatively modest computational resources.
Investigating the rotational evolution of young, low mass stars using Monte Carlo simulations
Vasconcelos, M J
2015-01-01T23:59:59.000Z
We investigate the rotational evolution of young stars through Monte Carlo simulations. We simulate 280,000 stars, each of which is assigned a mass, a rotational period, and a mass accretion rate. The mass accretion rate depends on mass and time, following power laws with indices 1.4 and -1.5, respectively. A mass-dependent accretion threshold is defined below which a star is considered diskless, which results in a distribution of disk lifetimes that matches observations. Stars are evolved at constant spin rate while accreting and at constant angular momentum once they become diskless. We recover the bimodal period distribution seen in several young clusters. The short period peak consists mostly of diskless stars and the long period one is mainly populated by accreting stars. Both distributions present a long tail towards long periods, and a population of slowly rotating diskless stars is observed at all ages. We reproduce the observed correlations between disk fraction and spin rate, as well as between...
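The two-regime spin rule described above (period locked while the star accretes, angular momentum conserved once it is diskless) can be sketched as follows. The contraction law `radius(t)`, the step size, and the I ~ M R^2 moment of inertia are placeholders for illustration, not the simulation's actual pre-main-sequence tracks:

```python
def evolve_period(p0, t_disk, t_end, radius, dt=0.01):
    """Toy spin evolution: the rotation period is held fixed while the
    star still accretes (t < t_disk); afterwards J = I*omega with
    I ~ M R^2 is conserved, so P scales as R^2 during contraction."""
    t, p = 0.0, p0
    while t < t_end:
        t_next = t + dt
        if t >= t_disk:
            # conserving J: P_next / P = (R_next / R)^2
            p *= (radius(t_next) / radius(t)) ** 2
        t = t_next
    return p
```

With an assumed contraction law radius = lambda t: 2.0/(1.0+t), an 8-day rotator released at t = 0 spins up to 2 days by t = 1 (the radius halves, so the period drops fourfold), while a star that keeps its disk retains its initial period.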
A Monte Carlo Analysis of Gas Centrifuge Enrichment Plant Process Load Cell Data
Garner, James R [ORNL; Whitaker, J Michael [ORNL
2013-01-01T23:59:59.000Z
As uranium enrichment plants increase in number, capacity, and types of separative technology deployed (e.g., gas centrifuge, laser, etc.), more automated safeguards measures are needed to enable the IAEA to maintain safeguards effectiveness in a fiscally constrained environment. Monitoring load cell data can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards objective of confirming operations as declared (i.e., no undeclared activities), but care must be taken to fully protect the operator's proprietary and classified information related to operations. Staff at ORNL, LANL, JRC/ISPRA, and the University of Glasgow are investigating monitoring the process load cells at feed and withdrawal (F/W) stations to improve international safeguards at enrichment plants. A key question that must be resolved is the necessary frequency of recording data from the process F/W stations. Several studies have analyzed data collected at a fixed frequency. This paper contributes to load cell process monitoring research by presenting an analysis of Monte Carlo simulations to determine the expected errors caused by low frequency sampling and its impact on material balance calculations.
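The effect of sampling frequency on such a balance estimate can be illustrated with a toy Monte Carlo: a linear fill read through a noisy load cell, with the transferred mass estimated by a least-squares fit to the sampled readings. Everything here (rates, noise level, the linear-fill scenario itself) is an assumption for illustration, not the analysis of the paper:

```python
import random

def transfer_rms_error(samples_per_h, rate_kg_h=100.0, hours=1.0,
                       noise_kg=0.5, n_trials=400, seed=1):
    """RMS error of the estimated transferred mass (fitted slope * hours)
    when a linear fill is read through Gaussian load-cell noise at a
    given sampling frequency."""
    rng = random.Random(seed)
    n = max(2, int(samples_per_h * hours))
    ts = [i * hours / (n - 1) for i in range(n)]
    t_mean = sum(ts) / n
    sxx = sum((t - t_mean) ** 2 for t in ts)
    sq = 0.0
    for _ in range(n_trials):
        ms = [rate_kg_h * t + rng.gauss(0.0, noise_kg) for t in ts]
        m_mean = sum(ms) / n
        slope = sum((t - t_mean) * (m - m_mean)
                    for t, m in zip(ts, ms)) / sxx
        err = slope * hours - rate_kg_h * hours
        sq += err * err
    return (sq / n_trials) ** 0.5
```

In this toy model, sampling at 60 readings per hour gives a markedly smaller RMS transfer error than sampling at 4 per hour — the same qualitative trade-off the paper quantifies for material balance calculations.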
Erickson, Lori
1995-01-01T23:59:59.000Z
's application of Monte Carlo simulation methods to the spread of geographic phenomena, more specifically, the spread of innovations or ideas from person to person (Pitts 1965; Chorley and Haggett 1967; Marble and Bowlby 1968; Gould 1969; Cliff et al. 1981... are responsible for the safety of these park users, are concerned about several important factors. These include unusually high temperatures, lack of potable water, and other desert hazards such as steep ridges and cliffs, spiny plants, and poisonous animals...
The two-phase issue in the O(n) non-linear $\sigma$-model: A Monte Carlo study
B. Alles; A. Buonanno; G. Cella
1996-08-01T23:59:59.000Z
We have performed a high statistics Monte Carlo simulation to investigate whether the two-dimensional O(n) non-linear sigma models are asymptotically free or whether they show a Kosterlitz-Thouless-like phase transition. We have calculated the mass gap and the magnetic susceptibility in the O(8) model with the standard action and in the O(3) model with the Symanzik action. Our results for O(8) support the asymptotic freedom scenario.
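A Metropolis sweep for the 2D O(3) model with the standard action is a compact piece of code. The sketch below, proposing a fresh uniform spin per site, is a generic textbook update for this model, not the production algorithm behind the high-statistics runs above:

```python
import math
import random

def random_spin(rng):
    """Uniform unit 3-vector (Marsaglia's method)."""
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        s = x * x + y * y
        if s < 1.0:
            f = 2.0 * math.sqrt(1.0 - s)
            return (x * f, y * f, 1.0 - 2.0 * s)

def metropolis_sweep(spins, L, beta, rng):
    """One sweep of the 2D O(3) model, standard action
    S = -beta * sum s_i . s_j over nearest-neighbor pairs (periodic BCs)."""
    for i in range(L):
        for j in range(L):
            h = [0.0, 0.0, 0.0]  # local field: sum of the 4 neighbors
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                nb = spins[ni % L][nj % L]
                for k in range(3):
                    h[k] += nb[k]
            old, new = spins[i][j], random_spin(rng)
            d_action = -beta * sum((new[k] - old[k]) * h[k] for k in range(3))
            if d_action <= 0 or rng.random() < math.exp(-d_action):
                spins[i][j] = new
```

Measuring the magnetic susceptibility and the mass gap (from correlation-function decay) on configurations produced by sweeps like this is, schematically, what distinguishes the asymptotic-freedom and Kosterlitz-Thouless scenarios.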
Tutt, Teresa Elizabeth
2009-05-15T23:59:59.000Z
[Front-matter fragments only: appendix titles (Appendix B: repeatable geometry for target irradiation; Appendix C: variation of Monte Carlo parameters for a 5.5 mm phantom; Vita) and list-of-figures entries on coarse-element errors in a simple structure and a cilantro leaf, coarse element error produced by averaging densities in a voxel, and an electron step-size artifact for a 20 mm cylinder.]
Yeo, Sang Chul
Ammonia (NH[subscript 3]) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers ...
Hin, Celine Nathalie
Kinetic Monte Carlo simulations, based on parameters obtained with density-functional theory in the local-density approximation and experimental data, are used to study bulk precipitation of Y[subscript 2]O[subscript 3] ...
Landon, Colin Donald
2014-01-01T23:59:59.000Z
We present a deviational Monte Carlo method for solving the Boltzmann equation for phonon transport subject to the linearized ab initio 3-phonon scattering operator. Phonon dispersion relations and transition rates are ...
Toulouse, Julien; Reinhardt, Peter; Hoggan, Philip E; Umrigar, C J
2010-01-01T23:59:59.000Z
We report state-of-the-art quantum Monte Carlo calculations of the singlet $n \to \pi^*$ (CO) vertical excitation energy in the acrolein molecule, extending the recent study of Bouab\c{c}a {\it et al.} [J. Chem. Phys. {\bf 130}, 114107 (2009)]. We investigate the effect of using a Slater basis set instead of a Gaussian basis set, and of using state-average versus state-specific complete-active-space (CAS) wave functions, with or without reoptimization of the coefficients of the configuration state functions (CSFs) and of the orbitals in variational Monte Carlo (VMC). It is found that, with the Slater basis set used here, both state-average and state-specific CAS(6,5) wave functions give an accurate excitation energy in diffusion Monte Carlo (DMC), with or without reoptimization of the CSF and orbital coefficients in the presence of the Jastrow factor. In contrast, the CAS(2,2) wave functions require reoptimization of the CSF and orbital coefficients to give a good DMC excitation energy. Our best estimates of ...
Sadeghi, Mahdi; Raisali, Gholamreza; Hosseini, S. Hamed; Shavar, Arzhang [Nuclear Medicine Research Group, Agricultural, Medical and Industrial Research School, P.O. Box 31485-498, Karaj (Iran, Islamic Republic of) and Engineering Faculty, Science and Research Campus, Islamic Azad University, P.O. Box 14515-775, Tehran (Iran, Islamic Republic of); Radiation Applications Research School, Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of); Engineering Faculty, Science and Research Campus, Islamic Azad University, P.O. Box 14515-775, Tehran (Iran, Islamic Republic of); SSDL Group, Agricultural, Medical and Industrial Research School, Karaj (Iran, Islamic Republic of)
2008-04-15T23:59:59.000Z
This article presents a brachytherapy source having {sup 103}Pd adsorbed onto a cylindrical silver rod, developed by the Agricultural, Medical, and Industrial Research School for permanent implant applications. Dosimetric characteristics (radial dose function, anisotropy function, and anisotropy factor) of this source were experimentally and theoretically determined in terms of the updated AAPM Task Group 43 (TG-43U1) recommendations. Monte Carlo simulations were used to calculate the dose rate constant. Measurements were performed with TLD-GR200A circular chip dosimeters using standard thermoluminescent dosimetry methods in a Perspex phantom. Precision machined bores in the phantom located the dosimeters and the source in a reproducible fixed geometry, providing transverse-axis and angular dose profiles over a range of distances from 0.5 to 5 cm. The Monte Carlo N-Particle (MCNP) code, version 4C, was used to evaluate the dose-rate distributions around this model {sup 103}Pd source in water and Perspex phantoms. The Monte Carlo calculated dose rate constant of the IRA-{sup 103}Pd source in water was found to be 0.678 cGy h{sup -1} U{sup -1} with an approximate uncertainty of {+-}0.1%. The anisotropy function, F(r,{theta}), and the radial dose function, g(r), of the IRA-{sup 103}Pd source were also measured in a Perspex phantom and calculated in both Perspex and liquid water phantoms.
Event Generation of Large-Angle Bhabha Scattering at LEP2 Energies
A. B. Arbuzov
1999-10-08T23:59:59.000Z
The LABSMC Monte Carlo event generator is used to simulate Bhabha scattering at high energies. Different sources of radiative corrections are considered. The resulting precision is discussed.
Parent, Laure; Seco, Joao; Evans, Phil M.; Fielding, Andrew; Dance, David R. [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Downs Road, Sutton, SM2 5PT (United Kingdom); School of Physical and Chemical Sciences, Queensland University of Technology, Q337 Gardens Point Campus, Brisbane, Queensland 4001 (Australia); Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Fulham Road, London, SW3 6JJ (United Kingdom)
2006-12-15T23:59:59.000Z
This study focused on predicting the electronic portal imaging device (EPID) image of intensity modulated radiation treatment (IMRT) fields in the absence of attenuation material in the beam with Monte Carlo methods. As IMRT treatments consist of a series of segments of various sizes that are not always delivered on the central axis, large spectral variations may be observed between the segments. The effect of these spectral variations on the EPID response was studied with fields of various sizes and off-axis positions. A detailed description of the EPID was implemented in a Monte Carlo model. The EPID model was validated by comparing the EPID output factors for field sizes between 1x1 and 26x26 cm{sup 2} at the isocenter. The Monte Carlo simulations agreed with the measurements to within 1.5%. The Monte Carlo model succeeded in predicting the EPID response at the center of the fields of various sizes and offsets to within 1% of the measurements. Large variations (up to 29%) of the EPID response were observed between the various offsets. The EPID response increased with field size and with field offset for most cases. The Monte Carlo model was then used to predict the image of a simple test IMRT field delivered on the beam axis and with an offset. A variation of EPID response up to 28% was found between the on- and off-axis delivery. Finally, two clinical IMRT fields were simulated and compared to the measurements. For all IMRT fields, simulations and measurements agreed within 3%--0.2 cm for 98% of the pixels. The spectral variations were quantified by extracting from the spectra at the center of the fields the total photon yield (Y{sub total}), the photon yield below 1 MeV (Y{sub low}), and the percentage of photons below 1 MeV (P{sub low}). For the studied cases, a correlation was shown between the EPID response variation and Y{sub total}, Y{sub low}, and P{sub low}.
Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods
Henzen, D., E-mail: henzen@ams.unibe.ch; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K. [Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, CH-3010 Berne (Switzerland)]; Neuenschwander, H. [Clinic for Radiation-Oncology, Lindenhofspital Bern, CH-3012 Berne (Switzerland)]; Stampanoni, M. F. M. [Institute for Biomedical Engineering, ETH Zürich and Paul Scherrer Institut, CH-5234 Villigen (Switzerland)]
2014-03-15T23:59:59.000Z
Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multileaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in the beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and to treatment with a bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single segment electron treatment plan in terms of organs at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V{sub 95%} increased from 90% to 96% and V{sub 107%} decreased from 8% to nearly 0%.
For a clinical breast boost irradiation, the MERT plan led to a similar homogeneity in the PTV compared to the standard treatment plan while the mean body dose was lower for the MERT plan. Regarding the second clinical case, a whole breast treatment, MERT resulted in a reduction of the lung volume receiving more than 45% of the prescribed dose when compared to the standard plan. On the other hand, the MERT plan led to a larger low-dose lung volume and a degraded dose homogeneity in the PTV. For the clinical cases evaluated in this work, treatment plans using the BolusECT technique resulted in a more homogeneous PTV and CTV coverage but higher doses to the OARs than the MERT plans. Conclusions: MERT treatments were successfully planned for phantom and clinical cases, applying a newly developed intuitive and efficient forward planning strategy that employs an MC-based electron beam model for pMLC shaped electron beams. It is shown that MERT can lead to a dose reduction in OARs compared to other methods. The process of feathering MERT segments results in an improvement of the dose homogeneity in the PTV.
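The automated segment-weight optimization described above amounts to a nonnegative least-squares fit of the summed segment doses to the prescription. A minimal pure-Python sketch using projected gradient descent (names, step size, and iteration count are illustrative assumptions, not the in-house implementation):

```python
def optimize_weights(segment_doses, prescription, iters=5000, lr=0.05):
    """Find nonnegative segment weights minimizing the squared deviation
    sum_v (sum_i w_i * d_iv - p_v)^2 from the prescribed dose.

    segment_doses: list of per-segment dose vectors (one value per voxel)
    prescription:  target dose per voxel
    lr is assumed small enough for the given dose matrices.
    """
    n_seg = len(segment_doses)
    n_vox = len(prescription)
    w = [1.0] * n_seg
    for _ in range(iters):
        # residual r_v = current total dose minus prescription
        r = [sum(w[i] * segment_doses[i][v] for i in range(n_seg)) - prescription[v]
             for v in range(n_vox)]
        for i in range(n_seg):
            grad = 2.0 * sum(r[v] * segment_doses[i][v] for v in range(n_vox))
            w[i] = max(0.0, w[i] - lr * grad)  # project onto w_i >= 0
    return w
```

The projection step keeps all monitor-unit weights physically deliverable (nonnegative), which is the essential constraint distinguishing this from an unconstrained least-squares solve.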
Adsorption of branched and dendritic polymers onto flat surfaces: A Monte Carlo study
Sommer, J.-U. [Leibniz Institute of Polymer Research Dresden e. V., 01069 Dresden (Germany); Institute for Theoretical Physics, Technische Universität Dresden, 01069 Dresden (Germany)]; Kłos, J. S. [Leibniz Institute of Polymer Research Dresden e. V., 01069 Dresden (Germany); Faculty of Physics, A. Mickiewicz University, Umultowska 85, 61-614 Poznań (Poland)]; Mironova, O. N. [Leibniz Institute of Polymer Research Dresden e. V., 01069 Dresden (Germany)]
2013-12-28T23:59:59.000Z
Using Monte Carlo simulations based on the bond fluctuation model, we study the adsorption of starburst dendrimers with flexible spacers onto a flat surface. The calculations are performed for various generation numbers G and spacer lengths S in a wide range of the reduced temperature ε, which measures the interaction strength between the monomers and the surface. Our simulations indicate a two-step adsorption scenario. Below the critical point of adsorption, ε{sub c}, a weakly adsorbed state of the dendrimer is found. Here, the dendrimer retains its shape but sticks to the surface by adsorbed spacers. By lowering the temperature below a spacer-length dependent value, ε*(S) < ε{sub c}, a step-like transition into a strongly adsorbed state takes place. In the flatly adsorbed state the shape of the dendrimer is well described by a mean field model of a dendrimer in two dimensions. We also performed simulations of star polymers, which display a simple crossover behavior in full analogy to linear chains. By analyzing the order parameter of the adsorption transition, we determine the critical point of adsorption of the dendrimers, which is located close to the critical point of adsorption for star polymers. While the order parameter for the adsorbed spacers displays a critical crossover scaling, the overall order parameter, which combines both critical and discontinuous transition effects, does not display simple scaling. The step-like transition from the weakly into the strongly adsorbed regime is confirmed by analyzing the shape anisotropy of the dendrimers. We present a mean-field model based on the concept of spacer adsorption which predicts a discontinuous transition of dendrimers due to an excluded volume barrier. The latter results from an increased density of the dendrimer in the flatly adsorbed state, which has to be overcome before this state is thermodynamically stable.
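The Metropolis acceptance rule underlying such lattice adsorption simulations can be illustrated on a drastically reduced system: a single monomer on a lattice above an attractive wall (this toy sketch is not the bond fluctuation model itself; all names and parameters are illustrative):

```python
import math
import random

def adsorption_fraction(eps, height=10, steps=200_000, seed=1):
    """Metropolis sampling of one monomer above an attractive wall.

    The monomer occupies an integer height z in [0, height-1] and gains
    energy -eps (in units of kT) in the contact layer z = 0.  Returns the
    fraction of steps spent adsorbed; the exact Boltzmann answer is
    exp(eps) / (exp(eps) + height - 1).
    """
    rng = random.Random(seed)
    z = height // 2
    adsorbed = 0
    for _ in range(steps):
        znew = z + rng.choice((-1, 1))
        if 0 <= znew < height:           # reject moves leaving the slab
            d_e = (-eps if znew == 0 else 0.0) - (-eps if z == 0 else 0.0)
            if d_e <= 0 or rng.random() < math.exp(-d_e):
                z = znew
        if z == 0:
            adsorbed += 1
    return adsorbed / steps
```

Raising eps (lowering temperature) sweeps the system from the desorbed to the adsorbed regime, the one-particle analogue of the crossover measured for the spacer order parameter.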
Jean-Michel Caillol
2015-01-22T23:59:59.000Z
We present two methods for solving the electrostatics of point charges and multipoles on the surface of a sphere, i.e., in the space $\mathcal{S}_{2}$, with applications to numerical simulations of two-dimensional polar fluids. In the first approach, point charges are associated with uniform neutralizing backgrounds to form neutral pseudo-charges, while, in the second, one instead considers bi-charges, i.e., dumbbells of antipodal point charges of opposite signs. We establish the expressions of the electric potentials of pseudo- and bi-charges as isotropic solutions of the Laplace-Beltrami equation in $\mathcal{S}_{2}$. A multipolar expansion of pseudo- and bi-charge potentials leads to the electric potentials of mono- and bi-multipoles, respectively. These potentials constitute non-isotropic solutions of the Laplace-Beltrami equation, the general solution of which in spherical coordinates is recast in a new, appealing form. We then focus on the case of mono- and bi-dipoles and build the theory of dielectric media in $\mathcal{S}_{2}$. We notably obtain the expression of the static dielectric constant of a uniform isotropic polar fluid living in $\mathcal{S}_{2}$ in terms of the polarization fluctuations of subdomains of $\mathcal{S}_{2}$. We also derive the long range behavior of the equilibrium pair correlation function under the assumption that it is governed by macroscopic electrostatics. These theoretical developments find their application in Monte Carlo simulations of the 2D fluid of dipolar hard spheres. Some preliminary numerical experiments are discussed, with a special emphasis on finite size effects, a careful study of the thermodynamic limit, and a check of the theoretical predictions for the asymptotic behavior of the pair correlation function.
Lombardo, S.J. (California Inst. of Tech., Pasadena, CA (USA). Dept. of Chemical Engineering Lawrence Berkeley Lab., CA (USA))
1990-08-01T23:59:59.000Z
The kinetics of temperature-programmed and isothermal desorption have been simulated with a Monte Carlo model. Included in the model are the elementary steps of adsorption, surface diffusion, and desorption. Interactions between adsorbates and the metal, as well as interactions between the adsorbates, are taken into account with the Bond-Order-Conservation-Morse-Potential method. The shape, number, and location of the TPD peaks predicted by the simulations are shown to be sensitive to the binding energy, coverage, and coordination of the adsorbates. In addition, the occurrence of lateral interactions between adsorbates is seen to strongly affect the distribution of adsorbates on the surface. Temperature-programmed desorption spectra of a single type of adsorbate have been simulated for the following adsorbate-metal systems: CO on Pd(100); H{sub 2} on Mo(100); and H{sub 2} on Ni(111). The model predictions are in good agreement with experimental observation. TPD spectra have also been simulated for two species coadsorbed on a surface; the model predictions are in qualitative agreement with the experimental results for H{sub 2} coadsorbed with strongly bound atomic species on Mo(100) and Fe(100) surfaces as well as for CO and H{sub 2} coadsorbed on Ni(100) and Rh(100) surfaces. Finally, the desorption kinetics of CO from Pd(100) and Ni(100) in the presence of gas-phase CO have been examined. The effect of pressure is seen to lead to an increase in the rate of desorption relative to the rate observed in the absence of gas-phase CO. This increase arises as a consequence of higher coverages and therefore stronger lateral interactions between the adsorbed CO molecules.
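The core of a stochastic TPD simulation, stripped of diffusion and lateral interactions, is first-order desorption with an Arrhenius rate under a linear temperature ramp. A toy sketch (all parameters are illustrative, not fitted to any of the systems above):

```python
import math
import random

def tpd_spectrum(n_ads=2000, e_des=1.0, nu=1e13, beta=2.0,
                 t_start=300.0, t_end=450.0, dt=0.05, seed=7):
    """Stochastic first-order TPD: each adsorbate desorbs with rate
    k(T) = nu * exp(-E / kB T) while T ramps linearly at beta K/s.

    e_des in eV, nu in 1/s.  Returns (temperatures, desorption rates).
    """
    kb = 8.617e-5  # Boltzmann constant, eV/K
    rng = random.Random(seed)
    alive = n_ads
    temps, rates = [], []
    t = t_start
    while t < t_end and alive > 0:
        k = nu * math.exp(-e_des / (kb * t))
        p = 1.0 - math.exp(-k * dt)  # desorption probability per time step
        desorbed = sum(1 for _ in range(alive) if rng.random() < p)
        alive -= desorbed
        temps.append(t)
        rates.append(desorbed / dt)
        t += beta * dt
    return temps, rates
```

With these parameters the peak falls near the Redhead estimate (around 370 K); coverage-dependent binding energies, as in the full model, shift and split such peaks.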
Structural Stability and Defect Energetics of ZnO from Diffusion Quantum Monte Carlo
Santana Palacio, Juan A. [ORNL]; Krogel, Jaron T. [ORNL]; Kim, Jeongnim [ORNL]; Kent, Paul R. [ORNL]; Reboredo, Fernando A. [ORNL]
2015-01-01T23:59:59.000Z
We have applied the many-body ab initio diffusion quantum Monte Carlo (DMC) method to study Zn and ZnO crystals under pressure and the energetics of the oxygen vacancy, zinc interstitial, and hydrogen impurities in ZnO. We show that DMC is an accurate and practical method that can be used to characterize multiple properties of materials that are challenging for density functional theory (DFT) approximations. DMC agrees with experimental measurements to within 0.3 eV, including the band gap of ZnO, the ionization potentials of O and Zn, and the atomization energies of O{sub 2}, the ZnO dimer, and wurtzite ZnO. DMC predicts the oxygen vacancy to be a deep donor with a formation energy of 5.0(2) eV under O-rich conditions and thermodynamic transition levels located between 1.8 and 2.5 eV from the valence band maximum. Our DMC results indicate that the concentration of zinc interstitial and hydrogen impurities in ZnO should be low under n-type and Zn- and H-rich conditions, because these defects have formation energies above 1.4 eV under these conditions. Comparison of DMC and hybrid functionals shows that these DFT approximations can be parameterized to yield a generally correct qualitative description of ZnO. However, the formation energies of defects in ZnO evaluated with DMC and hybrid functionals can differ by more than 0.5 eV.
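The formation energies quoted above follow the standard defect formalism (the textbook expression, not the paper's specific parameterization), in which the energy depends on the chemical potentials of exchanged atoms and, for charged defects, on the Fermi level:

```python
def formation_energy(e_defect, e_host, exchanged_atoms,
                     charge=0, e_fermi=0.0, e_vbm=0.0):
    """Standard defect formation energy:

        Ef = E_defect - E_host - sum_i n_i * mu_i + q * (E_VBM + E_F)

    exchanged_atoms: list of (n_i, mu_i) pairs, n_i > 0 for atoms added
    to the cell, n_i < 0 for atoms removed.
    e_fermi is measured from the valence band maximum e_vbm.
    All energies in eV; values below are purely illustrative.
    """
    return (e_defect - e_host
            - sum(n * mu for n, mu in exchanged_atoms)
            + charge * (e_vbm + e_fermi))
```

For an oxygen vacancy, for instance, one entry (-1, mu_O) appears in `exchanged_atoms`, which is why O-rich versus O-poor conditions (different mu_O) shift the reported formation energy.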
Minibeam radiation therapy for the management of osteosarcomas: A Monte Carlo study
Martínez-Rovira, I.; Prezado, Y., E-mail: prezado@gmail.com [Laboratoire d’Imagerie et Modélisation en Neurobiologie et Cancérologie (IMNC), Centre National de la Recherche Scientifique (CNRS), Campus universitaire, Bât. 440, 1er étage, 15 rue Georges Clemenceau, 91406 Orsay cedex (France)
2014-06-15T23:59:59.000Z
Purpose: Minibeam radiation therapy (MBRT) exploits the well-established tissue-sparing effect provided by the combination of submillimetric field sizes and a spatial fractionation of the dose. The aim of this work is to evaluate the feasibility and potential therapeutic gain of MBRT, in comparison with conventional radiotherapy, for osteosarcoma treatments. Methods: Monte Carlo simulations (PENELOPE/PENEASY code) were used to study the dose distributions resulting from MBRT irradiations of rat femur and realistic human femur phantoms. As figures of merit, peak and valley doses and peak-to-valley dose ratios (PVDR) were assessed. Conversion of absorbed dose to normalized total dose (NTD) was performed in the human case. Several field sizes and irradiation geometries were evaluated. Results: It is feasible to deliver a uniform dose distribution in the target while the healthy tissue benefits from a spatial fractionation of the dose. Very high PVDR values (≥20) were achieved in the entrance beam path in the rat case. PVDR values ranged from 2 to 9 in the human phantom. NTD{sub 2.0} of 87 Gy might be reached in the tumor in the human femur while the healthy tissues might receive valley NTD{sub 2.0} lower than 20 Gy. The doses in the tumor and healthy tissues might thus be significantly higher and lower, respectively, than the ones commonly delivered in conventional radiotherapy. Conclusions: The obtained dose distributions indicate that a gain in normal tissue sparing might be expected. This would allow the use of higher (and potentially curative) doses in the tumor. Biological experiments are warranted.
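The PVDR figure of merit is the ratio of the mean peak dose to the mean valley dose across a spatially fractionated lateral profile. A minimal sketch for a profile sampled on a regular grid with a known beam spacing (names and the simple per-period max/min extraction are illustrative assumptions):

```python
def pvdr(profile, period):
    """Peak-to-valley dose ratio of a spatially fractionated dose profile.

    profile: dose samples on a regular lateral grid
    period:  number of samples per beam-spacing period
    Takes the max (peak) and min (valley) within each full period and
    returns mean(peaks) / mean(valleys).
    """
    peaks, valleys = [], []
    for i in range(0, len(profile) - period + 1, period):
        window = profile[i:i + period]
        peaks.append(max(window))
        valleys.append(min(window))
    return (sum(peaks) / len(peaks)) / (sum(valleys) / len(valleys))
```

A high PVDR (large peak dose, low valley dose) is the quantity associated with the tissue-sparing effect discussed above.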
Silva-Rodríguez, Jesús, E-mail: jesus.silva.rodriguez@sergas.es; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Santiago de Compostela, Galicia (Spain); Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain); Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain)]; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor [Servicio de Radiofísica y Protección Radiológica, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain)]; Cortés, Julia; Garrido, Miguel [Servicio de Medicina Nuclear, Complexo Hospitalario Universitario de Santiago de Compostela, 15706, Galicia (Spain); Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain)]; Pombar, Miguel [Servicio de Radiofísica y Protección Radiológica, Complexo Hospitalario Universitario de Santiago de Compostela, 15706, Galicia (Spain)]; Ruibal, Álvaro [Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain); Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain); Fundación Tejerina, 28003, Madrid (Spain)]
2014-05-15T23:59:59.000Z
Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) values for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with the highest extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate for the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%), with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify the fraction of patients that might be corrected for the paravenous injection effect. Conclusions: The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correcting SUV quantification in those patients fulfilling the proposed criterion.
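Since SUV is normalized by the administered activity entered at the console, an extravasation that keeps part of the dose at the injection site makes the recorded SUV an underestimate. A sketch of the resulting rescaling (this is an assumed relation for illustration, not necessarily the authors' exact correction formula):

```python
def corrected_suv(measured_suv, injected_activity, extravasated_activity):
    """Rescale SUV by the effectively administered activity.

    measured_suv was computed with the full injected activity in the
    denominator; dividing instead by (injected - extravasated) raises it.
    Activities in consistent units (e.g. MBq).
    """
    effective = injected_activity - extravasated_activity
    if effective <= 0:
        raise ValueError("extravasated activity must be below injected activity")
    return measured_suv * injected_activity / effective
```

For the worst case reported above (22% of the dose extravasated), this rescaling would raise SUV by roughly 28%, which is why such studies need correction rather than outright rejection.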
Stoller, Roger E. [ORNL]; Golubov, Stanislav I. [ORNL]; Becquart, C. S. [Universite de Lille]; Domain, C. [EDF R&D, Clamart, France]
2007-08-01T23:59:59.000Z
The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory (RT), Monte Carlo, or phase field models. These mesoscale models are appropriate for application to problems that involve intermediate length scales, and timescales from those characteristic of diffusion to long-term microstructural evolution (~μs to years). Although the rate theory and Monte Carlo models can be used to simulate the same phenomena, some of the details are handled quite differently in the two approaches. Models employing the rate theory have been extensively used to describe radiation-induced phenomena such as void swelling and irradiation creep. The primary approximations in such models are time- and spatial averaging of the radiation damage source term, and spatial averaging of the microstructure into an effective medium. Kinetic Monte Carlo models can account for these spatial and temporal correlations; their primary limitation is the computational burden, which is related to the size of the simulation cell. A direct comparison of RT and object kinetic Monte Carlo (OKMC) simulations has been made in the domain of point defect cluster dynamics modeling, which is relevant to the evolution (both nucleation and growth) of radiation-induced defect structures. The primary limitations of the OKMC model are related to computational issues. Even with modern computers, the maximum simulation cell size and the maximum dose (typically much less than 1 dpa) that can be simulated are limited. In contrast, even very detailed RT models can simulate microstructural evolution for doses of 100 dpa or greater in clock times that are relatively short. Within the context of the effective medium, essentially any defect density can be simulated.
Overall, the agreement between the two methods is best for irradiation conditions which produce a high density of defects (lower temperature and higher displacement rate), and for materials that have a relatively high density of fixed sinks such as dislocations.
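The effective-medium rate-theory approach reduces, in its simplest form, to coupled mean-field equations for vacancy and interstitial concentrations with a source term, mutual recombination, and loss to fixed sinks. A toy sketch with purely illustrative parameters (not the models compared in the paper):

```python
def rate_theory_ss(g=1e-6, recomb=1e2, sink_v=1e-2, sink_i=1e-1,
                   dt=0.1, steps=20_000):
    """Integrate the simplest point-defect rate equations to steady state:

        dCv/dt = G - R*Cv*Ci - kv*Cv
        dCi/dt = G - R*Cv*Ci - ki*Ci

    using explicit Euler (dt assumed small against all rates).
    Returns (Cv, Ci) at the final time.
    """
    cv = ci = 0.0
    for _ in range(steps):
        r = recomb * cv * ci          # mutual recombination term
        cv += dt * (g - r - sink_v * cv)
        ci += dt * (g - r - sink_i * ci)
    return cv, ci
```

At steady state both sink-loss fluxes balance (kv·Cv = ki·Ci), so the faster-absorbed interstitials sit at a lower concentration; OKMC resolves the spatial correlations that this effective-medium picture averages away.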
Long, Daniel J.; Lee, Choonsik; Tien, Christopher; Fisher, Ryan; Hoerner, Matthew R.; Hintenlang, David; Bolch, Wesley E. [J Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611-6131 (United States); National Cancer Institute, National Institute of Health, Bethesda, Maryland 20892-1502 (United States); J Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611-6131 (United States); Department of Radiology, University of Florida, Gainesville, Florida 32610-0374 (United States); J Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611-6131 (United States)
2013-01-15T23:59:59.000Z
Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomographic (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of the University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of the University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at the University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated based upon these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans, and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms.
Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or helical CT examinations on the Siemens SOMATOM Sensation 16 scanner.
Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method
Basire, M.; Soudan, J.-M.; Angelié, C., E-mail: christian.angelie@cea.fr [Laboratoire Francis Perrin, CNRS-URA 2453, CEA/DSM/IRAMIS/LIDyL, F-91191 Gif-sur-Yvette Cedex (France)
2014-09-14T23:59:59.000Z
The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotube growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the ?-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, g{sub p}(E{sub p}), in the configurational space, in terms of the potential energy of the system, with good and well-controlled convergence properties, particularly in the melting phase transition zone, which is of interest in this work. To describe the system, an iron potential has been implemented, called “corrected EAM” (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at and a fivefold larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients S{sub ij} are exactly computed with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone, but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature T{sub m} is plotted in terms of the cluster atom number N{sub at}. The standard N{sub at}{sup -1/3} linear dependence (Pawlow law) is observed for N{sub at} > 300, allowing an extrapolation up to the bulk metal at 1940 ± 50 K. For N{sub at} < 150, a strong divergence is observed compared to the Pawlow law. The melting transition, which begins at the surface, is identified by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I.
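The flat-histogram idea behind such density-of-states methods can be shown on a toy system with a known answer. The sketch below is a generic Wang-Landau-style iteration (not the specific algorithm of Soudan et al.) for n noninteracting two-state spins, where E is the number of "up" spins and the exact density of states is the binomial coefficient C(n, E):

```python
import math
import random

def wang_landau_lng(n_spins=8, steps_per_stage=20_000, f_final=1e-4, seed=3):
    """Flat-histogram estimate of ln g(E) for n_spins two-state spins.

    Moves are accepted with probability min(1, g(E)/g(E_new)), which drives
    a flat histogram over energy; lng accumulates the running estimate and
    the modification factor f is halved at each stage.
    """
    rng = random.Random(seed)
    spins = [0] * n_spins
    energy = 0
    lng = [0.0] * (n_spins + 1)   # running estimate of ln g(E)
    f = 1.0
    while f > f_final:
        for _ in range(steps_per_stage):
            i = rng.randrange(n_spins)
            e_new = energy + (1 if spins[i] == 0 else -1)
            if rng.random() < math.exp(lng[energy] - lng[e_new]):
                spins[i] ^= 1
                energy = e_new
            lng[energy] += f      # penalize the visited level
        f /= 2.0
    return lng
```

Only ratios of g(E) are meaningful, so the estimate is checked against the exact ratio g(4)/g(0) = C(8,4) = 70. The same machinery, applied to a binned potential energy instead of a spin count, yields g{sub p}(E{sub p}) for a cluster.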
Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy
Henzen, D., E-mail: henzen@ams.unibe.ch; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K. [Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, CH-3010 Berne (Switzerland)]; Neuenschwander, H. [Clinic for Radiation-Oncology, Lindenhofspital Bern, CH-3012 Berne (Switzerland)]; Stampanoni, M. F. M. [Institute for Biomedical Engineering, ETH Zürich and Paul Scherrer Institut, CH-5234 Villigen (Switzerland)]
2014-02-15T23:59:59.000Z
Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm{sup 2} fields differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies. 
For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. Conclusions: The results of the dose comparisons suggest that the developed beam model is suitable to accurately reconstruct photon MLC shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.
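Dose-difference/DTA comparisons of this kind are commonly combined into a gamma-index test. A simplified 1D sketch (illustrative of the criterion, not the evaluation code used in the work above; absolute dose units are assumed):

```python
import math

def gamma_1d(ref_dose, eval_dose, spacing_mm, dose_crit, dta_mm):
    """Simplified 1D gamma analysis on a regular grid.

    For each reference point, gamma is the minimum over evaluated points of
    sqrt((dose difference / dose_crit)^2 + (distance / dta_mm)^2);
    a point passes when gamma <= 1.  dose_crit is in the same absolute
    units as the dose arrays.
    """
    gammas = []
    for i, dr in enumerate(ref_dose):
        g = min(
            math.sqrt(((de - dr) / dose_crit) ** 2
                      + ((j - i) * spacing_mm / dta_mm) ** 2)
            for j, de in enumerate(eval_dose)
        )
        gammas.append(g)
    return gammas
```

The two terms make the test tolerant of small dose errors in flat regions and of small spatial shifts in steep gradients, which is exactly the "2% or 2 mm" behavior quoted above.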
Analytical, experimental, and Monte Carlo system response matrix for pinhole SPECT reconstruction
Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Medicina Nuclear, CHUS, Spain and Grupo de Imaxe Molecular, IDIS, Santiago de Compostela 15706 (Spain)]; Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Spain and Servei de Física Médica i Protecció Radiológica, Institut Catalá d'Oncologia, Barcelona 08036 (Spain)]; Silva-Rodríguez, Jesús [Fundación Ramón Domínguez, Medicina Nuclear, CHUS, Santiago de Compostela 15706 (Spain)]; Pavía, Javier [Servei de Medicina Nuclear, Hospital Clínic, Barcelona (Spain); Institut d'Investigacions Biomèdiques August Pí i Sunyer (IDIBAPS) (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain)]; Ros, Doménec [Unitat de Biofísica, Facultat de Medicina, Casanova 143 (Spain); Institut d'Investigacions Biomèdiques August Pí i Sunyer (IDIBAPS) (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain)]; Ruibal, Álvaro [Servicio Medicina Nuclear, CHUS (Spain); Grupo de Imaxe Molecular, Facultade de Medicina (USC), IDIS, Santiago de Compostela 15706 (Spain); Fundación Tejerina, Madrid (Spain)]; and others
2014-03-15T23:59:59.000Z
Purpose: To assess the performance of two approaches to the system response matrix (SRM) calculation in pinhole single photon emission computed tomography (SPECT) reconstruction. Methods: Evaluation was performed using experimental data from a low magnification pinhole SPECT system that consisted of a rotating flat detector with a monolithic scintillator crystal. The SRM was computed following two approaches, which were based on Monte Carlo simulations (MC-SRM) and analytical techniques in combination with an experimental characterization (AE-SRM). The spatial response of the system, obtained by using the two approaches, was compared with experimental data. The effect of the MC-SRM and AE-SRM approaches on the reconstructed image was assessed in terms of image contrast, signal-to-noise ratio, image quality, and spatial resolution. To this end, acquisitions were carried out using a hot cylinder phantom (consisting of five fillable rods with diameters of 5, 4, 3, 2, and 1 mm and a uniform cylindrical chamber) and a custom-made Derenzo phantom, with center-to-center distances between adjacent rods of 1.5, 2.0, and 3.0 mm. Results: Good agreement was found for the spatial response of the system between measured data and results derived from MC-SRM and AE-SRM. Only minor differences for point sources at distances smaller than the radius of rotation and large incidence angles were found. Assessment of the effect on the reconstructed image showed a similar contrast for both approaches, with values higher than 0.9 for rod diameters greater than 1 mm and higher than 0.8 for a rod diameter of 1 mm. The comparison in terms of image quality showed that all rods in the different sections of a custom-made Derenzo phantom could be distinguished. The spatial resolution (FWHM) was 0.7 mm at iteration 100 using both approaches.
The SNR was lower for reconstructed images using MC-SRM than for those reconstructed using AE-SRM, indicating that AE-SRM deals better with the projection noise than MC-SRM. Conclusions: The authors' findings show that both approaches provide good solutions to the problem of calculating the SRM in pinhole SPECT reconstruction. The AE-SRM was faster to create and handled the projection noise better than the MC-SRM, but it required a tedious experimental characterization of the intrinsic detector response. Creation of the MC-SRM required longer computation time and it handled the projection noise worse than the AE-SRM; however, the MC-SRM inherently incorporates extensive modeling of the system, so no experimental characterization was required.
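Whichever way the SRM is obtained, it enters iterative reconstruction in the same way. As a generic illustration only (not the authors' implementation), the following sketch applies the standard MLEM update with a hypothetical toy system matrix:

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Maximum-likelihood EM reconstruction with a system response matrix.

    A : (n_bins, n_voxels) system response matrix (MC- or analytically derived)
    y : (n_bins,) measured projection counts
    """
    x = np.ones(A.shape[1])             # flat initial image
    sens = A.sum(axis=0)                # sensitivity image (back-projected ones)
    for _ in range(n_iter):
        proj = A @ x                    # forward projection of current estimate
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Hypothetical 2-voxel / 3-bin toy problem with a known activity distribution
A = np.array([[0.8, 0.1],
              [0.1, 0.8],
              [0.1, 0.1]])
x_true = np.array([4.0, 1.0])
y = A @ x_true                          # noise-free projections
x_hat = mlem(A, y)
```

With noise-free, consistent projections the iteration recovers the true activity; with real noisy data it would be stopped early or regularized.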
Qin, Z.; Shoesmith, D.W. [The University of Western Ontario, London, Ontario, N6A 5B7 (Canada)
2007-07-01T23:59:59.000Z
Based on a probabilistic model previously proposed, a Monte Carlo simulation code (EBSPA) has been developed to predict the lifetime of the engineered barriers system within the Yucca Mountain nuclear waste repository. The degradation modes considered in the EBSPA are general passive corrosion and hydrogen-induced cracking for the drip shield; and general passive corrosion, crevice corrosion and stress corrosion cracking for the waste package. Two scenarios have been simulated using the EBSPA code: (a) a conservative scenario for the conditions thought likely to prevail in the repository, and (b) an aggressive scenario in which the impact of the degradation processes is overstated. (authors)
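The general pattern here, sampling degradation-rate distributions and propagating them to a lifetime distribution per scenario, can be sketched as follows; the distributions and every numerical value are hypothetical, not EBSPA inputs:

```python
import numpy as np

rng = np.random.default_rng(5)

def lifetime_samples(thickness_mm, median_rate_um_yr, rate_sigma, n=100000):
    """Sample barrier lifetimes as thickness / corrosion rate.

    The corrosion rate is drawn from a lognormal distribution; all
    parameter values used below are hypothetical."""
    rates = rng.lognormal(mean=np.log(median_rate_um_yr),
                          sigma=rate_sigma, size=n)      # um/yr
    return thickness_mm * 1000.0 / rates                 # lifetime in years

conservative = lifetime_samples(20.0, 0.05, 0.5)  # slow median corrosion rate
aggressive = lifetime_samples(20.0, 0.50, 0.5)    # overstated degradation

t5_cons = np.percentile(conservative, 5)          # early-failure (5th pct) life
t5_aggr = np.percentile(aggressive, 5)
```

Comparing low percentiles of the two lifetime distributions mirrors the conservative-versus-aggressive scenario comparison described in the abstract.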
Sanna, R.S.; O'Brien, K.
1987-12-01T23:59:59.000Z
SWIFT is a FORTRAN-77 program written for the VAX-11/750 computer. Its purpose is to unfold neutron spectra from multisphere spectrometer measurements using the Monte Carlo technique. This guide describes the code in sufficient detail to enable a user, with a background in FORTRAN programming, to alter the code for use with other spectrometers and/or to install it on computers other than the VAX-11/750. The code and the required input and resulting output are described. As an aid to its implementation, input and output for a sample problem are also presented. 19 refs., 1 fig., 6 tabs.
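The idea of Monte Carlo unfolding, sampling candidate spectra and keeping those whose predicted detector readings best match the measurements, can be sketched as below. This is a generic illustration with a made-up response matrix, not the SWIFT algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response matrix: readings of 3 "spheres" for 4 energy bins
R = np.array([[1.0, 0.6, 0.3, 0.1],
              [0.4, 1.0, 0.6, 0.3],
              [0.1, 0.4, 0.8, 1.0]])
phi_true = np.array([2.0, 1.0, 0.5, 0.2])        # "unknown" spectrum
m = R @ phi_true                                  # simulated measurements

def mc_unfold(R, m, n_samples=100000, keep=200):
    """Sample random candidate spectra; average the best-fitting ones."""
    candidates = rng.uniform(0.0, 3.0, size=(n_samples, R.shape[1]))
    chi2 = ((candidates @ R.T - m) ** 2).sum(axis=1)
    best = candidates[np.argsort(chi2)[:keep]]
    return best.mean(axis=0)

phi_est = mc_unfold(R, m)
```

Because a few sphere readings cannot pin down a many-bin spectrum uniquely, the accepted spectra scatter along the unconstrained directions; averaging them is one crude regularization.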
Ulybyshev, M V
2015-01-01T23:59:59.000Z
We study the electronic properties of graphene with a finite concentration of vacancies or other resonant scatterers by straightforward lattice Quantum Monte Carlo calculations. Taking into account the realistic long-range Coulomb interaction, we calculate the distribution of spin density associated with midgap states and demonstrate antiferromagnetic ordering. Energy gaps open due to the interaction effects, both in the bare graphene spectrum and in the vacancy/impurity bands. In the case of a 5% concentration of resonant scatterers, the latter gap is estimated as 0.7 eV and 1.1 eV for graphene on boron nitride and freely suspended graphene, respectively.
M. V. Ulybyshev; M. I. Katsnelson
2015-02-04T23:59:59.000Z
We study the electronic properties of graphene with a finite concentration of vacancies or other resonant scatterers by straightforward lattice Quantum Monte Carlo calculations. Taking into account the realistic long-range Coulomb interaction, we calculate the distribution of spin density associated with midgap states and demonstrate antiferromagnetic ordering. Energy gaps open due to the interaction effects, both in the bare graphene spectrum and in the vacancy/impurity bands. In the case of a 5% concentration of resonant scatterers, the latter gap is estimated as 0.7 eV and 1.1 eV for graphene on boron nitride and freely suspended graphene, respectively.
Montoya, M; Rojas, J
2007-01-01T23:59:59.000Z
The mass and kinetic energy distributions of nuclear fragments from thermal neutron induced fission of 235U have been studied using a Monte Carlo simulation. Besides reproducing the pronounced broadening of the standard deviation of the final fragment kinetic energy distribution $\sigma_{e}(m)$ around the mass number m = 109, our simulation also produces a second broadening around m = 125, which is in agreement with the experimental data obtained by Belhafaf et al. These results are a consequence of the characteristics of the neutron emission, the variation in the primary fragment mean kinetic energy, and the yield as a function of the mass.
M. V. Ulybyshev; M. I. Katsnelson
2015-05-22T23:59:59.000Z
We study the electronic properties of graphene with a finite concentration of vacancies or other resonant scatterers by straightforward lattice Quantum Monte Carlo calculations. Taking into account the realistic long-range Coulomb interaction, we calculate the distribution of spin density associated with midgap states and demonstrate antiferromagnetic ordering. Energy gaps open due to the interaction effects, both in the bare graphene spectrum and in the vacancy/impurity bands. In the case of a 5% concentration of resonant scatterers, the latter gap is estimated as 0.7 eV and 1.1 eV for graphene on boron nitride and freely suspended graphene, respectively.
Demchik, Vadim
2013-01-01T23:59:59.000Z
The multi-GPU open-source package QCDGPU for lattice Monte Carlo simulations of pure SU(N) gluodynamics in an external magnetic field at finite temperature, and of the O(N) model, has been developed. The code is implemented in OpenCL, tested on AMD and NVIDIA GPUs and AMD and Intel CPUs, and may run on other OpenCL-compatible devices. The package contains minimal external library dependencies and is OS platform-independent. It is optimized for heterogeneous computing: the lattice can be divided into non-equivalent parts to hide the difference in performance of the devices used. QCDGPU has a client-server part for distributed simulations. The package is designed to produce lattice gauge configurations as well as to analyze previously generated ones. QCDGPU may be executed in fault-tolerant mode. The core Monte Carlo procedure is based on the PRNGCL library for pseudo-random number generation on OpenCL-compatible devices, which contains several of the most popular pseudo-random number generators.
Griesheimer, D. P. [Bettis Atomic Power Laboratory, P.O. Box 79, West Mifflin, PA 15122 (United States); Stedry, M. H. [Knolls Atomic Power Laboratory, P.O. Box 1072, Schenectady, NY 12301 (United States)
2013-07-01T23:59:59.000Z
A rigorous treatment of energy deposition in a Monte Carlo transport calculation, including coupled transport of all secondary and tertiary radiations, increases the computational cost of a simulation dramatically, making fully-coupled heating impractical for many large calculations, such as 3-D analysis of nuclear reactor cores. However, in some cases, the added benefit from a full-fidelity energy-deposition treatment is negligible, especially considering the increased simulation run time. In this paper we present a generalized framework for the in-line calculation of energy deposition during steady-state Monte Carlo transport simulations. This framework gives users the ability to select among several energy-deposition approximations with varying levels of fidelity. The paper describes the computational framework, along with derivations of four energy-deposition treatments. Each treatment uses a unique set of self-consistent approximations, which ensure that energy balance is preserved over the entire problem. By providing several energy-deposition treatments, each with different approximations for neglecting the energy transport of certain secondary radiations, the proposed framework provides users the flexibility to choose between accuracy and computational efficiency. Numerical results are presented, comparing heating results among the four energy-deposition treatments for a simple reactor/compound shielding problem. The results illustrate the limitations and computational expense of each of the four energy-deposition treatments. (authors)
Matthew G. Baring; Keith Ogilvie; Donald Ellison; Robert Forsyth
1996-10-02T23:59:59.000Z
The most stringent test of theoretical models of the first-order Fermi mechanism at collisionless astrophysical shocks is a comparison of the theoretical predictions with observational data on particle populations. Such comparisons have yielded good agreement between observations at the quasi-parallel portion of the Earth's bow shock and three theoretical approaches, including Monte Carlo kinetic simulations. This paper extends such model testing to the realm of oblique interplanetary shocks: here, observations of proton and alpha particle distributions made by the SWICS ion mass spectrometer on Ulysses at nearby interplanetary shocks are compared with test particle Monte Carlo simulation predictions of accelerated populations. The plasma parameters used in the simulation are obtained from measurements of solar wind particles and the magnetic field upstream of individual shocks. Good agreement between downstream spectral measurements and the simulation predictions is obtained for two shocks by allowing the ratio of the mean-free scattering length to the ionic gyroradius to vary in an optimization of the fit to the data. Generally small values of this ratio are obtained, corresponding to the case of strong scattering. The acceleration process appears to be roughly independent of the mass or charge of the species.
Axel Hoefer; Oliver Buss; Maik Hennebach; Michael Schmid; Dieter Porsch
2014-11-12T23:59:59.000Z
MOCABA is a combination of Monte Carlo sampling and Bayesian updating algorithms for the prediction of integral functions of nuclear data, such as reactor power distributions or neutron multiplication factors. Similarly to the established Generalized Linear Least Squares (GLLS) methodology, MOCABA offers the capability to utilize integral experimental data to reduce the prior uncertainty of integral observables. The MOCABA approach, however, does not involve any series expansions and, therefore, does not suffer from the breakdown of first-order perturbation theory for large nuclear data uncertainties. This is related to the fact that, in contrast to the GLLS method, the updating mechanism within MOCABA is applied directly to the integral observables without having to "adjust" any nuclear data. A central part of MOCABA is the nuclear data Monte Carlo program NUDUNA, which performs random sampling of nuclear data evaluations according to their covariance information and converts them into libraries for transport code systems like MCNP or SCALE. What is special about MOCABA is that it can be applied to any integral function of nuclear data, and any integral measurement can be taken into account to improve the prediction of an integral observable of interest. In this paper we present two example applications of the MOCABA framework: the prediction of the neutron multiplication factor of a water-moderated PWR fuel assembly based on 21 criticality safety benchmark experiments and the prediction of the power distribution within a toy model reactor containing 100 fuel assemblies.
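The updating mechanism described, conditioning integral observables directly on benchmark measurements without adjusting nuclear data, amounts to multivariate-normal conditioning of the joint Monte Carlo sample. A minimal sketch with hypothetical k-eff numbers (not NUDUNA output):

```python
import numpy as np

rng = np.random.default_rng(1)

# Joint Monte Carlo samples of integral observables induced by nuclear-data
# sampling: column 0 is the application case, columns 1-2 are benchmarks.
# The correlation structure and numbers are purely hypothetical.
n = 50000
z = rng.normal(size=(n, 2))
samples = np.column_stack([
    1.00 + 0.02 * z[:, 0],                        # application k-eff
    1.00 + 0.02 * z[:, 0] + 0.005 * z[:, 1],      # benchmark 1
    1.00 + 0.02 * z[:, 0] - 0.005 * z[:, 1],      # benchmark 2
])

def bayes_update(samples, measured, meas_var):
    """Condition a multivariate-normal fit of the samples on measured
    benchmark values (column 0 is the observable of interest)."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    gain = cov[0, 1:] @ np.linalg.inv(cov[1:, 1:] + np.diag(meas_var))
    post_mean = mu[0] + gain @ (measured - mu[1:])
    post_var = cov[0, 0] - gain @ cov[0, 1:]
    return post_mean, post_var

post_mean, post_var = bayes_update(samples,
                                   measured=np.array([1.01, 1.01]),
                                   meas_var=np.array([1e-6, 1e-6]))
```

The posterior variance is a Schur complement of the sample covariance, so it can only shrink relative to the prior, mirroring the uncertainty reduction MOCABA obtains from integral experiments.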
Dominik Smith; Lorenz von Smekal
2014-03-14T23:59:59.000Z
We report on Hybrid-Monte-Carlo simulations of the tight-binding model with long-range Coulomb interactions for the electronic properties of graphene. We investigate the spontaneous breaking of sublattice symmetry corresponding to a transition from the semimetal to an antiferromagnetic insulating phase. Our short-range interactions thereby include the partial screening due to electrons in higher energy states from ab initio calculations based on the constrained random phase approximation [T.O.Wehling {\it et al.}, Phys.Rev.Lett.{\bf 106}, 236805 (2011)]. In contrast to a similar previous Monte-Carlo study [M.V.Ulybyshev {\it et al.}, Phys.Rev.Lett.{\bf 111}, 056801 (2013)] we also include a phenomenological model which describes the transition to the unscreened bare Coulomb interactions of graphene at half filling in the long-wavelength limit. Our results show, however, that the critical coupling for the antiferromagnetic Mott transition is largely insensitive to the strength of these long-range Coulomb tails. They hence confirm the prediction that suspended graphene remains in the semimetal phase when a realistic static screening of the Coulomb interactions is included.
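For readers unfamiliar with the method, Hybrid Monte Carlo combines a molecular-dynamics (leapfrog) trajectory with a Metropolis accept/reject step. A minimal one-degree-of-freedom sketch with a Gaussian toy action (nothing like the graphene tight-binding action, but the same algorithmic skeleton):

```python
import numpy as np

rng = np.random.default_rng(6)

def hmc_sample(grad_S, S, x0, n_samples, eps=0.1, n_leap=20):
    """Hybrid Monte Carlo: leapfrog trajectory + Metropolis accept/reject."""
    x, out = x0, []
    for _ in range(n_samples):
        p = rng.normal()                      # refresh conjugate momentum
        x_new, p_new = x, p
        p_new -= 0.5 * eps * grad_S(x_new)    # leapfrog half step
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new -= eps * grad_S(x_new)
        x_new += eps * p_new
        p_new -= 0.5 * eps * grad_S(x_new)    # final half step
        dH = (S(x_new) + 0.5 * p_new ** 2) - (S(x) + 0.5 * p ** 2)
        if rng.random() < np.exp(-dH):        # Metropolis correction
            x = x_new
        out.append(x)
    return np.array(out)

# Toy "action" S(x) = x^2 / 2, so the samples should be standard normal
samples = hmc_sample(lambda x: x, lambda x: 0.5 * x * x, 0.0, 5000)
```

The Metropolis step corrects for the finite step size of the trajectory, which is what makes the algorithm exact despite the approximate integration.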
Shell Model Monte Carlo method in the $pn$-formalism and applications to the Zr and Mo isotopes
C. Ozen; D. J. Dean
2005-08-05T23:59:59.000Z
We report on the development of a new shell-model Monte Carlo algorithm which uses the proton-neutron formalism. Shell model Monte Carlo methods, within the isospin formulation, have been successfully used in large-scale shell-model calculations. Motivation for this work is to extend the feasibility of these methods to shell-model studies involving non-identical proton and neutron valence spaces. We show the viability of the new approach with some test results. Finally, we use a realistic nucleon-nucleon interaction in the model space described by (1p_1/2,0g_9/2) proton and (1d_5/2,2s_1/2,1d_3/2,0g_7/2,0h_11/2) neutron orbitals above the Sr-88 core to calculate ground-state energies, binding energies, B(E2) strengths, and to study pairing properties of the even-even 90-104 Zr and 92-106 Mo isotope chains.
Zen, Andrea; Sorella, Sandro; Guidoni, Leonardo
2013-01-01T23:59:59.000Z
Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with system size and their efficient parallelization, particularly suited to modern high-performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial both for the accurate description of molecular properties and for the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely: the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Throu...
Zhai, Pengwang
2009-06-02T23:59:59.000Z
[Extraction fragment from a thesis list of figures and introduction] Fig. 20: Geometry of a scattering event. Fig. 21: An example of the atmosphere model used in the 3D Monte Carlo code for the vector radiative transfer systems; inhomogeneous layers are divided into voxels. ... Only a few cases can be solved analytically. Several popular numerical methods include the T-matrix method [15-19], the finite-element method [20, 21], the finite-difference time-domain (FDTD) method [22-32], and the point-matching method [33...
Paris-Sud XI, Université de
In Single Photon Emission Computed Tomography (SPECT), the qualitative and quantitative accuracy of images is degraded by physical effects, namely photon attenuation. Datasets are currently under investigation. Keywords: single photon emission computed tomography; Monte Carlo.
{sup 103}Pd strings: Monte Carlo assessment of a new approach to brachytherapy source design
Rivard, Mark J., E-mail: mark.j.rivard@gmail.com [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Reed, Joshua L.; DeWerd, Larry A. [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States)] [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States)
2014-01-15T23:59:59.000Z
Purpose: A new type of {sup 103}Pd source (CivaString and CivaThin by CivaTech Oncology, Inc.) is examined. The source contains {sup 103}Pd and Au radio-opaque marker(s), all contained within low-Z{sub eff} organic polymers that permit source flexibility. The CivaString source is available in lengths L of 10, 20, 30, 40, 50, and 60 mm, referred to in the current study as CS10–CS60, respectively. A thinner design, CivaThin, has sources designated as CT10–CT60, respectively. The CivaString and CivaThin sources are 0.85 and 0.60 mm in diameter, respectively. The source design is novel and offers an opportunity to examine its interesting dosimetric properties in comparison to conventional {sup 103}Pd seeds. Methods: The MCNP5 radiation transport code was used to estimate air-kerma rate and dose rate distributions with polar and cylindrical coordinate systems. Doses in water and prostate tissue phantoms were compared to determine differences between the TG-43 formalism and realistic clinical circumstances. The influence of Ti encapsulation and 2.7 keV photons was examined. The accuracy of superposition of dose distributions from shorter sources to create longer source dose distributions was also assessed. Results: The normalized air-kerma rate was not highly dependent on L or the polar angle θ, with results being nearly identical between the CivaString and CivaThin sources for common L. The air-kerma strength was also weakly dependent on L. The uncertainty analysis established a standard uncertainty of 1.3% for the dose-rate constant Λ, where the largest contributors were μ{sub en}/ρ and μ/ρ. The Λ values decreased with increasing L, which was largely explained by differences in solid angle. The radial dose function did not substantially vary among the CivaString and CivaThin sources for r ≥ 1 cm. However, behavior for r < 1 cm indicated that the Au marker(s) shielded radiation for the sources having L = 10, 30, and 50 mm. 
The 2D anisotropy function exhibited peaks and valleys that corresponded to positions adjacent to {sup 103}Pd wells and Au markers, respectively. Dose distributions of both source types had minimal anisotropy in comparison to conventional {sup 103}Pd seeds. Contributions by 2.7 keV photons comprised ≤0.1% of the dose from all photons at positions farther than 0.13 mm from the polymer source surface. Differences between absorbed dose to water and prostate became more substantial as distance from the sources increased, with prostate dose being about 13% lower for r = 5 cm. Using a cylindrical coordinate system, dose superposition of small length sources to replicate the dose distribution for a long length source proved to be a robust technique; the volume exceeding a 2.0% tolerance compared with the reference dose distribution did not exceed 0.1 cm{sup 3} for any of the examined source combinations. Conclusions: By design, the CivaString and CivaThin sources have novel dosimetric characteristics in comparison to Ti-encapsulated {sup 103}Pd seeds. The dosimetric characterization has determined the reasons for these differences through analysis using Monte Carlo-based radiation transport simulations.
Monte Carlo calculations of electron beam quality conversion factors for several ion chamber types
Muir, B. R., E-mail: Bryan.Muir@nrc-cnrc.gc.ca [Measurement Science and Standards, National Research Council Canada, 1200 Montreal Road, Ottawa, Ontario K1A 0R6 (Canada); Rogers, D. W. O., E-mail: drogers@physics.carleton.ca [Carleton Laboratory for Radiotherapy Physics, Physics Department, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada)
2014-11-01T23:59:59.000Z
Purpose: To provide a comprehensive investigation of electron beam reference dosimetry using Monte Carlo simulations of the response of 10 plane-parallel and 18 cylindrical ion chamber types. Specific emphasis is placed on the determination of the optimal shift of the chambers’ effective point of measurement (EPOM) and beam quality conversion factors. Methods: The EGSnrc system is used for calculations of the absorbed dose to gas in ion chamber models and the absorbed dose to water as a function of depth in a water phantom on which cobalt-60 and several electron beam source models are incident. The optimal EPOM shifts of the ion chambers are determined by comparing calculations of R{sub 50} converted from I{sub 50} (calculated using ion chamber simulations in phantom) to R{sub 50} calculated using simulations of the absorbed dose to water vs depth in water. Beam quality conversion factors are determined as the calculated ratio of the absorbed dose to water to the absorbed dose to air in the ion chamber at the reference depth in a cobalt-60 beam to that in electron beams. Results: For most plane-parallel chambers, the optimal EPOM shift is inside of the active cavity but different from the shift determined with water-equivalent scaling of the front window of the chamber. These optimal shifts for plane-parallel chambers also reduce the scatter of beam quality conversion factors, k{sub Q}, as a function of R{sub 50}. The optimal shift of cylindrical chambers is found to be less than the 0.5 r{sub cav} recommended by current dosimetry protocols. In most cases, the values of the optimal shift are close to 0.3 r{sub cav}. Values of k{sub ecal} are calculated and compared to those from the TG-51 protocol and differences are explained using accurate individual correction factors for a subset of ion chambers investigated. High-precision fits to beam quality conversion factors normalized to unity in a beam with R{sub 50} = 7.5 cm (k{sub Q}{sup '}) are provided. 
These factors avoid the use of gradient correction factors as used in the TG-51 protocol, although a chamber-dependent optimal shift in the EPOM is required when using plane-parallel chambers, while no shift is needed with cylindrical chambers. The sensitivity of these results to parameters used to model the ion chambers is discussed, and the uncertainty related to the practical use of these results is evaluated. Conclusions: These results will prove useful as electron beam reference dosimetry protocols are updated. The analysis of this work indicates that cylindrical ion chambers may be appropriate for use in low-energy electron beams, but measurements are required to characterize their use in these beams.
Statistical Exploration of Electronic Structure of Molecules from Quantum Monte-Carlo Simulations
Prabhat, Mr; Zubarev, Dmitry; Lester, Jr., William A.
2010-12-22T23:59:59.000Z
In this report, we present results from analysis of Quantum Monte Carlo (QMC) simulation data with the goal of determining internal structure of a 3N-dimensional phase space of an N-electron molecule. We are interested in mining the simulation data for patterns that might be indicative of the bond rearrangement as molecules change electronic states. We examined simulation output that tracks the positions of two coupled electrons in the singlet and triplet states of an H2 molecule. The electrons trace out a trajectory, which was analyzed with a number of statistical techniques. This project was intended to address the following scientific questions: (1) Do high-dimensional phase spaces characterizing electronic structure of molecules tend to cluster in any natural way? Do we see a change in clustering patterns as we explore different electronic states of the same molecule? (2) Since it is hard to understand the high-dimensional space of trajectories, can we project these trajectories to a lower dimensional subspace to gain a better understanding of patterns? (3) Do trajectories inherently lie in a lower-dimensional manifold? Can we recover that manifold? After extensive statistical analysis, we are now in a better position to respond to these questions. (1) We definitely see clustering patterns, and differences between the H2 and H2tri datasets. These are revealed by the pamk method in a fairly reliable manner and can potentially be used to distinguish bonded and non-bonded systems and get insight into the nature of bonding. (2) Projecting to a lower dimensional subspace ({approx}4-5) using PCA or Kernel PCA reveals interesting patterns in the distribution of scalar values, which can be related to the existing descriptors of electronic structure of molecules. 
Also, these results can be immediately used to develop robust tools for analysis of noisy data obtained during QMC simulations. (3) All dimensionality reduction and estimation techniques that we tried seem to indicate that one needs 4 or 5 components to account for most of the variance in the data, hence this 5D dataset does not necessarily lie on a well-defined, low-dimensional manifold. In terms of specific clustering techniques, K-means was generally useful in exploring the dataset. The partition around medoids (PAM) technique produced the most definitive results for our data, showing distinctive patterns for both a sample of the complete data and time-series. The gap statistic with the Tibshirani criterion did not provide any distinction across the two datasets. The gap statistic with the DandF criterion, model-based clustering, and hierarchical modeling simply failed to run on our datasets. Thankfully, the vanilla PCA technique was successful in handling our entire dataset. PCA revealed some interesting patterns in the scalar value distribution. Kernel PCA techniques (vanilladot, RBF, polynomial) and MDS failed to run on the entire dataset, or even a significant fraction of the dataset, and we resorted to creating an explicit feature map followed by conventional PCA. Clustering using K-means and PAM in the new basis set seems to produce promising results. Understanding the new basis set in the scientific context of the problem is challenging, and we are currently working to further examine and interpret the results.
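The analysis pipeline described, projecting to a low-dimensional subspace with PCA and then clustering, can be sketched with plain NumPy on synthetic stand-in data (two artificial clusters, not the QMC walker trajectories):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data: two well-separated clusters in 6 dimensions
# (hypothetical, not the actual QMC walker coordinates).
a = rng.normal(loc=0.0, scale=0.1, size=(100, 6))
b = rng.normal(loc=1.0, scale=0.1, size=(100, 6))
X = np.vstack([a, b])

def pca(X, k):
    """Project onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

def kmeans(X, k, n_iter=50):
    """Plain Lloyd's algorithm with centers initialized from data points."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

scores = pca(X, 2)           # 2-D projection of the 6-D points
labels = kmeans(scores, 2)   # cluster in the reduced space
```

On clearly separated data the reduced-space clustering recovers the two groups; on real trajectory data the interesting question is, as the report discusses, whether any such structure exists at all.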
Liew, S.L.; Ku, L.P. (Princeton Univ., NJ (United States). Plasma Physics Lab.)
1991-02-01T23:59:59.000Z
This paper reports on the delayed gamma dose rate problem formulated in terms of the effective delayed gamma production cross section. The coupled neutron-delayed gamma transport equations take the same form as the coupled neutron-prompt gamma transport equations and they can, therefore, be solved directly in the same manner. This eliminates the flux coupling step required in conventional calculations and makes it easier to handle complex, multidimensional problems, especially those that call for Monte Carlo calculations. Mathematical formulation and solution algorithms are derived. The advantages of this method in complex geometry are illustrated by its application in the Monte Carlo solution of a practical design problem.
Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V [Department of Optics and Biomedical Physics, N.G.Chernyshevskii Saratov State University (Russian Federation)
2006-12-31T23:59:59.000Z
Based on the digital image analysis and inverse Monte-Carlo method, the proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)
Parton distributions and event generators Stefano Carrazza, Stefano Forte
Heller, Barbara
Parton distributions and event generators. Stefano Carrazza, Stefano Forte, Dipartimento di Fisica. [Fragmentary extraction] ...ingredient in achieving all of these goals is the integration of parton distributions within Monte Carlo..., and data collected in an experimental fiducial region. Whereas next-to-leading order (NLO) Monte Carlo...
Tattersall, W J; Boyle, G J; White, R D
2015-01-01T23:59:59.000Z
We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly non-equilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 3--4 (1992)], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially-varying electric fields. All of the results are found to be in excellent agreement with an independent multi-term Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems.
Monte Carlo calculation of the collision density of superthermal produced H atoms in thermal H2 gas
Panarese, A
2011-01-01T23:59:59.000Z
We propose a simple and reliable method to study the collision density of H atoms following their production by chemical mechanisms. The problem is relevant to PDRs, shocks, photospheres, and atmospheric entry problems. We show that the thermalization of H atoms can be conveniently studied by a simple method, and we set the basis for further investigations. We also review the theoretical basis and the limitations of simpler approaches, and address the analogous problems in neutronics. The method adopted is a Monte Carlo method that includes the thermal distribution of background molecules. The transport cross section is determined by the inversion of transport data. Plots of the collision density of H atoms in H2 gas are calculated and discussed, also in the context of simple theories. The application of the results to astrophysical problems is outlined.
Structure of Cu64.5Zr35.5 Metallic glass by reverse Monte Carlo simulations
Fang, Xikui W. [Ames Laboratory; Huang, Li [Ames Laboratory; Wang, Cai-Zhuang [Ames Laboratory; Ho, Kai-Ming [Ames Laboratory; Ding, Z. J. [University of Science and Technology of China
2014-02-07T23:59:59.000Z
Reverse Monte Carlo (RMC) simulations have been widely used to generate three-dimensional (3D) atomistic models of glass systems. To examine the reliability of the method for metallic glass, we use RMC to predict the atomic configurations of a “known” structure from molecular dynamics (MD) simulations, and then compare the structure obtained from the RMC with the target structure from MD. We show that when the structure factors and partial pair correlation functions from the MD simulations are used as inputs for RMC simulations, the 3D atomistic structure of the glass obtained from the RMC gives short- and medium-range order in good agreement with that of the target structure from the MD simulation. These results suggest that 3D atomistic structure models of metallic glass alloys can be reasonably well reproduced by the RMC method with a proper choice of input constraints.
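A reverse Monte Carlo loop in its simplest form proposes random atomic moves and accepts them based on the fit between a computed structural quantity and its target. A toy 1-D sketch (a pair-distance histogram standing in for the structure factor and pair correlation inputs used by the authors):

```python
import numpy as np

rng = np.random.default_rng(3)

def pair_hist(pos, bins):
    """Normalized histogram of pair distances, a 1-D stand-in for g(r)."""
    d = np.abs(pos[:, None] - pos[None, :])[np.triu_indices(len(pos), 1)]
    h, _ = np.histogram(d, bins=bins)
    return h / h.sum()

bins = np.linspace(0.0, 10.0, 21)
target_pos = np.sort(rng.uniform(0.0, 10.0, 30))   # "experimental" structure
target = pair_hist(target_pos, bins)

pos = rng.uniform(0.0, 10.0, 30)                   # random starting model
chi2 = ((pair_hist(pos, bins) - target) ** 2).sum()
chi2_start = chi2
for _ in range(20000):
    trial = pos.copy()
    trial[rng.integers(len(pos))] += rng.normal(scale=0.3)  # move one atom
    trial_chi2 = ((pair_hist(trial, bins) - target) ** 2).sum()
    # Accept improving moves; rarely accept slightly worse ones
    if trial_chi2 < chi2 or rng.random() < np.exp((chi2 - trial_chi2) / 1e-6):
        pos, chi2 = trial, trial_chi2
```

The loop drives the model's pair statistics toward the target data, although, as the abstract stresses, agreement with the input constraints alone does not guarantee a physically unique structure.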
Monte Carlo study for optimal conditions in single-shot imaging with femtosecond x-ray laser pulses
Park, Jaehyun; Ishikawa, Tetsuya; Song, Changyong [RIKEN SPring-8 Center, 1-1-1 Kouto, Sayo, Hyogo 679-5148 (Japan)] [RIKEN SPring-8 Center, 1-1-1 Kouto, Sayo, Hyogo 679-5148 (Japan); Joti, Yasumasa [Japan Synchrotron Radiation Research Institute, 1-1-1 Kouto, Sayo, Hyogo 679-5198 (Japan)] [Japan Synchrotron Radiation Research Institute, 1-1-1 Kouto, Sayo, Hyogo 679-5198 (Japan)
2013-12-23T23:59:59.000Z
Intense x-ray pulses from x-ray free electron lasers (XFELs) enable the unveiling of atomic structure in material and biological specimens via ultrafast single-shot exposures. As the radiation is intense enough to destroy the sample, a new sample must be provided for each x-ray pulse. These single-particle delivery schemes require careful optimization, though systematic study to find such optimal conditions is still lacking. We have investigated two major single-particle delivery methods: particle injection as flying objects and membrane-mounting as fixed targets. The optimal experimental parameters were searched for via Monte Carlo simulations, which show that the maximum achievable single-particle hit rate is close to 40%.
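The flavor of such a hit-rate optimization can be seen in a toy Monte Carlo estimate: random transverse particle positions tested against a finite focal spot, with all parameters hypothetical rather than taken from the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy estimate of the single-particle hit rate: particles arrive with a
# Gaussian transverse spread and count as a "hit" when inside the focal spot.
# All parameters are hypothetical, not the values studied by the authors.
n_pulses = 100000
beam_radius = 1.0        # focal-spot radius (arbitrary units)
jitter_sd = 1.5          # transverse particle-position spread (same units)

xy = rng.normal(scale=jitter_sd, size=(n_pulses, 2))
hit = (xy ** 2).sum(axis=1) < beam_radius ** 2
hit_rate = hit.mean()    # analytic value: 1 - exp(-r^2 / (2 sigma^2)) ~ 0.199
```

In a real optimization, parameters such as the injection jitter and focus size would be scanned to maximize this rate.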
Böcklin, Christoph, E-mail: boecklic@ethz.ch; Baumann, Dirk; Fröhlich, Jürg [Institute of Electromagnetic Fields, ETH Zurich, 8092 Zurich (Switzerland)
2014-02-14T23:59:59.000Z
A novel way to obtain three-dimensional fluence rate maps from Monte Carlo simulations of photon propagation is presented in this work. The propagation of light in a turbid medium is described by the radiative transfer equation and formulated in terms of radiance. For many applications, particularly in biomedical optics, the fluence rate is a more useful quantity; it is derived directly from the radiance by integrating over all directions. In contrast to the usual approach, which calculates the fluence rate from the absorbed photon power, the fluence rate in this work is calculated directly from the photon-packet trajectories. The voxel-based algorithm works in arbitrary geometries and material distributions. It is shown that the new algorithm is more efficient and also works in materials with a low or even zero absorption coefficient. The capabilities of the new algorithm are demonstrated on a curved layered structure, where a non-scattering, non-absorbing layer is sandwiched between two highly scattering layers.
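The underlying idea can be sketched with a track-length estimator: fluence is accumulated as photon-packet path length per voxel volume, which remains well defined even at zero absorption. The 1D grid and the simple direction sampling below are illustrative assumptions, not the paper's voxel algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

nx = 10                     # voxels along the ray
dx = 0.1                    # voxel edge length, cm
fluence = np.zeros(nx)

def trace(weight=1.0):
    # one photon packet crossing the grid at a random inclination;
    # each voxel scores the packet weight times the chord length through it
    mu = rng.uniform(0.3, 1.0)          # direction cosine along x
    for i in range(nx):
        fluence[i] += weight * dx / mu

n_packets = 1000
for _ in range(n_packets):
    trace()

fluence /= dx ** 3 * n_packets  # path length per voxel volume, per packet
```

Because the score is path length rather than absorbed power, the estimator keeps working when the absorption coefficient goes to zero.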
Domin, D.; Braida, Benoit; Lester Jr., William A.
2008-05-30T23:59:59.000Z
This study explores the use of breathing orbital valence bond (BOVB) trial wave functions for diffusion Monte Carlo (DMC). The approach is applied to the computation of the carbon-hydrogen (C-H) bond dissociation energy (BDE) of acetylene. DMC with BOVB trial wave functions yields a C-H BDE of 132.4 {+-} 0.9 kcal/mol, which is in excellent accord with the recommended experimental value of 132.8 {+-} 0.7 kcal/mol. These values are to be compared with DMC results obtained with single determinant trial wave functions, using Hartree-Fock orbitals (137.5 {+-} 0.5 kcal/mol) and local spin density (LDA) Kohn-Sham orbitals (135.6 {+-} 0.5 kcal/mol).
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
Pecchia, M.; D'Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy); Mazzantini, O. [Nucleo-electrica Argentina Societad Anonima NA-SA, Buenos Aires (Argentina)
2012-07-01T23:59:59.000Z
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Core models of Atucha-2 were therefore developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rods on the neutron flux, in order to validate the RELAP5-3D{sup C}/NESTLE three-dimensional neutron-kinetic coupled thermal-hydraulic model applied by GRNSPG/UNIPI for analyzing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)
Zink, K., E-mail: klemens.zink@kmub.thm.de [Institute of Medical Physics and Radiation Protection (IMPS), University of Applied Sciences Giessen, Giessen D-35390, Germany and Department of Radiotherapy and Radiooncology, University Medical Center Giessen-Marburg, Marburg D-35043 (Germany); Czarnecki, D.; Voigts-Rhetz, P. von [Institute of Medical Physics and Radiation Protection (IMPS), University of Applied Sciences Giessen, Giessen D-35390 (Germany); Looe, H. K. [Clinic for Radiation Therapy, Pius-Hospital, Oldenburg D-26129, Germany and WG Medical Radiation Physics, Carl von Ossietzky University, Oldenburg D-26129 (Germany); Harder, D. [Prof. em., Medical Physics and Biophysics, Georg August University, Göttingen D-37073 (Germany)
2014-11-01T23:59:59.000Z
Purpose: The electron fluence inside a parallel-plate ionization chamber positioned in a water phantom and exposed to a clinical electron beam deviates from the unperturbed fluence in water in the absence of the chamber. One reason for the fluence perturbation is the well-known “inscattering effect,” whose physical cause is the lack of electron scattering in the gas-filled cavity. Correction factors to account for this effect have long been recommended. However, more recent Monte Carlo calculations have cast some doubt on the range of validity of these corrections. Therefore, the aim of the present study is to reanalyze the development of the fluence perturbation with depth and to review the function of the guard rings. Methods: Spatially resolved Monte Carlo simulations of the dose profiles within gas-filled cavities of various radii in clinical electron beams have been performed in order to determine the radial variation of the fluence perturbation in a coin-shaped cavity, to study the influences of the radius of the collecting electrode and of the width of the guard ring upon the indicated value of the ionization chamber formed by the cavity, and to investigate the development of the perturbation as a function of depth in an electron-irradiated phantom. The simulations were performed for a primary electron energy of 6 MeV. Results: The Monte Carlo simulations clearly demonstrated a surprisingly large in- and outward electron transport across the lateral cavity boundary. This results in a strong influence of the depth-dependent development of the electron field in the surrounding medium upon the chamber reading. In the buildup region of the depth-dose curve, the in–out balance of the electron fluence is positive and shows the well-known dose oscillation near the cavity/water boundary. At the depth of the dose maximum the in–out balance is equilibrated, and in the falling part of the depth-dose curve it is negative, as shown here for the first time.
The influences of both the collecting electrode radius and the width of the guard ring reflect the deep radial penetration of the electron transport processes into the gas-filled cavities and the need for appropriate corrections of the chamber reading. New values for these corrections have been established in two forms, one converting the indicated value into the absorbed dose to water in the front plane of the chamber, the other converting it into the absorbed dose to water at the depth of the effective point of measurement of the chamber. In the Appendix, the in–out imbalance of electron transport across the lateral cavity boundary is demonstrated in the approximation of classical small-angle multiple scattering theory. Conclusions: The in–out electron transport imbalance at the lateral boundaries of parallel-plate chambers in electron beams has been studied with Monte Carlo simulation over a range of depths in water, and new correction factors, covering all depths and implementing the effective point of measurement concept, have been developed.
Sarrut, David, E-mail: david.sarrut@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon; Université Lyon 1; Centre Léon Bérard (France)]; Bardiès, Manuel; Marcatili, Sara; Mauxion, Thibault [Inserm, UMR1037 CRCT, F-31000 Toulouse, France and Université Toulouse III-Paul Sabatier, UMR1037 CRCT, F-31000 Toulouse (France)]; Boussion, Nicolas [INSERM, UMR 1101, LaTIM, CHU Morvan, 29609 Brest (France)]; Freud, Nicolas; Létang, Jean-Michel [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, 69008 Lyon (France)]; Jan, Sébastien [CEA/DSV/I2BM/SHFJ, Orsay 91401 (France)]; Loudos, George [Department of Medical Instruments Technology, Technological Educational Institute of Athens, Athens 12210 (Greece)]; Maigne, Lydia; Perrot, Yann [UMR 6533 CNRS/IN2P3, Université Blaise Pascal, 63171 Aubière (France)]; Papadimitroulas, Panagiotis [Department of Biomedical Engineering, Technological Educational Institute of Athens, 12210, Athens (Greece)]; Pietrzyk, Uwe [Institut für Neurowissenschaften und Medizin, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany and Fachbereich für Mathematik und Naturwissenschaften, Bergische Universität Wuppertal, 42097 Wuppertal (Germany)]; Robert, Charlotte [IMNC, UMR 8165 CNRS, Universités Paris 7 et Paris 11, Orsay 91406 (France)]; and others
2014-06-15T23:59:59.000Z
In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform, based on the GEANT4 toolkit, for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described, including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. The very practical feature of GATE, making it easy to model both a treatment and an imaging acquisition within the same framework, is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities.
Sailor, W.C.; Byrd, R.C.; Yariv, Y.
1988-10-01T23:59:59.000Z
The response of organic scintillators to monoenergetic neutrons has been calculated using a Monte Carlo approach. The code TRACE is largely based on the well-tested code of Stanton, except that multi-element capabilities, energy-dependent reaction kinematics, and photon loss through attenuation and reflection are introduced. The modeling assumptions and historical development of the Stanton code are first discussed. Pulse height distributions calculated with this code are given and used to explain the roles of various reaction channels and multiple scattering in determining the detector efficiency. Changes introduced into the code in developing TRACE are summarized. Pulse height spectra and total efficiencies for single-element detectors are calculated with both the Stanton code and TRACE in the energy range 28 < E{sub n} < 200 MeV, and the results are compared to experimental data obtained with the {sup 7}Li(p,n){sup 7}Be reaction. 68 refs., 25 figs., 3 tabs.
Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C. [Institute of Atomic and Molecular Sciences, Academia Sinica, Taipei 106, Taiwan (China); Lim, T.-S. [Department of Physics, Tunghai University, Taichung 407, Taiwan (China); Fann Wunshain [Institute of Atomic and Molecular Sciences, Academia Sinica, Taipei 106, Taiwan (China); Department of Physics, National Taiwan University, Taipei 106, Taiwan (China)
2009-01-05T23:59:59.000Z
The number of negatively charged nitrogen-vacancy centers (N-V){sup -} in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single particle level. By taking account of the random dipole orientation of the multiple (N-V){sup -} fluorophores and simulating the probability distribution of their effective numbers (N{sub e}), we found that the actual number (N{sub a}) of the fluorophores is in linear correlation with N{sub e}, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized lights, respectively. We determined N{sub a}=8{+-}1 for 28 nm FND particles prepared by 3 MeV proton irradiation.
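The orientation-averaging argument can be reproduced with a few lines of Monte Carlo. Assuming excitation efficiency proportional to cos{sup 2}(theta) for linearly polarized light and sin{sup 2}(theta) for circularly polarized light (theta being the angle between the dipole axis and the polarization axis), and defining the effective number as N{sub e} = (sum b){sup 2}/sum b{sup 2}, the ratio N{sub a}/N{sub e} converges for large samples to values close to the correction factors of 1.8 and 1.2 quoted above:

```python
import numpy as np

rng = np.random.default_rng(2)

def correction_factor(brightness, n=100000):
    # isotropic dipole axes: cos(theta) uniform on [-1, 1]
    cos_t = rng.uniform(-1.0, 1.0, n)
    b = brightness(cos_t)
    # N_a / N_e with the effective number N_e = (sum b)^2 / sum b^2
    return n * (b ** 2).sum() / b.sum() ** 2

f_lin = correction_factor(lambda c: c ** 2)         # linear polarization
f_circ = correction_factor(lambda c: 1.0 - c ** 2)  # circular polarization
```

This is only the orientation-statistics part of the analysis; the paper's full treatment also involves photon correlation measurements at the single-particle level.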
Morozov, Alexey A., E-mail: morozov@itp.nsc.ru [Institute of Thermophysics SB RAS, 1 Lavrentyev Ave., 630090 Novosibirsk (Russian Federation)
2013-12-21T23:59:59.000Z
A theoretical study of the time-of-flight (TOF) distributions under pulsed laser evaporation in vacuum has been performed. A database of TOF distributions has been calculated by the direct simulation Monte Carlo (DSMC) method. It is shown that describing experimental TOF signals with the calculated TOF database, combined with a simple analysis of evaporation, allows determination of the irradiated surface temperature and the rate of evaporation. Analysis of experimental TOF distributions from laser ablation of niobium, copper, and graphite has been performed, with the evaluated surface temperatures in good agreement with the results of thermal model calculations. General empirical dependences are proposed which allow identifying the regime of laser-induced thermal ablation from the TOF distributions of neutral particles without invoking the DSMC-calculated database.
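The DSMC database itself is not reproduced here; as a point of reference, TOF signals of this kind are commonly compared against the shifted Maxwell-Boltzmann form below, with surface temperature T and stream velocity u as fit parameters. The copper mass, the detector distance L, and the chosen T and u are illustrative values, not results from the paper.

```python
import numpy as np

KB = 1.380649e-23          # Boltzmann constant, J/K
M = 63.5 * 1.66054e-27     # copper atom mass, kg (illustrative choice)

def tof_signal(t, T, u, L=0.1):
    # shifted Maxwell-Boltzmann TOF form for a detector at distance L:
    # flux-weighted signal ~ t^-4 * exp(-m (L/t - u)^2 / (2 k T))
    v = L / t
    return t ** -4 * np.exp(-M * (v - u) ** 2 / (2 * KB * T))

t = np.linspace(1e-6, 200e-6, 2000)
s = tof_signal(t, T=3000.0, u=500.0)    # assumed surface T and drift u
t_peak = t[np.argmax(s)]                # arrival time of the signal peak
```

Fitting T and u to a measured signal in this way is the simple analysis that the DSMC database refines for collisional (non-free-molecular) plumes.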
Monte Carlo Simulation for Elastic Energy Loss of High-Energy Partons in Quark-Gluon Plasma
Jussi Auvinen; Kari J. Eskola; Hannu Holopainen; Thorsten Renk
2011-06-13T23:59:59.000Z
We examine the significance of $2 \\rightarrow 2$ partonic collisions as the suppression mechanism of high-energy partons in the strongly interacting medium formed in ultrarelativistic heavy ion collisions. For this purpose, we have developed a Monte Carlo simulation describing the interactions of perturbatively produced, non-eikonally propagating high-energy partons with the quarks and gluons from the expanding QCD medium. The partonic collision rates are computed in leading-order perturbative QCD (pQCD), while three different hydrodynamical scenarios are used to model the medium. We compare our results with the suppression observed in $\\sqrt{s_{NN}}=200$ GeV Au+Au collisions at the BNL-RHIC. We find that the incoherent nature of elastic energy loss is incompatible with the measured data and that the effect of initial-state fluctuations is small.
Morris, M. F. [Motorola, Mesa, Arizona 85202 (United States); Tian, S. [Avante, Fremont, California 94538 (United States); Chen, Y.; Tasch, A. [Department of Electrical and Computer Engineering, University of Texas, Austin, Texas 78723 (United States); Baumann, S. [Evans Texas, Round Rock, Texas 78681 (United States); Kirchhoff, J. F. [Charles Evans and Assoc., California 94603 (United States); Hummel, R. [Department of Materials Science and Engineering, University of Florida, Gainesville, Florida 32611 (United States); Prussin, S. [Electrical Engineering Department, UCLA, Los Angeles, California, 90024 (United States); Kamenitsa, D. [Eaton Corporation, Austin, Texas 78717 (United States); Jackson, J. [Eaton Corporation, Beverly, Massachusetts 01915 (United States)
1999-06-10T23:59:59.000Z
The Monte Carlo ion implant simulator UT-MARLOWE has been extensively verified using a large array of Secondary Ion Mass Spectroscopy (SIMS) data ({approx}200 profiles per ion species) (1). A model has recently been developed (1) to explicitly simulate defect production, diffusion, and their interactions during the picosecond 'defect production stage' of ion implantation. In order to thoroughly validate this model, both SIMS and various damage measurements were obtained (primarily channeling-Rutherford Backscattering Spectroscopy, Differential Reflectometry, and Tapered Groove Profilometry, supported with SEM and XTEM data). In general, the data from the various experimental techniques were consistent, and the Kinetic Accumulation Damage Model (KADM) was developed and validated using these data. This paper discusses the gathering of damage data in conjunction with SIMS in support of the development of an ion implantation simulator.
Nicolas Puech; Serge Mora; Ty Phou; Gregoire Porte; Jacques Jestin; Julian Oberdisse
2010-12-04T23:59:59.000Z
The effect of silica nanoparticles on transient microemulsion networks made of microemulsion droplets and telechelic copolymer molecules in water is studied, as a function of droplet size and concentration, amount of copolymer, and nanoparticle volume fraction. The phase diagram is found to be affected, and in particular the percolation threshold characterized by rheology is shifted upon addition of nanoparticles, suggesting participation of the particles in the network. This leads to a peculiar reinforcement behaviour of such microemulsion nanocomposites, the silica influencing both the modulus and the relaxation time. The reinforcement is modelled based on nanoparticles connected to the network via droplet adsorption. Contrast-variation Small Angle Neutron Scattering coupled to a reverse Monte Carlo approach is used to analyse the microstructure. The rather surprising intensity curves are shown to be in good agreement with the adsorption of droplets on the nanoparticle surface.
Paolini, Stefano; Ancilotto, Francesco; Toigo, Flavio [Dipartimento di Fisica 'G. Galilei', Universita' di Padova, via Marzolo 8, I-35131 Padova, Italy and CNR-INFM-DEMOCRITOS National Simulation Center, Trieste (Italy)
2007-03-28T23:59:59.000Z
The local order around alkali (Li{sup +} and Na{sup +}) and alkaline-earth (Be{sup +}, Mg{sup +}, and Ca{sup +}) ions in {sup 4}He clusters has been studied using ground-state path integral Monte Carlo calculations. The authors apply a criterion based on multipole dynamical correlations to discriminate between solidlike and liquidlike behaviors of the {sup 4}He shells coating the ions. As it was earlier suggested by experimental measurements in bulk {sup 4}He, their findings indicate that Be{sup +} produces a solidlike ('snowball') structure, similar to alkali ions and in contrast to the more liquidlike {sup 4}He structure embedding heavier alkaline-earth ions.
Monte Carlo simulations of small H2SO4-H2O clusters
Hale, Barbara N.; Kathmann, S. M.
Small binary clusters of water and sulfuric acid are simulated with effective atom-atom pair potentials, and results for the free energy differences are given. The motivation includes acid rain and ozone depletion mechanisms involving sulfuric acid tetrahydrate (SAT) ice. Keywords: Monte Carlo, binary nucleation, sulfuric acid and water.
Meirovitch, Hagai
Lower and upper bounds for the absolute free energy by the hypothetical scanning Monte Carlo method. The hypothetical scanning (HS) method is a general approach for calculating the absolute entropy S and free energy F through the analysis of a single configuration. © 2004 American Institute of Physics.
Wilkins, John
Comparison of screened hybrid density functional theory to diffusion Monte Carlo in calculations of total energies of silicon phases and defects. Batista, Enrique R.; Heyd, Jochen; Hennig, Richard G.; et al. Defect properties are predicted using the Heyd-Scuseria-Ernzerhof (HSE) screened-exchange hybrid functional.
Glyde, Henry R.
Bose-Einstein condensation in trapped bosons: A variational Monte Carlo analysis. DuBois, J. L.; Glyde, H. R. A variational wave function that describes the whole gas well is used; effects of atoms excited above the condensate are incorporated, and the inclusion of correlations is used to study the sensitivity of condensate and noncondensate properties to the hard-sphere interaction.
Glyde, Henry R.
Natural orbitals and Bose-Einstein condensates in traps: A diffusion Monte Carlo analysis. DuBois, J. L.; Glyde, H. R. A macroscopic fraction of the atoms in an ideal Bose gas can condense into a single quantum state, as postulated by London. Condensation in harmonic traps is studied over a wide range of densities, and Bose-Einstein condensation is formulated using the one-body density matrix.
Sadeghi, Mahdi; Taghdiri, Fatemeh; Hamed Hosseini, S.; Tenreiro, Claudio [Agricultural, Medical and Industrial School, P.O. Box 31485-498, Karaj (Iran, Islamic Republic of); Engineering Faculty, Research and Science Campus, Islamic Azad University, Tehran (Iran, Islamic Republic of); Department of Energy Science, SungKyunKwan University, 300 Cheoncheon-dong, Suwon (Korea, Republic of)
2010-10-15T23:59:59.000Z
Purpose: The formalism recommended by Task Group 60 (TG-60) of the American Association of Physicists in Medicine (AAPM) is applicable to {beta} sources. The radioactive, biocompatible, and biodegradable {sup 153}Sm glass seed without encapsulation is a {beta}{sup -} emitter with a short half-life that delivers a high dose rate to the tumor in the millimeter range. This study presents the results of Monte Carlo calculations of the dosimetric parameters for the {sup 153}Sm brachytherapy source. Methods: Version 5 of the MCNP Monte Carlo radiation transport code was used to calculate two-dimensional dose distributions around the source. The dosimetric parameters of the AAPM TG-60 recommendations, including the reference dose rate, the radial dose function, the anisotropy function, and the one-dimensional anisotropy function, were obtained. Results: The dose rate at the reference point was estimated to be 9.21{+-}0.6 cGy h{sup -1} {mu}Ci{sup -1}. Due to the low energy of the betas emitted from {sup 153}Sm sources, the dose fall-off profile is sharper than that of other beta-emitting sources. The calculated dosimetric parameters in this study are compared to several beta- and photon-emitting seeds. Conclusions: The results show the advantage of the {sup 153}Sm source over the other sources because of the rapid dose fall-off of the beta rays and the high dose rate at short distances from the seed. The results should be helpful in the development of radioactive implants using {sup 153}Sm seeds for brachytherapy treatment.
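For a point-source approximation, the radial dose function of the TG-43/TG-60 formalism simply divides out the inverse-square geometry factor, g(r) = [D(r) r{sup 2}] / [D(r{sub 0}) r{sub 0}{sup 2}], with r{sub 0} the reference distance. The dose-rate table below is made up for illustration; only the normalization logic is meaningful.

```python
import numpy as np

r = np.array([0.5, 1.0, 1.5, 2.0, 2.5])            # distance, mm
dose_rate = np.array([40.0, 9.2, 2.9, 1.0, 0.33])  # cGy/h (made-up values)

r0 = 1.0                                 # reference distance
d0 = dose_rate[r == r0][0]
g = dose_rate * r ** 2 / (d0 * r0 ** 2)  # divide out inverse-square fall-off
# g equals 1 at r0 by construction
```

The steep decline of g(r) with distance is what the abstract describes as the rapid dose fall-off of the beta emitter.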
Benmakhlouf, Hamza, E-mail: hamza.benmakhlouf@karolinska.se [Department of Medical Physics, Karolinska University Hospital, SE-171 76 Stockholm, Sweden, and Department of Physics, Medical Radiation Physics, Stockholm University and Karolinska Institute, SE-171 76 Stockholm (Sweden)]; Sempau, Josep [Institut de Tècniques Energètiques, Universitat Politècnica de Catalunya, Diagonal 647, E-08028, Barcelona (Spain)]; Andreo, Pedro [Department of Physics, Medical Radiation Physics, Stockholm University and Karolinska Institute, SE-171 76 Stockholm (Sweden)]
2014-04-15T23:59:59.000Z
Purpose: To determine detector-specific output correction factors, k{sub Q{sub clin},Q{sub msr}}{sup f{sub clin},f{sub msr}}, in 6 MV small photon beams for air and liquid ionization chambers, silicon diodes, and diamond detectors from two manufacturers. Methods: Field output factors, defined according to the international formalism published by Alfonso et al. [Med. Phys. 35, 5179–5186 (2008)], relate the dosimetry of small photon beams to that of the machine-specific reference field; they include a correction to measured ratios of detector readings, conventionally used as output factors in broad beams. Output correction factors were calculated with the PENELOPE Monte Carlo (MC) system with a statistical uncertainty (type-A) of 0.15% or lower. The geometries of the detectors were coded using blueprints provided by the manufacturers, and phase-space files for field sizes between 0.5 × 0.5 cm{sup 2} and 10 × 10 cm{sup 2} from a Varian Clinac iX 6 MV linac were used as sources. The output correction factors were determined by scoring the absorbed dose within a detector and to a small water volume in the absence of the detector, both at a depth of 10 cm, for each small field and for the reference beam of 10 × 10 cm{sup 2}. Results: The Monte Carlo calculated output correction factors for the liquid ionization chamber and the diamond detector were within about ±1% of unity even for the smallest field sizes. Corrections were found to be significant for small air ionization chambers due to their cavity dimensions, as expected. The correction factors for silicon diodes varied with the detector type (shielded or unshielded), confirming the findings of other authors; different corrections for the detectors from the two manufacturers were obtained.
The differences in the calculated factors for the various detectors were analyzed thoroughly and whenever possible the results were compared to published data, often calculated for different accelerators and using the EGSnrc MC system. The differences were used to estimate a type-B uncertainty for the correction factors. Together with the type-A uncertainty from the Monte Carlo calculations, an estimation of the combined standard uncertainty was made, assigned to the mean correction factors from various estimates. Conclusions: The present work provides a consistent and specific set of data for the output correction factors of a broad set of detectors in a Varian Clinac iX 6 MV accelerator and contributes to improving the understanding of the physics of small photon beams. The correction factors cannot in general be neglected for any detector and, as expected, their magnitude increases with decreasing field size. Due to the reduced number of clinical accelerator types currently available, it is suggested that detector output correction factors be given specifically for linac models and field sizes, rather than for a beam quality specifier that necessarily varies with the accelerator type and field size due to the different electron spot dimensions and photon collimation systems used by each accelerator model.
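The role of the output correction factor reduces to two lines of arithmetic: it converts a measured ratio of detector readings into a field output factor. The dose values below are placeholders for illustration, not results from the paper.

```python
def output_correction(d_w_clin, d_w_msr, d_det_clin, d_det_msr):
    # k = (Dw_clin / Dw_msr) / (Ddet_clin / Ddet_msr): a ratio of dose
    # ratios, with Dw scored in water and Ddet scored inside the detector
    return (d_w_clin / d_w_msr) / (d_det_clin / d_det_msr)

# placeholder doses for a small field vs. the 10 x 10 cm2 reference field
k = output_correction(0.70, 1.00, 0.73, 1.00)
field_output_factor = (0.73 / 1.00) * k   # corrected reading ratio
```

A detector that over-responds in the small field (reading ratio 0.73 against a true dose ratio 0.70) gets a correction factor below unity, recovering the true field output factor.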
Les Houches guidebook to Monte Carlo generators for hadron collider physics
Dobbs, Matt A.; Frixione, Stefano; Laenen, Eric; Tollefson, Kirsten
2004-03-01T23:59:59.000Z
Recently the collider physics community has seen significant advances in the formalisms and implementations of event generators. This review is a primer of the methods commonly used for the simulation of high energy physics events at particle colliders. We provide brief descriptions, references, and links to the specific computer codes which implement the methods. The aim is to provide an overview of the available tools, allowing the reader to ascertain which tool is best for a particular application, but also making clear the limitations of each tool.
Boyer, Edmond
SPECT can be used for thyroid imaging. In SPECT, the radiopharmaceutical emits single gamma rays isotropically from within the body. In the absence of attenuation and scatter of the emitted gamma rays, each point of a projection records the line integral I = ∫{sub L} f(x,y) du, where I is the detected signal and f(x,y) is the concentration of the radiopharmaceutical.
Chibani, Omar, E-mail: omar.chibani@fccc.edu; C-M Ma, Charlie [Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States)]
2014-05-15T23:59:59.000Z
Purpose: To present a new accelerated Monte Carlo code for CT-based dose calculations in high dose rate (HDR) brachytherapy. The new code (HDRMC) accounts for both tissue and nontissue heterogeneities (applicator and contrast medium). Methods: HDRMC uses a fast ray-tracing technique and detailed physics algorithms to transport photons through a 3D mesh of voxels representing the patient anatomy with applicator and contrast medium included. A precalculated phase space file for the {sup 192}Ir source is used as the source term. HDRMC is calibrated to calculate absolute dose for real plans. A postprocessing technique is used to include the exact density and composition of nontissue heterogeneities in the 3D phantom. Dwell positions and angular orientations of the source are reconstructed using data from the treatment planning system (TPS). Structure contours are also imported from the TPS to recalculate dose-volume histograms. Results: HDRMC was first benchmarked against the MCNP5 code for a single source in homogeneous water and for a loaded gynecologic applicator in water. The accuracy of the voxel-based applicator model used in HDRMC was also verified by comparing 3D dose distributions and dose-volume parameters obtained using 1-mm{sup 3} versus 2-mm{sup 3} phantom resolutions. HDRMC can calculate the 3D dose distribution for a typical HDR cervix case with 2-mm resolution in 5 min on a single CPU. Examples of heterogeneity effects for two clinical cases (cervix and esophagus) were demonstrated using HDRMC. Neglecting tissue heterogeneity for the esophageal case leads to overestimates of CTV D90, CTV D100, and spinal cord maximum dose by 3.2%, 3.9%, and 3.6%, respectively. Conclusions: A fast Monte Carlo code for CT-based dose calculations which does not require a prebuilt applicator model is developed for those HDR brachytherapy treatments that use CT-compatible applicators.
Tissue and nontissue heterogeneities should be taken into account in modern HDR brachytherapy planning.
Chandler, David; Maldonado, G. Ivan; Primm, Trent [ORNL]
2010-01-01T23:59:59.000Z
The purpose of this study is to validate a Monte Carlo based depletion methodology by comparing calculated post-irradiation uranium isotopic compositions in the fuel elements of the High Flux Isotope Reactor (HFIR) core to values measured using uranium mass-spectrographic analysis. Three fuel plates were analyzed: two from the outer fuel element (OFE) and one from the inner fuel element (IFE). Fuel plates O-111-8, O-350-1, and I-417-24 from outer fuel elements 5-O and 21-O and inner fuel element 49-I, respectively, were selected for examination. Fuel elements 5-O, 21-O, and 49-I were loaded into HFIR during cycles 4, 16, and 35, respectively (mid to late 1960s). Approximately one year after each of these elements was irradiated, they were transferred to the High Radiation Level Examination Laboratory (HRLEL), where samples from these fuel plates were sectioned and examined via uranium mass-spectrographic analysis. The isotopic composition of each of the samples was used to determine the atomic percent of the uranium isotopes. A Monte Carlo based depletion computer program, ALEPH, which couples the MCNP and ORIGEN codes, was utilized to calculate the nuclide inventory at the end-of-cycle (EOC). A current ALEPH/MCNP input for HFIR fuel cycle 400 was modified to replicate cycles 4, 16, and 35. The control element withdrawal curves and flux trap loadings were revised, as well as the radial zone boundaries and nuclide concentrations in the MCNP model. The calculated EOC uranium isotopic compositions for the analyzed plates were found to be in good agreement with measurements, which reveals that ALEPH/MCNP can accurately calculate burn-up dependent uranium isotopic concentrations for the HFIR core. The spatial power distribution in HFIR changes significantly as irradiation time increases due to control element movement.
Accurate calculation of the end-of-life uranium isotopic inventory is a good indicator that the power distribution variation as a function of space and time is accurately calculated, i.e. an integral check. Hence, the time dependent heat generation source terms needed for reactor core thermal hydraulic analysis, if derived from this methodology, have been shown to be accurate for highly enriched uranium (HEU) fuel.
Reverse Monte Carlo simulation of Se{sub 80}Te{sub 20} and Se{sub 80}Te{sub 15}Sb{sub 5} glasses
Abdel-Baset, A. M.; Rashad, M. [Physics Department, Faculty of Science , Assiut University, Assiut, P.O. Box 71516 (Egypt); Moharram, A. H. [Faculty of Science, King Abdul Aziz Univ., Rabigh Branch, P.O. Box 433 (Saudi Arabia)
2013-12-16T23:59:59.000Z
A two-dimensional Monte Carlo fit of the total pair distribution functions g(r) was carried out for Se{sub 80}Te{sub 20} and Se{sub 80}Te{sub 15}Sb{sub 5} alloys, and the result was then used to assemble three-dimensional atomic configurations using reverse Monte Carlo simulation. The partial pair distribution functions g{sub ij}(r) indicate that the basic structural unit in the Se{sub 80}Te{sub 15}Sb{sub 5} glass is the di-antimony tri-selenide unit, connected through Se-Se and Se-Te chains. The structure of the Se{sub 80}Te{sub 20} alloy consists of Se-Te and Se-Se chains in addition to some rings of Se atoms.
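The reverse Monte Carlo idea, randomly displacing atoms and accepting moves via a chi-squared Metropolis criterion against a target pair distribution, can be sketched in toy 2D form. Everything here is hypothetical (a small periodic box, an unnormalized distance histogram standing in for g(r)); it is not the authors' code, only the acceptance logic RMC uses.

```python
import math, random

def pair_hist(pts, box, bins, rmax):
    """Histogram of pairwise distances with periodic boundaries
    (an unnormalized stand-in for g(r))."""
    h = [0] * bins
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            dx -= box * round(dx / box)          # minimum-image convention
            dy -= box * round(dy / box)
            r = math.hypot(dx, dy)
            if r < rmax:
                h[int(r / rmax * bins)] += 1
    return h

def chi2(h, target):
    return sum((a - b) ** 2 for a, b in zip(h, target))

def rmc_fit(pts, target, box, bins, rmax, moves, step, sigma2, rng):
    """Reverse Monte Carlo: random single-atom displacements, accepted
    with Metropolis probability exp(-delta_chi2 / sigma2)."""
    cur = chi2(pair_hist(pts, box, bins, rmax), target)
    best = cur
    for _ in range(moves):
        i = rng.randrange(len(pts))
        old = pts[i]
        pts[i] = ((old[0] + rng.uniform(-step, step)) % box,
                  (old[1] + rng.uniform(-step, step)) % box)
        new = chi2(pair_hist(pts, box, bins, rmax), target)
        if new < cur or rng.random() < math.exp(-(new - cur) / sigma2):
            cur = new
            best = min(best, cur)
        else:
            pts[i] = old                          # reject: restore position
    return best

rng = random.Random(1)
box, bins, rmax = 10.0, 20, 5.0
# "Experimental" target histogram taken from a square-lattice reference
ref = [((i % 5) * 2.0 + 0.3, (i // 5) * 2.0 + 0.3) for i in range(25)]
target = pair_hist(ref, box, bins, rmax)
pts = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(25)]
chi2_start = chi2(pair_hist(pts, box, bins, rmax), target)
chi2_best = rmc_fit(pts, target, box, bins, rmax,
                    moves=1500, step=0.5, sigma2=4.0, rng=rng)
```

Starting from a random configuration, the fit steadily drives the simulated histogram toward the target, which is the whole mechanism behind RMC structure determination.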
Stoller, Roger E [ORNL]; Golubov, Stanislav I [ORNL]; Becquart, C. S. [Universite de Lille]; Domain, C. [EDF R&D, Clamart, France]
2006-09-01T23:59:59.000Z
The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory (RT), Monte Carlo (MC), or phase field models. These mesoscale models are appropriate for application to problems that involve intermediate length scales (~nm to >mm) and timescales from diffusion (~μs) to long-term microstructural evolution (~years). Phenomena at this scale have the most direct impact on mechanical properties in structural materials of interest to nuclear energy systems, and are also the most accessible to direct comparison between the results of simulations and experiments. Recent advances in computational power have substantially expanded the range of application for MC models. Although the RT and MC models can be used to simulate the same phenomena, many of the details are handled quite differently in the two approaches. A direct comparison of the RT and MC descriptions has been made in the domain of point defect cluster dynamics modeling, which is relevant to both the nucleation and evolution of radiation-induced defect structures. The relative merits and limitations of the two approaches are discussed, and the predictions of the two approaches are compared for specific irradiation conditions.
Kuss, M.; Markel, T.; Kramer, W.
2011-01-01T23:59:59.000Z
Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis reviews the distribution transformer load models developed in the IEC 60076 standard and applies the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy and thus increasing costs to utilities and consumers. A Monte Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate is developed, based on the energy needs of plug-in vehicles loading a residential transformer.
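The Monte Carlo sweep described above can be sketched in miniature. The only element taken from the standard is the IEC 60076-7 relative aging rate, which doubles for every 6 °C of hot-spot temperature above the 98 °C reference (non-thermally-upgraded paper); the hourly load profile, charging windows, and quadratic hot-spot model are hypothetical stand-ins, not the thermal model of the paper.

```python
import random

def relative_aging(theta_h):
    """IEC 60076-7 relative aging rate for non-upgraded paper:
    doubles for every 6 C above the 98 C reference hot-spot temperature."""
    return 2.0 ** ((theta_h - 98.0) / 6.0)

def simulate_day(n_vehicles, charge_kw, rated_kva, rng):
    """Toy hourly simulation: hot-spot rise is taken proportional to the
    square of per-unit load, a crude stand-in for the IEC thermal model."""
    base = [0.4 + 0.3 * (12 <= h <= 21) for h in range(24)]  # p.u. house load
    starts = [rng.randint(18, 23) for _ in range(n_vehicles)]  # evening plug-in
    aging = 0.0
    for h in range(24):
        ev_kw = sum(charge_kw for s in starts if s <= h < s + 4)  # 4 h charge
        load_pu = base[h] + ev_kw / rated_kva
        theta_h = 25.0 + 55.0 * load_pu ** 2   # ambient + load-dependent rise
        aging += relative_aging(theta_h) / 24.0
    return aging

# 100 randomized load scenarios, as in the sweep described above
rng = random.Random(0)
expected_aging = sum(simulate_day(3, 3.3, 25.0, rng) for _ in range(100)) / 100
baseline_aging = sum(simulate_day(0, 3.3, 25.0, random.Random(0))
                     for _ in range(100)) / 100
```

Averaging the daily aging factor over many sampled scenarios is exactly the "expected transformer aging rate" quantity the abstract refers to; here the PHEV scenarios age the unit faster than the no-vehicle baseline, as expected.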
Wes Armour; Simon Hands; Costas Strouthos
2013-02-07T23:59:59.000Z
We formulate a model of N_f=4 flavors of relativistic fermion in 2+1d in the presence of a chemical potential mu coupled to two flavor doublets with opposite sign, akin to isospin chemical potential in QCD. This is argued to be an effective theory for low-energy electronic excitations in bilayer graphene, in which an applied voltage between the layers ensures equal populations of particles on one layer and holes on the other. The model is then reformulated on a spacetime lattice using staggered fermions and, in the absence of a sign problem, simulated using an orthodox hybrid Monte Carlo algorithm. With the coupling strength chosen to be close to a quantum critical point believed to exist for N_f
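The hybrid Monte Carlo algorithm mentioned above has a compact generic skeleton: refresh a Gaussian momentum, integrate Hamiltonian dynamics with a leapfrog scheme, then apply a Metropolis accept/reject on the energy violation. The sketch below runs it on a trivial one-site Gaussian action S(φ) = φ²/2 rather than a staggered-fermion lattice action with pseudofermions, so it shows only the algorithmic structure, not the physics of the paper.

```python
import math, random

def hmc_sample(n_samples, n_steps=10, eps=0.2, seed=2):
    """Hybrid Monte Carlo for a single-site Gaussian action S(phi) = phi^2/2:
    momentum refresh, leapfrog integration, Metropolis accept/reject."""
    rng = random.Random(seed)
    phi, samples, accepted = 0.0, [], 0
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                  # momentum refresh
        phi_new, p_new = phi, p
        h_old = 0.5 * p * p + 0.5 * phi * phi    # H = T + S
        p_new -= 0.5 * eps * phi_new             # leapfrog: initial half kick
        for _ in range(n_steps - 1):
            phi_new += eps * p_new               # drift
            p_new -= eps * phi_new               # kick (force = -dS/dphi = -phi)
        phi_new += eps * p_new
        p_new -= 0.5 * eps * phi_new             # final half kick
        h_new = 0.5 * p_new * p_new + 0.5 * phi_new * phi_new
        if rng.random() < math.exp(min(0.0, h_old - h_new)):  # Metropolis
            phi, accepted = phi_new, accepted + 1
        samples.append(phi)
    return samples, accepted / n_samples

samples, acc = hmc_sample(20000)
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

Because the leapfrog integrator is reversible and area-preserving, the accept/reject step makes the chain exact: the samples reproduce the unit-variance Gaussian of the action, with high acceptance at this step size.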
Fabio L. Pedrocchi; N. E. Bonesteel; David P. DiVincenzo
2015-07-03T23:59:59.000Z
The Majorana code is an example of a stabilizer code where the quantum information is stored in a system supporting well-separated Majorana Bound States (MBSs). We focus on one-dimensional realizations of the Majorana code, as well as networks of such structures, and investigate their lifetime when coupled to a parity-preserving thermal environment. We apply the Davies prescription, a standard method that describes the basic aspects of a thermal environment, and derive a master equation in the Born-Markov limit. We first focus on a single wire with immobile MBSs and perform error correction to annihilate thermal excitations. In the high-temperature limit, we show both analytically and numerically that the lifetime of the Majorana qubit grows logarithmically with the size of the wire. We then study a trijunction with four MBSs when braiding is executed. We study the occurrence of dangerous error processes that prevent the lifetime of the Majorana code from growing with the size of the trijunction. The origin of the dangerous processes is the braiding itself, which separates pairs of excitations and renders the noise nonlocal; these processes arise from the basic constraints of moving MBSs in 1D structures. We confirm our predictions with Monte Carlo simulations in the low-temperature regime, i.e. the regime of practical relevance. Our results put a restriction on the degree of self-correction of this particular 1D topological quantum computing architecture.
Umbreit, Stefan; Rasio, Frederic A. [Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) and Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Rd., Evanston, IL 60208 (United States); Fregeau, John M. [Kavli Institute of Theoretical Physics, University of California, Santa Barbara, CA 93106 (United States); Chatterjee, Sourav, E-mail: s-umbreit@northwestern.edu [Department of Astronomy, University of Florida, 211 Bryant Space Science Center, Gainesville, FL (United States)
2012-05-01T23:59:59.000Z
We present results from a series of Monte Carlo (MC) simulations investigating the imprint of a central intermediate-mass black hole (IMBH) on the structure of a globular cluster. We investigate the three-dimensional and projected density profiles, and stellar disruption rates for idealized as well as realistic cluster models, taking into account a stellar mass spectrum and stellar evolution, and allowing for a larger, more realistic number of stars than was previously possible with direct N-body methods. We compare our results to other N-body and Fokker-Planck simulations published previously. We find, in general, very good agreement for the overall cluster structure and dynamical evolution between direct N-body simulations and our MC simulations. Significant differences exist in the number of stars that are tidally disrupted by the IMBH, and this is most likely caused by the wandering motion of the IMBH, not included in the MC scheme. These differences, however, are negligible for the final IMBH masses in realistic cluster models, as the disruption rates are generally much lower than for single-mass clusters. As a direct comparison to observations we construct a detailed model for the cluster NGC 5694, which is known to possess a central surface brightness cusp consistent with the presence of an IMBH. We find that not only the inner slope but also the outer part of the surface brightness profile agree well with observations. However, there is only a slight preference for models harboring an IMBH compared to models without.
Weinman, J.P. [Lockheed Martin Corp., Schenectady, NY (United States)
1998-06-01T23:59:59.000Z
The purpose of this study is to investigate the eigenvalue sensitivity to new {sup 235}U, hydrogen, and oxygen cross section data sets by comparing RACER Monte Carlo calculations for several thermal and intermediate spectrum critical experiments. The new {sup 235}U library (Version 107) was derived by L. Leal and H. Derrien by fitting differential experimental data for {sup 235}U while constraining the fit to match experimental capture and fission resonance integrals and Maxwellian averaged thermal K1 (ν-fission minus absorption). The new hydrogen library (Version 45) consists of the ENDF/B-VI release 3 data with a 332.0 mb 2,200 m/s cross section which replaces the value of 332.6 mb in the current library. The new oxygen library (Version 39) is based on a recent evaluation of {sup 16}O by E. Caro. Nineteen Oak Ridge and Rocky Flats thermal solution benchmark critical assemblies that span a range of hydrogen-to-{sup 235}U (H/U) concentrations (2,052 to 27.1) and above-thermal neutron leakage fractions (0.555 to 0.011) were analyzed. In addition, three intermediate spectrum critical assemblies (UH3-UR, UH3-NI, and HISS-HUG) were studied.
Kim, Sung Jin; Kim, Sung Kyu
2015-01-01T23:59:59.000Z
Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms provided by our planning systems, pencil beam (PB), collapsed cone (CC), and Monte Carlo (MC), were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated using the PB, CC, and MC algorithms. Planning target volume and organs at risk (OAR) delineation was performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3 to 0.5 cm computed tomography slices taken under normal respiration conditions. Four intensity-modulated radiation therapy plans were calculated according to each algorithm for each patient. The plans were conducted on the Oncentra MasterPlan and CMS Monaco treatment planning systems for 6 MV. The plans were compared in terms of the dose distribution in target, OAR volumes, and...
Sarkadi, L
2015-01-01T23:59:59.000Z
The three-body dynamics of the ionization of atomic hydrogen by 30 keV antiproton impact has been investigated by calculating fully differential cross sections (FDCS) using the classical trajectory Monte Carlo (CTMC) method. The results of the calculations are compared with the predictions of quantum mechanical descriptions: the semi-classical time-dependent close-coupling theory, the fully quantal, time-independent close-coupling theory, and the continuum-distorted-wave-eikonal-initial-state model. In the analysis, particular emphasis was put on the role played by the nucleus-nucleus (NN) interaction in the ionization process. For low-energy electron ejection, CTMC predicts a large NN interaction effect on the FDCS, in agreement with the quantum mechanical descriptions. By examining individual particle trajectories it was found that the relative motion between the electron and the nuclei is coupled very weakly with that between the nuclei; consequently, the two motions can be treated independently. A simple ...
An OpenCL-based Monte Carlo dose calculation engine (oclMC) for coupled photon-electron transport
Tian, Zhen; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-01-01T23:59:59.000Z
The Monte Carlo (MC) method has been recognized as the most accurate dose calculation method for radiotherapy. However, its extremely long computation time impedes clinical applications. Recently, many efforts have been made to realize fast MC dose calculation on GPUs. Nonetheless, most of the GPU-based MC dose engines were developed in the NVIDIA CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a fast cross-platform MC dose engine, oclMC, using the OpenCL environment for external beam photon and electron radiotherapy in the MeV energy range. Coupled photon-electron MC simulation was implemented with analogue simulation for photon transport and a Class II condensed history scheme for electron transport. To test the accuracy and efficiency of our dose engine oclMC, we compared dose calculation results of oclMC and gDPM, our previously developed GPU-based MC code, for a 15 MeV electron ...
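The "analogue simulation for photon transport" mentioned above reduces, in its simplest form, to sampling free path lengths from the exponential attenuation law. The sketch below is a drastically simplified 1D illustration (single attenuation coefficient, energy deposited locally at the first interaction, no scattering or secondary electrons), not the coupled photon-electron engine of the paper.

```python
import math, random

def simulate_photons(n, mu, depth, bins, seed=3):
    """Analog photon transport in a homogeneous 1D slab: free paths are
    sampled from the exponential distribution with attenuation coefficient
    mu (per cm); each interaction deposits the photon's energy locally,
    a deliberate simplification that omits scattering and secondaries."""
    rng = random.Random(seed)
    dose = [0.0] * bins
    depths = []
    for _ in range(n):
        z = -math.log(rng.random()) / mu   # sampled first-interaction depth
        depths.append(z)
        if z < depth:
            dose[int(z / depth * bins)] += 1.0
    return dose, sum(depths) / n

dose, mfp = simulate_photons(200000, mu=0.2, depth=30.0, bins=30, seed=3)
```

The sample mean of the interaction depths recovers the mean free path 1/mu, and the binned energy deposition falls off exponentially with depth, the two sanity checks any analog photon transport kernel must pass.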
Yangsen Yao; S. Nan Zhang; Xiaoling Zhang; Yuxin Feng; Craig R. Robinson
2004-10-10T23:59:59.000Z
Understanding the properties of the hot corona is important for studying the accretion disks in black hole X-ray binary systems. Using the Monte Carlo technique to simulate the inverse Compton scattering between photons emitted from the cold disk and electrons in the hot corona, we have produced two table models in the $XSPEC$ format for the spherical corona case and the disk-like (slab) corona case. All parameters in our table models are physical properties of the system and can be derived from data fitting directly. Applying the models to broad-band spectra of the black hole candidate XTE J2012+381 observed with BeppoSAX, we estimated the size of the corona and the inner radius of the disk. The size of the corona in this system is several tens of gravitational radii, and no substantial increase of the inner disk radius during the transition from the hard state to the soft state is found.
Lin, J. Y. Y. [California Institute of Technology, Pasadena]; Aczel, Adam A [ORNL]; Abernathy, Douglas L [ORNL]; Nagler, Stephen E [ORNL]; Buyers, W. J. L. [National Research Council of Canada]; Granroth, Garrett E [ORNL]
2014-01-01T23:59:59.000Z
Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A.A. Aczel et al, Nature Communications 3, 1124 (2012)]. These modes are well described by 3D isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for the nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states (PDOS), and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximately Q-independent background to the spectrum at the oscillator mode positions. Temperature dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T-dependence of the scattering from these modes is strongly influenced by the uranium lattice.
Duan, Zhe; Barber, Desmond P; Qin, Qing
2015-01-01T23:59:59.000Z
With the recently emerging global interest in building a next generation of circular electron-positron colliders to study the properties of the Higgs boson, and other important topics in particle physics at ultra-high beam energies, it is also important to pursue the possibility of implementing polarized beams at this energy scale. It is therefore necessary to set up simulation tools to evaluate the beam polarization at these ultra-high beam energies. In this paper, a Monte Carlo simulation of the equilibrium beam polarization based on the Polymorphic Tracking Code (PTC) is described. The simulations are for a model storage ring with parameters similar to those of proposed circular colliders in this energy range, and they are compared with the suggestion that there are different regimes for the spin dynamics underlying the polarization of a beam in the presence of synchrotron radiation at ultra-high beam energies. In particular, it has been suggested that the so-called "correlated" crossing of spin resonances ...
Andrea Zen; Ye Luo; Sandro Sorella; Leonardo Guidoni
2013-09-02T23:59:59.000Z
Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with the system size and their efficient parallelization, particularly suited for modern high performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely: the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal Power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets.
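The variational Monte Carlo machinery underlying such calculations is easiest to see on the hydrogen atom, where the trial wavefunction ψ = exp(-αr) has the closed-form local energy E_L = -α²/2 + (α-1)/r (atomic units). This is a textbook toy, not the JAGP/water-molecule machinery of the paper; it only demonstrates Metropolis sampling of |ψ|² and the variational principle.

```python
import math, random

def vmc_hydrogen(alpha, n_steps=20000, step=0.5, seed=4):
    """Variational Monte Carlo for the hydrogen atom with trial
    wavefunction psi = exp(-alpha * r) in atomic units. Metropolis
    samples |psi|^2; the local energy is E_L = -alpha^2/2 + (alpha-1)/r."""
    rng = random.Random(seed)
    x, y, z, r = 1.0, 0.0, 0.0, 1.0
    e_sum, n_acc = 0.0, 0
    for _ in range(n_steps):
        xn = x + rng.uniform(-step, step)
        yn = y + rng.uniform(-step, step)
        zn = z + rng.uniform(-step, step)
        rn = math.sqrt(xn * xn + yn * yn + zn * zn)
        # Metropolis ratio: |psi_new / psi_old|^2 = exp(-2 alpha (rn - r))
        if rng.random() < math.exp(-2.0 * alpha * (rn - r)):
            x, y, z, r = xn, yn, zn, rn
            n_acc += 1
        e_sum += -0.5 * alpha * alpha + (alpha - 1.0) / r
    return e_sum / n_steps, n_acc / n_steps

e_exact, acc = vmc_hydrogen(alpha=1.0)   # exact ground state: E_L = -1/2 always
e_off, _ = vmc_hydrogen(alpha=0.8)       # suboptimal alpha: higher energy
```

At α = 1 the trial function is the exact ground state, so the local energy is constant at -1/2 hartree with zero variance; any other α yields a higher variational energy, which is the signal QMC energy minimization exploits when optimizing wave function parameters.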
Belec, Jason; Ploquin, Nicolas; La Russa, Daniel J.; Clark, Brenda G. [Department of Medical Physics, Ottawa Hospital Cancer Centre, 501 Smyth Road, Box 927, Ottawa, Ontario K1H 8L6 (Canada) and Carleton University, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada)]
2011-02-15T23:59:59.000Z
Purpose: The commercial release of volumetric modulated arc therapy techniques using a conventional linear accelerator and the growing number of helical tomotherapy users have triggered renewed interest in dose verification methods, and also in tools for exploring the impact of machine tolerance and patient motion on dose distributions without the need to approximate time-varying parameters such as gantry position, MLC leaf motion, or patient motion. To this end we have developed a Monte Carlo-based calculation method capable of simulating a wide variety of treatment techniques without the need to resort to discretization approximations. Methods: The ability to perform complete position-probability-sampled Monte Carlo dose calculations was implemented in the BEAMnrc/DOSXYZnrc user codes of EGSnrc. The method includes full accelerator head simulations of our tomotherapy and Elekta linacs, and a realistic representation of continuous motion via the sampling of a time variable. The functionality of this algorithm was tested via comparisons with both measurements and treatment planning dose distributions for four types of treatment techniques: 3D conformal, step-and-shoot intensity modulated radiation therapy, helical tomotherapy, and volumetric modulated arc therapy. Results: For static fields, the absolute dose agreement between the EGSnrc Monte Carlo calculations and measurements is within 2%/1 mm. Absolute dose agreement between the Monte Carlo calculations and the treatment planning system for the four different treatment techniques is within 3%/3 mm. Discrepancies with the tomotherapy TPS on the order of 10%/5 mm were observed for the extreme example of a small target located 15 cm off-axis and planned with a low modulation factor. The increase in simulation time associated with using position-probability sampling, as opposed to the discretization approach, was less than 2% in most cases.
Conclusions: A single Monte Carlo simulation method can be used to calculate patient dose distribution for various types of treatment techniques delivered with either tomotherapy or a conventional linac. The method simplifies the simulation process, improves dose calculation accuracy, and involves an acceptably small change in computation time.
A Monte Carlo framework for noncontinuous interactions between particles and classical fields
Christian Wesp; Hendrik van Hees; Alex Meistrenko; Carsten Greiner
2015-04-10T23:59:59.000Z
Particles and fields are standard components in numerical simulations, such as transport simulations in nuclear physics, and have very well understood dynamics. Still, a common problem is the interaction between particles and fields due to their different formal descriptions: particle interactions are discrete, point-like events, while fields have purely continuous equations of motion. A workaround is the use of effective theories like the Langevin equation, with the drawback of violating energy conservation. We present a new method which allows modeling of noncontinuous interactions between particles and scalar fields, enabling us to simulate scattering-like interactions which exchange energy and momentum quanta between fields and particles while obeying full energy and momentum conservation and retaining control over interaction strengths and times. In this paper we apply this method to different model systems, starting with a simple scalar harmonic oscillator which is damped by losing discrete energy quanta. The second and third systems are a scalar oscillator and a one-dimensional field, both damped by discrete energy loss and coupled to a stochastic force, leading to equilibrium states which correspond to statistical Langevin-like systems. The last example is a scalar field in 3D which is coupled to a microcanonical ensemble of particles by incorporating particle production and annihilation processes. Obeying the detailed-balance principle, the system equilibrates to thermal and chemical equilibrium, with fluctuations on the fields generated dynamically by the discrete interactions.
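The first model system above, an oscillator damped by losing discrete energy quanta, can be sketched at the bookkeeping level: interactions fire stochastically, each transferring exactly one quantum to a bath tally, so total energy is conserved by construction rather than on average. This toy (hypothetical rate, quantum size, and time step) tracks only the energy exchange, not the field dynamics of the paper.

```python
import random

def discrete_damping(e0, quantum, rate, t_end, dt, seed=6):
    """Oscillator losing energy in discrete quanta: in each time step an
    exchange occurs with probability rate*dt, transferring exactly one
    quantum to a bath tally, so total energy is conserved exactly."""
    rng = random.Random(seed)
    e_osc, e_bath, history = e0, 0.0, []
    t = 0.0
    while t < t_end:
        if e_osc >= quantum and rng.random() < rate * dt:
            e_osc -= quantum     # discrete, point-like energy loss
            e_bath += quantum    # the quantum lands in the bath tally
        history.append(e_osc)
        t += dt
    return e_osc, e_bath, history

e_osc, e_bath, hist = discrete_damping(e0=10.0, quantum=0.5, rate=0.3,
                                       t_end=50.0, dt=0.1, seed=6)
```

Unlike a Langevin damping term, every exchange here is an exact bookkept event, so the oscillator-plus-bath energy is conserved to machine precision at every step, which is the core selling point of the noncontinuous-interaction approach.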
Talamo, A.; Gohar, Y. [Nuclear Engineering Division]
2011-05-12T23:59:59.000Z
This study investigates the performance of the YALINA Booster subcritical assembly, located in Belarus, during operation with high (90%), medium (36%), and low (21%) enriched uranium fuels in the assembly's fast zone. The YALINA Booster is a zero-power, subcritical assembly driven by a conventional neutron generator. It was constructed for the purpose of investigating the static and dynamic neutronics properties of accelerator driven subcritical systems, and to serve as a fast neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor-actinides. The first part of this study analyzes the assembly's performance with several fuel types. The MCNPX and MONK Monte Carlo codes were used to determine effective and source neutron multiplication factors, effective delayed neutron fraction, prompt neutron lifetime, neutron flux profiles and spectra, and neutron reaction rates produced from the use of three neutron sources: californium, deuterium-deuterium, and deuterium-tritium. In the latter two cases, the external neutron source operates in pulsed mode. The results discussed in the first part of this report show that the use of low enriched fuel in the fast zone of the assembly diminishes neutron multiplication. Therefore, the discussion in the second part of the report focuses on finding alternative fuel loading configurations that enhance neutron multiplication while using low enriched uranium fuel. It was found that arranging the interface absorber between the fast and the thermal zones in a circular rather than a square array is an effective method of operating the YALINA Booster subcritical assembly without downgrading neutron multiplication relative to the original value obtained with the use of the high enriched uranium fuels in the fast zone.
Watanabe, Y; Dahlman, E [University of Minnesota, Minneapolis, MN (United States)
2014-06-01T23:59:59.000Z
Purpose: To evaluate an analytic formula for the cell death probability after a single fraction dose. Methods: Cancer cells divide endlessly, but radiation causes cancer cells to die. Not all cells die right away after irradiation. Instead, they continue dividing for the next few cell cycles before they stop dividing and die. At the end of every cell cycle, the cell decides whether it undertakes the mitotic process with a certain probability, Pdiv, which is altered by the radiation. Previously, by using a simple analytic model of radiobiology experiments, we obtained a formula for Pdeath (= 1 − Pdiv). A question is whether the proposed probability can reproduce the well-known survival data of the LQ model. In this study, we evaluated the formula by performing a Monte Carlo simulation of the cell proliferation process. Starting with Ns seed cells, the cell proliferation process was simulated for N generations or until all cells died. We counted the number of living cells at the end. Assuming that the cell colony survived when more than Nc cells were still alive, the surviving fraction S was estimated. We compared the S vs. dose, or S-D, curve with the LQ model. Results: The results indicated that our formula does not reproduce the experimentally observed S-D curve without selecting appropriate α and α/β. With parameter optimization, there was fair agreement between the MC result and the LQ curve for doses lower than 20 Gy. However, the survival fraction of the MC decreased much faster in comparison to the LQ data for doses higher than 20 Gy. Conclusion: This study showed that the previously derived probability of cell death per cell cycle is not sufficiently accurate to replicate common radiobiological experiments. The formula must be modified by considering its cell cycle dependence and some other unknown effects.
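The proliferation Monte Carlo described in the Methods section can be sketched generically: each generation, every cell either dies with a dose-dependent per-cycle probability or divides in two, and a colony "survives" if enough cells remain after N generations. The per-cycle death formula below is a hypothetical placeholder (the paper's formula is not given in the abstract), so only the simulation machinery is illustrated.

```python
import math, random

def colony_survival(dose, p_death, n_seed=10, n_gen=8, n_crit=50,
                    n_trials=400, rng=None):
    """MC of post-irradiation proliferation: each generation, every cell
    either dies (probability p_death(dose)) or divides in two. A colony
    'survives' if more than n_crit cells remain after n_gen generations."""
    rng = rng or random.Random(5)
    pd = p_death(dose)
    survived = 0
    for _ in range(n_trials):
        cells = n_seed
        for _ in range(n_gen):
            # binomial thinning, then doubling of the survivors
            cells = 2 * sum(1 for _ in range(cells) if rng.random() >= pd)
            if cells == 0:
                break
        if cells > n_crit:
            survived += 1
    return survived / n_trials

# Hypothetical per-cycle death probability, NOT the formula from the paper:
p = lambda d: 1.0 - math.exp(-(0.2 * d + 0.02 * d * d) / 8.0)
s_low = colony_survival(2.0, p)    # mild dose: population stays supercritical
s_high = colony_survival(20.0, p)  # heavy dose: population dies out
```

The branching-process threshold is visible directly: the colony survives when the per-generation growth factor 2(1 − Pdeath) exceeds 1 and goes extinct otherwise, which is why the simulated S-D curve drops so sharply at high dose.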
Amoush, Ahmad, E-mail: amousha@ccf.org [Department of Radiation Oncology, University of Cincinnati College of Medicine, Cincinnati, OH (United States); Luckstead, Marcus; Lamba, Michael; Elson, Howard; Kassing, William [Department of Radiation Oncology, University of Cincinnati College of Medicine, Cincinnati, OH (United States)
2013-07-01T23:59:59.000Z
This study aimed to investigate high-dose-rate Iridium-192 brachytherapy, including near-source dosimetry, of a catheter-based applicator from 0.5 mm to 1 cm along the transverse axis. Radiochromic film and Monte Carlo (MC) simulation were used to generate absolute dose for the catheter-based applicator. Results from radiochromic film and MC simulation were compared directly to the treatment planning system (TPS) based on the American Association of Physicists in Medicine Updated Task Group 43 (TG-43U1) dose calculation formalism. The difference between the dose measured using radiochromic film along the transverse plane at 0.5 mm from the surface and the dose predicted by the TPS was 24%±13%. The difference between the MC simulation along the transverse plane at 0.5 mm from the surface and the dose predicted by the TPS was 22.1%±3%. For distances from 1.5 mm to 1 cm from the surface, radiochromic film and MC simulation agreed with the TPS within an uncertainty of 3%. The TPS under-predicts the dose at the surface of the applicator, i.e., 0.5 mm from the catheter surface, as compared to the measured and MC-predicted dose. MC simulation results demonstrated that 15% of this error is due to neglecting the beta particles and discrete electrons emanating from the sources, which are not considered by the TPS, and 7% of the difference was due to photons alone, potentially due to differences in MC dose modeling, photon spectrum, scoring techniques, and the effect of the presence of the catheter and the air gap. Beyond 1 mm from the surface, the TPS dose algorithm agrees with the experimental and MC data within 3%.
Andrea Zen; Emanuele Coccia; Ye Luo; Sandro Sorella; Leonardo Guidoni
2014-06-17T23:59:59.000Z
Diradical molecules are essential species involved in many organic and inorganic chemical reactions. The computational study of their electronic structure is often challenging, because a reliable description of the correlation, and in particular of the static one, requires multi-reference techniques. The Jastrow correlated Antisymmetrized Geminal Power (JAGP) is a compact and efficient wave function ansatz, based on the valence-bond representation, which can be used within Quantum Monte Carlo (QMC) approaches. The AGP part can be rewritten in terms of molecular orbitals, obtaining a multi-determinant expansion with zero seniority number. In the present work we demonstrate the capability of the JAGP ansatz to correctly describe the electronic structure of two diradical prototypes: the orthogonally twisted ethylene, C2H4, and the methylene, CH2, representing a homosymmetric and a heterosymmetric system, respectively. On the other hand, we show that the simple ansatz of a Jastrow correlated Single Determinant (JSD) wave function is unable to provide an accurate description of the electronic structure in these diradical molecules, both at the variational level and, more remarkably, in the fixed-node projection schemes, showing that a poor description of the static correlation yields an inaccurate nodal surface. The suitability of JAGP to correctly describe diradicals with a computational cost comparable with that of a JSD calculation, in combination with the favorable scalability of QMC algorithms with the system size, opens new perspectives in the ab initio study of large diradical systems, like the transition states in cycloaddition reactions and the thermal isomerization of biological chromophores.
Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)
2012-07-01T23:59:59.000Z
The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to a GPU is usually very straightforward due to the 'embarrassingly parallel' nature of MC codes. However, the situation is different for eigenvalue calculations, which are performed on a generation-by-generation basis, so thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of {approx}2 on a dual GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)
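The generation-by-generation structure that complicates GPU threading can be seen in a scalar toy: neutrons of one generation are processed independently, but the fission bank they produce must be gathered before the next generation starts. The sketch below estimates k_inf for an infinite homogeneous medium with made-up cross sections; it is a serial caricature of the scheme, not the CUDA code of the paper.

```python
import random

def k_inf_mc(nu, sigma_f, sigma_c, n_start=2000, n_gen=30, n_skip=10, seed=7):
    """Generation-based MC estimate of k_inf for an infinite homogeneous
    medium: every neutron is eventually absorbed; with probability
    sigma_f/(sigma_f + sigma_c) the absorption is a fission emitting nu
    neutrons on average. k per generation = fission births / bank size."""
    rng = random.Random(seed)
    p_fission = sigma_f / (sigma_f + sigma_c)
    pop = n_start
    k_sum = n_active = 0
    for gen in range(n_gen):
        births = 0
        for _ in range(pop):               # neutrons are independent here...
            if rng.random() < p_fission:
                # integer emission with the right mean: floor(nu) plus a
                # Bernoulli trial for the fractional part
                births += int(nu) + (rng.random() < nu - int(nu))
        if gen >= n_skip:                  # skip inactive (settling) cycles
            k_sum += births / pop
            n_active += 1
        pop = births if births > 0 else n_start  # ...but the bank is a sync point
    return k_sum / n_active

# Analytic value: k_inf = nu * sigma_f / (sigma_f + sigma_c) = 1.0125 here
k_est = k_inf_mc(nu=2.43, sigma_f=0.05, sigma_c=0.07, seed=7)
```

The per-neutron loop is the part that parallelizes trivially across GPU threads; the bank hand-off between generations is the synchronization step the abstract refers to.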
SU-D-19A-04: Parameter Characterization of Electron Beam Monte Carlo Phase Space of TrueBeam Linacs
Rodrigues, A; Yin, F; Wu, Q [Duke University Medical Center, Durham, NC (United States); Medical Physics Graduate Program, Duke University Medical Center, Durham, NC (United States); Sawkey, D [Varian Medical Systems, Palo Alto, CA (United States)
2014-06-01T23:59:59.000Z
Purpose: For TrueBeam Monte Carlo simulations, Varian does not distribute linac head geometry and material compositions, instead providing a phase space file (PSF) for users. The PSF has a finite number of particle histories and can have a very large file size, yet still contains inherent statistical noise. The purpose of this study is to characterize the electron beam PSF with parameters. Methods: The PSF is a snapshot of all particles' information at a given plane above the jaws, including type, energy, position, and direction. This study utilized a preliminary TrueBeam PSF, whose validation against measurement is presented in another study. To characterize the PSF, the distributions of energy, position, and direction of all particles were analyzed as piecewise parameterized functions of radius and polar angle. Subsequently, a pseudo PSF was generated based on this characterization. Validation was assessed by directly comparing the true and pseudo PSFs, and by using both PSFs in downstream MC simulations (BEAMnrc/DOSXYZnrc) and comparing dose distributions for 3 applicators at 15 MeV. A statistical uncertainty of 4% was limited by the number of histories in the original PSF. Percent depth dose (PDD) and orthogonal (PRF) profiles at various depths were evaluated. Results: Preliminary results showed that this PSF parameterization was accurate, with no visible differences between original and pseudo PSFs except at the edge (6 cm off axis), which did not impact dose distributions in phantom. PDD differences were within 1 mm for R70, R50, R30, and R10, and PRF field sizes and penumbras were within 2 mm. Conclusion: A PSF can be successfully characterized by distributions of energy, position, and direction as parameterized functions of radius and polar angle; this facilitates generating sufficient particles at any statistical precision.
Analyses for all other electron energies are under way and results will be included in the presentation.
Su, L.; Du, X.; Liu, T.; Xu, X. G. [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States)
2013-07-01T23:59:59.000Z
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energy considered ranges from a few hundred keV to 30 MeV. Moller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the Continuous Slowing Down Approximation (CSDA). Photon transport was handled using the delta tracking method. The photoelectric effect, Compton scattering, and pair production were modeled. Voxelised geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6x10^6 electron histories were simulated within 2 seconds. In comparison, the same case run with the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a single CPU core. (authors)
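The delta (Woodcock) tracking used for photon transport can be sketched generically. The 1D version below, with an invented two-layer attenuation profile, is only illustrative and is not ARCHER's implementation:

```python
import math, random

def delta_track(mu_of_x, mu_max, x0=0.0, x_end=10.0):
    """Woodcock (delta) tracking in 1D: sample flight lengths with a
    majorant cross section mu_max, then accept each tentative collision
    as real with probability mu(x)/mu_max; rejected ('virtual') collisions
    simply continue the flight.  Returns the position of the first real
    collision, or None if the photon escapes past x_end."""
    x = x0
    while True:
        x += -math.log(random.random()) / mu_max    # flight with majorant
        if x >= x_end:
            return None                             # escaped the geometry
        if random.random() < mu_of_x(x) / mu_max:
            return x                                # real collision

# invented two-layer slab: attenuation 0.2/cm for x < 5 cm, 0.8/cm beyond
mu = lambda x: 0.2 if x < 5.0 else 0.8
```

Because every history runs the same short loop regardless of how many material boundaries it crosses, delta tracking avoids surface-to-surface ray tracing, which is part of what makes it attractive on GPUs.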
Interpretation of 3D void measurements with Tripoli4.6/JEFF3.1.1 Monte Carlo code
Blaise, P.; Colomba, A. [CEA, DEN, DER/SPRC/LEPh, F-13108 Saint Paul-Lez-Durance (France)
2012-07-01T23:59:59.000Z
The present work details the first analysis of the 3D void phase conducted in the EPICURE/UM17x17/7% mixed UOX/MOX configuration. This configuration is composed of a homogeneous central 17x17 MOX-7% assembly, surrounded by portions of 17x17 UO2 assemblies with guide tubes. The void bubble is modelled by a small waterproof 5x5 fuel-pin parallelepiped box of 11 cm height, placed in the centre of the MOX assembly. This bubble, initially placed at the core mid-plane, is then moved to different axial positions to study the evolution of the axial perturbation in the core. Then, to simulate the growth of this bubble and understand the effects of increased void fraction along the fuel pin, 3 and 5 bubbles were stacked axially, starting from the core mid-plane. The C/E comparisons obtained with the Monte Carlo code Tripoli4 for both radial and axial fission rate distributions, and in particular the reproduction of the very important flux gradients at the void/water interfaces as the bubble is displaced along the z-axis, are very satisfactory. This demonstrates both the capability of the code and its library to reproduce this kind of situation, as well as the very good quality of the experimental results, confirming the UM-17x17 as an excellent experimental benchmark for 3D code validation. This work has been performed within the frame of the V&V program for APOLLO3, the future deterministic code of CEA starting in 2012, and its V&V benchmarking database. (authors)
Lou, K [U.T M.D. Anderson Cancer Center, Houston, TX (United States); Rice University, Houston, TX (United States); Mirkovic, D; Sun, X; Zhu, X; Poenisch, F; Grosshans, D; Shao, Y [U.T M.D. Anderson Cancer Center, Houston, TX (United States); Clark, J [Rice University, Houston, TX (United States)
2014-06-01T23:59:59.000Z
Purpose: To study the feasibility of intra-fraction proton beam-range verification with PET imaging. Methods: Two homogeneous cylindrical PMMA phantoms (290 mm axial length; 38 mm and 200 mm diameter, respectively) were studied using PET imaging: a small phantom using a mouse-sized PET (61 mm diameter field of view (FOV)) and a larger phantom using a human brain-sized PET (300 mm FOV). Monte Carlo (MC) simulations (MCNPX and GATE) were used to simulate 179.2 MeV proton pencil beams irradiating the two phantoms and being imaged by the two PET systems. A total of 50 simulations were conducted to generate 50 positron activity distributions and, correspondingly, 50 measured activity-ranges. The accuracy and precision of these activity-ranges were calculated under different conditions (including count statistics and other factors, such as crystal cross-section). Separate from the MC simulations, an activity distribution measured from a simulated PET image was modeled as a noiseless positron activity distribution corrupted by Poisson counting noise. The results from these two approaches were compared to assess the impact of count statistics on the accuracy and precision of activity-range calculations. Results: MC simulations show that the accuracy and precision of an activity-range are dominated by the number (N) of coincidence events of the reconstructed image. They improve in proportion to 1/sqrt(N), which can be understood from the statistical modeling. MC simulations also indicate that the coincidence events acquired within the first 60 seconds with 10^9 protons (small phantom) and 10^10 protons (large phantom) are sufficient to achieve both sub-millimeter accuracy and precision.
Conclusion: Under the current MC simulation conditions, the initial study indicates that the accuracy and precision of beam-range verification are dominated by count statistics, and intra-fraction PET image-based beam-range verification is feasible. This work was supported by a research award RP120326 from Cancer Prevention and Research Institute of Texas.
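The 1/sqrt(N) behavior of the activity-range precision can be reproduced with a toy version of the statistical model described above: a noiseless depth-activity profile (here an invented sigmoid falloff, not the MCNPX/GATE output) corrupted by Poisson counting noise, with the range read off at the 50% distal crossing:

```python
import math, random

def activity_profile(z, z_range=160.0):
    """Toy depth-activity profile (depth z in mm): flat plateau with a
    sigmoid distal falloff centered at z_range -- a stand-in for the
    simulated positron activity distribution."""
    return 1.0 / (1.0 + math.exp((z - z_range) / 2.0))

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the modest means used here."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def measured_range(n_counts, zs):
    """Spread n_counts over the depth bins with Poisson noise, then read
    the activity range as the deepest bin still above 50% of the plateau."""
    weights = [activity_profile(z) for z in zs]
    total = sum(weights)
    noisy = [poisson(n_counts * w / total) for w in weights]
    plateau = sum(noisy[:20]) / 20.0
    for z, c in zip(reversed(zs), reversed(noisy)):
        if c > 0.5 * plateau:
            return z
    return zs[0]
```

Repeating `measured_range` over many noise realizations while scaling the total counts shows the spread of the recovered range shrinking roughly like 1/sqrt(N), mirroring the conclusion that count statistics dominate.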
Harris, S.; Dunn, D.
2009-03-01T23:59:59.000Z
The sensitivity of two specific types of radionuclide detectors for conducting an on-board search in the maritime environment was evaluated using Monte Carlo simulation implemented in AVERT®. AVERT®, short for the Automated Vulnerability Evaluation for Risk of Terrorism, is personal-computer-based vulnerability assessment software developed by the ARES Corporation. The detectors, a RadPack and a Personal Radiation Detector (PRD), were chosen from the class of Human Portable Radiation Detection Systems (HPRDS), which serve multiple purposes. In the maritime environment, there is a need to detect, localize, characterize, and identify radiological/nuclear (RN) material or weapons. The RadPack is a commercially available broad-area search device used for gamma and neutron detection. The PRD is chiefly used as a personal radiation protection device; it is also used to detect contraband radionuclides and to localize radionuclide sources. Neither device has the capacity to characterize or identify radionuclides. The principal aim of this study was to investigate the sensitivity of both the RadPack and the PRD while being used under controlled conditions in a simulated maritime environment for detecting hidden RN contraband. The detection distance varies with the source strength and the shielding present. The characterization parameters of the source are not indicated in this report, so the results summarized are relative. The Monte Carlo simulation results indicate the probability of detection of the RN source at given distances from the detector, which is a function of transverse speed and instrument sensitivity for the specified RN source.
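The qualitative dependence of detection probability on source strength and standoff distance can be sketched with a toy inverse-square model. All parameters below are illustrative stand-ins, since the report deliberately withholds the real source characterization:

```python
import random

def detection_probability(source_cps_at_1m, max_offset_m, threshold_cps,
                          n_trials=5000):
    """Toy broad-area-search model: the transit line's closest approach to
    a point source is uniform on (0.01, max_offset_m]; the source counts
    as detected when the inverse-square count rate at closest approach
    exceeds the instrument's alarm threshold.  All numbers here are
    hypothetical, not the withheld values from the report."""
    detected = 0
    for _ in range(n_trials):
        d = random.uniform(0.01, max_offset_m)          # closest approach, m
        if source_cps_at_1m / (d * d) > threshold_cps:  # 1/r^2 falloff
            detected += 1
    return detected / n_trials
```

In this caricature the detection range grows as the square root of the source strength, so quadrupling the source roughly doubles the radius inside which an alarm threshold is exceeded; shielding and transverse speed, which the study also models, are omitted here.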
Muir, B. R., E-mail: bmuir@physics.carleton.ca; Rogers, D. W. O., E-mail: drogers@physics.carleton.ca [Physics Department, Carleton Laboratory for Radiotherapy Physics, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada)]
2013-12-15T23:59:59.000Z
Purpose: To investigate recommendations for reference dosimetry of electron beams and gradient effects for the NE2571 chamber and to provide beam quality conversion factors using Monte Carlo simulations of the PTW Roos and NE2571 ion chambers. Methods: The EGSnrc code system is used to calculate the absorbed dose to water and the dose to the gas in fully modeled ion chambers as a function of depth in water. Electron beams are modeled using realistic accelerator simulations as well as beams modeled as collimated point sources from realistic electron beam spectra or monoenergetic electrons. Beam quality conversion factors are calculated with ratios of the doses to water and to the air in the ion chamber in electron beams and a cobalt-60 reference field. The overall ion chamber correction factor is studied using calculations of water-to-air stopping power ratios. Results: The use of an effective point of measurement shift of 1.55 mm from the front face of the PTW Roos chamber, which places the point of measurement inside the chamber cavity, minimizes the difference between R50, the beam quality specifier, calculated from chamber simulations and that obtained using depth-dose calculations in water. A similar shift minimizes the variation of the overall ion chamber correction factor with depth up to the practical range and reduces the root-mean-square deviation of a fit to calculated beam quality conversion factors at the reference depth as a function of R50. Similarly, an upstream shift of 0.34 r_cav allows a more accurate determination of R50 from NE2571 chamber calculations and reduces the variation of the overall ion chamber correction factor with depth. The determination of the gradient correction using a shift of 0.22 r_cav optimizes the root-mean-square deviation of a fit to calculated beam quality conversion factors if all beams investigated are considered.
However, if only clinical beams are considered, a good fit to results for beam quality conversion factors is obtained without explicitly correcting for gradient effects. The inadequacy of R50 to uniquely specify beam quality for the accurate selection of k_Q factors is discussed. Systematic uncertainties in beam quality conversion factors are analyzed for the NE2571 chamber and amount to between 0.4% and 1.2% depending on the assumptions used. Conclusions: The calculated beam quality conversion factors for the PTW Roos chamber obtained here are in good agreement with literature data. These results characterize the use of an NE2571 ion chamber for reference dosimetry of electron beams even in low-energy beams.
Sutherland, J. G. H.; Miksys, N.; Thomson, R. M., E-mail: rthomson@physics.carleton.ca [Carleton Laboratory for Radiotherapy Physics, Department of Physics, Carleton University, Ottawa, Ontario K1S 5B6 (Canada)]; Furutani, K. M. [Department of Radiation Oncology, Mayo Clinic College of Medicine, Rochester, Minnesota 55905 (United States)]
2014-01-15T23:59:59.000Z
Purpose: To investigate methods of generating accurate patient-specific computational phantoms for the Monte Carlo calculation of lung brachytherapy patient dose distributions. Methods: Four metallic artifact mitigation methods are applied to six lung brachytherapy patient computed tomography (CT) images: simple threshold replacement (STR) identifies high CT values in the vicinity of the seeds and replaces them with estimated true values; fan beam virtual sinogram replaces artifact-affected values in a virtual sinogram and performs a filtered back-projection to generate a corrected image; 3D median filter replaces voxel values that differ from the median value in a region of interest surrounding the voxel and then applies a second filter to reduce noise; and a combination of fan beam virtual sinogram and STR. Computational phantoms are generated from artifact-corrected and uncorrected images using several tissue assignment schemes: both lung-contour constrained and unconstrained global schemes are considered. Voxel mass densities are assigned based on voxel CT number or using the nominal tissue mass densities. Dose distributions are calculated using the EGSnrc user-code BrachyDose for 125I, 103Pd, and 131Cs seeds and are compared directly as well as through dose volume histograms and dose metrics for target volumes surrounding surgical sutures. Results: Metallic artifact mitigation techniques vary in their ability to reduce artifacts while preserving tissue detail. Notably, images corrected with the fan beam virtual sinogram have reduced artifacts, but residual artifacts near sources remain, requiring additional use of STR; the 3D median filter removes artifacts but simultaneously removes detail in lung and bone. Doses vary considerably between computational phantoms, with the largest differences arising from artifact-affected voxels assigned to bone in the vicinity of the seeds.
Consequently, when metallic artifact reduction and constrained tissue assignment within lung contours are employed in generated phantoms, this erroneous assignment is reduced, generally resulting in higher doses. Lung-constrained tissue assignment also results in increased doses in regions of interest due to a reduction in the erroneous assignment of adipose to voxels within lung contours. Differences in dose metrics calculated for different computational phantoms are sensitive to radionuclide photon spectra, with the largest differences for 103Pd seeds and smallest, but still considerable, differences for 131Cs seeds. Conclusions: Despite producing differences in CT images, the STR, fan beam + STR, and 3D median filter techniques produce similar dose metrics. Results suggest that the accuracy of dose distributions for permanent implant lung brachytherapy is improved by applying lung-constrained tissue assignment schemes to metallic artifact corrected images.
Singleterry, R.C. Jr. [Argonne National Lab., Idaho Falls, ID (United States); Jahshan, S. [SNJ Consulting, Idaho Falls, ID (United States)
1996-04-01T23:59:59.000Z
The F_N basis function expansion solution to the Boltzmann transport equation in Cartesian geometry is summarized and evaluated for several heterogeneous slabs of interest. The resultant scalar and angular fluxes and the critical slab thickness (when applicable) are compared with Monte Carlo transport evaluations by MCNP. A correspondence is made between the one-group macroscopic cross sections used in the F_N code and energy-independent synthetic MCNP microscopic cross sections. The F_N method produces results comparable to MCNP and requires fewer computer resources, but is limited to specific problem types.
Pasciak, Alexander Samuel
2009-05-15T23:59:59.000Z
.4 describes the efficient method for sampling the polar scattering angle θ, where x is a uniformly distributed random number between 0 and 1 (15): cos(θ) = 1 - 2ax/(1 + a - x). A Monte Carlo code utilizing the screened Rutherford... the frequency mean. (3.5) s_mom = [∫ from 0° to 180° of θ σ_elastic(θ) sin θ dθ] / [∫ from 0° to 180° of σ_elastic(θ) sin θ dθ], where θ is the polar angle of collision. Preservation of higher order moments is also...
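The polar-angle sampling step can be written out explicitly. The sketch below assumes the standard screened-Rutherford inversion formula cos θ = 1 - 2ax/(1 + a - x) with screening parameter a, which appears to be the formula the excerpt refers to:

```python
import random

def sample_polar_angle(a):
    """Invert the screened-Rutherford angular distribution:
    cos(theta) = 1 - 2*a*x / (1 + a - x), with screening parameter a and
    x uniform on [0, 1).  Small a gives strongly forward-peaked
    scattering; x -> 1 gives backscatter (cos -> -1)."""
    x = random.random()
    return 1.0 - 2.0 * a * x / (1.0 + a - x)
```

The analytic mean of the sampled cosine is 1 - 2a[(1 + a) ln((1 + a)/a) - 1], approximately 0.927 for a = 0.01, which makes the inversion easy to sanity-check.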
Indium-Gallium Segregation in CuIn$_{x}$Ga$_{1-x}$Se$_2$: An ab initio based Monte Carlo Study
Ludwig, Christian D R; Felser, Claudia; Schilling, Tanja; Windeln, Johannes; Kratzer, Peter
2010-01-01T23:59:59.000Z
Thin-film solar cells with CuIn$_x$Ga$_{1-x}$Se$_2$ (CIGS) absorbers are still far below their efficiency limit, although lab cells already reach 19.9%. One important aspect is the homogeneity of the alloy. Large-scale simulations combining Monte Carlo and density functional calculations show that two phases coexist in thermal equilibrium below room temperature. Only at higher temperatures does CIGS become an increasingly homogeneous alloy. A larger degree of inhomogeneity for Ga-rich CIGS persists over a wide temperature range, which may contribute to the low observed efficiency of Ga-rich CIGS solar cells.
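The combined Monte Carlo / density-functional approach can be caricatured with a two-component lattice-gas sketch: a toy nearest-neighbor interaction (a stand-in for the DFT-derived energetics, with an invented value of J) and composition-conserving Kawasaki swap moves, showing segregation at low temperature and mixing at high temperature:

```python
import math, random

KB = 8.617e-5  # Boltzmann constant, eV/K

def unlike_fraction(grid):
    """Fraction of nearest-neighbor bonds joining unlike species
    (0.5 for a random mixture, lower when the alloy segregates)."""
    L = len(grid)
    bonds = unlike = 0
    for i in range(L):
        for j in range(L):
            for di, dj in ((1, 0), (0, 1)):
                bonds += 1
                unlike += grid[i][j] != grid[(i + di) % L][(j + dj) % L]
    return unlike / bonds

def mc_alloy(L=16, frac_ga=0.5, J=0.02, T=300.0, sweeps=200):
    """Toy In/Ga lattice gas on an L x L periodic grid: unlike nearest
    neighbors cost J eV (hypothetical, standing in for the DFT
    interactions).  Composition-conserving Kawasaki swaps are accepted
    with the Metropolis rule at temperature T (Kelvin)."""
    grid = [[1 if random.random() < frac_ga else 0 for _ in range(L)]
            for _ in range(L)]

    def unlike_at(i, j):
        s = grid[i][j]
        return sum(s != grid[(i + di) % L][(j + dj) % L]
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(sweeps * L * L):
        i1, j1 = random.randrange(L), random.randrange(L)
        i2, j2 = random.randrange(L), random.randrange(L)
        if grid[i1][j1] == grid[i2][j2]:
            continue
        # skip adjacent pairs so the energy bookkeeping below stays exact
        if ((i1 == i2 and (j1 - j2) % L in (1, L - 1)) or
                (j1 == j2 and (i1 - i2) % L in (1, L - 1))):
            continue
        e_old = J * (unlike_at(i1, j1) + unlike_at(i2, j2))
        grid[i1][j1], grid[i2][j2] = grid[i2][j2], grid[i1][j1]
        e_new = J * (unlike_at(i1, j1) + unlike_at(i2, j2))
        if e_new > e_old and random.random() >= math.exp(-(e_new - e_old) / (KB * T)):
            grid[i1][j1], grid[i2][j2] = grid[i2][j2], grid[i1][j1]  # reject
    return grid
```

Running the same loop at low and high T reproduces the qualitative picture in the abstract: phase coexistence (unlike-bond fraction well below the random-mixing value of 0.5) when kT is small compared with the interaction, homogenization when it is not.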
Lane, R. A.; Ordonez, C. A. [Department of Physics, University of North Texas, Denton, Texas (United States)
2013-04-19T23:59:59.000Z
A computational tool is described that can be used for designing magnetic focusing or defocusing systems. A fully three-dimensional classical trajectory Monte Carlo simulation has been developed. Ion trajectories are simulated in the presence of magnetic elements that can be modeled as any combination of current loops and current lines. Each current loop or line may be located anywhere in the system and oriented along any of the three coordinate axes. The configuration need not be axisymmetric. The solutions are obtained using normalized parameters, which can be used for easily scaling the results. Examples are provided of the utility of the code.
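The current-loop building block of such a tool can be sketched by direct Biot-Savart summation. This minimal version handles only a z-plane loop in normalized units (mu0·I/4π = 1); the tool described above also supports x- and y-oriented loops and current lines:

```python
import math

def loop_field(point, center, radius, n_seg=200):
    """Magnetic field of a circular current loop lying in the z = center[2]
    plane, by direct Biot-Savart summation over n_seg straight segments.
    Normalized units: mu0 * I / (4 pi) = 1, so results scale trivially."""
    bx = by = bz = 0.0
    for k in range(n_seg):
        t0 = 2.0 * math.pi * k / n_seg
        t1 = 2.0 * math.pi * (k + 1) / n_seg
        # chord midpoint (source position) and chord vector dl
        mx = center[0] + radius * math.cos((t0 + t1) / 2.0)
        my = center[1] + radius * math.sin((t0 + t1) / 2.0)
        mz = center[2]
        dlx = radius * (math.cos(t1) - math.cos(t0))
        dly = radius * (math.sin(t1) - math.sin(t0))
        rx, ry, rz = point[0] - mx, point[1] - my, point[2] - mz
        r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
        # dB = dl x r / |r|^3  (dl has no z component for a z-plane loop)
        bx += dly * rz / r3
        by += -dlx * rz / r3
        bz += (dlx * ry - dly * rx) / r3
    return bx, by, bz
```

At the center of a unit loop the analytic field in these units is 2π, which the summation reproduces to better than a percent with 200 segments; superposing several such loops and lines gives the non-axisymmetric configurations the abstract mentions.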
B. M. Abramov; P. N. Alexeev; Yu. A. Borodin; S. A. Bulychjov; I. A. Dukhovskoy; A. P. Krutenkova; V. V. Kulikov; M. A. Martemianov; M. A. Matsyuk; E. N. Turdakina; A. I. Khanov; S. G. Mashnik
2015-02-05T23:59:59.000Z
Momentum spectra of hydrogen isotopes have been measured at 3.5 deg from 12C fragmentation on a Be target. The momentum spectra cover both the region of the fragmentation maximum and the cumulative region. Differential cross sections span five orders of magnitude. The data are compared to predictions of four Monte Carlo codes: QMD, LAQGSM, BC, and INCL++. There are large differences between the data and the predictions of some models in the high-momentum region. The INCL++ code gives the best, almost perfect description of the data.
Fang, Yuan, E-mail: yuan.fang@fda.hhs.gov [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993-0002 and Department of Electrical and Computer Engineering, The University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada)]; Karim, Karim S. [Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada)]; Badano, Aldo [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993-0002 (United States)]
2014-01-15T23:59:59.000Z
Purpose: The authors describe the modification to a previously developed Monte Carlo model of a semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package, ARTEMIS, developed by Fang et al. [“Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se,” Med. Phys. 39(1), 308–319 (2012)]. The burst model generates a cloud of electron-hole pairs based on electron velocity, energy deposition, and material parameters, distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). Simple first-hit (FH) and more detailed but computationally expensive nearest-neighbor (NN) recombination algorithms are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with the Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/μm, while the recombination fraction decreased for the SSA compared to the SUV model at 30 V/μm. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated with the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature.
For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation of many electron-hole pairs. The SSA model is more sensitive to the effect of electric field compared to the SUV model and that the NN and FH recombination algorithms did not significantly affect simulation results.
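The first-hit and nearest-neighbor pairing strategies can be contrasted with a minimal geometric sketch. Positions and the recombination radius below are hypothetical, and this is not the ARTEMIS implementation, which additionally models drift and diffusion between pairing checks:

```python
import math

def pair_first_hit(electrons, holes, r_rec):
    """First-hit (FH) pairing: scan electrons in order and recombine each
    with the first hole found inside the recombination radius r_rec."""
    holes = list(holes)
    recombined = 0
    for e in electrons:
        for idx, h in enumerate(holes):
            if math.dist(e, h) < r_rec:
                holes.pop(idx)
                recombined += 1
                break
    return recombined

def pair_nearest_neighbor(electrons, holes, r_rec):
    """Nearest-neighbor (NN) pairing: repeatedly recombine the globally
    closest electron-hole pair while it lies inside r_rec.  The O(n^2)
    search per step is what makes NN 'more detailed but computationally
    expensive' compared with FH."""
    electrons, holes = list(electrons), list(holes)
    recombined = 0
    while electrons and holes:
        dist, i, j = min((math.dist(e, h), i, j)
                         for i, e in enumerate(electrons)
                         for j, h in enumerate(holes))
        if dist >= r_rec:
            break
        electrons.pop(i)
        holes.pop(j)
        recombined += 1
    return recombined
```

For well-separated pairs the two strategies give the same recombined count, consistent with the abstract's finding that NN and FH results were comparable; they can differ when several carriers compete for the same partner.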
Journal of Statistical Physics, Vol. 89, Nos. 5/6, 1997 Simulated Annealing Using Hybrid Monte Carlo
Toral, Raúl
of the system. It is known that if a system is heated to a very high temperature T and then it is slowly cooled... global actualizations via the hybrid Monte Carlo algorithm in its generalized version for the proposal...
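A minimal simulated-annealing loop with a geometric cooling schedule looks as follows. Note that this sketch uses plain local Metropolis updates on an invented one-dimensional objective, whereas the paper's point is to replace those local moves with global hybrid Monte Carlo updates:

```python
import math, random

def anneal(f, x0, t_hot=2.0, t_cold=1e-3, cooling=0.95, steps_per_t=200):
    """Simulated annealing with local Metropolis moves and a geometric
    cooling schedule; returns the best point seen over the whole run."""
    x, t = x0, t_hot
    best_x, best_f = x, f(x)
    while t > t_cold:
        for _ in range(steps_per_t):
            x_new = x + random.gauss(0.0, math.sqrt(t))  # temperature-scaled step
            dE = f(x_new) - f(x)
            if dE <= 0.0 or random.random() < math.exp(-dE / t):
                x = x_new
                if f(x) < best_f:
                    best_x, best_f = x, f(x)
        t *= cooling  # geometric cooling: slow quenching toward t_cold
    return best_x

# rugged illustrative objective: global minimum near x ≈ 2.16
rugged = lambda x: (x - 2.0) ** 2 + 0.5 * math.sin(8.0 * x)
```

High temperatures let the chain hop between basins of the rugged landscape; as t shrinks, the walker settles into the deepest basin visited, which is the slow-cooling intuition the excerpt describes.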
Park, Su-Jung; /Bonn U.
2004-02-01T23:59:59.000Z
The measurement of the tt̄ production cross section at √s = 1.96 TeV using the final state with an electron and jets is studied with Monte Carlo event samples. All methods used in the real data analysis to measure efficiencies and to estimate the background contributions are examined. The studies focus on measuring the electron reconstruction efficiencies as well as on improving the electron identification and background suppression. With a generated input cross section of 7 pb the following result is obtained: σ(tt̄) = (7 ± 1.63 (stat) +0.94/-1.14 (syst)) pb.
Mayorga, P. A. [FISRAD S.A.S., CR 64 A No. 22 - 41, Bogotá D C (Colombia); Departamento de Física Atómica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)]; Brualla, L.; Sauerwein, W. [NCTeam, Strahlenklinik, Universitätsklinikum Essen, Hufelandstraße 55, D-45122 Essen (Germany)]; Lallena, A. M., E-mail: lallena@ugr.es [Departamento de Física Atómica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)]
2014-01-15T23:59:59.000Z
Purpose: Retinoblastoma is the most common intraocular malignancy in early childhood. Patients treated with external beam radiotherapy respond very well to the treatment. However, owing to the genotype of children suffering hereditary retinoblastoma, the risk of secondary radio-induced malignancies is high. The University Hospital of Essen has successfully treated these patients on a daily basis for nearly 30 years using a dedicated “D”-shaped collimator. This collimator, which delivers a highly conformal small radiation field, gives very good results in the control of the primary tumor as well as in preserving visual function, while avoiding the devastating side effect of deformation of midface bones. The purpose of the present paper is to propose a modified version of the “D”-shaped collimator that reduces the irradiation field even further, with the aim of also reducing the risk of radio-induced secondary malignancies. Concurrently, the new dedicated “D”-shaped collimator must be easier to build and at the same time produce dose distributions that differ only in field size from those obtained with the collimator currently in use. The former requirement is meant to facilitate the adoption of the authors' irradiation technique both at their own and at other hospitals. The fulfillment of the latter allows the authors to continue drawing on the clinical experience gained over more than 30 years. Methods: The Monte Carlo code PENELOPE was used to study the effect that the different structural elements of the dedicated “D”-shaped collimator have on the absorbed dose distribution. To perform this study, the radiation transport through a Varian Clinac 2100 C/D operating at 6 MV was simulated in order to tally phase-space files, which were then used as radiation sources to simulate the considered collimators and the resulting dose distributions.
With the knowledge gained in that study, a new, simpler, “D”-shaped collimator is proposed. Results: The proposed collimator delivers a dose distribution which is 2.4 cm wide along the inferior-superior direction of the eyeball. This width is 0.3 cm narrower than that of the dose distribution obtained with the collimator currently in clinical use. The other relevant characteristics of the dose distribution obtained with the new collimator, namely, depth doses at clinically relevant positions, penumbrae width, and shape of the lateral profiles, are statistically compatible with the results obtained for the collimator currently in use. Conclusions: The smaller field size delivered by the proposed collimator still fully covers the planning target volume with at least 95% of the maximum dose at a depth of 2 cm and provides a safety margin of 0.2 cm, so ensuring an adequate treatment while reducing the irradiated volume.
Barbosa, Marcia C. B.
the lack of consensus concerning the origin of water-like anomalies, it is widely believed... the Bell-Lavis model for liquid water is investigated through numerical simulations (Carlos E. Fiore, Departamento de Física, Universidade Federal do Paraná). The lattice-gas model on a triangular...
Foyevtsova, Kateryna [ORNL]; Krogel, Jaron T [ORNL]; Kim, Jeongnim [ORNL]; Kent, Paul R [ORNL]; Dagotto, Elbio R [ORNL]; Reboredo, Fernando A [ORNL]
2014-01-01T23:59:59.000Z
In view of the continuous theoretical efforts aimed at an accurate microscopic description of the strongly correlated transition metal oxides and related materials, we show that with continuum quantum Monte Carlo (QMC) calculations it is possible to obtain the value of the spin superexchange coupling constant of a copper oxide in quantitatively excellent agreement with experiment. The variational nature of the QMC total energy allows us to identify the best trial wave function out of the available pool of wave functions, which makes the approach essentially free from adjustable parameters and thus truly ab initio. The present results on magnetic interactions suggest that QMC is capable of accurately describing ground state properties of strongly correlated materials.
Raychaudhuri, Subhadip
2015-01-01T23:59:59.000Z
Death ligand mediated apoptotic activation is a mode of programmed cell death that is widely used in cellular and physiological situations. Interest in studying death ligand induced apoptosis has increased due to the promising role of recombinant soluble forms of death ligands (mainly recombinant TRAIL) in anti-cancer therapy. A clear elucidation of how death ligands activate the type 1 and type 2 apoptotic pathways in healthy and cancer cells may help develop better chemotherapeutic strategies. In this work, we use kinetic Monte Carlo simulations to address the problem of type 1/type 2 choice in death ligand mediated apoptosis of cancer cells. Our study provides insights into the activation of the membrane-proximal death module that results from a complex interplay between death and decoy receptors. The relative abundance of death and decoy receptors was shown to be a key parameter for activation of the initiator caspases in the membrane module. Increased concentration of death ligands frequently increased the type 1...
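The kinetic Monte Carlo treatment of the death/decoy receptor competition can be sketched with a Gillespie-type algorithm. The species, rate constants, and initial counts below are hypothetical stand-ins, not the paper's full signaling model:

```python
import math, random

def gillespie_receptors(n_ligand=200, n_death=100, n_decoy=100,
                        k_death=1.0, k_decoy=1.0, t_end=5.0):
    """Gillespie-type kinetic Monte Carlo for competition between death
    and decoy receptors binding a shared ligand pool.  Mass-action toy
    model with hypothetical rate constants; returns the number of
    signaling (death) and non-signaling (decoy) complexes formed."""
    t = 0.0
    complexes_death = complexes_decoy = 0
    while t < t_end and n_ligand > 0:
        a1 = k_death * n_ligand * n_death      # propensity: death binding
        a2 = k_decoy * n_ligand * n_decoy      # propensity: decoy binding
        a0 = a1 + a2
        if a0 == 0.0:
            break                              # all receptors occupied
        t += -math.log(random.random()) / a0   # exponential waiting time
        if random.random() < a1 / a0:          # pick reaction by propensity
            n_ligand -= 1; n_death -= 1; complexes_death += 1
        else:
            n_ligand -= 1; n_decoy -= 1; complexes_decoy += 1
    return complexes_death, complexes_decoy
```

Raising the decoy count relative to the death-receptor count starves the death receptors of ligand, echoing the finding that the relative abundance of death and decoy receptors is a key parameter for initiator-caspase activation.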
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F. [Departments of Biomedical Physics and Radiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States); DeMarco, John J. [Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)
2014-11-01T23:59:59.000Z
Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions that result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built using the platform of MCNPX, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scans, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing ranged from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms including a rectangular homogeneous water equivalent phantom, an elliptical shaped phantom with three sections (where each section was homogeneous, but of a different material), and a heterogeneous, complex geometry anthropomorphic phantom. Each phantom required varying levels of x-, y-, and z-modulation. Each phantom was scanned on a multidetector row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom.
Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain dose estimates. This allowed direct comparisons between measured and simulated dose values under each condition of phantom, location, and scan to be made. Results: For FTC scans, the percent root mean square (RMS) difference between measurements and simulations was within 5% across all phantoms. For TCM scans, the percent RMS of the difference between measured and simulated values when using detailed TCM and z-axis-only TCM simulations was 4.5% and 13.2%, respectively. For the anthropomorphic phantom, the difference between TCM measurements and detailed TCM and z-axis-only TCM simulations was 1.2% and 8.9%, respectively. For FTC measurements and simulations, the percent RMS of the difference was 5.0%. Conclusions: This work demonstrated that the Monte Carlo model developed provided good agreement between measured and simulated values under both simple and complex geometries including an anthropomorphic phantom. This work also showed the increased dose differences for z-axis-only TCM simulations, where considerable modulation in the x–y plane was present due to the shape of the rectangular water phantom. Results from this investigation highlight details that need to be included in Monte Carlo simulations of TCM CT scans in order to yield accurate, clinically viable assessments of patient dosimetry.
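The percent RMS difference used throughout the comparison above can be computed as follows; the point-by-point pairing of measured and simulated doses is an assumption of this sketch:

```python
import numpy as np

def percent_rms_difference(measured, simulated):
    """Percent root-mean-square of the point-by-point relative difference
    between simulated and measured dose values."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rel = (simulated - measured) / measured
    return 100.0 * np.sqrt(np.mean(rel**2))

# e.g. two points that each differ by 5% give a 5% RMS difference
r = percent_rms_difference([2.0, 4.0], [2.1, 3.8])
```

Agreement criteria like the 5% threshold in the results are then statements about this single summary number over all measurement positions.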
Jinaphanh, A.; Miss, J.; Richet, Y. [Inst. for Radiological Protection and Nuclear Safety IRSN, BP 17, 92262 Fontenay-Aux-Roses Cedex (France); Jacquet, O. [Independent Consultant (France)
2012-07-01T23:59:59.000Z
Monte Carlo (MC) criticality calculations are based on an iterative method, which requires a converged fission source distribution before tallying of the effective multiplication factor (K{sub eff}) or other quantities of interest begins. However, it is difficult to detect, during the run, the end of source convergence, and scores may be biased by the initial transient. This paper deals with a method that locates and suppresses the transient due to the initialization in an output series, applied here to K{sub eff} and Shannon entropy. It relies on modeling stationary series by a first-order autoregressive process and applying statistical tests based on a Student bridge statistic. It should be noted that the initial transient suppression only aims at obtaining stationary output series and cannot guarantee any kind of convergence. The truncation method is applied to both K{sub eff} and Shannon entropy in three test cases. (authors)
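The idea, modeling the stationary part of the output series as an AR(1) process and truncating the initial transient, can be sketched as below. This is a simplified stand-in that compares head-segment means against the tail of the series with an AR(1)-corrected standard error, rather than the authors' Student-bridge test:

```python
import numpy as np

def ar1_inflation(x):
    """Standard-error inflation factor sqrt((1+phi)/(1-phi)) from the
    lag-1 autocorrelation phi of an assumed AR(1) series."""
    xc = x - x.mean()
    phi = (xc[:-1] @ xc[1:]) / (xc @ xc)
    phi = min(max(phi, -0.99), 0.99)
    return np.sqrt((1.0 + phi) / (1.0 - phi))

def truncate_transient(x, z=3.0):
    """Smallest truncation index k whose segment mean over x[k:n/2] agrees
    with the tail mean over x[n/2:] within z AR(1)-corrected standard errors."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tail = x[n // 2:]
    mu, s, infl = tail.mean(), tail.std(ddof=1), ar1_inflation(tail)
    se_tail = s * infl / np.sqrt(len(tail))
    for k in range(0, n // 2, 20):
        seg = x[k:n // 2]
        se_seg = s * infl / np.sqrt(len(seg))
        if abs(seg.mean() - mu) <= z * np.hypot(se_seg, se_tail):
            return k
    return n // 2

# synthetic K_eff-like output series: decaying initialization transient plus noise
rng = np.random.default_rng(2)
t = np.arange(2000)
series = 1.0 + 2.0 * np.exp(-t / 100.0) + rng.normal(0.0, 0.05, 2000)
k = truncate_transient(series)
```

As the abstract cautions, such truncation only yields a series that looks stationary by the chosen test; it does not certify source convergence.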
Hu, Z. M.; Xie, X. F.; Chen, Z. J.; Peng, X. Y.; Du, T. F.; Cui, Z. Q.; Ge, L. J.; Li, T.; Yuan, X.; Zhang, X.; Li, X. Q.; Zhang, G. H.; Chen, J. X.; Fan, T. S., E-mail: tsfan@pku.edu.cn [State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871 (China); Hu, L. Q.; Zhong, G. Q.; Lin, S. Y.; Wan, B. N. [Institute of Plasma Physics, CAS, Hefei 230031 (China); Gorini, G. [Dipartimento di Fisica, Università di Milano-Bicocca, Milano 20126 (Italy); Istituto di Fisica del Plasma “P. Caldirola,” Milano 20126 (Italy)
2014-11-15T23:59:59.000Z
To assess the neutron energy spectra and the neutron dose at different positions around the Experimental Advanced Superconducting Tokamak (EAST) device, a Bonner Sphere Spectrometer (BSS) was developed at Peking University, comprising nine polyethylene spheres and an SP9 {sup 3}He counter. The response functions of the BSS were calculated by the Monte Carlo codes MCNP and GEANT4 with dedicated models, and good agreement was found between the two codes. A feasibility study was carried out with a simulated neutron energy spectrum around EAST, and the simulated "experimental" result of each sphere was obtained by calculating the response with MCNP, which used the simulated neutron energy spectrum as the input spectrum. By deconvolution of the "experimental" measurement, the neutron energy spectrum was retrieved and compared with the preset one. Good consistency was found, which gives confidence in the application of the BSS system for dose and spectrum measurements around a fusion device.
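The deconvolution step, recovering a spectrum from the sphere readings and the response functions, can be sketched with a multiplicative ML-EM-type unfolding iteration. The response matrix and "true" spectrum below are synthetic, standing in for the MCNP/GEANT4-computed responses:

```python
import numpy as np

def mlem_unfold(R, m, n_iter=3000):
    """ML-EM unfolding: find a non-negative spectrum phi whose folded
    readings R @ phi reproduce the measurements m.
    R[i, j] = response of sphere i to neutrons in energy bin j."""
    phi = np.full(R.shape[1], m.sum() / R.sum())      # flat starting guess
    for _ in range(n_iter):
        folded = R @ phi
        phi *= (R.T @ (m / folded)) / R.sum(axis=0)   # multiplicative update
    return phi

# synthetic 4-sphere, 6-bin response matrix and a known "true" spectrum
rng = np.random.default_rng(0)
R = rng.uniform(0.1, 1.0, size=(4, 6))
phi_true = np.array([1.0, 3.0, 5.0, 4.0, 2.0, 0.5])
m = R @ phi_true                                      # noise-free readings
phi_est = mlem_unfold(R, m)
```

With fewer spheres than energy bins the problem is underdetermined: the iteration converges to a non-negative spectrum whose folded readings match the measurements, which is exactly why validating against a preset spectrum, as done in the abstract, is important.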
Joint International Conference on Supercomputing in Nuclear Applications and Monte Carlo 2013 (SNA + MC 2013)
Chen, X; Xing, L; Luxton, G; Bush, K [Stanford University, Palo Alto, CA (United States); Azcona, J [Clinica Universidad de Navarra, Pamplona (Spain)
2014-06-01T23:59:59.000Z
Purpose: Patient-specific QA for VMAT is incapable of providing full 3D dosimetric information and is labor intensive in the case of severe heterogeneities or small-aperture beams. A cloud-based Monte Carlo dose reconstruction method described here can perform the evaluation in the entire 3D space and rapidly reveal the source of discrepancies between measured and planned dose. Methods: This QA technique consists of two integral parts: measurement using a phantom containing an array of dosimeters, and a cloud-based voxel Monte Carlo algorithm (cVMC). After a VMAT plan was approved by a physician, a dose verification plan was created and delivered to the phantom using our Varian Trilogy or TrueBeam system. Actual delivery parameters (i.e., dose fraction, gantry angle, and MLC positions at control points) were extracted from Dynalog or trajectory files. Based on the delivery parameters, the 3D dose distribution in the phantom containing the detectors was recomputed using the Eclipse dose calculation algorithms (AAA and AXB) and cVMC. Comparison and gamma analysis were then conducted to evaluate the agreement between measured, recomputed, and planned dose distributions. To test the robustness of this method, we examined several representative VMAT treatments. Results: (1) The accuracy of cVMC dose calculation was validated via comparative studies. For cases that passed patient-specific QA using commercial dosimetry systems such as Delta-4, MapCHECK, and the PTW Seven29 array, agreement between cVMC-recomputed, Eclipse-planned, and measured doses was obtained with >90% of the points satisfying the 3%-and-3 mm gamma index criteria. (2) The cVMC method incorporating Dynalog files was effective in revealing the root causes of the dosimetric discrepancies between Eclipse-planned and measured doses, providing a basis for solutions.
Conclusion: The proposed method offers a highly robust and streamlined patient-specific QA tool and provides a feasible solution for the rapidly increasing use of VMAT treatments in the clinic.
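The 3%/3 mm gamma criterion used in the results above can be sketched in 1D as below. Clinical QA tools evaluate gamma in 3D with dose interpolation; this minimal version, with a toy dose profile, just searches over the evaluated points:

```python
import numpy as np

def gamma_pass_rate(ref_pos, ref_dose, eval_pos, eval_dose,
                    dose_crit=0.03, dist_crit=3.0):
    """Fraction of reference points with gamma <= 1, using a global dose
    criterion (fraction of the reference maximum) and a distance-to-agreement
    criterion in the same units as the positions (mm)."""
    ref_pos, ref_dose = np.asarray(ref_pos, float), np.asarray(ref_dose, float)
    eval_pos, eval_dose = np.asarray(eval_pos, float), np.asarray(eval_dose, float)
    dose_tol = dose_crit * ref_dose.max()
    passed = 0
    for xr, dr in zip(ref_pos, ref_dose):
        # gamma^2 combines normalized dose difference and spatial distance
        gamma2 = ((eval_dose - dr) / dose_tol) ** 2 \
               + ((eval_pos - xr) / dist_crit) ** 2
        if gamma2.min() <= 1.0:
            passed += 1
    return passed / len(ref_pos)

x = np.linspace(0.0, 50.0, 101)                 # positions in mm
planned = np.exp(-((x - 25.0) / 10.0) ** 2)     # toy planned dose profile
measured = 1.02 * planned                       # uniform 2% overdose
rate = gamma_pass_rate(x, measured, x, planned)
```

A uniform 2% dose error sits inside the 3% global tolerance, so every point passes; larger or more localized discrepancies drive the pass rate below the >90% level quoted in the abstract.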