Marcus, Ryan C. [Los Alamos National Laboratory]
2012-07-25T23:59:59.000Z
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo, E-mail: rfantoni@ts.infn.it [Dipartimento di Scienze Molecolari e Nanosistemi, Università Ca’ Foscari Venezia, Calle Larga S. Marta DD2137, I-30123 Venezia (Italy)]; Moroni, Saverio, E-mail: moroni@democritos.it [DEMOCRITOS National Simulation Center, Istituto Officina dei Materiali del CNR and SISSA Scuola Internazionale Superiore di Studi Avanzati, Via Bonomea 265, I-34136 Trieste (Italy)]
2014-09-21T23:59:59.000Z
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of {sup 4}He in two dimensions.
Is Monte Carlo embarrassingly parallel?
Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)]
2012-07-01T23:59:59.000Z
Monte Carlo is often stated to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
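The rendezvous bottleneck can be illustrated with a toy cost model (a sketch with invented constants, not the paper's analysis): per-cycle particle tracking divides among processors while the synchronization cost of collecting the fission bank grows with the processor count, so speedup peaks and then falls.

```python
# Toy Amdahl-style model of parallel Monte Carlo criticality with a
# per-cycle rendezvous (synchronization) cost. All constants below are
# illustrative placeholders, not measurements from the paper.

def cycle_time(n_proc, histories=1e6, t_hist=1e-6, t_sync=0.005):
    """Wall time per cycle: tracking work is split over processors, but
    the rendezvous cost of gathering/broadcasting the fission source
    grows with the number of processors."""
    tracking = histories * t_hist / n_proc
    rendezvous = t_sync * n_proc
    return tracking + rendezvous

serial = cycle_time(1)
speedups = {p: serial / cycle_time(p) for p in (1, 2, 4, 8, 16, 32, 64, 128)}
best = max(speedups, key=speedups.get)
for p, s in speedups.items():
    print(f"{p:4d} processors: speedup {s:6.2f}")
```

With these placeholder costs the speedup is best at an intermediate processor count and degrades beyond it, mirroring the behavior the abstract describes.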
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Zimmerman, G.B.
1997-06-24T23:59:59.000Z
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Yeo, Sang Chul
Ammonia (NH[subscript 3]) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers ...
Monte Carlo Methods in Quantum Field Theory
I. Montvay
2007-05-30T23:59:59.000Z
In these lecture notes some applications of Monte Carlo integration methods in Quantum Field Theory - in particular in Quantum Chromodynamics - are introduced and discussed.
Monte Carlo Simulations
Bachmann, Michael
... systems may gain energy from the environment (generally called "thermal fluctuations") or lose energy by friction effects (dissipation). ... Contents: 3. Reweighting methods; 3.1 Single-histogram reweighting; 4. Generalized-ensemble Monte Carlo methods; 4.1 Replica-exchange Monte Carlo method (parallel tempering) ...
Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator...
Office of Scientific and Technical Information (OSTI)
Journal Article: Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator Physics.
Monte Carlo simulation in systems biology
Schellenberger, Jan
2010-01-01T23:59:59.000Z
... the history of Monte Carlo sampling in systems biology ... simulation tools: the Systems Biology Workbench and BioSpice ... Cellular and Molecular Biology, ASM Press, Washington ...
THE BEGINNING of the MONTE CARLO METHOD
... For a whole host of reasons, he had become seriously interested in the thermonuclear problem ... a preliminary computational model of a thermonuclear reaction for the ENIAC. He felt he could convince ...
Multiple quadrature by Monte Carlo techniques
Voss, John Dietrich
1966-01-01T23:59:59.000Z
of a multiple integral ordinarily hopeless to attempt by classical methods." In this paper the Monte Carlo method of numerical quadrature is used to integrate some functions that are extremely difficult and tedious to integrate by any other known ... and the table of known values can be extended. The method developed here may also be used to evaluate the distribution at any desired values of the parameters. CHAPTER II. THEORETICAL CONSIDERATIONS. Hammersley has said: "Every Monte Carlo computation ...
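The basic estimator referred to here can be sketched in Python (an illustration of plain Monte Carlo quadrature, not code from the thesis): the integral over the unit hypercube is estimated by the sample mean of the integrand at uniform random points.

```python
import random

def mc_integrate(f, dim, n_samples, seed=1):
    """Plain Monte Carlo quadrature of f over the unit hypercube [0,1]^dim.
    The estimator is the sample mean of f at uniform random points; its
    statistical error shrinks like 1/sqrt(n) regardless of dimension,
    which is what makes the method attractive for multiple integrals."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        point = [rng.random() for _ in range(dim)]
        total += f(point)
    return total / n_samples

# Example: integral of x*y*z over the unit cube; exact value is 1/8.
estimate = mc_integrate(lambda p: p[0] * p[1] * p[2], dim=3, n_samples=100_000)
print(estimate)
```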
Fractured reservoir evaluation using Monte Carlo techniques
Sears, G.F.; Phillips, N.V.
1987-01-01T23:59:59.000Z
Pro forma cash-flow analysis of petroleum ventures usually is considered as a deterministic model. In the last 10 years, Monte Carlo analysis has allowed the introduction of probability distributions of input variables in place of single-valued functions. Reserve determination and rate scheduling in these current Monte Carlo techniques have relied on the volumetric formula, which works well in nonfractured reservoirs. Recent massive drilling in fractured reservoirs has rendered this approach unusable. This paper develops a variation of the Arps rate-cumulative equation as a basic model for the determination of the distribution of original reserves and the decline rates. Continuation of the Monte Carlo technique into net present value analysis and internal rate of return (IRR) is also developed.
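The reserve-distribution step can be sketched as follows (a toy reconstruction, not the paper's model: it uses the exponential-decline Arps rate-cumulative relation Np = (qi − q_ec)/D, and the input distributions, economic-limit rate, and units are hypothetical placeholders).

```python
import random

# Monte Carlo over an exponential-decline Arps rate-cumulative relation
# to build a distribution of recoverable reserves. All inputs below are
# illustrative assumptions, not data from the paper.

def sample_reserves(rng):
    qi = rng.lognormvariate(5.0, 0.3)    # initial rate, bbl/day (assumed)
    decline = rng.uniform(0.10, 0.40) / 365.0  # nominal decline, 1/day (assumed)
    q_ec = 10.0                          # economic-limit rate, bbl/day (assumed)
    return max(qi - q_ec, 0.0) / decline # Arps exponential: Np = (qi - q)/D

rng = random.Random(42)
reserves = sorted(sample_reserves(rng) for _ in range(10_000))
p10, p50, p90 = (reserves[int(f * len(reserves))] for f in (0.10, 0.50, 0.90))
print(f"P10 {p10:,.0f}  P50 {p50:,.0f}  P90 {p90:,.0f} bbl")
```

The same sampled declines could be carried forward into yearly cash flows for the net-present-value step the abstract mentions.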
Quantum Mechanical Single Molecule Partition Function from Path Integral Monte Carlo Simulations
Chempath, Shaji; Bell, Alexis T.; Predescu, Cristian
2008-01-01T23:59:59.000Z
... calculated from path integral Monte Carlo (PIMC) and harmonic ...
Monte Carlo Tools for Jet Quenching
Korinna Zapp
2011-09-07T23:59:59.000Z
A thorough understanding of jet quenching on the basis of multi-particle final states and jet observables requires new theoretical tools. This talk summarises the status and prospects of the theoretical description of jet quenching in terms of Monte Carlo generators.
Monte Carlo event reconstruction implemented with artificial neural networks
Tolley, Emma Elizabeth
2011-01-01T23:59:59.000Z
I implemented event reconstruction of a Monte Carlo simulation using neural networks. The OLYMPUS Collaboration is using a Monte Carlo simulation of the OLYMPUS particle detector to evaluate systematics and reconstruct ...
John von Neumann Institute for Computing Monte Carlo Protein Folding
Hsu, Hsiao-Ping
John von Neumann Institute for Computing. Monte Carlo Protein Folding: Simulations of Met-Enkephalin with Solvent-Accessible Area (http://www.fz-juelich.de/nic-series/volume20). ... difficulties in applying Monte Carlo methods to protein folding. The solvent-accessible area method, a popular ...
Deterministic Simulation for Risk Management Quasi-Monte Carlo beats
Papageorgiou, Anargyros
Deterministic Simulation for Risk Management: Quasi-Monte Carlo beats Monte Carlo for Value ... are widely used in pricing and risk management of complex financial instruments. Recently, quasi-Monte Carlo ... In this paper we address the application of these deterministic methods to risk management. ...
Random number stride in Monte Carlo calculations
Hendricks, J.S.
1990-01-01T23:59:59.000Z
Monte Carlo radiation transport codes use a sequence of pseudorandom numbers to sample from probability distributions. A common practice is to start each source particle a predetermined number of random numbers up the pseudorandom number sequence. The number of random numbers skipped between source particles is called the random number stride, S. Consequently, the jth source particle always starts with the (j·S)th random number, providing "correlated sampling" between similar calculations. A new machine-portable random number generator has been written for the Monte Carlo radiation transport code MCNP, giving users control of the random number stride. First the new MCNP random number generator algorithm is described, and then the effects of varying the stride are presented. 2 refs., 1 fig.
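The stride idea can be sketched with a simple linear congruential generator (hypothetical parameters, not MCNP's actual generator): particle j starts at position j·S in the sequence, reached by a jump-ahead formula in O(log(j·S)) work rather than by stepping.

```python
# Sketch of the random-number stride with a linear congruential generator.
# The parameters below are placeholders, not MCNP's generator.

M = 2**48            # modulus (placeholder)
A = 25214903917      # multiplier (placeholder)
C = 11               # increment (placeholder)

def lcg_step(x):
    return (A * x + C) % M

def lcg_jump(x, k):
    """Jump k steps: x_{n+k} = A^k x_n + C (A^k - 1)/(A - 1) mod M,
    evaluated with modular exponentiation."""
    ak = pow(A, k, (A - 1) * M)   # A^k mod (A-1)*M makes the division exact
    mult = ak % M
    inc = (ak - 1) // (A - 1) * C % M
    return (mult * x + inc) % M

seed, stride = 12345, 152917          # stride S between source particles
x_jump = lcg_jump(seed, 3 * stride)   # particle j=3 starts at 3*S
x_step = seed
for _ in range(3 * stride):           # brute-force check by stepping
    x_step = lcg_step(x_step)
print(x_jump == x_step)
```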
Status of Monte-Carlo Event Generators
Hoeche, Stefan; /SLAC
2011-08-11T23:59:59.000Z
Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state partons and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E., E-mail: a.turrell09@imperial.ac.uk; Sherlock, M.; Rose, S.J.
2013-09-15T23:59:59.000Z
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
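The initialisation step alone can be sketched as follows (a hedged illustration, not the paper's algorithm: particle energies are drawn from a Fermi-Dirac distribution by rejection sampling, in units where the chemical potential is 1 and with an invented temperature).

```python
import math, random

# Sample particle energies from a Fermi-Dirac distribution by rejection.
# Units where the chemical potential MU = 1; T = 0.05 gives a strongly
# degenerate gas. All parameter values are illustrative assumptions.

MU, T, E_MAX = 1.0, 0.05, 3.0

def fd_density(e):
    """Unnormalised energy density: density of states sqrt(E) times the
    Fermi-Dirac occupation 1/(exp((E - mu)/T) + 1)."""
    return math.sqrt(e) / (math.exp((e - MU) / T) + 1.0)

# Grid-estimated envelope height for the flat rejection envelope.
f_max = 1.05 * max(fd_density(0.001 * i) for i in range(1, 3001))

def sample_energy(rng):
    while True:                       # plain rejection against a flat envelope
        e = rng.uniform(0.0, E_MAX)
        if rng.random() * f_max <= fd_density(e):
            return e

rng = random.Random(7)
energies = [sample_energy(rng) for _ in range(20_000)]
mean_e = sum(energies) / len(energies)
print(f"mean energy / mu = {mean_e:.3f}")   # near 3/5 when T << mu
```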
Monte Carlo errors with less errors
Ulli Wolff
2006-11-29T23:59:59.000Z
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable to benchmark efficiencies of simulation algorithms with regard to specific observables of interest. A Matlab code is offered for download that implements the method. It can also combine independent runs (replica) allowing to judge their consistency.
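The summation of autocorrelation functions can be sketched as follows (a simplified fixed-window version; the paper's automatic windowing procedure and its Matlab implementation are more refined). A synthetic AR(1) chain is used because its exact integrated autocorrelation time, 0.5(1 + rho)/(1 − rho), is known.

```python
import random

# Windowed estimate of the integrated autocorrelation time
# tau_int = 1/2 + sum_{t=1..W} C(t)/C(0), demonstrated on an AR(1) chain.

def tau_int(series, window):
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    c0 = sum(d * d for d in dev) / n            # variance C(0)
    tau = 0.5
    for t in range(1, window + 1):              # sum normalised C(t)/C(0)
        ct = sum(dev[i] * dev[i + t] for i in range(n - t)) / (n - t)
        tau += ct / c0
    return tau

rng = random.Random(3)
rho, x, chain = 0.8, 0.0, []
for _ in range(100_000):
    x = rho * x + rng.gauss(0.0, 1.0)
    chain.append(x)

est = tau_int(chain, window=40)
print(f"tau_int ~ {est:.2f} (exact 4.5 for rho=0.8)")
```

The factor 2·tau_int then inflates the naive error bar of the chain mean, which is the quantity the abstract proposes to benchmark algorithms with.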
Marcus, Ryan C. [Los Alamos National Laboratory]
2012-07-24T23:59:59.000Z
Overview of this presentation: (1) exascale computing - different technologies, getting there; (2) the high-performance proof of concept MCMini - features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility of GPGPU-based Monte Carlo methods - it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.
Monte Carlo Methods (Métodos de Monte Carlo), Paulo Roberto de Carvalho Júnior
(Translated from Portuguese, encoding artifacts repaired:) ... Monte Carlo Methods, Paulo Roberto de Carvalho Júnior. Example: calculation of ... Equation of the circle: x² + y² = r²; x² + y² = 1 ... Algorithm: calculation ...
Kinetic lattice Monte Carlo simulations of interdiffusion in...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Abstract: Point-defect-mediated diffusion processes are investigated in strained SiGe alloys using the kinetic lattice Monte Carlo (KLMC) simulation technique. The KLMC...
Monte Carlo Simulations of the Corrosion of Aluminoborosilicate...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Monte Carlo Simulations of the Corrosion of Aluminoborosilicate Glasses. Abstract: Aluminum is one of the most common components included in nuclear waste glasses. Therefore,...
Quantum Monte Carlo methods for nuclear physics
J. Carlson; S. Gandolfi; F. Pederiva; Steven C. Pieper; R. Schiavilla; K. E. Schmidt; R. B. Wiringa
2015-04-29T23:59:59.000Z
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Carlson, Joseph A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gandolfi, Stefano [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pederiva, Francesco [Univ. of Trento (Italy); Pieper, Steven C. [Argonne National Lab. (ANL), Argonne, IL (United States); Schiavilla, Rocco [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Old Dominion Univ., Norfolk, VA (United States); Schmidt, K. E, [Arizona State Univ., Tempe, AZ (United States); Wiringa, Robert B. [Argonne National Lab. (ANL), Argonne, IL (United States)
2012-01-01T23:59:59.000Z
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.
2014-10-19T23:59:59.000Z
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Metallic lithium by quantum Monte Carlo
Sugiyama, G.; Zerah, G.; Alder, B.J.
1986-12-01T23:59:59.000Z
Lithium was chosen as the simplest known metal for the first application of quantum Monte Carlo methods in order to evaluate the accuracy of conventional one-electron band theories. Lithium has been extensively studied using such techniques. Band theory calculations have certain limitations in general and specifically in their application to lithium. Results depend on such factors as charge shape approximations (muffin tins), pseudopotentials (a special problem for lithium, where the lack of p core states requires a strong pseudopotential), and the form and parameters chosen for the exchange potential. The calculations are all one-electron methods in which the correlation effects are included in an ad hoc manner. This approximation may be particularly poor in the high compression regime, where the core states become delocalized. Furthermore, band theory provides only self-consistent results rather than strict limits on the energies. The quantum Monte Carlo method is a totally different technique using a many-body rather than a mean field approach which yields an upper bound on the energies. 18 refs., 4 figs., 1 tab.
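The upper-bound property mentioned at the end can be demonstrated on the simplest possible system (a sketch unrelated to the lithium calculation itself): variational Monte Carlo for a 1D harmonic oscillator, hbar = m = omega = 1, exact ground-state energy 0.5. A Gaussian trial function exp(-alpha x^2) with alpha != 0.5 must give an energy above 0.5.

```python
import math, random

ALPHA = 0.4   # deliberately non-optimal trial parameter (exact is 0.5)

def local_energy(x):
    # E_L = -psi''/(2 psi) + x^2/2 for psi = exp(-alpha x^2)
    return ALPHA + (0.5 - 2.0 * ALPHA ** 2) * x * x

def vmc_energy(n_steps=50_000, step=1.0, seed=11):
    """Metropolis sampling of |psi|^2 = exp(-2 alpha x^2); the average
    local energy is a variational upper bound on the ground state."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        trial = x + rng.uniform(-step, step)
        if rng.random() < math.exp(-2.0 * ALPHA * (trial * trial - x * x)):
            x = trial
        total += local_energy(x)
    return total / n_steps

energy = vmc_energy()
print(f"variational energy {energy:.3f} (exact ground state 0.5)")
```

The analytic variational energy for alpha = 0.4 is alpha/2 + 1/(8 alpha) = 0.5125, strictly above the exact 0.5, in line with the abstract's point that quantum Monte Carlo yields an upper bound.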
Monte Carlo Evaluation of Resampling-Based Hypothesis Tests
Boos, Dennis
... of rejections. At each alternative this Monte Carlo estimate will be unbiased for the true power ... the indicator function I(A), where I(A) = 1 if A is true and 0 otherwise. The connection to measurement error methods ... 1998. Abstract: Monte Carlo estimation of the power of tests that require resampling can be very com...
CERN-TH.6275/91 Monte Carlo Event Generation
Sjöstrand, Torbjörn
CERN-TH.6275/91. Monte Carlo Event Generation for LHC. T. Sjöstrand, CERN, Geneva. Abstract: The necessity of event generators for LHC physics studies is illustrated, and the Monte Carlo approach is outlined. A survey is presented of existing event generators, followed by a more detailed study ...
RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS
Radiative Heat Transfer with Quasi-Monte Carlo Methods. A. Kersch, W. Morokoff, A. Schuster (Siemens). ... the application of quasi-Monte Carlo to this problem. 1.1 Radiative Heat Transfer Reactors: In the manufacturing ... among the problems which can be solved by such a simulation is high-accuracy modeling of the radiative heat transfer ...
Monte Carlo techniques applied to PERT networks
McGowan, Lawrence Lee
1964-01-01T23:59:59.000Z
distribution is given by f(t; A, B, α, γ) = (t − A)^α (B − t)^γ / [B(α + 1, γ + 1)(B − A)^(α + γ + 1)] for A ≤ t ≤ B, and 0 elsewhere. The mean is given by A + (α + 1)(B − A)/(α + γ + 2), the variance by (α + 1)(γ + 1)(B − A)² / [(α + γ + 2)²(α + γ + 3)], and the mode by A + α(B − A)/(α + γ). The parameters ... Monte Carlo Techniques Applied to PERT Networks. A thesis by Lawrence Lee McGowan, August 1964.
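A Monte Carlo treatment of a PERT network can be sketched as follows (a toy reconstruction, not the thesis's code: each activity duration is drawn from a beta distribution scaled to its [A, B] range, and the completion time is the longest path; the network and the shape parameters are invented).

```python
import random

# Monte Carlo simulation of a tiny two-path PERT network. Activity
# ranges [A, B] and beta shape parameters are hypothetical.

ACTIVITIES = {            # name: (A, B, shape1, shape2)
    "design":  (2.0, 8.0, 2.0, 3.0),
    "build":   (4.0, 12.0, 2.0, 2.0),
    "procure": (5.0, 15.0, 3.0, 2.0),
}
PATHS = [("design", "build"), ("procure",)]   # parallel activity chains

def sample_completion(rng):
    # Draw each duration from a beta on [0,1], scale to [A, B],
    # then take the longest path through the network.
    d = {name: a + (b - a) * rng.betavariate(s1, s2)
         for name, (a, b, s1, s2) in ACTIVITIES.items()}
    return max(sum(d[act] for act in path) for path in PATHS)

rng = random.Random(99)
times = [sample_completion(rng) for _ in range(20_000)]
mean_t = sum(times) / len(times)
print(f"mean completion {mean_t:.2f} (bounds 6 to 20)")
```

Repeating this many times yields the full distribution of completion times, rather than the single deterministic critical-path estimate.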
Exploring theory space with Monte Carlo reweighting
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun
2014-10-13T23:59:59.000Z
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. In particular, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
Correlations in the Monte Carlo Glauber model
Jean-Paul Blaizot; Wojciech Broniowski; Jean-Yves Ollitrault
2014-09-12T23:59:59.000Z
Event-by-event fluctuations of observables are often modeled using the Monte Carlo Glauber model, in which the energy is initially deposited in sources associated with wounded nucleons. In this paper, we analyze in detail the correlations between these sources in proton-nucleus and nucleus-nucleus collisions. There are correlations arising from nucleon-nucleon correlations within each nucleus, and correlations due to the collision mechanism, which we dub twin correlations. We investigate this new phenomenon in detail. At the RHIC and LHC energies, correlations are found to have modest effects on size and eccentricity fluctuations, such that the Glauber model produces to a good approximation a collection of independent sources.
Parametric Learning and Monte Carlo Optimization
Wolpert, David H
2007-01-01T23:59:59.000Z
This paper uncovers and explores the close relationship between Monte Carlo Optimization of a parametrized integral (MCO), Parametric machine-Learning (PL), and `blackbox' or `oracle'-based optimization (BO). We make four contributions. First, we prove that MCO is mathematically identical to a broad class of PL problems. This identity potentially provides a new application domain for all broadly applicable PL techniques: MCO. Second, we introduce immediate sampling, a new version of the Probability Collectives (PC) algorithm for blackbox optimization. Immediate sampling transforms the original BO problem into an MCO problem. Accordingly, by combining these first two contributions, we can apply all PL techniques to BO. In our third contribution we validate this way of improving BO by demonstrating that cross-validation and bagging improve immediate sampling. Finally, conventional MC and MCO procedures ignore the relationship between the sample point locations and the associated values of the integrand; only th...
Exploring theory space with Monte Carlo reweighting
Gainer, James S. [Univ. of Florida, Gainesville, FL (United States); Lykken, Joseph [Fermi National Accelerator Laboratory, Batavia, IL (United States); Matchev, Konstantin T. [Univ. of Florida, Gainesville, FL (United States); Mrenna, Stephen [Fermi National Accelerator Laboratory, Batavia, IL (United States); Park, Myeonghun [The Univ. of Tokyo, Kashiwa (Japan)
2014-10-01T23:59:59.000Z
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. In particular, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Monte Carlo Methods for Uncertainty Quantification. Mike Giles, Mathematical Institute, University of Oxford. ERCOFTAC course on Mathematical Methods and Tools in Uncertainty Management and Quantification. Topics: introduction and Monte Carlo basics; some model applications; random number generation; Monte Carlo estimation.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01T23:59:59.000Z
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
Sequential Monte Carlo Methods for Protein Folding
Peter Grassberger
2004-08-26T23:59:59.000Z
We describe a class of growth algorithms for finding low energy states of heteropolymers. These polymers form toy models for proteins, and the hope is that similar methods will ultimately be useful for finding native states of real proteins from heuristic or a priori determined force fields. These algorithms share with standard Markov chain Monte Carlo methods the property that they generate Gibbs-Boltzmann distributions, but they are not based on the strategy that this distribution is obtained as the stationary state of a suitably constructed Markov chain. Rather, they are based on growing the polymer by successively adding individual particles, guiding the growth towards configurations with lower energies, and using "population control" to eliminate bad configurations and increase the number of good ones. This is not done via a breadth-first implementation as in genetic algorithms, but depth-first via recursive backtracking. As seen from various benchmark tests, the resulting algorithms are extremely efficient for lattice models, and are still competitive with other methods for simple off-lattice models.
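The growth idea can be sketched in its simplest form (a minimal Rosenbluth-style sketch, much simpler than the article's depth-first algorithms with population control): grow a 2D lattice self-avoiding walk one monomer at a time, choosing among the empty neighbours and accumulating a weight that corrects for the guided growth.

```python
import random

# Rosenbluth-style chain growth for a 2D lattice self-avoiding walk.
# Each grown chain carries the weight prod_i k_i, where k_i is the
# number of free neighbouring sites at step i; trapped chains get
# weight zero. (No population control here, unlike the article.)

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def grow_chain(n_monomers, rng):
    """Return (walk, weight); weight is 0.0 if the walk got trapped."""
    walk = [(0, 0)]
    occupied = {(0, 0)}
    weight = 1.0
    for _ in range(n_monomers - 1):
        x, y = walk[-1]
        free = [(x + dx, y + dy) for dx, dy in MOVES
                if (x + dx, y + dy) not in occupied]
        if not free:
            return walk, 0.0          # trapped: dead configuration
        weight *= len(free)           # Rosenbluth weight correction
        site = rng.choice(free)
        walk.append(site)
        occupied.add(site)
    return walk, weight

rng = random.Random(5)
chains = [grow_chain(20, rng) for _ in range(2_000)]
alive = sum(1 for _, w in chains if w > 0)
print(f"{alive}/2000 chains grew to full length")
```

Weighted averages over the surviving chains then estimate Boltzmann expectations; the article's population control replaces the weight-zero losses by cloning high-weight chains during growth.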
Variance Reduction Techniques for Implicit Monte Carlo Simulations
Landman, Jacob Taylor
2013-09-19T23:59:59.000Z
The Implicit Monte Carlo (IMC) method is widely used for simulating thermal radiative transfer and solving the radiation transport equation. During an IMC run a grid network is constructed and particles are sourced into the problem to simulate...
An Analysis Tool for Flight Dynamics Monte Carlo Simulations
Restrepo, Carolina 1982-
2011-05-20T23:59:59.000Z
... and analysis work to understand vehicle operating limits and identify circumstances that lead to mission failure. A Monte Carlo simulation approach that varies a wide range of physical parameters is typically used to generate thousands of test cases...
Shift: A Massively Parallel Monte Carlo Radiation Transport Package
Pandya, Tara M [ORNL; Johnson, Seth R [ORNL; Davidson, Gregory G [ORNL; Evans, Thomas M [ORNL; Hamilton, Steven P [ORNL
2015-01-01T23:59:59.000Z
This paper discusses the massively parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
Pevey, Ronald E.
2005-09-15T23:59:59.000Z
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
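The acceptance test discussed above, comparing a calculated k-effective plus a number of standard deviations against the upper subcritical limit, reduces to a one-line check. The numbers below are purely illustrative, not from any benchmark:

```python
def is_acceptable(k_calc, sigma_calc, usl, n_sigmas=2.0):
    """Accept a configuration as subcritical only if the calculated
    k-effective plus n standard deviations stays at or below the USL."""
    return k_calc + n_sigmas * sigma_calc <= usl

# Illustrative numbers (hypothetical, not from any benchmark study):
usl = 0.95   # upper subcritical limit after bias and margin subtraction
print(is_acceptable(0.940, 0.002, usl))  # 0.944 <= 0.95 -> True
print(is_acceptable(0.948, 0.002, usl))  # 0.952 >  0.95 -> False
```

The paper's point is visible here: shrinking `sigma_calc` (more histories) loosens the effective margin, so the production-optimal and risk-optimal choices of the calculational standard deviation differ.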
Lattice Monte Carlo Simulations of Polymer Melts
Hsiao-Ping Hsu
2015-03-03T23:59:59.000Z
We use Monte Carlo simulations to study polymer melts consisting of fully flexible and moderately stiff chains in the bond fluctuation model at a volume fraction $0.5$. In order to reduce the local density fluctuations, we test a pre-packing process for the preparation of the initial configurations of the polymer melts, before the excluded volume interaction is switched on completely. This process leads to a significantly faster decrease of the number of overlapping monomers on the lattice. This is useful for simulating very large systems, where the statistical properties of the model with a marginally incomplete elimination of excluded volume violations are the same as those of the model with strictly excluded volume. We find that the internal mean square end-to-end distance for moderately stiff chains in a melt can be very well described by a freely rotating chain model with a precise estimate of the bond-bond orientational correlation between two successive bond vectors in equilibrium. The plot of the probability distributions of the reduced end-to-end distance of chains of different stiffness also shows that the data collapse is excellent and is described very well by the Gaussian distribution for ideal chains. However, while our results confirm the systematic deviations from Gaussian statistics in the chain structure factor $S_c(q)$ [minimum in the Kratky plot] found by Wittmer et al. [EPL 77, 56003 (2007)] for fully flexible chains in a melt, we show that for the available chain lengths these deviations are no longer visible when chain stiffness is included. The mean square bond length and the compressibility estimated from collective structure factors depend slightly on the stiffness of the chains.
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Lecture slides by Mike Giles, Mathematical Institute, University of Oxford, for an ERCOFTAC course on Mathematical Methods and Tools in Uncertainty Management and Quantification. Lecture 1, "Introduction and Monte Carlo basics," covers some model applications and random number generation.
The Monte Carlo method in quantum field theory
Colin Morningstar
2007-02-20T23:59:59.000Z
This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
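As a concrete companion to the topics above, here is a minimal Metropolis-Hastings sampler for a free scalar field on a 1D periodic lattice, a much simpler setting than the lectures' examples; the action, lattice size, and step width are illustrative choices:

```python
import math
import random

def local_action(phi, i, val, m2):
    """Action terms involving site i for a periodic 1D lattice scalar field:
    S_i = (1/2)[(phi_i - phi_{i-1})^2 + (phi_{i+1} - phi_i)^2] + (1/2) m^2 phi_i^2."""
    n = len(phi)
    left, right = phi[(i - 1) % n], phi[(i + 1) % n]
    return 0.5 * ((val - left) ** 2 + (right - val) ** 2) + 0.5 * m2 * val * val

def metropolis_sweep(phi, m2, delta, rng):
    """One Metropolis-Hastings sweep: propose a local change at each site
    and accept with probability min(1, exp(-dS))."""
    accepted = 0
    for i in range(len(phi)):
        old = phi[i]
        new = old + rng.uniform(-delta, delta)
        d_s = local_action(phi, i, new, m2) - local_action(phi, i, old, m2)
        if d_s <= 0 or rng.random() < math.exp(-d_s):
            phi[i] = new
            accepted += 1
    return accepted / len(phi)

rng = random.Random(0)
phi = [0.0] * 64                       # cold start
rates = [metropolis_sweep(phi, m2=1.0, delta=1.5, rng=rng) for _ in range(500)]
mean_phi2 = sum(x * x for x in phi) / len(phi)
print(round(sum(rates) / len(rates), 2), round(mean_phi2, 3))
```

Because only the two bonds touching site `i` and its potential term enter `d_s`, each update is O(1); this locality is exactly what the lectures' microcanonical refinements exploit to reduce autocorrelations.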
Quantum Monte Carlo Calculations of Light Nuclei Using Chiral Potentials
J. E. Lynn; J. Carlson; E. Epelbaum; S. Gandolfi; A. Gezerlis; A. Schwenk
2014-11-09T23:59:59.000Z
We present the first Green's function Monte Carlo calculations of light nuclei with nuclear interactions derived from chiral effective field theory up to next-to-next-to-leading order. Up to this order, the interactions can be constructed in a local form and are therefore amenable to quantum Monte Carlo calculations. We demonstrate a systematic improvement with each order for the binding energies of $A=3$ and $A=4$ systems. We also carry out the first few-body tests to study perturbative expansions of chiral potentials at different orders, finding that higher-order corrections are more perturbative for softer interactions. Our results confirm the necessity of a three-body force for correct reproduction of experimental binding energies and radii, and pave the way for studying few- and many-nucleon systems using quantum Monte Carlo methods with chiral interactions.
Monte Carlo sampling from the quantum state space. II
Yi-Lin Seah; Jiangwei Shang; Hui Khoon Ng; David John Nott; Berthold-Georg Englert
2015-04-27T23:59:59.000Z
High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the Markov-chain Monte Carlo method known as Hamiltonian Monte Carlo, or hybrid Monte Carlo, can be adapted to this context. It is applicable when an efficient parameterization of the state space is available. The resulting random walk is entirely inside the physical parameter space, and the Hamiltonian dynamics enable us to take big steps, thereby avoiding strong correlations between successive sample points while enjoying a high acceptance rate. We use examples of single and double qubit measurements for illustration.
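The adaptation described above rests on standard Hamiltonian (hybrid) Monte Carlo. A minimal sketch for a one-dimensional Gaussian target, not the paper's quantum-state-space parameterization, shows the leapfrog integration and the accept/reject step:

```python
import math
import random

def hmc_step(x, eps, n_steps, rng):
    """One HMC update for the target exp(-U(x)) with U(x) = x^2 / 2.
    Leapfrog integration lets the chain take big steps with high acceptance."""
    u = lambda q: 0.5 * q * q
    grad_u = lambda q: q
    p = rng.gauss(0.0, 1.0)                 # fresh Gaussian momentum
    q, h_old = x, u(x) + 0.5 * p * p
    p -= 0.5 * eps * grad_u(q)              # leapfrog: initial half-step
    for i in range(n_steps):
        q += eps * p                        # full position step
        if i < n_steps - 1:
            p -= eps * grad_u(q)            # full momentum step
    p -= 0.5 * eps * grad_u(q)              # final half-step
    h_new = u(q) + 0.5 * p * p
    if rng.random() < math.exp(min(0.0, h_old - h_new)):
        return q, True                      # accept
    return x, False                         # reject: stay put

rng = random.Random(42)
x, accepts, xs = 0.0, 0, []
for _ in range(2000):
    x, ok = hmc_step(x, eps=0.2, n_steps=10, rng=rng)
    accepts += ok
    xs.append(x)
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)
print(round(accepts / 2000, 2), round(var, 2))
```

The sample variance should sit near the target's true variance of 1, with an acceptance rate close to 1 because the leapfrog integrator nearly conserves the Hamiltonian `h`.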
Kinetic Monte Carlo approach to modeling dislocation mobility
Cai, Wei
Kinetic Monte Carlo simulates the evolution of a physical system through numerical sampling of (Markovian) stochastic processes, as in surface diffusion and growth processes [3], in which the energy barriers for the atomic mechanisms are known. While the traditional Monte Carlo (MC) method is applied to sample systems in or close to thermal equilibrium, kinetic Monte Carlo ...
A Monte Carlo Approach for Football Play Generation Kennard Laviers
Sukthankar, Gita Reese
A Monte Carlo Approach for Football Play Generation, Kennard Laviers, School of EECS, U. of Central Florida. ... adversarial games, and demonstrate its utility at generating American football plays for Rush Football 2008. In football, like in many other multi-agent games, the actions of all of the agents are not equally crucial.
Evolutionary Monte Carlo for protein folding simulations Faming Lianga)
Liang, Faming
Evolutionary Monte Carlo for protein folding simulations, Faming Liang, Department of Statistics. The method is applied to simulations of protein folding on simple lattice models, and to finding the ground state of a protein. ... structures in protein folding. The numerical results show that it is drastically superior to other methods.
Thermal Properties of Supercritical Carbon Dioxide by Monte Carlo Simulations
Lisal, Martin
Thermal Properties of Supercritical Carbon Dioxide by Monte Carlo Simulations, C.M. Colina et al. Thermal properties ... and speed of sound are computed for carbon dioxide (CO2) in the supercritical region, using the fluctuation method. Keywords: fluctuations; carbon dioxide; 2CLJQ; Joule-Thomson coefficient; speed of sound.
Path Integral Monte-Carlo Calculations for Relativistic Oscillator
Alexandr Ivanov; Oleg Pavlovsky
2014-11-11T23:59:59.000Z
The problem of the relativistic oscillator has been studied in the framework of the Path Integral Monte Carlo (PIMC) approach. Ultra-relativistic and non-relativistic limits are discussed. We show that the PIMC method can be effectively used for the investigation of relativistic systems.
Calculating coherent pair production with Monte Carlo methods
Bottcher, C.; Strayer, M.R.
1989-01-01T23:59:59.000Z
We discuss calculations of the coherent electromagnetic pair production in ultra-relativistic hadron collisions. This type of production, in lowest order, is obtained from three diagrams which contain two virtual photons. We discuss simple Monte Carlo methods for evaluating these classes of diagrams without recourse to involved algebraic reduction schemes. 19 refs., 11 figs.
Monte Carlo Simulations of Thermal Conductivity in Nanoporous Si Membranes
Monte Carlo Simulations of Thermal Conductivity in Nanoporous Si Membranes, Stefanie Wolf et al. The work studies phonon transport in Si nanomeshes. Phonons are treated semiclassically as particles of specific energy and velocity. The study examines the effect of (i) ... and (ii) the roughness amplitude of the pore surfaces on the thermal conductivity of the nanomeshes.
Nonlocal Monte Carlo algorithms for statistical physics applications
Janke, Wolfhard
Nonlocal Monte Carlo algorithms for statistical physics applications, Wolfhard Janke, Institut für ... Applications range from magnets to polymers or proteins, to mention only a few classical problems, as well as quantum statistical problems; results can be compared with different theoretical approaches such as field theory or series expansions, and, of course, with experiments.
Auxiliary field Monte Carlo for charged particles A. C. Maggs
Maggs, Anthony
(Accepted 20 November 2003; doi: 10.1063/1.1642587.) This article describes Monte Carlo algorithms for charged systems using auxiliary fields. Fast methods for calculating Coulomb interactions are of the greatest importance. ... This is the wrong statistical weight for particles interacting via Coulomb's law.
MCMs: Early History and The Basics Monte Carlo Methods
Mascagni, Michael
Lecture slides: Monte Carlo Methods: Early History and The Basics, Prof. Michael Mascagni (http://www.cs.fsu.edu/mascagni). Outline of the talk: early history; the stars align at Los Alamos; the technology; simulation.
Monte Carlo: in the beginning and some great expectations
Metropolis, N.
1985-01-01T23:59:59.000Z
The central theme will be on the historical setting and origins of the Monte Carlo Method. The scene was post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo Event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the Method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest perhaps is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences.
ENVIRONMENTAL MODELING: 1 APPLICATIONS: MONTE CARLO SENSITIVITY SIMULATIONS
Dimov, Ivan
Chapter 1, Applications: Monte Carlo sensitivity simulations to the problem of air pollution transport (Section 1.1: The Danish Eulerian Model), studies the transport of pollutants in a real-life scenario of air-pollution transport over Europe. First, the developed technique ...
Romano, Paul K. (Paul Kollath)
2013-01-01T23:59:59.000Z
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there ...
Types of random numbers and Monte Carlo Methods Pseudorandom number generation
Mascagni, Michael
Lecture slides (WE246): Random Number Generation: A Practitioner's Overview, Prof. Michael Mascagni. Topics: types of random numbers and Monte Carlo methods; pseudorandom number generation; quasirandom number generation; conclusions.
Guan, Fada 1982-
2012-04-27T23:59:59.000Z
Monte Carlo method has been successfully applied in simulating the particles transport problems. Most of the Monte Carlo simulation tools are static and they can only be used to perform the static simulations for the problems with fixed physics...
Hybrid Probabilistic Roadmap and Monte Carlo Methods for Biomolecule Conformational Changes
Han, Li
Keywords: conformation space, conformational changes, Monte Carlo, probabilistic roadmaps. In this work, we have developed a hybrid Probabilistic Roadmap and Monte Carlo planner for biomolecule conformational changes.
Energy Monte Carlo (EMCEE) | Open Energy Information
Molecular physics and chemistry applications of quantum Monte Carlo
Reynolds, P.J.; Barnett, R.N.; Hammond, B.L.; Lester, W.A. Jr.
1985-09-01T23:59:59.000Z
We discuss recent work with the diffusion quantum Monte Carlo (QMC) method in its application to molecular systems. The formal correspondence of the imaginary time Schroedinger equation to a diffusion equation allows one to calculate quantum mechanical expectation values as Monte Carlo averages over an ensemble of random walks. We report work on atomic and molecular total energies, as well as properties including electron affinities, binding energies, reaction barriers, and moments of the electronic charge distribution. A brief discussion is given on how standard QMC must be modified for calculating properties. Calculated energies and properties are presented for a number of molecular systems, including He, F, F{sup -}, H2, N, and N2. Recent progress in extending the basic QMC approach to the calculation of ''analytic'' (as opposed to finite-difference) derivatives of the energy is presented, together with an H2 potential-energy curve obtained using analytic derivatives. 39 refs., 1 fig., 2 tabs.
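The correspondence described above, the imaginary-time Schrödinger equation read as a diffusion equation, can be illustrated with a toy diffusion Monte Carlo run for the 1D harmonic oscillator, whose exact ground-state energy is 0.5 in natural units. This sketch omits importance sampling and uses an ad hoc population-control feedback:

```python
import math
import random

def dmc(n_walkers, n_steps, dt, rng):
    """Toy diffusion Monte Carlo for V(x) = x^2 / 2 (no importance sampling):
    diffuse walkers, then branch with weight exp(-(V - E_ref) dt)."""
    walkers = [0.0] * n_walkers
    e_ref = 0.0
    energies = []
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))          # diffusion move
            w = math.exp(-(0.5 * x * x - e_ref) * dt)   # branching weight
            for _ in range(int(w + rng.random())):      # stochastic rounding
                new.append(x)
        walkers = new or [0.0]
        # feedback on E_ref keeps the population near its target size;
        # at steady state E_ref fluctuates around the ground-state energy
        e_ref += 0.1 * math.log(n_walkers / len(walkers))
        if step > n_steps // 2:
            energies.append(e_ref)
    return sum(energies) / len(energies)

rng = random.Random(7)
e0 = dmc(n_walkers=400, n_steps=400, dt=0.05, rng=rng)
print(round(e0, 2))  # should land near the exact value 0.5
```

The feedback constant 0.1 and the time step `dt` are arbitrary small values for this demonstration; a production code would extrapolate `dt` to zero and use a guiding wave function.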
The hybrid Monte Carlo Algorithm and the chiral transition
Gupta, R.
1987-01-01T23:59:59.000Z
In this talk the author describes tests of the Hybrid Monte Carlo Algorithm for QCD done in collaboration with Greg Kilcup and Stephen Sharpe. We find that the acceptance in the global Metropolis step for staggered fermions can be tuned and kept large without having to make the step-size prohibitively small. We present results for the finite temperature transition on 4/sup 4/ and 4 x 6/sup 3/ lattices using this algorithm.
Testing trivializing maps in the Hybrid Monte Carlo algorithm
Georg P. Engel; Stefan Schaefer
2011-02-09T23:59:59.000Z
We test a recent proposal to use approximate trivializing maps in a field theory to speed up Hybrid Monte Carlo simulations. Simulating the CP^{N-1} model, we find a small improvement with the leading order transformation, which is however compensated by the additional computational overhead. The scaling of the algorithm towards the continuum is not changed. In particular, the effect of the topological modes on the autocorrelation times is studied.
FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation
Hackel, B M; Nielsen Jr., D E; Procassini, R J
2009-02-25T23:59:59.000Z
The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.
Properties of Reactive Oxygen Species by Quantum Monte Carlo
Andrea Zen; Bernhardt L. Trout; Leonardo Guidoni
2014-06-16T23:59:59.000Z
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of Chemistry, Biology and Atmospheric Science. Nevertheless, the electronic structure of such species is a challenge for ab-initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has been recently shown to effectively describe the static and dynamical correlation of different molecular systems. In particular we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as $N^3-N^4$, where $N$ is the number of electrons. This work is therefore opening the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles.
Global neutrino parameter estimation using Markov Chain Monte Carlo
Steen Hannestad
2007-10-10T23:59:59.000Z
We present a Markov Chain Monte Carlo global analysis of neutrino parameters using both cosmological and experimental data. Results are presented for the combination of all presently available data from oscillation experiments, cosmology, and neutrinoless double beta decay. In addition we explicitly study the interplay between cosmological, tritium decay and neutrinoless double beta decay data in determining the neutrino mass parameters. We furthermore discuss how the inference of non-neutrino cosmological parameters can benefit from future neutrino mass experiments such as the KATRIN tritium decay experiment or neutrinoless double beta decay experiments.
Markov Chain Monte Carlo Method without Detailed Balance
Hidemaro Suwa; Synge Todo
2010-10-13T23:59:59.000Z
We present a specific algorithm that generally satisfies the balance condition without imposing detailed balance in Markov chain Monte Carlo. In our algorithm, the average rejection rate is minimized, and even reduced to zero in many relevant cases. The absence of detailed balance also introduces a net stochastic flow in configuration space, which further boosts convergence. We demonstrate that the autocorrelation time of the Potts model becomes more than 6 times shorter than that of the conventional Metropolis algorithm. Based on the same concept, a bounce-free worm algorithm for generic quantum spin models is formulated as well.
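The abstract's central point, that global balance suffices without detailed balance, can be checked on a toy example (not the authors' algorithm): a three-state chain with a net probability flow around the cycle still converges to its uniform stationary distribution.

```python
import random

# A 3-state chain with a net clockwise probability flow: each state moves
# forward with probability 0.8, stays with 0.1, and moves back with 0.1.
# The transition matrix is doubly stochastic, so the uniform distribution
# satisfies global balance, yet detailed balance fails (0.8 forward vs 0.1
# backward), producing exactly the kind of net stochastic flow described above.
def step(state, rng):
    r = rng.random()
    if r < 0.8:
        return (state + 1) % 3
    if r < 0.9:
        return state
    return (state - 1) % 3

rng = random.Random(3)
counts = [0, 0, 0]
s = 0
for _ in range(30000):
    s = step(s, rng)
    counts[s] += 1
freqs = [c / 30000 for c in counts]
print([round(f, 2) for f in freqs])  # each frequency close to 1/3
```

Checking balance analytically: state `j` receives probability 0.8 from `j-1`, 0.1 from itself, and 0.1 from `j+1`, totaling 1.0 under the uniform distribution, while `pi_i P(i->j) != pi_j P(j->i)` for the forward/backward pair.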
Validation of Phonon Physics in the CDMS Detector Monte Carlo
McCarthy, K.A.; Leman, S.W.; Anderson, A.J.; /MIT; Brandt, D.; /SLAC; Brink, P.L.; Cabrera, B.; Cherry, M.; /Stanford U.; Do Couto E Silva, E.; /SLAC; Cushman, P.; /Minnesota U.; Doughty, T.; /UC, Berkeley; Figueroa-Feliciano, E.; /MIT; Kim, P.; /SLAC; Mirabolfathi, N.; /UC, Berkeley; Novak, L.; /Stanford U.; Partridge, R.; /SLAC; Pyle, M.; /Stanford U.; Reisetter, A.; /Minnesota U. /St. Olaf Coll.; Resch, R.; /SLAC; Sadoulet, B.; Serfass, B.; Sundqvist, K.M.; /UC, Berkeley /Stanford U.
2012-06-06T23:59:59.000Z
The SuperCDMS collaboration is a dark matter search effort aimed at detecting the scattering of WIMP dark matter from nuclei in cryogenic germanium targets. The CDMS Detector Monte Carlo (CDMS-DMC) is a simulation tool aimed at achieving a deeper understanding of the performance of the SuperCDMS detectors and aiding the dark matter search analysis. We present results from validation of the phonon physics described in the CDMS-DMC and outline work towards utilizing it in future WIMP search analyses.
Monte Carlo beam capture and charge breeding simulation
Kim, J.S.; Liu, C.; Edgell, D.H.; Pardo, R. [FAR-TECH, Inc., 10350 Science Center Drive, San Diego, California 92121 (United States); FAR-TECH, Inc., 10350 Science Center Drive, San Diego, California 92121 (United States) and University of Rochester, Rochester, New York (United States); Argonne National Laboratory, Argonne, Illinois (United States)
2006-03-15T23:59:59.000Z
A full six-dimensional (6D) phase space Monte Carlo beam capture and charge-breeding simulation code examines the capture of singly charged ion beams injected into an electron cyclotron resonance (ECR) charge breeder, from entry to exit. The code traces injected beam ions in an ECR ion source (ECRIS) plasma, including Coulomb collisions, ionization, and charge exchange. The background ECRIS plasma is modeled within the current framework of the generalized ECR ion source model. A simple sample case of an oxygen background plasma with an injected Ar{sup 1+} ion beam produces lower charge-breeding efficiencies than obtained experimentally. Possible reasons for the discrepancies are discussed.
Quantitative Monte Carlo-based holmium-166 SPECT reconstruction
Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de [Department of Radiology and Nuclear Medicine, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands); Viergever, Max A. [Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands)] [Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands)
2013-11-15T23:59:59.000Z
Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 ({sup 166}Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative {sup 166}Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum. Methods: A fast Monte Carlo (MC) simulator was developed for simulation of {sup 166}Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full {sup 166}Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (A{sup est}) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six {sup 166}Ho RE patients. Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC).
ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80% (SPECT-ppMC+DSW) to 76%–103% (SPECT-fMC). Furthermore, SPECT-fMC recovered whole-body activities were most accurate (A{sup est} = 1.06 × A − 5.90 MBq, R{sup 2} = 0.97) and SPECT-fMC tumor absorbed doses were significantly higher than with SPECT-DSW (p = 0.031) and SPECT-ppMC+DSW (p = 0.031). Conclusions: The quantitative accuracy of {sup 166}Ho SPECT is improved by Monte Carlo-based modeling of the image degrading factors. Consequently, the proposed reconstruction method enables accurate estimation of the radiation absorbed dose in clinical practice.
Monte Carlo Tools for charged Higgs boson production
K. Kovarik
2014-12-18T23:59:59.000Z
In this short review we discuss two implementations of the charged Higgs boson production process in association with a top quark in Monte Carlo event generators at next-to-leading order in QCD. We introduce the MC@NLO and the POWHEG method of matching next-to-leading order matrix elements with parton showers and compare both methods analyzing the charged Higgs boson production process in association with a top quark. We shortly discuss the case of a light charged Higgs boson where the associated charged Higgs production interferes with the charged Higgs production via t tbar-production and subsequent decay of the top quark.
Electron scattering in helium for Monte Carlo simulations
Khrabrov, Alexander V.; Kaganovich, Igor D. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)
2012-09-15T23:59:59.000Z
An analytical approximation for differential cross-section of electron scattering on helium atoms is introduced. It is intended for Monte Carlo simulations, which, instead of angular distributions based on experimental data (or on first-principle calculations), usually rely on approximations that are accurate yet numerically efficient. The approximation is based on the screened-Coulomb differential cross-section with energy-dependent screening. For helium, a two-pole approximation of the screening parameter is found to be highly accurate over a wide range of energies.
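A screened-Coulomb (screened Rutherford) angular distribution of the general form mentioned above can be sampled by inverting its cumulative distribution in closed form. The screening value below is a made-up placeholder, not the paper's two-pole fit:

```python
import random

def sample_cos_theta(eta, rng):
    """Inverse-CDF sample of the scattering-angle cosine mu for a
    screened-Coulomb differential cross section f(mu) ~ 1/(1 + 2*eta - mu)^2
    on mu in [-1, 1], with eta the (energy-dependent) screening parameter.
    Integrating and inverting the normalized CDF gives the closed form below."""
    r = rng.random()
    return 1.0 + 2.0 * eta - 2.0 * eta * (1.0 + eta) / (eta + r)

rng = random.Random(5)
eta = 0.01                      # hypothetical screening value, for illustration
mus = [sample_cos_theta(eta, rng) for _ in range(20000)]
print(round(min(mus), 3), round(max(mus), 3))   # all samples lie in [-1, 1]
forward = sum(m > 0.9 for m in mus) / len(mus)
print(round(forward, 2))        # small eta -> strongly forward-peaked
```

The endpoints check out analytically: `r = 0` gives `mu = -1` and `r = 1` gives `mu = +1`, so no rejection step is needed, which is what makes this family attractive for Monte Carlo collision routines.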
Monte Carlo tests of Orbital-Free Density Functional Theory
D. I. Palade
2014-12-12T23:59:59.000Z
The relationship between the exact kinetic energy density in a quantum system in the frame of Density Functional Theory and the semiclassical functional expression for the same quantity is investigated. The analysis is performed with Monte Carlo simulations of the Kohn-Sham potentials. We find that the semiclassical form represents the statistical expectation value of the quantum nature. Based on the numerical results, we propose an empirical correction to the existing functional and an associated method to improve the Orbital-Free results.
Joint International Conference on Supercomputing in Nuclear Applications and Monte Carlo 2013 (SNA + MC 2013). ... Cr alloys are investigated using the Density Functional Theory (DFT) formalism, in the form of constrained non-... temperature, represent the key unknown entities critical to the development of a viable fusion reactor design.
A Look at general cavity theory through a code incorporating Monte Carlo techniques
Weyland, Mark Duffy
1989-01-01T23:59:59.000Z
... material, the wall, being exponentially attenuated into the dosimeter, or the cavity. This assumption was investigated in this research using Monte Carlo techniques in a modern computer code, EGS4. Appropriate geometries were defined in the code and ... To relate the measured dose to that within the material, Monte Carlo techniques have been used to simulate the irradiation of various materials. The computer code EGS4 uses Monte Carlo techniques to simulate the randomness of radiation interactions ...
Thermoelectric transport perpendicular to thin-film heterostructures calculated using the Monte Carlo technique. The Monte Carlo technique is used to calculate electrical as well as thermoelectric transport properties ... A model bridging ballistic thermionic transport and fully diffusive thermoelectric transport is also described. DOI: 10...
Direct Monte Carlo simulation of chemical reaction systems: Dissociation and recombination
Anderson, James B.
Direct Monte Carlo simulations of a chemical reaction system with bimolecular and termolecular dissociation and recombination ... The method is found to be well suited for treating chemical reaction systems with nonequilibrium distributions, coupled gas ...
Four-quark energies in SU(2) lattice Monte Carlo using a tetrahedral geometry
A. M. Green; J. Lukkarinen; P. Pennanen; C. Michael; S. Furui
1994-12-05T23:59:59.000Z
This contribution -- a continuation of earlier work -- reports on recent developments in the calculation and understanding of 4-quark energies generated using lattice Monte Carlo techniques.
Monte Carlo simulation of quantum Zeno effect in the brain
Danko Georgiev
2014-12-11T23:59:59.000Z
Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
Monte Carlo model for electron degradation in methane
Bhardwaj, Anil
2015-01-01T23:59:59.000Z
We present a Monte Carlo model for degradation of 1-10,000 eV electrons in an atmosphere of methane. The electron impact cross sections for CH4 are compiled, and analytical representations of these cross sections are used as input to the model. Yield spectra, which provide information about the number of inelastic events that have taken place in each energy bin, are used to calculate the yield (or population) of various inelastic processes. The numerical yield spectra, obtained from the Monte Carlo simulations, are represented analytically, thus generating the Analytical Yield Spectra (AYS). AYS is employed to obtain the mean energy per ion pair and the efficiencies of various inelastic processes. The mean energy per ion pair for neutral CH4 is found to be 26 (27.8) eV at 10 (0.1) keV. Efficiency calculations showed that ionization is the dominant process at energies >50 eV, for which more than 50% of the incident electron energy is used. Above 25 eV, dissociation has an efficiency of 27%. Below 10 eV, vibrational e...
Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data
Zourari, K.; Peppa, V.; Papagiannis, P., E-mail: ppapagi@phys.uoa.gr [Medical Physics Laboratory, Medical School, University of Athens, 75 Mikras Asias, 11527 Athens (Greece); Ballester, Facundo [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain)] [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain); Siebert, Frank-André [Clinic of Radiotherapy, University Hospital of Schleswig-Holstein, Campus Kiel 24105 (Germany)] [Clinic of Radiotherapy, University Hospital of Schleswig-Holstein, Campus Kiel 24105 (Germany)
2014-04-15T23:59:59.000Z
Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20–1090 keV, through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thicknesses of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20–1090 keV, in 10 keV intervals). The three-parameter empirical model introduced by Archer et al. [“Diagnostic x-ray shielding design based on an empirical model of photon attenuation,” Health Phys. 44, 507–517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. [“Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities,” Med. Phys. 34, 1398–1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. A good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed.
Conclusions: The data of this work allow for the accurate calculation of structural shielding thickness, taking into account the spectral variation with shield thickness, and broad beam conditions, in a realistic geometry. The simplicity of calculations also obviates the need for the use of crude transmission data estimates such as the half and tenth value layer indices. Although this study was primarily designed for brachytherapy, results might also be useful for radiology and nuclear medicine facility design, provided broad beam conditions apply.
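The superposition of monoenergetic Archer-model curves described in this abstract can be sketched as follows. The three-parameter transmission function is the published Archer form; the two-line spectrum and the (alpha, beta, gamma) fit values below are made-up placeholders for illustration, not the parameters computed in the paper.

```python
import numpy as np

def archer_transmission(x, alpha, beta, gamma):
    """Archer et al. three-parameter model for broad-beam transmission
    through a shield of thickness x (units of 1/alpha)."""
    return ((1.0 + beta / alpha) * np.exp(alpha * gamma * x)
            - beta / alpha) ** (-1.0 / gamma)

def polyenergetic_transmission(x, spectrum, fits):
    """Superpose monoenergetic transmission curves, weighted by the
    relative intensity of each spectral line.

    spectrum: dict {energy_keV: relative_weight}
    fits:     dict {energy_keV: (alpha, beta, gamma)}  -- per-energy fit
    """
    total = sum(spectrum.values())
    return sum(w * archer_transmission(x, *fits[E])
               for E, w in spectrum.items()) / total

# Hypothetical two-line spectrum with made-up fit parameters:
spectrum = {360: 0.8, 660: 0.2}
fits = {360: (0.25, 0.15, 1.1), 660: (0.12, 0.08, 0.9)}
print(polyenergetic_transmission(0.0, spectrum, fits))  # 1.0 at zero thickness
```

At zero thickness every monoenergetic curve equals one, so the superposed curve does as well; for positive thickness the weighted curve decays, and the required shield thickness for a target transmission can be found by root-finding on this function.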
Schulze, Tim
An Energy Localization Principle and its Application to Fast Kinetic Monte Carlo Simulation of Heteroepitaxial Growth. Simulation of heteroepitaxial growth using kinetic Monte Carlo (KMC) is often based on rates determined by differences in elastic energy between two configurations...
MONTE CARLO SIMULATION METHOD By Ronald R. Charpentier and Timothy R. Klett
Laughlin, Robert B.
EMCEE and Emc2 are Monte Carlo simulation programs for assessing undiscovered conventional oil and gas. Chapter MC, Monte Carlo Simulation Method, by Ronald R. Charpentier and Timothy R. Klett, in U.S. Geological Survey World Petroleum Assessment 2000 -- Description and Results.
Monte Carlo methods for design and analysis of radiation detectors
Shultis, J. Kenneth
Keywords: Radiation detectors; Inverse problems; Detector design. An overview of Monte Carlo as a practical method for designing and analyzing radiation detectors is provided. The emphasis is on detectors...
BAYESIAN INFERENCE FOR MODELS OF TRANSCRIPTIONAL REGULATION USING MARKOV CHAIN MONTE CARLO SAMPLING
Opper, Manfred
Transcription of genes is controlled by proteins which can bind to particular base sequences of DNA. In this contribution we present a Markov chain Monte Carlo (MCMC) sampler which infers the TF activity based on a model...
Direct Monte Carlo simulation of chemical reaction systems: Simple bimolecular reactions
Anderson, James B.
This Monte Carlo method, originated by Bird, is well suited to predicting and understanding the behavior of gas phase chemical reaction systems. Extension to chemical reactions offers a powerful tool for treating reaction systems with nonthermal distributions...
A New Monte Carlo Simulation Method for Tolerance Analysis of Kinematically Constrained Assemblies
A generalized Monte Carlo simulation method is presented for tolerance analysis of mechanical assemblies with small kinematic adjustments. This is a new tool for assembly tolerance analysis based...
Path Integral Monte Carlo Calculation of the Deuterium Hugoniot B. Militzer and D. M. Ceperley
Militzer, Burkhard
Path Integral Monte Carlo Calculation of the Deuterium Hugoniot, B. Militzer and D. M. Ceperley, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (January 21, 2000). Restricted path integral Monte Carlo simulations have been used; we study finite-size effects and the dependence on the time step of the path integral. Further, we compare the results...
Continuous Contour Monte Carlo for Marginal Density Estimation With an Application to a
Liang, Faming
Keywords: Reversible jump Markov chain Monte Carlo; Stochastic approximation; Wang-Landau algorithm. Marginal densities have been estimated with a variety of approaches (Gelman and Meng 1998), including reversible jump MCMC (Green 1995), reverse logistic regression (Geyer 1994), and marginal likelihood (Chib 1995).
Kinetic Monte Carlo simulations of the response of carbon nanotubes to electron irradiation
Krasheninnikov, Arkady V.
Helsinki University of Technology, Finland (Dated: January 12, 2007). Irradiation is increasingly used nowadays to tailor the properties of nanotubes. As a full understanding of the response of nanotubes to irradiation is still lacking, we have implemented the kinetic Monte Carlo method with the algorithm of Bortz et al. ...
Population Monte Carlo algorithms Yukito Iba The Institute of Statistical Mathematics
Iba, Yukito
Population Monte Carlo algorithms, Yukito Iba, The Institute of Statistical Mathematics. Summary: We give a cross-disciplinary survey on "population" Monte Carlo algorithms. In these algorithms, a set of "walkers" or "particles" is used as a representation of a high-dimensional vector...
Hybrid Probabilistic RoadMap - Monte Carlo Motion Planning for Closed Chain Systems with Spherical Joints
Han, Li
In this paper we propose a hybrid Probabilistic RoadMap - Monte Carlo (PRM-MC) motion planner, which samples and connects a large number of robot configurations in order to build a roadmap that reflects the properties...
Monte Carlo Simulation of Dense Polymer Melts Using Event Chain Algorithms
Tobias Alexander Kampmann; Horst-Holger Boltz; Jan Kierfeld
2015-07-23T23:59:59.000Z
We propose an efficient Monte Carlo algorithm for the off-lattice simulation of dense hard sphere polymer melts using cluster moves, called event chains, which allow for a rejection-free treatment of the excluded volume. Event chains also allow for an efficient preparation of initial configurations in polymer melts. We parallelize the event chain Monte Carlo algorithm to further increase simulation speeds and suggest additional local topology-changing moves ("swap" moves) to accelerate equilibration. By comparison with other Monte Carlo and molecular dynamics simulations, we verify that the event chain algorithm reproduces the correct equilibrium behavior of polymer chains in the melt. By comparing intrapolymer diffusion time scales, we show that event chain Monte Carlo algorithms can achieve simulation speeds comparable to optimized molecular dynamics simulations. The event chain Monte Carlo algorithm exhibits Rouse dynamics on short time scales. In the absence of swap moves, we find reptation dynamics on intermediate time scales for long chains.
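As an illustration of the rejection-free event-chain idea (on contact, the remaining displacement is transferred to the blocking particle), here is a minimal one-dimensional analogue with hard rods on a ring. This is a sketch under simplified assumptions, not the authors' melt code; all names are illustrative.

```python
import random

def event_chain_move(x, sigma, L, start, ell):
    """One event-chain move for hard rods of length sigma on a ring of
    circumference L. Particle `start` is pushed to the right by a total
    displacement `ell`; on contact, the remaining displacement is passed
    (rejection-free) to the blocking neighbour. `x` holds positions in
    cyclic order."""
    n = len(x)
    cur = start
    while ell > 1e-15:
        nxt = (cur + 1) % n
        # free space ahead of cur; clamp tiny negative values from rounding
        gap = max((x[nxt] - x[cur]) % L - sigma, 0.0)
        step = min(ell, gap)
        x[cur] = (x[cur] + step) % L
        ell -= step
        cur = nxt                    # lifting: hand the chain to the blocker
    return x

random.seed(1)
L, sigma, n = 20.0, 1.0, 8
x = [i * (L / n) for i in range(n)]  # valid start: evenly spaced rods
for _ in range(1000):
    x = event_chain_move(x, sigma, L, random.randrange(n),
                         random.uniform(0.0, 3.0))
```

Because a chain stops exactly at contact and hands the leftover displacement on, no proposal is ever rejected; the free space between rods is only rearranged, never violated.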
SKIRT: the design of a suite of input models for Monte Carlo radiative transfer simulations
Baes, Maarten
2015-01-01T23:59:59.000Z
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can...
Single temperature for Monte Carlo optimization on complex landscapes
Tolkunov, Denis
2012-01-01T23:59:59.000Z
We propose a new strategy for Monte Carlo (MC) optimization on rugged multidimensional landscapes. The strategy is based on querying the statistical properties of the landscape in order to find the temperature at which the mean first passage time across the current region of the landscape is minimized. Thus, in contrast to other algorithms such as simulated annealing (SA), we explicitly match the temperature schedule to the statistics of landscape irregularities. In cases where these statistics are approximately the same over the entire landscape, or where non-local moves couple distant parts of the landscape, single-temperature MC will outperform any other MC algorithm with the same move set. We also find that in strongly anisotropic Coulomb spin glass and traveling salesman problems, the only relevant statistics (which we use to assign a single MC temperature) is that of irregularities in low-energy funnels. Our results may explain why protein folding in nature is efficient at room temperatures.
Monte Carlo Simulation Tool Installation and Operation Guide
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02T23:59:59.000Z
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document, and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
Improving multivariate Horner schemes with Monte Carlo tree search
J. Kuipers; J. A. M. Vermaseren; A. Plaat; H. J. van den Herik
2012-07-30T23:59:59.000Z
Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by a factor of up to two.
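The greedy "most-occurring variable first" scheme mentioned in the abstract can be sketched as follows. This is an illustrative evaluator, not the authors' implementation, and the MCTS-guided variable ordering is omitted; the point is only that factoring out the most frequent variable reproduces the polynomial's value.

```python
from collections import Counter

def eval_naive(terms, vals):
    """Evaluate a polynomial stored as [(coeff, {var: exponent})]."""
    total = 0
    for c, mono in terms:
        p = c
        for v, e in mono.items():
            p *= vals[v] ** e
        total += p
    return total

def eval_horner(terms, vals):
    """Greedy multivariate Horner: repeatedly factor one power of the
    most-occurring variable out of the terms that contain it."""
    if not terms:
        return 0
    counts = Counter(v for _, mono in terms for v in mono)
    if not counts:                       # only constant terms remain
        return sum(c for c, _ in terms)
    x = counts.most_common(1)[0][0]
    inner, rest = [], []
    for c, mono in terms:
        if x in mono:
            m = dict(mono)
            if m[x] == 1:
                del m[x]                 # one power of x factored out
            else:
                m[x] -= 1
            inner.append((c, m))
        else:
            rest.append((c, mono))
    return vals[x] * eval_horner(inner, vals) + eval_horner(rest, vals)

# p(x, y) = 3*x**2*y + 2*x*y + y**2 + 5
poly = [(3, {'x': 2, 'y': 1}), (2, {'x': 1, 'y': 1}), (1, {'y': 2}), (5, {})]
vals = {'x': 2, 'y': 3}
print(eval_naive(poly, vals), eval_horner(poly, vals))  # 62 62
```

Counting the multiplications performed by each evaluator on larger polynomials is the cost measure that an improved variable ordering (greedy or MCTS-chosen) reduces.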
The Quantum Energy Density: Improved Efficiency for Quantum Monte Carlo
Krogel, Jaron T; Kim, Jeongnim; Ceperley, David M
2013-01-01T23:59:59.000Z
We establish a physically meaningful representation of a quantum energy density for use in Quantum Monte Carlo calculations. The energy density operator, defined in terms of Hamiltonian components and density operators, returns the correct Hamiltonian when integrated over a volume containing a cluster of particles. This property is demonstrated for a helium-neon "gas," showing that atomic energies obtained from the energy density correspond to eigenvalues of isolated systems. The formation energies of defects or interfaces are typically calculated as total energy differences. Using a model of delta-doped silicon (where dopant atoms form a thin plane) we show how interfacial energies can be calculated more efficiently with the energy density, since the region of interest is small. We also demonstrate how the energy density correctly transitions to the bulk limit away from the interface where the correct energy is obtainable from a separate total energy calculation.
Strain in the mesoscale kinetic Monte Carlo model for sintering
Bjørk, R; Tikare, V; Olevsky, E; Pryds, N
2014-01-01T23:59:59.000Z
Shrinkage strains measured from microstructural simulations using the mesoscale kinetic Monte Carlo (kMC) model for solid state sintering are discussed. This model represents the microstructure using digitized discrete sites that are either grain or pore sites. The algorithm used to simulate densification by vacancy annihilation removes an isolated pore site at a grain boundary and collapses a column of sites extending from the vacancy to the surface of the sintering compact, through the center of mass of the nearest grain. Using this algorithm, the existing published kMC models are shown to produce anisotropic strains for homogeneous powder compacts with aspect ratios different from unity. It is shown that the line direction biases shrinkage strains in proportion to the compact dimension aspect ratios. A new algorithm that corrects this bias in strains is proposed; the direction for collapsing the column is determined by choosing a random sample face and subsequently a random point on that face as the end point for...
Peelle's pertinent puzzle using the Monte Carlo technique
Kawano, Toshihiko [Los Alamos National Laboratory; Talou, Patrick [Los Alamos National Laboratory; Burr, Thomas [Los Alamos National Laboratory; Pan, Feng [Los Alamos National Laboratory
2009-01-01T23:59:59.000Z
We try to understand the long-standing problem of the Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the assumed distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct, if the common error is additive, and if the error is proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
Lifting -- A Nonreversible Markov Chain Monte Carlo Algorithm
Vucelja, Marija
2015-01-01T23:59:59.000Z
Markov Chain Monte Carlo algorithms are invaluable numerical tools for exploring stationary properties of physical systems -- in particular when direct sampling is not feasible. They are widely used in many areas of physics and other sciences. Most common implementations are done with reversible Markov chains -- Markov chains that obey detailed balance. Reversible Markov chains are sufficient for the physical system to relax to equilibrium, but they are not necessary. Here we review several works that use "lifted" or nonreversible Markov chains, which violate detailed balance, yet still converge to the correct stationary distribution (they obey the global balance condition). In certain cases the acceleration is at most a square-root improvement over conventional reversible Markov chains. We introduce the problem in a way that makes it accessible to non-specialists. We illustrate the method on several representative examples (sampling on a ring, sampling on a torus, an Ising model on a complete graph...
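A minimal sketch of the lifting idea on the ring-sampling example: the state is augmented with a direction variable, moves along the current direction are accepted with the Metropolis probability, and a rejection flips the direction instead of staying put. Detailed balance is violated, yet the target (split evenly over the two directions) still satisfies global balance, which the script checks numerically. The target distribution below is an arbitrary illustration.

```python
import numpy as np

def lifted_ring_kernel(pi):
    """Transition matrix of a lifted (nonreversible) Metropolis walk on a
    ring with target pi. Lifted state (i, d): site i, direction d = +1/-1,
    indexed as i + n * (0 if d == +1 else 1)."""
    n = len(pi)
    P = np.zeros((2 * n, 2 * n))
    for i in range(n):
        for s, d in ((0, +1), (1, -1)):
            j = (i + d) % n
            a = min(1.0, pi[j] / pi[i])
            P[i + n * s, j + n * s] += a            # accepted move, keep d
            P[i + n * s, i + n * (1 - s)] += 1 - a  # rejected: flip d
    return P

pi = np.array([1.0, 2.0, 3.0, 2.0, 1.0, 4.0])
pi /= pi.sum()
P = lifted_ring_kernel(pi)
v = np.concatenate([pi, pi]) / 2.0   # target on the doubled state space
print(np.allclose(v @ P, v))         # global balance holds: True
```

Detailed balance fails because a (+1)-walker can never step backwards, so probability flows around the ring in a persistent current; that persistence is the source of the speed-up discussed in the abstract.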
The neutron instrument Monte Carlo library MCLIB: Recent developments
Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.; Thelliez, T.G.
1998-12-31T23:59:59.000Z
A brief review is given of the developments since the ICANS-XIII meeting made in the neutron instrument design codes using the Monte Carlo library MCLIB. Much of the effort has been to assure that the library and the executing code MC_RUN connect efficiently with the World Wide Web application MC-WEB as part of the Los Alamos Neutron Instrument Simulation Package (NISP). Since one of the most important features of MCLIB is its open structure and capability to incorporate any possible neutron transport or scattering algorithm, this document describes the current procedure that would be used by an outside user to add a feature to MCLIB. Details of the calling sequence of the core subroutine OPERATE are discussed, and questions of style are considered and additional guidelines given. Suggestions for standardization are solicited, as well as code for new algorithms.
Hybrid Monte Carlo simulation on the graphene hexagonal lattice
Richard Brower; Claudio Rebbi; David Schaich
2012-04-24T23:59:59.000Z
One of the many remarkable properties of graphene is that in the low energy limit the dynamics of its electrons can be effectively described by the massless Dirac equation. This has prompted investigations of graphene based on the lattice simulation of a system of 2-dimensional fermions on a square staggered lattice. We demonstrate here how to construct the path integral for graphene working directly on the graphene hexagonal lattice. For the nearest neighbor tight binding model interacting with a long range Coulomb interaction between the electrons, this leads to the hybrid Monte Carlo algorithm with no sign problem. The only approximation is the discretization of the Euclidean time. So as we extrapolate to the time continuum limit, the exact tight binding solution may be found numerically to arbitrary precision on a finite hexagonal lattice. The potential for this approach is tested on a single hexagonal cell.
Improved version of the PHOBOS Glauber Monte Carlo
Loizides, C.; Nagle, J.; Steinberg, P.
2015-09-01T23:59:59.000Z
“Glauber” models are used to calculate geometric quantities in the initial state of heavy ion collisions, such as impact parameter, number of participating nucleons and initial eccentricity. Experimental heavy-ion collaborations, in particular at RHIC and LHC, use Glauber Model calculations for various geometric observables for determination of the collision centrality. In this document, we describe the assumptions inherent to the approach, and provide an updated implementation (v2) of the Monte Carlo based Glauber Model calculation, which originally was used by the PHOBOS collaboration. The main improvement w.r.t. the earlier version (v1) (Alver et al. 2008) is the inclusion of Tritium, Helium-3, and Uranium, as well as the treatment of deformed nuclei and Glauber–Gribov fluctuations of the proton in p +A collisions. A users’ guide (updated to reflect changes in v2) is provided for running various calculations.
Quality assurance for the ALICE Monte Carlo procedure
M. Ajaz; Seforo Mohlalisi; Peter Hristov; Jean Pierre Revol
2009-04-10T23:59:59.000Z
We implement the already existing macro $ALICE_ROOT/STEER/CheckESD.C, which is run after reconstruction to compute the physics efficiency, as a task that runs on a PROOF framework such as CAF. The task was implemented in a C++ class called AliAnalysisTaskCheckESD, which inherits from the AliAnalysisTaskSE base class. The function of AliAnalysisTaskCheckESD is to compute the ratio of the number of reconstructed particles to the number of particles generated by the Monte Carlo generator. The class AliAnalysisTaskCheckESD was successfully implemented. It was used during the production for first physics and made it possible to discover several problems (missing tracks in the MUON arm reconstruction, low efficiency in the PHOS detector, etc.). The code is committed to the SVN repository and will become a standard tool for quality assurance.
Koh, Wonshill
2013-02-22T23:59:59.000Z
The light propagation in highly scattering turbid media composed of particles with different size distributions is studied using a Monte Carlo simulation model implemented in Standard C. The Monte Carlo method has been widely utilized to study...
Straub, John E.
Statistical-Temperature Monte Carlo and Molecular Dynamics Algorithms, Jaegil Kim and John E. Straub, Phys. Rev. Lett. 97, 050601; PACS numbers: 05.10.-a, 02.70.Rr, 87.18.Bb. A novel molecular dynamics algorithm (STMD) applicable to complex systems and a Monte Carlo algorithm are presented, building on the Wang-Landau (WL) Monte Carlo (MC) algorithm...
Using Stochastic Discounted Cash Flow and Real Option Monte Carlo Simulation to Analyse the Impacts of a Windfall Profits Tax. Real options Monte Carlo simulation is used to characterise the cash flows from the project. The results highlight that Monte Carlo simulation paired with the real option approach...
Coupled Deterministic-Monte Carlo Transport for Radiation Portal Modeling
Smith, Leon E.; Miller, Erin A.; Wittman, Richard S.; Shaver, Mark W.
2008-01-14T23:59:59.000Z
Radiation portal monitors are being deployed, both domestically and internationally, to detect illicit movement of radiological materials concealed in cargo. Evaluation of the current and next generations of these radiation portal monitor (RPM) technologies is an ongoing process. 'Injection studies' that superimpose, computationally, the signature from threat materials onto empirical vehicle profiles collected at ports of entry, are often a component of the RPM evaluation process. However, measurement of realistic threat devices can be both expensive and time-consuming. Radiation transport methods that can predict the response of radiation detection sensors with high fidelity, and do so rapidly enough to allow the modeling of many different threat-source configurations, are a cornerstone of reliable evaluation results. Monte Carlo methods have been the primary tool of the detection community for these kinds of calculations, in no small part because they are particularly effective for calculating pulse-height spectra in gamma-ray spectrometers. However, computational times for problems with a high degree of scattering and absorption can be extremely long. Deterministic codes that discretize the transport in space, angle, and energy offer potential advantages in computational efficiency for these same kinds of problems, but the pulse-height calculations needed to predict gamma-ray spectrometer response are not readily accessible. These complementary strengths for radiation detection scenarios suggest that coupling Monte Carlo and deterministic methods could be beneficial in terms of computational efficiency. Pacific Northwest National Laboratory and its collaborators are developing a RAdiation Detection Scenario Analysis Toolbox (RADSAT) founded on this coupling approach. The deterministic core of RADSAT is Attila, a three-dimensional, tetrahedral-mesh code originally developed by Los Alamos National Laboratory, and since expanded and refined by Transpire, Inc. [1]. 
MCNP5 is used to calculate sensor pulse-height tallies. RADSAT methods, including adaptive, problem-specific energy-group creation, ray-effect mitigation strategies and the porting of deterministic angular flux to MCNP for individual particle creation are described in [2][3][4]. This paper discusses the application of RADSAT to the modeling of gamma-ray spectrometers in RPMs.
Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL
2014-01-01T23:59:59.000Z
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Franke, B. C. [Sandia National Laboratories, Albuquerque, NM 87185 (United States); Prinja, A. K. [Department of Chemical and Nuclear Engineering, University of New Mexico, Albuquerque, NM 87131 (United States)
2013-07-01T23:59:59.000Z
The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)
A study of the contrast of a submerged disc using Monte Carlo techniques
Hagan, Donald Frank
1980-01-01T23:59:59.000Z
in the simulation of light interactions within the Earth's ocean system. Using the Monte Carlo computer program the contrast of a Secchi disc and its ocean background was calculated. A Secchi disc is a horizontal disc in the ocean that is viewed from the surface... of samples, which requires more computation time. Before the advent of high speed computers, the Monte Carlo Method was generally useless because of the massive amount of computation it required. The Monte Carlo Method is fairly simple in application...
Enhanced physics design with hexagonal repeated structure tools using Monte Carlo methods
Carter, L L; Lan, J S; Schwarz, R A
1991-01-01T23:59:59.000Z
This report discusses proposed new missions for the Fast Flux Test Facility (FFTF) reactor which involve the use of target assemblies containing local hydrogenous moderation within this otherwise fast reactor. Parametric physics design studies with Monte Carlo methods are routinely utilized to analyze the rapidly changing neutron spectrum. An extensive utilization of the hexagonal lattice within lattice capabilities of the Monte Carlo Neutron Photon (MCNP) continuous energy Monte Carlo computer code is applied here to solving such problems. Simpler examples that use the lattice capability to describe fuel pins within a "brute force" description of the hexagonal assemblies are also given.
Monte Carlo sampling from the quantum state space. I
Jiangwei Shang; Yi-Lin Seah; Hui Khoon Ng; David John Nott; Berthold-Georg Englert
2015-04-27T23:59:59.000Z
High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the standard strategies of rejection sampling, importance sampling, and Markov-chain sampling can be adapted to this context, where the samples must obey the constraints imposed by the positivity of the statistical operator. For a comparison of these sampling methods, we generate sample points in the probability space for two-qubit states probed with a tomographically incomplete measurement, and then use the sample for the calculation of the size and credibility of the recently-introduced optimal error regions [see New J. Phys. 15 (2013) 123026]. Another illustration is the computation of the fractional volume of separable two-qubit states.
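The constraint-aware rejection sampling described in this abstract can be illustrated on the simplest case, a single qubit (the paper itself treats two-qubit states probed with a tomographically incomplete measurement): propose Bloch vectors uniformly in the cube [-1, 1]^3 and reject those that violate the positivity of the statistical operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def sample_qubit_states(n):
    """Rejection sampling of single-qubit density matrices, uniform over
    the Bloch ball: a proposed Bloch vector r gives a positive
    semidefinite rho = (I + r.sigma)/2 iff |r| <= 1."""
    states = []
    while len(states) < n:
        r = rng.uniform(-1, 1, size=3)
        if r @ r <= 1.0:               # positivity constraint
            states.append(0.5 * (np.eye(2)
                                 + r[0] * SX + r[1] * SY + r[2] * SZ))
    return states

states = sample_qubit_states(200)
# Every accepted sample is a valid state: unit trace, eigenvalues >= 0.
assert all(abs(np.trace(s) - 1) < 1e-12 for s in states)
assert all(np.linalg.eigvalsh(s).min() >= -1e-12 for s in states)
```

The acceptance rate is the volume ratio of ball to cube (about 52%); in higher dimensions this rate collapses, which is why the paper also adapts importance sampling and Markov-chain sampling to the constrained state space.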
Monte Carlo simulation of the terrestrial hydrogen exosphere
Hodges, R.R. Jr. [Univ. of Texas, Dallas, TX (United States)
1994-12-01T23:59:59.000Z
Methods for Monte Carlo simulation of planetary exospheres have evolved from early work on the lunar atmosphere, where the regolith surface provides a well defined exobase. A major limitation of the successor simulations of the exospheres of Earth and Venus is the use of an exobase surface as an artifice to separate the collisional processes of the thermosphere from a collisionless exosphere. In this paper a new generalized approach to exosphere simulation is described, wherein the exobase is replaced by a barometric depletion of the major constituents of the thermosphere. Exospheric atoms in the thermosphere-exosphere transition region, and in the outer exosphere as well, travel in ballistic trajectories that are interrupted by collisions with the background gas, and by charge exchange interactions with ionospheric particles. The modified simulator has been applied to the terrestrial hydrogen exosphere problem, using velocity dependent differential cross sections to provide statistically correct collisional scattering in H-O and H-H(+) interactions. Global models are presented for both solstice and equinox over the effective solar cycle range of the F{sub 10.7} index (80 to 230). Simulation results show significant differences with previous terrestrial exosphere models, as well as with the H distributions of the MSIS-86 thermosphere model.
Monte Carlo Simulations of Cosmic Rays Hadronic Interactions
Aguayo Navarrete, Estanislao; Orrell, John L.; Kouzes, Richard T.
2011-04-01T23:59:59.000Z
This document describes the construction and results of the MaCoR software tool, developed to model the hadronic interactions of cosmic rays with different geometries of materials. The ubiquity of cosmic radiation in the environment results in the activation of stable isotopes, a process referred to as cosmogenic activation. The objective is to use this application in conjunction with a model of the MAJORANA DEMONSTRATOR components, from extraction to deployment, to evaluate cosmogenic activation of such components before and after deployment. The cosmic ray showers include several types of particles with a wide range of energy (MeV to GeV). It is infeasible to compute an exact result with a deterministic algorithm for this problem; Monte Carlo simulations are a more suitable approach to model cosmic ray hadronic interactions. In order to validate the results generated by the application, a test comparing experimental muon flux measurements with those predicted by the application is presented. The experimental and simulated results have a deviation of 3%.
Monte Carlo Sampling of Negative-temperature Plasma States
John A. Krommes; Sharadini Rath
2002-07-19T23:59:59.000Z
A Monte Carlo procedure is used to generate N-particle configurations compatible with two-temperature canonical equilibria in two dimensions, with particular attention to nonlinear plasma gyrokinetics. An unusual feature of the problem is the importance of a nontrivial probability density function R{sub 0}({Phi}), the probability of realizing a set {Phi} of Fourier amplitudes associated with an ensemble of uniformly distributed, independent particles. This quantity arises because the equilibrium distribution is specified in terms of {Phi}, whereas the sampling procedure naturally produces particle states gamma; {Phi} and gamma are related via a gyrokinetic Poisson equation, highly nonlinear in its dependence on gamma. Expansion and asymptotic methods are used to calculate R{sub 0}({Phi}) analytically; excellent agreement is found between the large-N asymptotic result and a direct numerical calculation. The algorithm is tested by successfully generating a variety of states of both positive and negative temperature, including ones in which either the longest- or shortest-wavelength modes are excited to relatively very large amplitudes.
Monte Carlo simulations of lattice models for single polymer systems
Hsu, Hsiao-Ping, E-mail: hsu@mpip-mainz.mpg.de [Max-Planck-Institut für Polymerforschung, Ackermannweg 10, D-55128 Mainz (Germany)
2014-10-28T23:59:59.000Z
Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to chain lengths of N = O(10{sup 4}). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √10, we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversal random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains with stiffness controlled by a bending potential. The persistence lengths of the chains, extracted from the orientational correlations, are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed, and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
A review of Monte Carlo simulations of polymers with PERM
Hsiao-Ping Hsu; Peter Grassberger
2011-07-06T23:59:59.000Z
In this review, we describe applications of the pruned-enriched Rosenbluth method (PERM), a sequential Monte Carlo algorithm with resampling, to various problems in polymer physics. PERM produces samples according to any given prescribed weight distribution, by growing configurations step by step with controlled bias, and correcting "bad" configurations by "population control". The latter is implemented, in contrast to other population based algorithms like e.g. genetic algorithms, by depth-first recursion which avoids storing all members of the population at the same time in computer memory. The problems we discuss all concern single polymers (with one exception), but under various conditions: Homopolymers in good solvents and at the $\\Theta$ point, semi-stiff polymers, polymers in confining geometries, stretched polymers undergoing a forced globule-linear transition, star polymers, bottle brushes, lattice animals as a model for randomly branched polymers, DNA melting, and finally -- as the only system at low temperatures, lattice heteropolymers as simple models for protein folding. PERM is for some of these problems the method of choice, but it can also fail. We discuss how to recognize when a result is reliable, and we discuss also some types of bias that can be crucial in guiding the growth into the right directions.
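The growth-with-population-control idea can be sketched for the simplest PERM application, counting self-avoiding walks on the square lattice. This is a toy rendition: the prune/enrich thresholds (0.2 and 2 times the running average weight) are illustrative choices, and the depth-first recursion is what keeps only one branch of the population in memory at a time:

```python
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]
N_MAX = 12

def perm(walk, occupied, weight, stats, rng):
    """Grow one branch depth-first; prune low-weight and clone high-weight
    configurations so the full population is never stored at once."""
    n = len(walk) - 1
    stats[n][0] += 1           # tours reaching length n
    stats[n][1] += weight      # accumulated Rosenbluth weight at length n
    if n == N_MAX:
        return
    avg = stats[n][1] / stats[n][0]
    if weight < 0.2 * avg:     # prune: kill half, double the survivors
        if rng.random() < 0.5:
            return
        weight *= 2.0
        copies = 1
    elif weight > 2.0 * avg:   # enrich: two half-weight copies
        weight *= 0.5
        copies = 2
    else:
        copies = 1
    x, y = walk[-1]
    free = [(x + dx, y + dy) for dx, dy in MOVES
            if (x + dx, y + dy) not in occupied]
    if not free:
        return                 # trapped: this branch dies
    for _ in range(copies):
        step = rng.choice(free)
        occupied.add(step)
        perm(walk + [step], occupied, weight * len(free), stats, rng)
        occupied.remove(step)

rng = random.Random(0)
tours = 2000
stats = {n: [0, 0.0] for n in range(N_MAX + 1)}
for _ in range(tours):
    perm([(0, 0)], {(0, 0)}, 1.0, stats, rng)

# stats[n][1] / tours is an unbiased estimate of the number of
# self-avoiding walks of length n (exactly 12 for n = 2, 100 for n = 4)
z4 = stats[4][1] / tours
```

Both the prune and enrich moves preserve the expectation of the weight, which is why the method samples the prescribed distribution without bias.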
Nuclear Force from Monte Carlo Simulations of Lattice Quantum Chromodynamics
S. Aoki; T. Hatsuda; N. Ishii
2008-10-24T23:59:59.000Z
The nuclear force acting between protons and neutrons is studied with Monte Carlo simulations of the fundamental theory of the strong interaction, quantum chromodynamics defined on a hypercubic space-time lattice. After a brief summary of the empirical nucleon-nucleon (NN) potentials which fit the NN scattering experiments with high precision, we outline the basic formulation for deriving the potential between extended objects such as nucleons composed of quarks. The equal-time Bethe-Salpeter amplitude is a key ingredient for defining the NN potential on the lattice. We show the results of numerical simulations on a $32^4$ lattice with lattice spacing $a \simeq 0.137$ fm (lattice volume (4.4 fm)$^4$) in the quenched approximation. The calculation was carried out using the massively parallel computer Blue Gene/L at KEK. We found that the calculated NN potential at low energy has the basic features expected from the empirical NN potentials: attraction at long and medium distances and a repulsive core at short distance. Various future directions along this line of research are also summarized.
Protein folding and phylogenetic tree reconstruction using stochastic approximation Monte Carlo
Cheon, Sooyoung
2007-09-17T23:59:59.000Z
folding problems. The numerical results indicate that it outperforms simulated annealing and conventional Monte Carlo algorithms as a stochastic optimization algorithm. We also propose a method for the use of secondary structures in protein folding...
Xu, Sheng, S.M. Massachusetts Institute of Technology
2013-01-01T23:59:59.000Z
In order to use Monte Carlo methods for reactor simulations beyond benchmark activities, the traditional way of preparing and using nuclear cross sections needs to be changed, since large datasets of cross sections at many ...
Monte Carlo and thermal hydraulic coupling using low-order nonlinear diffusion acceleration
Herman, Bryan R. (Bryan Robert)
2014-01-01T23:59:59.000Z
Monte Carlo (MC) methods for reactor analysis are most often employed as a benchmark tool for other transport and diffusion methods. In this work, we identify and resolve a few of the issues associated with using MC as a ...
APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula
Hwang, M.; Bae, S.; Chung, B. D. [Korea Atomic Energy Research Inst., 150 Dukjin-dong, Yuseong-gu, Daejeon (Korea, Republic of)
2012-07-01T23:59:59.000Z
An analysis of the uncertainty quantification for the PWR LBLOCA by Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined from the PIRT results of the BEMUSE project. The Monte Carlo results confirm that the 95th-percentile PCT value can be bounded reliably at a 95% confidence level using Wilks' formula. The extra margin given by Wilks' formula over the true 95th-percentile PCT from the Monte Carlo method was rather large: even using the third-order formula, the value calculated with Wilks' formula is nearly 100 K above the true value. It is shown that, with ever increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame. (authors)
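The sample sizes behind such statements follow from the one-sided Wilks formula: the r-th largest of N independent runs bounds the 95th percentile with 95% confidence once N is large enough. A small sketch of that computation (pure Python; the first-, second-, and third-order sizes come out to the familiar 59, 93, and 124 runs):

```python
from math import comb

def wilks_n(order, gamma=0.95, beta=0.95):
    """Smallest N such that the `order`-th largest of N independent runs
    exceeds the gamma-quantile of the output with confidence beta."""
    n = order
    while True:
        # P(at least `order` of n samples fall above the gamma-quantile),
        # i.e. the binomial tail with per-sample exceedance prob. 1-gamma
        conf = sum(comb(n, k) * gamma**k * (1.0 - gamma)**(n - k)
                   for k in range(n - order + 1))
        if conf >= beta:
            return n
        n += 1

print(wilks_n(1), wilks_n(2), wilks_n(3))   # 59 93 124
```

Higher orders buy a tighter (less conservative) bound at the price of more runs, which is exactly the trade-off the abstract quantifies against the full Monte Carlo percentile.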
Show me the way to Monte Carlo: density-based trajectory navigation
Strachan, Steven; Murray-Smith, Roderick
with a combination of Global Positioning System data, a music player, inertial sensing, magnetic bearing data and Monte Carlo sampling, and modulates a listener's music in order to guide them
Improvements and applications of the Uniform Fission Site method in Monte Carlo
Hunter, Jessica Lynn
2014-01-01T23:59:59.000Z
Monte Carlo methods for reactor analysis have been in development with the eventual goal of full-core analysis. To attain results with reasonable uncertainties, large computational resources are needed. Variance reduction ...
Physics-based Predictive Time Propagation Method for Monte Carlo Coupled Depletion Simulations
Johns, Jesse Merlin
2014-12-18T23:59:59.000Z
Monte Carlo techniques for numerical simulation have humble beginnings in the Manhattan Project. They were developed to rein in intractable problems of nuclear implosion hydrodynamics, thermonuclear reactions, and computing neutron fluxes and core...
Combining Strategies for Parallel Stochastic Approximation Monte Carlo Algorithm of Big Data
Lin, Fang-Yu
2014-10-15T23:59:59.000Z
of iterations and is prone to getting trapped in local optima. On the other hand, the Stochastic Approximation Monte Carlo (SAMC) algorithm, a very sophisticated algorithm in theory and applications, can avoid getting trapped in local optima and produce more...
Walsh, Jonathan A. (Jonathan Alan)
2014-01-01T23:59:59.000Z
This thesis presents the development and analysis of computational methods for efficiently accessing and utilizing nuclear data in Monte Carlo neutron transport code simulations. Using the OpenMC code, profiling studies ...
Pasciak, Alexander Samuel
2007-04-25T23:59:59.000Z
Advancements in parallel and cluster computing have made many complex Monte Carlo simulations possible in the past several years. Unfortunately, cluster computers are large, expensive, and still not fast enough to make the ...
MARKOV CHAIN MONTE CARLO FOR AUTOMATED TRACKING OF GENEALOGY IN MICROSCOPY VIDEOS
Kathleen Champion
of the nuclei in the images and their genealogies. Evan Tice '09 has already developed some code that aims...
Parallel Markov Chain Monte Carlo Methods for Large Scale Statistical Inverse Problems
Wang, Kainan
2014-04-18T23:59:59.000Z
but also the uncertainty of these estimations. Markov chain Monte Carlo (MCMC) is a useful technique to sample the posterior distribution and information can be extracted from the sampled ensemble. However, MCMC is very expensive to compute, especially...
Exponentially-convergent Monte Carlo for the One-dimensional Transport Equation
Peterson, Jacob Ross
2014-04-23T23:59:59.000Z
singular problems. Computational results are presented demonstrating the efficacy of the new approach. We tested our ECMC algorithm against standard Monte Carlo and found the ECMC method to be generally much more efficient. For a manufactured solution...
Fourth-order diffusion Monte Carlo algorithms for solving quantum many-body problems
Forbert, HA; Chin, Siu A.
2001-01-01T23:59:59.000Z
By decomposing the importance-sampled imaginary-time Schrödinger evolution operator to fourth order with positive coefficients, we derived a number of distinct fourth-order diffusion Monte Carlo algorithms. These sophisticated algorithms require...
Radiative transfer in the earth's atmosphere-ocean system using Monte Carlo techniques
Bradley, Paul Andrew
1987-01-01T23:59:59.000Z
Contents fragments: Transfer Problem; Monte Carlo Method; Assumptions of the Model; Photon Pathlength Emulation Techniques; Sampling Scattering Functions: Angles and Probabilities; Emulation of an Interface; Computing the Radiance by Statistical Estimation; Determination... radiance values in both the atmosphere and the ocean from the scattering functions and other input data, with a Monte Carlo computer code. The polarization of the radiation was taken into account by Kattawar et al. in their computation...
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29T23:59:59.000Z
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
MONTE CARLO SIMULATION OF METASTABLE OXYGEN PHOTOCHEMISTRY IN COMETARY ATMOSPHERES
Bisikalo, D. V.; Shematovich, V. I. [Institute of Astronomy of the Russian Academy of Sciences, Moscow (Russian Federation); Gérard, J.-C.; Hubert, B. [Laboratory for Planetary and Atmospheric Physics (LPAP), University of Liège, Liège (Belgium); Jehin, E.; Decock, A. [Origines Cosmologiques et Astrophysiques (ORCA), University of Liège (Belgium); Hutsemékers, D. [Extragalactic Astrophysics and Space Observations (EASO), University of Liège (Belgium); Manfroid, J., E-mail: B.Hubert@ulg.ac.be [High Energy Astrophysics Group (GAPHE), University of Liège (Belgium)
2015-01-01T23:59:59.000Z
Cometary atmospheres are produced by the outgassing of material, mainly H{sub 2}O, CO, and CO{sub 2}, from the nucleus of the comet under the energy input from the Sun. Subsequent photochemical processes lead to the production of other species generally absent from the nucleus, such as OH. Although all comets are different, they all have a highly rarefied atmosphere, which is an ideal environment for nonthermal photochemical processes to take place and influence the detailed state of the atmosphere. We develop a Monte Carlo model of the coma photochemistry. We compute the energy distribution functions (EDF) of the metastable O({sup 1}D) and O({sup 1}S) species and obtain the red (630 nm) and green (557.7 nm) spectral line shapes of the full coma, consistent with the computed EDFs and the expansion velocity. We show that both species have a severely non-Maxwellian EDF, which results in broad spectral lines in which suprathermal broadening dominates over that due to the expansion motion. We apply our model to the atmospheres of comets C/1996 B2 (Hyakutake) and 103P/Hartley 2. The computed width of the green line, expressed in terms of speed, is lower than that of the red line. This result is comparable to previous theoretical analyses, but in disagreement with observations. We explain that the spectral line shape depends not only on the exothermicity of the photochemical production mechanisms, but also on thermalization by elastic collisions, which reduces the width of the emission line from the O({sup 1}D) level because of its longer lifetime.
Utility of Monte Carlo Modelling for Holdup Measurements.
Belian, Anthony P.; Russo, P. A. (Phyllis A.); Weier, Dennis R. (Dennis Ray),
2005-01-01T23:59:59.000Z
Non-destructive assay (NDA) measurements performed to locate and quantify holdup in the Oak Ridge K-25 enrichment cascade used neutron totals counting and low-resolution gamma-ray spectroscopy. This facility housed the gaseous diffusion process for enrichment of uranium, in the form of UF{sub 6} gas, from {approx} 20% to 93%. The {sup 235}U inventory in K-25 is entirely holdup. These buildings have been slated for decontamination and decommissioning. The NDA measurements establish the inventory quantities and will be used to assure criticality safety and meet criteria for waste analysis and transportation. The tendency to err on the side of conservatism for the sake of criticality safety in specifying total NDA uncertainty argues, in the interests of safety and costs, for obtaining the best possible value of uncertainty at the conservative confidence level for each item of process equipment. Variable deposit distribution is a complex systematic effect (i.e., one determined by multiple independent variables) on the portable NDA results for very large and bulk converters that contributes greatly to the total uncertainty for holdup in converters measured by gamma or neutron NDA methods. Because the magnitudes of complex systematic effects are difficult to estimate, computational tools are important for evaluating those that are large. Motivated by very large discrepancies between gamma and neutron measurements of high-mass converters, with gamma results tending to dominate, the Monte Carlo code MCNP has been used to determine the systematic effects of deposit distribution on gamma and neutron results for {sup 235}U holdup mass in converters.
This paper details the numerical methodology used to evaluate large systematic effects unique to each measurement type, validates the methodology by comparison with measurements, and discusses how modeling tools can supplement the calibration of instruments used for holdup measurements by providing realistic values at well-defined confidence levels for dominating systematic effects.
Review of Monte Carlo simulations for backgrounds from radioactivity
Selvi, Marco [INFN - Sezione di Bologna (Italy)] [INFN - Sezione di Bologna (Italy)
2013-08-08T23:59:59.000Z
For all experiments dealing with rare event searches (neutrino, dark matter, neutrino-less double-beta decay), the reduction of the radioactive background is one of the most important and difficult tasks. There are basically two types of background, electron recoils and nuclear recoils. The electron recoil background comes mostly from gamma rays produced in radioactive decay. The nuclear recoil background comes from neutrons from spontaneous fission, (α, n) reactions and muon-induced interactions (spallation, photo-nuclear and hadronic interactions). The external gammas and neutrons, from muons and the laboratory environment, can be reduced by operating the detector in a deep underground laboratory and by placing active or passive shield materials around the detector. The radioactivity of the detector materials also contributes to the background; in order to reduce it, a careful screening campaign is mandatory to select highly radio-pure materials. In this review I present the status of current Monte Carlo simulations aimed at estimating and reproducing the background induced by gamma and neutron radioactivity of the materials and the shield of rare event search experiments. For the electromagnetic background, a good level of agreement between the data and the MC simulation has been reached by the XENON100 and EDELWEISS experiments, using the GEANT4 toolkit. For the neutron background, a comparison between the yields of neutrons from spontaneous fission and (α, n) reactions obtained with two dedicated software packages, SOURCES-4A and the one developed by Mei-Zhang-Hime, shows good overall agreement, with total yields within a factor of 2 of each other. The energy spectra from SOURCES-4A are in general smoother, while those from MZH present sharp peaks. The neutron propagation through various materials has been studied with two MC codes, GEANT4 and MCNPX, showing reasonably good agreement, within a 50% discrepancy.
Structural Stability and Defect Energetics of ZnO from Diffusion Quantum Monte Carlo
Santana Palacio, Juan A [ORNL; Krogel, Jaron T [ORNL; Kim, Jeongnim [ORNL; Kent, Paul R [ORNL; Reboredo, Fernando A [ORNL
2015-01-01T23:59:59.000Z
We have applied the many-body ab-initio diffusion quantum Monte Carlo (DMC) method to study Zn and ZnO crystals under pressure, and the energetics of the oxygen vacancy, zinc interstitial and hydrogen impurities in ZnO. We show that DMC is an accurate and practical method that can be used to characterize multiple properties of materials that are challenging for density functional theory approximations. DMC agrees with experimental measurements to within 0.3 eV, including the band-gap of ZnO, the ionization potential of O and Zn, and the atomization energy of O2, ZnO dimer, and wurtzite ZnO. DMC predicts the oxygen vacancy as a deep donor with a formation energy of 5.0(2) eV under O-rich conditions and thermodynamic transition levels located between 1.8 and 2.5 eV from the valence band maximum. Our DMC results indicate that the concentration of zinc interstitial and hydrogen impurities in ZnO should be low under n-type, and Zn- and H-rich conditions because these defects have formation energies above 1.4 eV under these conditions. Comparison of DMC and hybrid functionals shows that these DFT approximations can be parameterized to yield a general correct qualitative description of ZnO. However, the formation energy of defects in ZnO evaluated with DMC and hybrid functionals can differ by more than 0.5 eV.
Final Report: 06-LW-013, Nuclear Physics the Monte Carlo Way
Ormand, W E
2009-03-01T23:59:59.000Z
This document reports the progress and accomplishments achieved in FY2006-2007 with LDRD funding under the proposal 06-LW-013, 'Nuclear Physics the Monte Carlo Way', funded under the Lab Wide LDRD competition at Lawrence Livermore National Laboratory. The project was a theoretical study exploring a novel approach to a persistent problem in Monte Carlo treatments of quantum many-body systems. The goal was to implement a solution to the notorious 'sign problem', which, if successful, would permit, for the first time, exact solutions to quantum many-body systems that cannot be addressed with other methods. The quantum many-body problem is one of the most important problems being addressed in theoretical physics today; instead of traditional methods based on matrix diagonalization, this proposal focused on a Monte Carlo method. The principal difficulty with Monte Carlo methods is the sign problem, which, as discussed in some detail later, is endemic to Monte Carlo approaches to the quantum many-body problem and is the principal reason that they have not been completely successful in the past. Here, we outline our research on the 'shifted-contour method' applied to the Auxiliary Field Monte Carlo (AFMC) method.
A Fano cavity test for Monte Carlo proton transport algorithms
Sterpin, Edmond, E-mail: esterpin@yahoo.fr [Université catholique de Louvain, Center of Molecular Imaging, Radiotherapy and Oncology, Institut de Recherche Experimentale et Clinique, Avenue Hippocrate 54, 1200 Brussels (Belgium)] [Université catholique de Louvain, Center of Molecular Imaging, Radiotherapy and Oncology, Institut de Recherche Experimentale et Clinique, Avenue Hippocrate 54, 1200 Brussels (Belgium); Sorriaux, Jefferson; Souris, Kevin [Université catholique de Louvain, Center of Molecular Imaging, Radiotherapy and Oncology, Institut de Recherche Experimentale et Clinique, Avenue Hippocrate 54, 1200 Brussels, Belgium and Université catholique de Louvain, ICTEAM institute, Chemin du cyclotron 6, 1348 Louvain-la-Neuve (Belgium)] [Université catholique de Louvain, Center of Molecular Imaging, Radiotherapy and Oncology, Institut de Recherche Experimentale et Clinique, Avenue Hippocrate 54, 1200 Brussels, Belgium and Université catholique de Louvain, ICTEAM institute, Chemin du cyclotron 6, 1348 Louvain-la-Neuve (Belgium); Vynckier, Stefaan [Université catholique de Louvain, Center of Molecular Imaging, Radiotherapy and Oncology, Institut de Recherche Experimentale et Clinique, Avenue Hippocrate 54, 1200 Brussels, Belgium and Département de Radiothérapie, Cliniques Universitaires Saint-Luc, Avenue Hippocrate 54, 1200 Brussels (Belgium)] [Université catholique de Louvain, Center of Molecular Imaging, Radiotherapy and Oncology, Institut de Recherche Experimentale et Clinique, Avenue Hippocrate 54, 1200 Brussels, Belgium and Département de Radiothérapie, Cliniques Universitaires Saint-Luc, Avenue Hippocrate 54, 1200 Brussels (Belgium); Bouchard, Hugo [Département de radio-oncologie, Centre hospitalier de l’Université de Montréal (CHUM), 1560 Sherbrooke est, Montréal, Québec H2L 4M1 (Canada)] [Département de radio-oncologie, Centre hospitalier de l’Université de Montréal (CHUM), 1560 Sherbrooke est, Montréal, Québec H2L 4M1 (Canada)
2014-01-15T23:59:59.000Z
Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified by performing self-consistency tests, i.e., the so-called “Fano cavity test.” The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross-sections are uniform. Such tests have not been performed yet for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy of E{sub 0} and a mass macroscopic cross section of Σ/ρ are transported, having the ability to generate protons with kinetic energy E{sub 0} and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal ΣE{sub 0}/ρ, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm{sup 2} parallel virtual field and a cavity (2 × 2 × 0.2 cm{sup 3} size) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, using larger step sizes.
For PENH, the difference is attributed to the random-hinge method that introduces an artificial energy straggling if step size is not small enough. Conclusions: Using conservative user-defined simulation parameters, both PENH and Geant4 pass the Fano cavity test for proton transport. Our methodology is applicable to any kind of charged particle, provided that the considered MC code is able to track the charged particle considered.
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
Farlotti, M. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Ecole Polytechnique, Palaiseau, F 91128 (France); Larsen, E. W. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States)
2013-07-01T23:59:59.000Z
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
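Once the fission matrix has been tallied, the remaining step is deterministic: the critical fission source and eigenvalue are the dominant eigenpair of the matrix, obtainable by power iteration. A sketch with a hypothetical 8-assembly matrix (the coupling values below are invented for illustration; in practice each entry would be a Monte Carlo tally of fission neutrons born in region i per source neutron born in region j):

```python
import numpy as np

# Hypothetical fission matrix for 8 assemblies in a row: strong
# self-multiplication, weak coupling to nearest neighbours only
# (mimicking the heavy absorption between storage-pool assemblies).
F = (0.9 * np.eye(8)
     + 0.05 * np.diag(np.ones(7), 1)
     + 0.05 * np.diag(np.ones(7), -1))

def power_iteration(F, tol=1e-12, max_iter=100_000):
    """Dominant eigenpair of F: k_eff and the critical fission source."""
    s = np.full(F.shape[0], 1.0 / F.shape[0])   # uniform initial source
    k = 0.0
    for _ in range(max_iter):
        fs = F @ s
        k_new = fs.sum()        # eigenvalue estimate (s is normalized)
        s_new = fs / fs.sum()   # next-cycle fission source
        if abs(k_new - k) < tol and np.allclose(s_new, s, atol=tol):
            return k_new, s_new
        k, s = k_new, s_new
    raise RuntimeError("power iteration did not converge")

k_eff, source = power_iteration(F)
```

The slow convergence of the ordinary Monte Carlo source iteration shows up here as a dominance ratio close to one; operating on the tallied matrix instead of on particle histories is what lets the method estimate and control that error.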
A Proposal for a Standard Interface Between Monte Carlo Tools And One-Loop Programs
Binoth, T.; /Edinburgh U.; Boudjema, F.; /Annecy, LAPP; Dissertori, G.; Lazopoulos, A.; /Zurich, ETH; Denner, A.; /PSI, Villigen; Dittmaier, S.; /Freiburg U.; Frederix, R.; Greiner, N.; Hoeche, Stefan; /Zurich U.; Giele, W.; Skands, P.; Winter, J.; /Fermilab; Gleisberg, T.; /SLAC; Archibald, J.; Heinrich, G.; Krauss, F.; Maitre, D.; /Durham U., IPPP; Huber, M.; /Munich, Max Planck Inst.; Huston, J.; /Michigan State U.; Kauer, N.; /Royal Holloway, U. of London; Maltoni, F.; /Louvain U., CP3 /Milan Bicocca U. /INFN, Turin /Turin U. /Granada U., Theor. Phys. Astrophys. /CERN /NIKHEF, Amsterdam /Heidelberg U. /Oxford U., Theor. Phys.
2011-11-11T23:59:59.000Z
Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently been undergoing enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarises the discussion of the next-to-leading order multi-leg (NLM) working group on this issue which has been taking place during the workshop on Physics at TeV Colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs.
Monte Carlo implementation of a guiding-center Fokker-Planck kinetic equation
Hirvijoki, E.; Snicker, A.; Kurki-Suonio, T. [Department of Applied Physics, Aalto University, FI-00076 Aalto (Finland)]; Brizard, A. [Department of Physics, Saint Michael's College, Colchester, Vermont 05439 (United States)
2013-09-15T23:59:59.000Z
A Monte Carlo method for the collisional guiding-center Fokker-Planck kinetic equation is derived in the five-dimensional guiding-center phase space, where the effects of magnetic drifts due to the background magnetic field nonuniformity are included. It is shown that, in the limit of a homogeneous magnetic field, our guiding-center Monte Carlo collision operator reduces to the guiding-center Monte Carlo Coulomb operator previously derived by Xu and Rosenbluth [Phys. Fluids B 3, 627 (1991)]. Applications of the present work will focus on the collisional transport of energetic ions in complex nonuniform magnetized plasmas in the large mean-free-path (collisionless) limit, where magnetic drifts must be retained.
Data decomposition of Monte Carlo particle transport simulations via tally servers
Romano, Paul K., E-mail: paul.k.romano@gmail.com [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)]; Siegel, Andrew R., E-mail: siegala@mcs.anl.gov [Argonne National Laboratory, Theory and Computing Sciences, 9700 S Cass Ave., Argonne, IL 60439 (United States)]; Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)]; Smith, Kord, E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)]
2013-11-01T23:59:59.000Z
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo.
Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
2009-10-01T23:59:59.000Z
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
General purpose dynamic Monte Carlo with continuous energy for transient analysis
Sjenitzer, B. L.; Hoogenboom, J. E. [Delft Univ. of Technology, Dept. of Radiation, Radionuclide and Reactors, Mekelweg 15, 2629JB Delft (Netherlands)
2012-07-01T23:59:59.000Z
For safety assessments, transient analysis is an important tool: it can predict maximum temperatures during regular reactor operation or during an accident scenario. Despite the importance of this kind of analysis, the state of the art still uses rather crude methods, such as diffusion theory and point-kinetics. For reference calculations it is preferable to use the Monte Carlo method. In this paper the dynamic Monte Carlo method is implemented in the general purpose Monte Carlo code Tripoli4, and the method is extended for use with continuous energy. The first results of Dynamic Tripoli demonstrate that this kind of calculation is indeed accurate, and the results are achieved in a reasonable amount of time. With the method implemented in Tripoli, it is now possible to do an exact transient calculation in arbitrary geometry. (authors)
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, William P. (Tracy, CA); Hartmann-Siantar, Christine L. (San Ramon, CA); Rathkopf, James A. (Livermore, CA)
1999-01-01T23:59:59.000Z
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitates the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
Advanced Mesh-Enabled Monte Carlo capability for Multi-Physics Reactor Analysis
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24T23:59:59.000Z
This project will accumulate high-precision fluxes throughout reactor geometry on a non- orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. 
Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well-suited to coupling with the unstructured meshes that are used in other physics simulations.
An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis
William R. Martin; John C. Lee
2009-12-30T23:59:59.000Z
Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.
Monte Carlo simulations of the HP model (the "Ising model" of protein folding)
Li, Ying Wai; Landau, David P; 10.1016/j.cpc.2010.12.049
2011-01-01T23:59:59.000Z
Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.
FREYA: a new Monte Carlo code for improved modeling of fission chains
Hagmann, C A; Randrup, J; Vogt, R L
2012-06-12T23:59:59.000Z
A new simulation capability for modeling of individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events, providing correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy-sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.
Matching NLO QCD with parton shower in Monte Carlo scheme - the KrkNLO method
S. Jadach; W. Placzek; S. Sapeta; A. Siodmok; M. Skrzypek
2015-05-11T23:59:59.000Z
A new method of including the complete NLO QCD corrections to hard processes in the LO parton-shower Monte Carlo (PSMC) is presented. This method, called KrkNLO, requires the use of parton distribution functions in a dedicated Monte Carlo factorization scheme, which is also discussed in this paper. In the future, it may simplify the introduction of NNLO corrections to hard processes and NLO corrections to PSMC. Details of the method and numerical examples of its practical implementation, as well as comparisons with other calculations such as MCFM, MC@NLO, and POWHEG for single $Z/\gamma^*$-boson production at the LHC, are presented.
A Look at general cavity theory through a code incorporating Monte Carlo techniques
Weyland, Mark Duffy
1989-01-01T23:59:59.000Z
A thesis by Mark Duffy Weyland, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, December 1989. Major subject: Health Physics. Approved as to style and content by: John W. Poston, Chair of Committee.
Anderson, James B.
Direct Monte Carlo simulation of chemical reaction systems: internal energy transfer and an energy-dependent unimolecular reaction. Describes a direct Monte Carlo simulation of an energy-dependent termolecular reaction system of the type A + B ... and the simulation of a unimolecular reaction with an energy-dependent rate constant k3 and with explicit treatment of internal energy transfer.
Wu, Zhigang
Quantum Monte Carlo calculations of the energy-level alignment at hybrid interfaces: role of many-body effects. Published 29 May 2009. An approach is presented for obtaining a highly accurate description of the energy-level alignment at hybrid interfaces, using quantum Monte Carlo calculations to include many-body effects.
Sailhac, Pascal
Inversion of surface nuclear magnetic resonance data by an adapted Monte Carlo method applied ... Inversion of surface nuclear magnetic resonance (SNMR) data provides important information ... Keywords: inversion; surface nuclear magnetic resonance; Monte Carlo.
Mezei, Mihaly
Efficient Monte Carlo sampling for long molecular chains using local moves, tested on a solvated ... Presents an improved acceptance criterion for the local-move Monte Carlo method, in which trial steps change only seven ... New York University, New York, New York 10029. Received 20 February 2002; accepted 27 November 2002.
Chung, Kiwhan
1996-01-01T23:59:59.000Z
While the use of the Monte Carlo method has been prevalent in nuclear engineering, it has yet to fully blossom in the study of solute transport in porous media. By using an etched-glass micromodel, an attempt is made to apply the Monte Carlo method...
A Scalable Parallel Monte Carlo Method for Free Energy Simulations of Molecular Systems
Chan, Derek Y C
Malek O. Khan ... Addresses free-energy computation for problems where the energy dominates the entropy; an example is parallel tempering, in which simulations ... The method yields the free energy of the system as a direct output of the simulation. Traditional Metropolis MC samples phase ...
Introduction to Markov Chain Monte Carlo Simulations and their Statistical Analysis
Bernd A. Berg
2004-10-19T23:59:59.000Z
This article is a tutorial on Markov chain Monte Carlo simulations and their statistical analysis. The theoretical concepts are illustrated through many numerical assignments from the author's book on the subject. Computer code (in Fortran) is available for all subjects covered and can be downloaded from the web.
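The tutorial's downloadable code is in Fortran; for a flavor of the subject, here is a minimal Metropolis chain in Python targeting a standard normal density. The step size and chain length are arbitrary illustrative choices, not values from the article:

```python
import math, random

def metropolis_normal(n_samples, step=1.0, seed=0):
    """Metropolis chain targeting the standard normal density exp(-x^2/2).
    Proposal: symmetric uniform step; accept with prob min(1, p(x')/p(x))."""
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(n_samples):
        x_new = x + rng.uniform(-step, step)
        # log of the acceptance ratio for the standard normal target
        if math.log(rng.random() + 1e-300) < 0.5 * (x * x - x_new * x_new):
            x = x_new            # accept; otherwise keep the current point
        chain.append(x)
    return chain

chain = metropolis_normal(200_000, step=2.0)
mean = sum(chain) / len(chain)                          # should be near 0
var = sum((c - mean) ** 2 for c in chain) / len(chain)  # should be near 1
```

Because successive states are correlated, error bars must account for the autocorrelation time, which is one of the central points of the statistical-analysis part of the tutorial.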
Monte Carlo study of a luminosity detector for the International Linear Collider
H. Abramowicz; R. Ingbir; S. Kananov; A. Levy
2005-08-11T23:59:59.000Z
This paper presents the status of Monte Carlo simulation of one of the luminosity detectors considered for the future e+e- International Linear Collider (ILC). The detector consists of a tungsten/silicon sandwich calorimeter with pad readout. The study was performed for Bhabha scattering events assuming a zero crossing angle for the beams.
Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study
Received 11 January 2006; published 22 February 2006. Proton computed tomography (pCT) has several potential advantages in medical applications, including its favorable dose ...
Collective enhancement of nuclear state densities by the shell model Monte Carlo approach
C. Özen; Y. Alhassid; H. Nakada
2015-01-22T23:59:59.000Z
The shell model Monte Carlo (SMMC) approach allows for the microscopic calculation of statistical and collective properties of heavy nuclei using the framework of the configuration-interaction shell model in very large model spaces. We present recent applications of the SMMC method to the calculation of state densities and their collective enhancement factors in rare-earth nuclei.
Monte-Carlo-Type Techniques for Processing Interval Uncertainty, and Their Geophysical and ...
Ward, Karen
Contact email: vladik@cs.utep.edu. To determine the geophysical structure of a region, we measure ... are independently normally distributed. Problem: the resulting accuracies are not in line with geophysical intuition.
First-row hydrides: Dissociation and ground state energies using quantum Monte Carlo
Anderson, James B.
Arne Lüchow ... Pennsylvania 16802. Received 20 May 1996; accepted 24 July 1996. Accurate ground-state energies comparable ... FN-DQMC method. The residual energy, the nodal error, is due to the error in the nodal structure.
A Combined Density Functional and Monte Carlo Study of Polycarbonate
R. O. Jones and P. Ballone
... and reactivity for organic systems closely related to bisphenol-A polycarbonate (BPA-PC). The results provide a detailed description of polymers, using bisphenol-A polycarbonate (BPA-PC) as an example.
Multivariate Population Balances via Moment and Monte Carlo Simulation Methods: An Important Sol ... For applications of current/future importance, a multivariate description is required, for which the existing ... hopefully, motivate a broader attack on important multivariate population balance problems, including those ...
Alfè, Dario
Structural properties and enthalpy of formation of magnesium hydride from quantum Monte Carlo. Quantum Monte Carlo calculations are used to study the structural properties of magnesium hydride (MgH2), including the pressure ... The energetics of metal hydrides has recently become an issue of large scientific ...
Thermodynamics and quark susceptibilities: a Monte-Carlo approach to the PNJL model
Weise, Wolfram
... on the thermodynamics of the model, both in the case of pure gauge theory and including two quark flavors. In the two-flavor case, we calculate the second-order Taylor expansion coefficients of the thermodynamic grand potential ...
Explicit estimation of higher order modes in fission source distribution of Monte-Carlo calculation
Yamamoto, A.; Sakata, K.; Endo, T. [Nagoya University, Department of Materials, Physics and Energy Engineering, Furo-cho, Chikusa-ku, Nagoya, 464-8603 (Japan)
2013-07-01T23:59:59.000Z
The magnitude of higher order modes in the fission source distribution of a multi-group Monte-Carlo calculation is estimated using the orthogonal property of forward and adjoint fission source distributions. Calculation capabilities for the forward and adjoint fission source distributions for fundamental and higher order modes are implemented in the AEGIS code, which is a two-dimensional transport code based on the method of characteristics. With the calculation results of the AEGIS code, magnitudes of the first to fifth higher order modes in the fission source distribution obtained by the multi-group Monte-Carlo code GMVP are estimated. There are two contributions in the present study: (1) establishment of a surrogate model, which represents convergence of the fission source distribution taking into account the inherent statistical 'noise' of higher order modes of Monte-Carlo calculations, and (2) independent confirmation of the estimated dominance ratio in a Monte-Carlo calculation. The surrogate model would contribute to studies of the inter-cycle correlation and estimation of a sufficient number of inactive/active cycles. (authors)
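The forward/adjoint orthogonality argument can be illustrated numerically. Below, a hypothetical 3-region fission operator is built from chosen modes; biorthogonality of forward and adjoint modes then recovers the amplitude of each mode in a mixed source. The matrix, eigenvalues, and mode shapes are invented for this sketch and are not from AEGIS or GMVP:

```python
import numpy as np

# Hypothetical fission operator A = P D P^{-1}: forward modes are the columns
# of P, with the fundamental and higher-mode eigenvalues on the diagonal of D.
D = np.diag([1.00, 0.60, 0.30])
P = np.array([[1.0,  0.5,  0.2],
              [1.0, -0.5,  0.3],
              [1.0,  0.1, -0.9]])
A = P @ D @ np.linalg.inv(P)

# Adjoint modes are the columns of P^{-T}; biorthogonality <u_n, p_m> = d_nm
# lets us read off the amplitude of each mode in any source S.
U = np.linalg.inv(P).T
S = P[:, 0] + 0.3 * P[:, 1]        # fundamental plus 30% first higher mode
amps = [(U[:, n] @ S) / (U[:, n] @ P[:, n]) for n in range(3)]
# amps recovers [1.0, 0.3, 0.0]
```

In the paper the same projection is applied to a statistically noisy Monte Carlo source, so the recovered higher-mode amplitudes fluctuate around zero once the source has converged.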
Dose distribution close to metal implants in Gamma Knife Radiosurgery: A Monte Carlo study
Yu, K.N.
A Guglielmi Detachable Coil (GDC) system was used to localize and obliterate the aneurysm; soft platinum coils were ... The Monte Carlo system employed is the PRESTA (Parameter Reduced Electron-Step Transport Algorithm) ... may not be predicted correctly by the present treatment planning system, GammaPlan, because the calculations ...
Bayes and Big Data: The Consensus Monte Carlo Algorithm Steven L. Scott1
Cortes, Corinna
Steven L. Scott, Alexander W. Blocker, and co-authors. October 31, 2013. A useful definition of "big data" is data that is too big to comfortably ... by splitting data across multiple machines. Communication between large numbers of machines is expensive ...
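The consensus idea sketched in the abstract can be illustrated with a toy Gaussian model: each "machine" draws from its own subposterior, and the draws are combined by weighted averaging. The flat prior, known variance, and shard-size weights below are simplifying assumptions for this sketch, not the paper's general algorithm:

```python
import random

def worker_draws(shard, sigma2, n_draws, rng):
    """Subposterior draws for the mean of N(theta, sigma2) data under a flat
    prior: theta | shard ~ N(mean(shard), sigma2 / len(shard))."""
    m = sum(shard) / len(shard)
    sd = (sigma2 / len(shard)) ** 0.5
    return [rng.gauss(m, sd) for _ in range(n_draws)]

def consensus(draws_per_worker, weights):
    """Combine aligned draws across workers by a weighted average."""
    wsum = sum(weights)
    n = len(draws_per_worker[0])
    return [sum(w * d[i] for w, d in zip(weights, draws_per_worker)) / wsum
            for i in range(n)]

rng = random.Random(1)
data = [rng.gauss(3.0, 1.0) for _ in range(3000)]
shards = [data[i::3] for i in range(3)]        # 3 "machines", 1000 points each
draws = [worker_draws(s, 1.0, 5000, rng) for s in shards]
combined = consensus(draws, weights=[len(s) for s in shards])
post_mean = sum(combined) / len(combined)      # close to the full-data mean
```

For Gaussian subposteriors this weighted average reproduces the full-data posterior exactly; for non-Gaussian models it is only an approximation, which is the regime the paper analyzes.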
Hale, Barbara N.
Calculation of scaled nucleation rates for water using Monte Carlo generated cluster free energy ... Helmholtz free energy differences, -dFn, are calculated ... inconsistent with the experimental properties of water. Summation of the scaled TIP4P free energy differences ...
Monte Carlo simulation of electron transport in degenerate and inhomogeneous semiconductors
Considers carrier concentrations up to 10^20 cm^-3. Degenerate semiconductors are important for thermoelectric and thermionic ... transport in degenerate semiconductor-based structures. If the electron wavelength is smaller than ...
K-effective of the world: and other concerns for Monte Carlo Eigenvalue calculations
Brown, Forrest B [Los Alamos National Laboratory
2010-01-01T23:59:59.000Z
Monte Carlo methods have been used to compute k{sub eff} and the fundamental mode eigenfunction of critical systems since the 1950s. Despite the sophistication of today's Monte Carlo codes for representing realistic geometry and physics interactions, correct results can be obtained in criticality problems only if users pay attention to source convergence in the Monte Carlo iterations and to running a sufficient number of neutron histories to adequately sample all significant regions of the problem. Recommended best practices for criticality calculations are reviewed and applied to several practical problems for nuclear reactors and criticality safety, including the 'K-effective of the World' problem. Numerical results illustrate the concerns about convergence and bias. The general conclusion is that with today's high-performance computers, improved understanding of the theory, new tools for diagnosing convergence (e.g., Shannon entropy of the fission distribution), and clear practical guidance for performing calculations, practitioners will have a greater degree of confidence than ever of obtaining correct results for Monte Carlo criticality calculations.
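The Shannon entropy diagnostic mentioned above is straightforward to compute from a binned fission source: the entropy is tracked cycle by cycle, and a stationary (flat) trace suggests the source has converged, so earlier cycles can be discarded as inactive. A minimal sketch:

```python
import math

def shannon_entropy(source_counts):
    """Shannon entropy (in bits) of a binned fission-source distribution.
    Plotted cycle by cycle, a flat entropy trace indicates source convergence."""
    total = sum(source_counts)
    h = 0.0
    for c in source_counts:
        if c > 0:                     # empty bins contribute nothing
            p = c / total
            h -= p * math.log2(p)
    return h

h_uniform = shannon_entropy([125] * 8)                   # fully spread source
h_point = shannon_entropy([1000, 0, 0, 0, 0, 0, 0, 0])   # collapsed source
```

A uniform source over 8 bins gives 3 bits; a source collapsed into one bin gives 0, so entropy drifting between such extremes signals an unconverged (or clustering) fission source.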
Monte Carlo Simulation of Alzheimer's Disease in the United States: 2010-2060
Feres, Renato
Michael Blech ... one of the major concerns facing the United States over the next 50 years. This progressive disease is currently the sixth ... First, the simulation is based on the United States population, and second, the simulation models both prevalence and mortality. Both ...
Sequential Monte Carlo in Model Comparison: Example in Cellular Dynamics in Systems Biology
Richardson, David
Mukherjee, L. You, M. West. Published in: JSM Proceedings, Bayesian Statistical Science. Alexandria, VA: American Statistical Association (2009): 1274-1287. Sequential Monte Carlo analysis of time series ... Statistical model assessment is really just beginning in this new field. Single-cell time series data ...
A new approach to Monte Carlo simulations in statistical physics: Wang-Landau sampling
Holzwarth, Natalie
Describes a way of doing simulations in classical statistical physics in a different way: instead of sampling a canonical distribution at fixed temperature, a random walk estimates the density of states directly. The authors apply it to models exhibiting first-order or second-order phase transitions.
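The flavor of Wang-Landau sampling can be shown on a toy problem where the density of states is known exactly: the number of heads k among n coins, with g(k) = C(n, k). The walk accepts moves with probability min(1, g(old)/g(new)) and halves a modification factor whenever the visit histogram is roughly flat. The flatness threshold and refinement schedule below are conventional choices, not values from the paper:

```python
import math, random

def wang_landau_coins(n=10, ln_f_final=1e-6, flat=0.8, seed=2):
    """Wang-Landau estimate of ln g(k) for k heads among n coins; the exact
    density of states is g(k) = C(n, k).  The "energy" here is simply k."""
    rng = random.Random(seed)
    ln_g = [0.0] * (n + 1)
    ln_f = 1.0
    k = 0
    while ln_f > ln_f_final:
        hist = [0] * (n + 1)
        steps = 0
        while True:
            # flip a uniformly chosen coin: k -> k-1 with prob k/n, else k+1
            k_new = k - 1 if rng.randrange(n) < k else k + 1
            if math.log(rng.random() + 1e-300) < ln_g[k] - ln_g[k_new]:
                k = k_new
            ln_g[k] += ln_f          # penalize the visited level ...
            hist[k] += 1             # ... and record the visit
            steps += 1
            if steps % 10_000 == 0:  # periodically test histogram flatness
                if min(hist) > flat * (sum(hist) / len(hist)):
                    break
        ln_f /= 2.0                  # refine the modification factor
    return ln_g

ln_g = wang_landau_coins()
# ln g(k) is determined only up to a constant, so compare anchored at k = 0
errs = [abs((ln_g[k] - ln_g[0]) - math.log(math.comb(10, k)))
        for k in range(11)]
```

Once ln g is known, thermodynamic averages at any temperature follow from a single reweighting, which is what makes the method attractive near first-order transitions.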
Path Integral Monte Carlo Calculation of the Deuterium Hugoniot (arXiv:physics/0001047, 22 Jan 2000)
Militzer, Burkhard
University of Illinois at Urbana-Champaign, Urbana, IL 61801 (January 21, 2000). Restricted path integral ... of the path integral. Further, we compare the results obtained with a free-particle nodal restriction ...
Calculating Risk of Cost Using Monte Carlo Simulations with Fuzzy Parameters in Civil Engineering
Pownuk, Andrzej
Michał Bętkowski, Department of Civil Engineering, Silesian University of Technology, Gliwice, Poland, mb@zeus.polsl.gliwice.pl; Andrzej Pownuk, Department of Civil Engineering, Silesian University of Technology, Gliwice, Poland, pownuk ...
Performance Characteristics of Cathode Materials for Lithium-Ion Batteries: A Monte Carlo Strategy
Subramanian, Venkat
A Monte Carlo strategy to study the performance of cathode materials in lithium-ion batteries. The methodology takes into account ... Published September 26, 2008. Lithium-ion batteries are state-of-the-art power sources for portable ...
Monte Carlo Tree Search for Simulated Car Racing Jacob Fischer1
Togelius, Julian
Jacob Fischer, Nikolaj Falsted, Mathias ... The Open Racing Car Simulator (TORCS) is a popular platform for experimenting with different AI methods in car racing. A variety ... be modified to achieve this. In this paper, we investigate the application of MCTS to simulated car racing.
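MCTS itself is generic; the UCT variant can be sketched on a toy game far simpler than TORCS. Below, players alternately remove 1 or 2 stones from a pile and whoever takes the last stone wins; the game, constants, and iteration count are illustrative choices, not from the paper:

```python
import math, random

def moves(n):
    """Legal moves in the subtraction game: take 1 or 2 stones."""
    return [m for m in (1, 2) if m <= n]

class Node:
    def __init__(self, n, to_move, parent=None, move=None):
        self.n, self.to_move = n, to_move
        self.parent, self.move = parent, move
        self.children, self.untried = [], moves(n)
        self.visits, self.wins = 0, 0.0  # wins for the player who moved INTO this node

def uct_best_move(n0, iters=3000, c=1.4, seed=4):
    rng = random.Random(seed)
    root = Node(n0, to_move=0)
    for _ in range(iters):
        node = root
        # 1. selection: descend fully expanded nodes by the UCB1 score
        while not node.untried and node.children:
            node = max(node.children,
                       key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. expansion: add one unexplored move as a new leaf
        if node.untried:
            m = node.untried.pop(rng.randrange(len(node.untried)))
            child = Node(node.n - m, 1 - node.to_move, node, m)
            node.children.append(child)
            node = child
        # 3. simulation: uniformly random playout to the end of the game
        n, turn = node.n, node.to_move
        winner = 1 - node.to_move        # correct if node is already terminal
        while n > 0:
            n -= rng.choice(moves(n))
            winner, turn = turn, 1 - turn
        # 4. backpropagation of the playout result
        while node is not None:
            node.visits += 1
            if winner == 1 - node.to_move:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

best4 = uct_best_move(4)   # winning move is to take 1 (leave a multiple of 3)
best5 = uct_best_move(5)   # winning move is to take 2
```

The same select/expand/simulate/backpropagate loop carries over to car racing; the hard part the paper tackles is replacing discrete moves and terminal wins with continuous controls and heuristic playout evaluation.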
Study of CANDU Thorium-based Fuel Cycles by Deterministic and Monte Carlo Methods
Paris-Sud XI, Université de
A. Nuttin et al. There is a renewal of interest in self-sustainable thorium fuel cycles applied to various concepts such as Molten ... The aim here, with a shorter-term view, is to re-evaluate the economic competitiveness of once-through thorium ...
The polarized emissivity of a wind-roughened sea surface: A Monte Carlo model
Theiler, James
Bradley G. Henderson ... A Monte Carlo model of the infrared emissivity of a wind-roughened sea surface. The model includes the effects of both shadowing and the reflected component of surface emission, using Stokes vectors to quantify the radiation along a given ray ...
Quantum Monte Carlo study of a disordered 2D Josephson junction array
Stroud, David
W. A. Al-Saidi and D. Stroud. Corresponding author e-mail: al-saidi.1@osu.edu (W. A. Al-Saidi). ... not be established even ...
Sequential Monte Carlo for Simultaneous Passive Device-Free Tracking and Sensor Localization Using
Rabbat, Michael
Men, Beijing Univ. Posts & Telecom., Beijing, China (menad@bupt.edu.cn). This paper presents and evaluates a method for simultaneously tracking a target while localizing the sensor nodes of a passive ...
A Methodological Comparison of Monte Carlo Simulation and Epoch-Era Analysis for
de Weck, Olivier L.
A Methodological Comparison of Monte Carlo Simulation and Epoch-Era Analysis for Tradespace ... Surveys techniques including morphological analysis, scenario planning, semi-quantitative methods (can be used to initialize ...), probabilistic risk assessment (PRA), fault tree analysis (FTA), hazards analysis (HA), and failure modes and effects ...
Ryan, Dominic
Monte Carlo simulations of transverse spin freezing in the three-dimensional frustrated Heisenberg model. Transverse components of the spins freeze, leading to a noncollinear spin structure dominated by ferromagnetic correlations ... as the transverse degrees of freedom order. Theoretical support for a transverse spin-freezing transition ...
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
In finance, stochastic differential equations are used to model the behaviour of stocks, interest rates, exchange rates, weather, electricity/gas demand, crude oil prices, and so on. Example: geometric Brownian motion (the Black-Scholes model for stock prices), dS = r S dt + σ S dW, and the Cox ...
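The geometric Brownian motion example can be simulated directly, since S at time T has a known lognormal form, and a Monte Carlo average of the simulated values should approach S0*exp(rT). The parameter values below are arbitrary illustrative choices:

```python
import math, random

def gbm_terminal(S0, r, sigma, T, n_paths, seed=3):
    """Exact-in-distribution samples of geometric Brownian motion at time T:
    S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z), Z ~ N(0, 1)."""
    rng = random.Random(seed)
    return [S0 * math.exp((r - 0.5 * sigma ** 2) * T
                          + sigma * math.sqrt(T) * rng.gauss(0.0, 1.0))
            for _ in range(n_paths)]

paths = gbm_terminal(S0=100.0, r=0.05, sigma=0.2, T=1.0, n_paths=100_000)
mc_mean = sum(paths) / len(paths)       # Monte Carlo estimate of E[S_T]
target = 100.0 * math.exp(0.05)         # exact E[S_T] = S0 * exp(rT)
```

The O(1/sqrt(N)) statistical error of estimates like this is precisely the cost that the multilevel and variance-reduction techniques in these lectures aim to reduce.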
A Monte Carlo Method Used for the Identification of the Muscle Spindle
Rigas, Alexandros
Models the behavior of the muscle spindle by using a logistic regression model. The system receives input from ... Key words: exact logistic regression, likelihood function, Monte Carlo technique, muscle spindle. The muscle spindle is part of the skeletal muscles and is responsible for the initiation of movement and the maintenance ...
Ilan, Boaz
Monte Carlo simulations of photon transport predict the performance of LSCs based on "type-II" CdSe ... In addition, when the LSC has CdSe-CdTe nanorods that are aligned perpendicular to the top surface, the escape ... [doi: 10.1063/1.3619809]. Photovoltaic (PV) solar cells have become much more efficient over the past few ...
Usefulness of the reversible jump Markov chain Monte Carlo model in regional flood frequency
Ribatet, Mathieu
Revised 3 May 2007; accepted 17 May 2007; published 3 August 2007. Regional flood frequency analysis ... and the index flood approach. Results show that the proposed estimator is well suited to regional ...
Instabilities in Molecular Dynamics Integrators used in Hybrid Monte Carlo Simulations
B. Joo; UKQCD Collaboration
2001-10-11T23:59:59.000Z
We discuss an instability in the leapfrog integration algorithm, widely used in current Hybrid Monte Carlo (HMC) simulations of lattice QCD. We demonstrate the instability in the simple harmonic oscillator (SHO) system, where it is manifest, and in HMC simulations of lattice QCD with dynamical Wilson-Clover fermions, and we discuss implications for future simulations of lattice QCD.
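The SHO instability is easy to reproduce: leapfrog applied to H = p^2/2 + omega^2 q^2/2 keeps the energy error bounded at O(dt^2) for dt*omega < 2 but diverges exponentially beyond that threshold. The step sizes below are chosen to sit on either side of it:

```python
def leapfrog_energy_drift(omega, dt, n_steps):
    """Integrate the SHO (H = p^2/2 + omega^2 q^2 / 2) with kick-drift-kick
    leapfrog and return the maximum |energy error| along the trajectory."""
    q, p = 1.0, 0.0
    e0 = 0.5 * (p * p + omega * omega * q * q)
    max_err = 0.0
    for _ in range(n_steps):
        p -= 0.5 * dt * omega * omega * q    # half kick (force = -omega^2 q)
        q += dt * p                          # full drift
        p -= 0.5 * dt * omega * omega * q    # half kick
        e = 0.5 * (p * p + omega * omega * q * q)
        max_err = max(max_err, abs(e - e0))
    return max_err

stable = leapfrog_energy_drift(omega=1.0, dt=0.1, n_steps=10_000)  # bounded
unstable = leapfrog_energy_drift(omega=1.0, dt=2.1, n_steps=100)   # blows up
```

In HMC the analogue of omega is set by the highest force frequencies in the system, which is why the step size that triggers the instability shrinks as the fermion mass decreases.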
Use of single scatter electron monte carlo transport for medical radiation sciences
Svatos, Michelle M. (Oakland, CA)
2001-01-01T23:59:59.000Z
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
Monte Carlo Adaptive Technique for Sensitivity Analysis of a Large-scale Air Pollution Model
Dimov, Ivan
A Monte Carlo adaptive technique is applied to the sensitivity analysis of the contribution of input parameters to the output variability of a large-scale air pollution model. This model simulates the transport of air pollutants and has been developed by Dr. Z. Zlatev and his
Autologistic Regression Analysis of Spatial-Temporal Binary Data via Monte Carlo
Aukema, Brian
Autologistic regression analysis of spatial-temporal binary data via Monte Carlo maximum likelihood: regression analysis of binary data that are measured on a spatial lattice and repeatedly over discrete time points. We propose a spatial-temporal autologistic regression model and draw statistical inference via
Baes, Maarten
2008-01-01T23:59:59.000Z
that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo... negligible, we recommend the use of smart detectors in Monte Carlo radiative transfer simulations. Smart detectors. Mon. Not. R. Astron. Soc. 391, 617-623 (2008), doi:10.1111/j.1365-2966.2008.13941.x
Takahiro Mizusaki; Noritaka Shimizu
2012-01-27T23:59:59.000Z
We propose a new variational Monte Carlo (VMC) method with an energy variance extrapolation for large-scale shell-model calculations. This variational Monte Carlo is a stochastic optimization method with a projected correlated condensed pair state as a trial wave function, and is formulated with the M-scheme representation of projection operators, the Pfaffian and the Markov-chain Monte Carlo (MCMC). Using this method, we can stochastically calculate approximated yrast energies and electro-magnetic transition strengths. Furthermore, by combining this VMC method with energy variance extrapolation, we can estimate exact shell-model energies.
Smart, Simon Daniel
2014-02-04T23:59:59.000Z
The use of spin-pure and non-orthogonal Hilbert spaces in Full Configuration Interaction Quantum Monte–Carlo Simon Smart Trinity College This dissertation is submitted for the degree of Doctor of Philosophy at the University of Cambridge, December... 2013 For my mother Diana Jean Smart 1956-2013 The use of spin-pure and non-orthogonal Hilbert spaces in Full Configuration Interaction Quantum Monte–Carlo Simon Smart Abstract Full Configuration Interaction Quantum Monte–Carlo (FCIQMC) al- lows...
MCViNE -- An object oriented Monte Carlo neutron ray tracing simulation package
Lin, Jiao Y Y; Granroth, Garrett E; Abernathy, Douglas L; Lumsden, Mark D; Winn, Barry; Aczel, Adam A; Aivazis, Michael; Fultz, Brent
2015-01-01T23:59:59.000Z
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is a versatile Monte Carlo (MC) neutron ray-tracing program that provides researchers with tools for performing computer modeling and simulations that mirror real neutron scattering experiments. By adopting modern software engineering practices such as using composite and visitor design patterns for representing and accessing neutron scatterers, and using recursive algorithms for multiple scattering, MCViNE is flexible enough to handle sophisticated neutron scattering problems including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can take advantage of simulation components in linear-chain-based MC ray tracing packages widely used in instrument design and optimization, as well as NumPy-based components that make prototypes useful and easy to develop. These developments have enabled us to carry out detailed simulations of neutron scatteri...
The energy injection and losses in the Monte Carlo simulations of a diffusive shock
Wang, Xin
2011-01-01T23:59:59.000Z
Although diffusive shock acceleration (DSA) can be simulated with several well-established models, the injection rate from the thermal particles to the superthermal population remains a contentious assumption. In self-consistent Monte Carlo simulations, however, the particle injection rate is intrinsically defined by the prescribed scattering law rather than by an assumed injection function. We examine the correlation of the energy injection with the prescribed multiple-scattering angular distributions. According to the Rankine-Hugoniot conditions, the energy injection and the losses in the simulation system directly determine the slope of the shock energy spectrum. From simulations performed with a multiple scattering law in the dynamical Monte Carlo model, the energy injection and energy loss functions are obtained. As a result, the case applying an anisotropic scattering law produces a small energy injection and large energy losses, leading to a s...
Miura, Shinichi [Institute for Molecular Science, 38 Myodaiji, Okazaki 444-8585 (Japan)
2007-03-21T23:59:59.000Z
In this paper, we present a path integral hybrid Monte Carlo (PIHMC) method for rotating molecules in quantum fluids. This is an extension of our PIHMC for correlated Bose fluids [S. Miura and J. Tanaka, J. Chem. Phys. 120, 2160 (2004)] to handle the molecular rotation quantum mechanically. A novel technique, referred to as an effective potential of quantum rotation, is introduced to incorporate the rotational degree of freedom in the path integral molecular dynamics or hybrid Monte Carlo algorithm. For a permutation move to satisfy Bose statistics, we devise a multilevel Metropolis method combined with a configurational-bias technique for efficiently sampling the permutation and the associated atomic coordinates. We then applied the PIHMC to a helium-4 cluster doped with a carbonyl sulfide molecule. The effects of the quantum rotation on the solvation structure and energetics were examined. Translational and rotational fluctuations of the dopant in the superfluid cluster were also analyzed.
Rubery, M. S.; Horsfield, C. J. [Plasma Physics Department, AWE plc, Reading RG7 4PR (United Kingdom)]; Herrmann, H.; Kim, Y.; Mack, J. M.; Young, C.; Evans, S.; Sedillo, T.; McEvoy, A.; Caldwell, S. E. [Plasma Physics Department, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)]; Grafil, E.; Stoeffl, W. [Physics, Lawrence Livermore National Laboratory, Livermore, California 94551 (United States)]; Milnes, J. S. [Photek Limited UK, 26 Castleham Road, St. Leonards-on-sea TN38 9NS (United Kingdom)]
2013-07-15T23:59:59.000Z
The gas Cherenkov detectors at NIF and Omega measure several ICF burn characteristics by detecting multi-MeV nuclear γ emissions from the implosion. Of primary interest are γ bang-time (GBT) and burn width, defined respectively as the time between initial laser-plasma interaction and the peak of the fusion reaction history, and the FWHM of the reaction history. To accurately calculate such parameters the collaboration relies on Monte Carlo codes, such as GEANT4 and ACCEPT, for diagnostic properties that cannot be measured directly. This paper describes a series of experiments performed at the High Intensity γ Source (HIγS) facility at Duke University to validate the geometries and material data used in the Monte Carlo simulations. Results published here show that model-driven parameters such as intensity and temporal response can be used with less than 50% uncertainty for all diagnostics and facilities.
Hard-sphere melting and crystallization with event-chain Monte Carlo
Isobe, Masaharu
2015-01-01T23:59:59.000Z
We simulate crystallization and melting with local Monte Carlo (LMC), event-chain Monte Carlo (ECMC), and with event-driven molecular dynamics (EDMD) in systems with up to one million three-dimensional hard spheres. We illustrate that our implementations of the three algorithms rigorously coincide in their equilibrium properties. We then study nucleation in the NVE ensemble from the fcc crystal into the homogeneous liquid phase and from the liquid into the homogeneous crystal. ECMC and EDMD both approach equilibrium orders of magnitude faster than LMC. ECMC is also notably faster than EDMD, especially for the equilibration into a crystal from a disordered initial condition at high density. ECMC can be trivially implemented for hard-sphere and for soft-sphere potentials, and we suggest possible applications of this algorithm for studying jamming and the physics of glasses, as well as disordered systems.
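The lifting move at the heart of event-chain Monte Carlo is easiest to see in one dimension. The sketch below (helper names are hypothetical, and hard rods on a ring stand in for the paper's 3D hard spheres) moves one rod forward until it touches its neighbour, then transfers the remaining displacement budget to the rod it hit:

```python
import random

def ecmc_hard_rods(positions, rod_len, box_len, chain_len, n_chains, seed=0):
    """Event-chain Monte Carlo for hard rods on a ring: the active rod
    slides in +x until it contacts its right neighbour, which then
    becomes active (a 'lift'), until a total budget chain_len is spent."""
    rng = random.Random(seed)
    pos = sorted(positions)          # cyclic order is preserved by ECMC
    n = len(pos)
    for _ in range(n_chains):
        k = rng.randrange(n)         # random starting rod
        remaining = chain_len
        while remaining > 0.0:
            nxt = (k + 1) % n
            gap = (pos[nxt] - pos[k]) % box_len - rod_len  # free space ahead
            step = min(remaining, gap)
            pos[k] = (pos[k] + step) % box_len
            remaining -= step
            k = nxt                  # lift: the hit rod becomes active
    return pos

rods = ecmc_hard_rods([0.0, 2.5, 5.0, 7.5], rod_len=1.0, box_len=10.0,
                      chain_len=3.0, n_chains=200)
# the configuration must remain overlap-free after every chain
no_overlap = all((rods[(i + 1) % 4] - rods[i]) % 10.0 >= 1.0 - 1e-9
                 for i in range(4))
```

The irreversible, always-forward motion is what lets ECMC decorrelate faster than local Metropolis moves in the dense regime studied in the paper.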
Rasch, Kevin M.; Hu, Shuming; Mitas, Lubos [Center for High Performance Simulation and Department of Physics, North Carolina State University, Raleigh, North Carolina 27695 (United States)]
2014-01-28T23:59:59.000Z
We elucidate the origin of large differences (two-fold or more) in the fixed-node errors between the first- vs second-row systems for single-configuration trial wave functions in quantum Monte Carlo calculations. This significant difference in the valence fixed-node biases is studied across a set of atoms, molecules, and also Si, C solid crystals. We show that the key features which affect the fixed-node errors are the differences in electron density and the degree of node nonlinearity. The findings reveal how the accuracy of the quantum Monte Carlo varies across a variety of systems, provide new perspectives on the origins of the fixed-node biases in calculations of molecular and condensed systems, and carry implications for pseudopotential constructions for heavy elements.
M. A. Novotny; Shannon M. Wheeler
2002-11-02T23:59:59.000Z
We present the Monte Carlo with Absorbing Markov Chains (MCAMC) method for extremely long kinetic Monte Carlo simulations. The MCAMC algorithm does not modify the system dynamics. It is extremely useful for models with discrete state spaces when low-temperature simulations are desired. To illustrate the strengths and limitations of this algorithm we introduce a simple model involving random walkers on an energy landscape. This simple model has some of the characteristics of protein folding and could also be experimentally realizable in domain motion in nanoscale magnets. We find that even the simplest MCAMC algorithm can speed up calculations by many orders of magnitude. More complicated MCAMC simulations can gain further increases in speed by orders of magnitude.
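The core MCAMC trick for a single trapped state (the simplest absorbing Markov chain) can be sketched in a few lines: instead of simulating the many rejected moves at low temperature one by one, the lifetime in the current state is drawn directly from the corresponding geometric distribution. The names below are illustrative, not from the paper:

```python
import math
import random

def mcamc_lifetime(p_exit, rng):
    """Draw the number of MC steps spent in the current state before an
    accepted exit move, from the geometric distribution with success
    probability p_exit -- statistically equivalent to simulating every
    rejection, but O(1) work instead of O(1/p_exit)."""
    u = 1.0 - rng.random()           # uniform in (0, 1], avoids log(0)
    return int(math.log(u) / math.log(1.0 - p_exit)) + 1

rng = random.Random(1)
p = 1e-3                             # exit probability per step (low T)
samples = [mcamc_lifetime(p, rng) for _ in range(200000)]
mean_lifetime = sum(samples) / len(samples)   # expect about 1/p = 1000
```

This is why MCAMC "does not modify the system dynamics": the waiting-time statistics are exactly those of the underlying rejection process.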
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Tringe, J. W.; Ileri, N.; Levie, H. W.; Stroeve, P.; Ustach, V.; Faller, R.; Renaud, P.
2015-08-01T23:59:59.000Z
We use Molecular Dynamics and Monte Carlo simulations to examine molecular transport phenomena in nanochannels, explaining a four-orders-of-magnitude difference in wheat germ agglutinin (WGA) protein diffusion rates observed by fluorescence correlation spectroscopy (FCS) and by direct imaging of fluorescently-labeled proteins. We first use the ESPResSo Molecular Dynamics code to estimate the surface transport distance for neutral and charged proteins. We then employ a Monte Carlo model to calculate the paths of protein molecules on surfaces and in the bulk liquid transport medium. Our results show that the transport characteristics depend strongly on the degree of molecular surface coverage. Atomic force microscope characterization of surfaces exposed to WGA proteins for 1000 s shows large protein aggregates consistent with the predicted coverage. These calculations and experiments provide useful insight into the details of molecular motion in confined geometries.
Calculating alpha Eigenvalues in a Continuous-Energy Infinite Medium with Monte Carlo
Betzler, Benjamin R. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory
2012-09-04T23:59:59.000Z
The {alpha} eigenvalue has implications for time-dependent problems where the system is sub- or supercritical. We present methods and results from calculating the {alpha}-eigenvalue spectrum for a continuous-energy infinite medium with a simplified Monte Carlo transport code. We formulate the {alpha}-eigenvalue problem, detail the Monte Carlo code physics, and provide verification and results. We have a method for calculating the {alpha}-eigenvalue spectrum in a continuous-energy infinite medium. The continuous-time Markov process described by the transition rate matrix provides a way of obtaining the {alpha}-eigenvalue spectrum and kinetic modes. These are useful for approximating the time dependence of the system.
Pérez-Andújar, Angélica [Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 (United States)]; Zhang, Rui; Newhauser, Wayne [Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and The University of Texas Graduate School of Biomedical Sciences at Houston, 6767 Bertner Avenue, Houston, Texas 77030 (United States)]
2013-12-15T23:59:59.000Z
Purpose: Stray neutron radiation is of concern after radiation therapy, especially in children, because of the high risk it might carry for secondary cancers. Several previous studies predicted the stray neutron exposure from proton therapy, mostly using Monte Carlo simulations. Promising attempts to develop analytical models have also been reported, but these were limited to only a few proton beam energies. The purpose of this study was to develop an analytical model to predict leakage neutron equivalent dose from passively scattered proton beams in the 100-250-MeV interval. Methods: To develop and validate the analytical model, the authors used values of equivalent dose per therapeutic absorbed dose (H/D) predicted with Monte Carlo simulations. The authors also characterized the behavior of the mean neutron radiation-weighting factor, w{sub R}, as a function of depth in a water phantom and distance from the beam central axis. Results: The simulated and analytical predictions agreed well. On average, the percentage difference between the analytical model and the Monte Carlo simulations was 10% for the energies and positions studied. The authors found that w{sub R} was highest at the shallowest depth and decreased with depth until around 10 cm, where it started to increase slowly with depth. This was consistent among all energies. Conclusion: Simple analytical methods are promising alternatives to complex and slow Monte Carlo simulations to predict H/D values. The authors' results also provide improved understanding of the behavior of w{sub R}, which strongly depends on depth but is nearly independent of lateral distance from the beam central axis.
Perfetti, Christopher M [ORNL]; Martin, William R [University of Michigan]; Rearden, Bradley T [ORNL]; Williams, Mark L [ORNL]
2012-01-01T23:59:59.000Z
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30T23:59:59.000Z
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
Monte Carlo Studies of the CALICE AHCAL Tiles Gaps and Non-uniformities
Felix Sefkow; Angela Lucaci-Timoce
2010-06-18T23:59:59.000Z
The CALICE analog HCAL is a highly granular calorimeter proposed for the International Linear Collider. It is based on scintillator tiles read out by silicon photomultipliers (SiPMs). The effects of gaps between the calorimeter tiles, as well as of the non-uniform response of the tiles, on the energy resolution are studied in Monte Carlo events. It is shown that these types of effects do not have a significant influence on the measurement of hadron showers.
Application of diffusion Monte Carlo to materials dominated by van der Waals interactions
Benali, Anouar [Argonne National Laboratory (ANL)]; Shulenburger, Luke [Sandia National Laboratory (SNL)]; Romero, Nichols [Argonne National Laboratory (ANL)]; Kim, Jeongnim [ORNL]; Von Lilienfeld, Anatole [University of Basel]
2014-01-01T23:59:59.000Z
Van der Waals forces are notoriously difficult to account for from first principles. We perform extensive calculations to assess the usefulness and validity of diffusion quantum Monte Carlo when applied to van der Waals forces. We present results for noble gas solids and clusters - archetypical van der Waals dominated assemblies - as well as a relevant pi-pi stacking supramolecular complex: DNA with the intercalating anti-cancer drug Ellipticine.
Equation of state of strongly coupled quark--gluon plasma -- Path integral Monte Carlo results
V. S. Filinov; M. Bonitz; Y. B. Ivanov; V. V. Skokov; P. R. Levashov; V. E. Fortov
2009-05-04T23:59:59.000Z
A strongly coupled plasma of quark and gluon quasiparticles at temperatures from $ 1.1 T_c$ to $3 T_c$ is studied by path integral Monte Carlo simulations. This method extends previous classical nonrelativistic simulations based on a color Coulomb interaction to the quantum regime. We present the equation of state and find good agreement with lattice results. Further, pair distribution functions and color correlation functions are computed indicating strong correlations and liquid-like behavior.
Maximum likelihood parameter estimation in time series models using sequential Monte Carlo
Yildirim, Sinan
2013-06-11T23:59:59.000Z
, respectively. This approach is useful for handling the case where the columns of Y are generated sequentially in time, such as in audio signal processing. Usually a very large number of columns in Y leads to the necessity of online algorithms to learn the model...
Wang, Huihui; Meng, Lin; Liu, Dagang; Liu, Laqun [School of Physical Electronics, University of Electronic Science and Technology of China, Chengdu 610054 (China)]
2013-12-15T23:59:59.000Z
A particle-in-cell/Monte Carlo code is developed to rescale the microwave breakdown theory put forward by Vyskrebentsev and Raizer. The simulations show that there is a distinct error in this theory when the high-energy tail of the electron energy distribution function increases. A rescaling factor is proposed to modify the theory, and the change rule of the rescaling factor is presented.
Imaginary time correlations and the phaseless auxiliary field quantum Monte Carlo
Motta, M.; Galli, D. E.; Vitali, E. [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, 20133 Milano (Italy)]; Moroni, S. [IOM-CNR DEMOCRITOS National Simulation Center and SISSA, via Bonomea 265, 34136 Trieste (Italy)]
2014-01-14T23:59:59.000Z
The phaseless Auxiliary Field Quantum Monte Carlo (AFQMC) method provides a well-established approximation scheme for accurate calculations of ground state energies of many-fermion systems. Here we address the possibility of calculating imaginary time correlation functions with the phaseless AFQMC. We give a detailed description of the technique and test the quality of the results for static properties and imaginary time correlation functions against exact values for small systems.
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
Hall, Clifford [Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States); School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States)]; Ji, Weixiao [Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States)]; Blaisten-Barojas, Estela, E-mail: blaisten@gmu.edu [Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States); School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States)]
2014-02-01T23:59:59.000Z
We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. -- Highlights: •We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU—GPU duet. •The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU—GPU implementation. •Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. •The testbed involves a polymeric system of oligopyrroles in the condensed phase. •The CPU—GPU parallelization includes dipole—dipole and Mie—Jones classic potentials.
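Independent of the CPU-GPU machinery, the MMC kernel being accelerated is just the Metropolis acceptance rule. A minimal single-threaded Python sketch for one harmonic degree of freedom (names and parameters are illustrative, not from the paper's engine):

```python
import math
import random

def metropolis_harmonic(beta, n_steps, step=1.0, seed=0):
    """Metropolis Monte Carlo sampling of x from exp(-beta*x^2/2):
    propose a symmetric displacement, accept with min(1, exp(-beta*dE))."""
    rng = random.Random(seed)
    x, e = 0.0, 0.0
    accepted = 0
    xs = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)   # symmetric proposal
        e_new = 0.5 * x_new * x_new
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            x, e = x_new, e_new
            accepted += 1
        xs.append(x)                           # rejected moves repeat x
    return xs, accepted / n_steps

xs, acc_rate = metropolis_harmonic(beta=1.0, n_steps=100000)
var = sum(v * v for v in xs) / len(xs)   # should approach <x^2> = 1/beta
```

On a GPU the same accept/reject kernel is evaluated for many molecules or replicas in parallel, which is where the reported speedups come from.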
An analysis of 4-quark energies in SU(2) lattice Monte Carlo
Sadataka Furui; Bilal Masud
1998-09-12T23:59:59.000Z
Energies of four-quark systems with tetrahedral geometry, measured by the static quenched SU(2) lattice Monte Carlo method, are analyzed by parametrizing the gluon overlap factor in the form exp(-[b_s E A + \sqrt{b_s} F P]), where A and P are the area and the perimeter defined mainly by the positions of the four quarks, b_s is the string constant in the 2-quark potentials, and E, F are constants.
Monte Carlo Study of Patchy Nanostructures Self-Assembled from a Single Multiblock Chain
Jakub Krajniak; Michal Banaszak
2014-10-15T23:59:59.000Z
We present a lattice Monte Carlo simulation for a multiblock copolymer chain of length N=240 and microarchitecture $(10-10)_{12}$. The simulation was performed using the Monte Carlo method with the Metropolis algorithm. We measured the average energy, heat capacity, mean squared radius of gyration, and the histogram of the cluster count distribution. These quantities were investigated as a function of temperature and of the incompatibility between segments, quantified by the parameter $\omega$. We determined the temperature of the coil-globule transition and constructed the phase diagram, which exhibits a variety of patchy nanostructures. The presented results are in qualitative agreement with those of the off-lattice Monte Carlo method reported earlier, with a significant exception for small incompatibilities $\omega$ and low temperatures, where 3-cluster patchy nanostructures are observed in contrast to the 2-cluster structures observed for the off-lattice $(10-10)_{12}$ chain. We attribute this difference to the considerable stiffness of lattice chains in comparison to that of off-lattice chains.
Dornheim, Tobias; Groth, Simon; Filinov, Alexey; Bonitz, Michael
2015-01-01T23:59:59.000Z
The uniform electron gas (UEG) at finite temperature is of high current interest due to its key relevance for many applications including dense plasmas and laser excited solids. In particular, density functional theory heavily relies on accurate thermodynamic data for the UEG. Until recently, the only existing first-principle results had been obtained for $N=33$ electrons with restricted path integral Monte Carlo (RPIMC), for low to moderate density, $r_s = \\overline{r}/a_B \\gtrsim 1$. This data has been complemented by Configuration path integral Monte Carlo (CPIMC) simulations for $r_s \\leq 1$ that substantially deviate from RPIMC towards smaller $r_s$ and low temperature. In this work, we present results from an independent third method---the recently developed permutation blocking path integral Monte Carlo (PB-PIMC) approach [T. Dornheim \\textit{et al.}, NJP \\textbf{17}, 073017 (2015)] which we extend to the UEG. Interestingly, PB-PIMC allows us to perform simulations over the entire density range down to...
Alhassan, Erwin; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael
2013-01-01T23:59:59.000Z
Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-formated libraries generated using the TALYS based system were processed into ACE format with NJOY99.336 code and used as input into the Serpent Monte Carlo neutron transport code to obtain distribution in keff. The keff distribution obtained was compared with the latest major nuclear data libraries - JEFF-3.1.2, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on chi square values obtained using the Pu-239 Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably from 748 to 443 pcm by applying a more rigid acceptance criteria for accepting random files.
Erwin Alhassan; Henrik Sjöstrand; Junfeng Duan; Cecilia Gustavsson; Arjan Koning; Stephan Pomp; Dimitri Rochman; Michael Österlund
2013-04-04T23:59:59.000Z
Nonequilibrium candidate Monte Carlo: A new tool for efficient equilibrium simulation
Nilmeier, Jerome P.; Crooks, Gavin E.; Minh, David D. L.; Chodera, John D.
2011-11-08T23:59:59.000Z
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from both a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
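A minimal sketch of the nonequilibrium-candidate idea under simplifying assumptions (a 1D double well, deterministic shift perturbations alternating with Metropolis relaxation; all names are illustrative, not the authors' code): the candidate is built by a finite-time driven protocol and accepted with min(1, exp(-βW)), where W is the accumulated protocol work rather than the instantaneous energy difference:

```python
import math
import random

def ncmc_double_well(beta=3.0, n_moves=20000, n_relax=5, n_pert=10,
                     shift=2.0, seed=0):
    """Nonequilibrium candidate Monte Carlo on U(x) = (x^2 - 1)^2.
    Each candidate alternates small deterministic shifts of x (whose
    energy changes accumulate as protocol work W) with ordinary
    Metropolis relaxation at the fixed potential; the whole driven
    trajectory is accepted with min(1, exp(-beta * W))."""
    rng = random.Random(seed)
    U = lambda x: (x * x - 1.0) ** 2

    def relax(x):
        # propagation kernel: plain Metropolis at the current potential
        for _ in range(n_relax):
            y = x + rng.uniform(-0.3, 0.3)
            du = U(y) - U(x)
            if du <= 0.0 or rng.random() < math.exp(-beta * du):
                x = y
        return x

    x, samples = 1.0, []
    for _ in range(n_moves):
        d = rng.choice((-1.0, 1.0)) * shift / n_pert  # protocol direction
        y, work = x, 0.0
        for _ in range(n_pert):
            work += U(y + d) - U(y)   # work of the deterministic shift
            y = relax(y + d)          # relax between perturbations
        if work <= 0.0 or rng.random() < math.exp(-beta * work):
            x = y                     # accept the driven candidate
        samples.append(x)
    return samples

xs = ncmc_double_well()
mean_x = sum(xs) / len(xs)                      # symmetric well: near 0
frac_right = sum(1 for v in xs if v > 0) / len(xs)
```

Because the driven shifts carry the walker across the barrier while the work-based acceptance preserves the equilibrium distribution, both wells are sampled far more readily than with instantaneous large jumps.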
Kurebayashi, Shinya, 1976-
2004-01-01T23:59:59.000Z
Measurements from three classes of direct-drive implosions at the OMEGA laser system [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)] were combined with Monte-Carlo simulations to investigate models for determining ...
Stanley, H. Eugene
Liquid-Liquid Phase Transition in Confined Water: A Monte Carlo Study
Martin Meyer and H. Eugene Stanley*
Center for Polymer Studies and Department of Physics, Boston University, Boston, Massachusetts
Majumdar, Amit
there is interest in simulating enormously large Monte Carlo particle transport problems for neutrons and photons ... i.e., the end of a time step. Besides absorption, the photons may undergo Thomson scattering. The overall
Erickson, Lori
1995-01-01T23:59:59.000Z
Monte Carlo modeling techniques using mean information fields (MIF), developed by Torsten Hagerstrand in the 1950s, were integrated with a geographic information system (GIS) to simulate lost person behavior in wilderness areas. Big Bend Ranch State...
Tutt, Teresa Elizabeth
2009-05-15T23:59:59.000Z
The Monte Carlo method is an invaluable tool in the field of radiation protection, used to calculate shielding effectiveness as well as dose for medical applications. With few exceptions, most of the objects currently simulated have been homogeneous...
A Monte-Carlo Method without Grid to Compute the Exchange Coefficient in the Double Porosity Model
Boyer, Edmond
Classification: 76S05 (65C05 76M35). Published in Monte Carlo Methods and Applications 8:2, 129-147 (2002). F. Campillo and A. Lejay: the method consists in transforming (1) into a coupled system for the matrix and fracture pressures P_m and P_f, weighted by the volume fractions Meas(Omega_m)/Meas(Omega) and Meas(Omega_f)/Meas(Omega).
A Positive-Weight Next-to-Leading-Order Monte Carlo for e+e- Annihilation to Hadrons
Oluseyi Latunde-Dada; Stefan Gieseke; Bryan Webber
2007-02-20T23:59:59.000Z
We apply the positive-weight Monte Carlo method of Nason for simulating QCD processes accurate to Next-To-Leading Order to the case of e+e- annihilation to hadrons. The method entails the generation of the hardest gluon emission first and then subsequently adding a `truncated' shower before the emission. We have interfaced our result to the Herwig++ shower Monte Carlo program and obtained better results than those obtained with Herwig++ at leading order with a matrix element correction.
Radiation doses in cone-beam breast computed tomography: A Monte Carlo simulation study
Yi Ying; Lai, Chao-Jen; Han Tao; Zhong Yuncheng; Shen Youtao; Liu Xinming; Ge Shuaiping; You Zhicheng; Wang Tianpeng; Shaw, Chris C. [Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)
2011-02-15T23:59:59.000Z
Purpose: In this article, we describe a method to estimate the spatial dose variation, average dose and mean glandular dose (MGD) for a real breast using Monte Carlo simulation based on cone beam breast computed tomography (CBBCT) images. We present and discuss the dose estimation results for 19 mastectomy breast specimens, 4 homogeneous breast models, 6 ellipsoidal phantoms, and 6 cylindrical phantoms. Methods: To validate the Monte Carlo method for dose estimation in CBBCT, we compared the Monte Carlo dose estimates with the thermoluminescent dosimeter measurements at various radial positions in two polycarbonate cylinders (11- and 15-cm in diameter). Cone-beam computed tomography (CBCT) images of 19 mastectomy breast specimens, obtained with a bench-top experimental scanner, were segmented and used to construct 19 structured breast models. Monte Carlo simulation of CBBCT with these models was performed and used to estimate the point doses, average doses, and mean glandular doses for unit open air exposure at the iso-center. Mass based glandularity values were computed and used to investigate their effects on the average doses as well as the mean glandular doses. Average doses for 4 homogeneous breast models were estimated and compared to those of the corresponding structured breast models to investigate the effect of tissue structures. Average doses for ellipsoidal and cylindrical digital phantoms of identical diameter and height were also estimated for various glandularity values and compared with those for the structured breast models. Results: The absorbed dose maps for structured breast models show that doses in the glandular tissue were higher than those in the nearby adipose tissue. Estimated average doses for the homogeneous breast models were almost identical to those for the structured breast models (p=1). 
Normalized average doses estimated for the ellipsoidal phantoms were similar to those for the structured breast models (root mean square (rms) percentage difference=1.7%; p=0.01), whereas those for the cylindrical phantoms were significantly lower (rms percentage difference=7.7%; p<0.01). Normalized MGDs were found to decrease with increasing glandularity. Conclusions: Our results indicate that it is sufficient to use homogeneous breast models derived from CBCT generated structured breast models to estimate the average dose. This investigation also shows that ellipsoidal digital phantoms of similar dimensions (diameter and height) and glandularity to actual breasts may be used to represent a real breast to estimate the average breast dose with Monte Carlo simulation. We have also successfully demonstrated the use of structured breast models to estimate the true MGDs and shown that the normalized MGDs decreased with the glandularity as previously reported by other researchers for CBBCT or mammography.
Charged-Particle Thermonuclear Reaction Rates: I. Monte Carlo Method and Statistical Distributions
Richard Longland; Christian Iliadis; Art Champagne; Joe Newton; Claudio Ugalde; Alain Coc; Ryan Fitzgerald
2010-04-23T23:59:59.000Z
A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering in the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended "classical" rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless "minimum" (or "lower limit") and "maximum" (or "upper limit") reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters mu and sigma. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this series (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this series (Paper III). In the fourth paper of this series (Paper IV) we compare our new reaction rates to previous results.
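The sampling scheme can be sketched as follows. The two resonance strengths and their lognormal uncertainty factors below are invented illustrative numbers, not values from the evaluation; real evaluations sample Gaussian, lognormal, and chi-squared PDFs for many input quantities.

```python
import math
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10000

# Hypothetical inputs: two narrow resonances whose strengths (in arbitrary
# rate units at one fixed temperature) carry lognormal uncertainties with
# factor-uncertainties of 1.2 and 1.3.
median_strengths = np.array([3.0e-4, 1.1e-3])
sigmas = np.log([1.2, 1.3])

# Sample the total rate many times from the input PDFs.
rates = np.zeros(n_samples)
for strength, sig in zip(median_strengths, sigmas):
    rates += strength * rng.lognormal(0.0, sig, n_samples)

# Low, median (recommended), and high rates from the 0.16/0.50/0.84 quantiles.
low, med_rate, high = np.quantile(rates, [0.16, 0.50, 0.84])

# Lognormal summary of the output PDF: mu and sigma from ln(rate).
mu, sigma = np.log(rates).mean(), np.log(rates).std()
```

The 0.16/0.84 quantiles replace the statistically meaningless "minimum"/"maximum" rates, and exp(mu) approximates the median rate when the lognormal summary is a good fit.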
Doebling, S.W.; Farrar, C.R. [Los Alamos National Lab., NM (United States); Cornwell, P.J. [Rose Hulman Inst. of Tech., Terre Haute, IN (United States)
1998-02-01T23:59:59.000Z
This paper presents a comparison of two techniques used to estimate the statistical confidence intervals on modal parameters identified from measured vibration data. The first technique is Monte Carlo simulation, which involves the repeated simulation of random data sets based on the statistics of the measured data and an assumed distribution of the variability in the measured data. A standard modal identification procedure is repeatedly applied to the randomly perturbed data sets to form a statistical distribution on the identified modal parameters. The second technique is the Bootstrap approach, where individual Frequency Response Function (FRF) measurements are randomly selected with replacement to form an ensemble average. This procedure, in effect, randomly weights the various FRF measurements. These weighted averages of the FRFs are then put through the modal identification procedure. The modal parameters identified from each randomly weighted data set are then used to define a statistical distribution for these parameters. The basic difference between the two techniques is that the Monte Carlo technique requires an assumption about the form of the distribution of the variability in the measured data, while the bootstrap technique does not. Also, the Monte Carlo technique can only estimate random errors, while the bootstrap statistics represent both random and bias (systematic) variability such as that arising from changing environmental conditions. However, the bootstrap technique requires that every frequency response function be saved for each average during the data acquisition process. Neither method can account for bias introduced during the estimation of the FRFs. This study has been motivated by a program to develop vibration-based damage identification procedures.
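The bootstrap half of the comparison can be sketched as follows. The synthetic FRF data and the peak-picking "identification" are hypothetical stand-ins for real measurements and a real modal identification routine.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical data: 30 noisy FRF magnitude measurements on a frequency
# grid, with a resonance near 10 Hz.
freqs = np.linspace(5.0, 15.0, 201)
true_frf = 1.0 / np.abs(10.0 ** 2 - freqs ** 2 + 1j * 0.5 * freqs)
frfs = true_frf + rng.normal(0.0, 0.002, (30, freqs.size))

def identify(frf_avg):
    """Stand-in identification: return the peak frequency of the average FRF."""
    return freqs[np.argmax(frf_avg)]

# Bootstrap: resample the 30 measurements with replacement, average,
# re-identify, and repeat to build a distribution on the parameter.
boot = []
for _ in range(1000):
    idx = rng.integers(0, 30, 30)
    boot.append(identify(frfs[idx].mean(axis=0)))
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
```

Note that no distributional assumption on the measurement noise is made anywhere in the resampling loop, which is exactly the contrast with the Monte Carlo technique drawn in the abstract.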
Monte-Carlo Simulation of Exclusive Channels in e+e- Annihilation at Low Energy
D. Anipko; S. Eidelman; A. Pak
2003-12-25T23:59:59.000Z
A software package for Monte-Carlo simulation of exclusive e+e- annihilation channels, written in C++ for Linux/Solaris platforms, has been developed. It incorporates matrix elements for several mechanisms of multipion production in a model of sequential two- and three-body resonance decays. Possible charge states of intermediate and final particles are accounted for automatically under the assumption of isospin conservation. Interference effects can be taken into account. The package structure allows adding new matrix elements written in a gauge-invariant form.
A Hybrid (Monte-Carlo/Deterministic) Approach for Multi-Dimensional Radiation Transport
Guillaume Bal; Anthony Davis; Ian Langmore
2011-05-07T23:59:59.000Z
A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.
Kinetic lattice Monte Carlo simulations of interdiffusion in strained silicon germanium alloys
Chen, Renyu; Dunham, Scott T.
2010-03-03T23:59:59.000Z
Point-defect-mediated diffusion processes are investigated in strained SiGe alloys using the kinetic lattice Monte Carlo (KLMC) simulation technique. The KLMC simulator incorporates an augmented lattice domain and includes defect structures, atomistic hopping mechanisms, and the stress dependence of transition rates obtained from density functional theory calculations. Vacancy-mediated interdiffusion in strained SiGe alloys is analyzed, and the stress effect caused by the induced strain of germanium is quantified separately from that due to germanium-vacancy binding. The results indicate that both effects have substantial impact on interdiffusion. © 2010 American Vacuum Society.
Thermonuclear reaction rate of $^{18}$Ne($\\alpha$,$p$)$^{21}$Na from Monte-Carlo calculations
P. Mohr; R. Longland; C. Iliadis
2014-12-14T23:59:59.000Z
The $^{18}$Ne($\\alpha$,$p$)$^{21}$Na reaction impacts the break-out from the hot CNO-cycles to the $rp$-process in type I X-ray bursts. We present a revised thermonuclear reaction rate, which is based on the latest experimental data. The new rate is derived from Monte-Carlo calculations, taking into account the uncertainties of all nuclear physics input quantities. In addition, we present the reaction rate uncertainty and probability density versus temperature. Our results are also consistent with estimates obtained using different indirect approaches.
A Monte Carlo study of the distribution of parameter estimators in a dual exponential decay model
Garcia, Raul
1969-01-01T23:59:59.000Z
of an estimate of the reliability of the parameter estimates calculated. In 1965, Bell and Garcia [2] developed a computer program which permits a solution of the parameters without the time-consuming effort of manual calculations. The same year, Rossing [3... A MONTE CARLO STUDY OF THE DISTRIBUTION OF PARAMETER ESTIMATORS IN A DUAL EXPONENTIAL DECAY MODEL. A Thesis by RAUL GARCIA. Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree...
Monte Carlo calculations of the physical properties of RDX, {beta}-HMX, and TATB
Sewell, T.D.
1997-09-01T23:59:59.000Z
Atomistic Monte Carlo simulations in the NpT ensemble are used to calculate the physical properties of crystalline RDX, {beta}-HMX, and TATB. Among the issues being considered are the effects of various treatments of the intermolecular potential, inclusion of intramolecular flexibility, and simulation size dependence of the results. Calculations of the density, lattice energy, and lattice parameters are made over a wide range of pressures; thereby allowing for predictions of the bulk and linear coefficients of isothermal expansion of the crystals. Comparison with experiment is made where possible.
S. Frixione; E. Laenen; P. Motylinski; B. R. Webber
2007-02-20T23:59:59.000Z
We explain how angular correlations in leptonic decays of vector bosons and top quarks can be included in Monte Carlo parton showers, in particular those matched to NLO QCD computations. We consider the production of $n$ pairs of leptons, originating from the decays of $n$ electroweak vector bosons or of $n$ top quarks, in the narrow-width approximation. In the latter case, the information on the $n$ $b$ quarks emerging from the decays is also retained. We give results of implementing this procedure in MC@NLO.
Perera, Meewanage Dilina N [ORNL; Li, Ying Wai [ORNL; Eisenbach, Markus [ORNL; Vogel, Thomas [Los Alamos National Laboratory (LANL); Landau, David P [University of Georgia, Athens, GA
2015-01-01T23:59:59.000Z
We describe the study of thermodynamics of materials using replica-exchange Wang-Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang-Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first-principles calculations. We demonstrate that our framework leads to a significant speedup without compromising the accuracy and precision, and facilitates the study of much larger systems than is possible with its serial counterpart.
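REWL parallelizes the Wang-Landau idea by splitting the energy range into overlapping windows sampled by replicas that exchange configurations. The serial kernel it builds on can be sketched for a toy 4x4 periodic Ising model, used here as a hypothetical stand-in for the Heisenberg iron model in the study:

```python
import math, random

random.seed(3)
L = 4                      # 4x4 periodic Ising lattice
N = L * L
spins = [[1] * L for _ in range(L)]

def site_energy(i, j):
    """Bond energy of spin (i, j) with its four periodic neighbours."""
    s = spins[i][j]
    nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
          + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    return -s * nb

E = sum(site_energy(i, j) for i in range(L) for j in range(L)) // 2
lng, hist, lnf = {}, {}, 1.0        # running estimate of ln g(E)

while lnf > 1e-3:
    for _ in range(10000):
        i, j = random.randrange(L), random.randrange(L)
        dE = -2 * site_energy(i, j)          # energy change of flipping (i, j)
        # Wang-Landau acceptance: min(1, g(E) / g(E'))
        if random.random() < math.exp(lng.get(E, 0.0) - lng.get(E + dE, 0.0)):
            spins[i][j] *= -1
            E += dE
        lng[E] = lng.get(E, 0.0) + lnf
        hist[E] = hist.get(E, 0) + 1
    # Halve the modification factor once the histogram over the 15
    # reachable energy levels of this lattice is roughly flat.
    if len(hist) == 15 and min(hist.values()) > 0.8 * sum(hist.values()) / len(hist):
        lnf, hist = lnf / 2.0, {}
```

After convergence, differences of ln g(E) approximate the density of states (for this lattice the ground state at E = -32 has degeneracy 2 out of 2^16 total configurations), and any thermodynamic average follows by reweighting; the serial random walk over energy is the bottleneck that REWL distributes across processors.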
Monte Carlo Generators for Studies of the 3D Structure of the Nucleon
Avagyan, Harut A. [JLAB
2015-01-01T23:59:59.000Z
Extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self consistent analysis framework, accounting for evolution effects, and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte-Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations will be crucial for future studies of quark-gluon dynamics in general and 3D structure of the nucleon in particular.
Monte-Carlo study of the phase transition in the AA-stacked bilayer graphene
A. A. Nikolaev; M. V. Ulybyshev
2014-12-04T23:59:59.000Z
A tight-binding model of AA-stacked bilayer graphene with screened electron-electron interactions has been studied using Hybrid Monte Carlo simulations on the original double-layer hexagonal lattice. The instantaneous screened Coulomb potential is taken into account using a Hubbard-Stratonovich transformation. G-type antiferromagnetic ordering has been studied, and a phase transition with spontaneous generation of a mass gap has been observed. The dependence of the antiferromagnetic condensate on the on-site electron-electron interaction is examined.
Temperature-extrapolation method for Implicit Monte Carlo - Radiation hydrodynamics calculations
McClarren, R. G. [Department of Nuclear Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77802 (United States); Urbatsch, T. J. [XTD-5: Air Force Systems, Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, NM 77845 (United States)
2013-07-01T23:59:59.000Z
We present a method for implementing temperature extrapolation in Implicit Monte Carlo solutions to radiation hydrodynamics problems. The method is based on a BDF-2 type integration to estimate a change in material temperature over a time step. We present results for radiation only problems in an infinite medium and for a 2-D Cartesian hohlraum problem. Additionally, radiation hydrodynamics simulations are presented for an RZ hohlraum problem and a related 3D problem. Our results indicate that improvements in noise and general behavior are possible. We present considerations for future investigations and implementations. (authors)
The Imprints of IMBHs on the Structure of Globular Clusters: Monte-Carlo Simulations
Stefan Umbreit; John M. Fregeau; Frederic A. Rasio
2008-03-06T23:59:59.000Z
We present the first results of a series of Monte-Carlo simulations investigating the imprint of a central black hole on the core structure of a globular cluster. We investigate the three-dimensional and the projected density profile of the inner regions of idealized as well as more realistic globular cluster models, taking into account a stellar mass spectrum, stellar evolution and allowing for a larger, more realistic, number of stars than was previously possible with direct N-body methods. We compare our results to other N-body simulations published previously in the literature.
Alan M. Watson; William J. Henney
2001-08-30T23:59:59.000Z
We describe an efficient Monte Carlo algorithm for a restricted class of scattering problems in radiation transfer. This class includes many astrophysically interesting problems, including the scattering of ultraviolet and visible light by grains. The algorithm correctly accounts for multiply-scattered light. We describe the algorithm, present a number of important optimizations, and explicitly show how the algorithm can be used to estimate quantities such as the emergent and mean intensity. We present two test cases, examine the importance of the optimizations, and show that this algorithm can be usefully applied to optically-thin problems, a regime sometimes considered limited to explicit single-scattering plus attenuation approximations.
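A bare-bones scattering Monte Carlo of this general kind (without the paper's optimizations) can be sketched for a plane-parallel slab; the geometry and parameters are illustrative, and the pure-absorption limit can be checked against the analytic exp(-tau) transmission.

```python
import math, random

random.seed(13)

def transmit(tau, albedo, n_photons=20000):
    """Fraction of photons that emerge from the far side of a plane-parallel
    slab of optical depth tau, with isotropic scattering of the given albedo."""
    out = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                 # photon enters travelling straight in
        while True:
            z += mu * -math.log(1.0 - random.random())  # exponential free path
            if z >= tau:
                out += 1                 # emerged from the far side
                break
            if z < 0.0:
                break                    # back-scattered out of the slab
            if random.random() > albedo:
                break                    # absorbed
            mu = random.uniform(-1.0, 1.0)              # isotropic redirection
    return out / n_photons
```

With albedo 0 the loop reduces to pure attenuation, so transmit(tau, 0.0) should reproduce exp(-tau); whatever variance-reduction optimizations the paper describes would be layered on top of a walk like this.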
Four-Quark Binding Energies from SU(2) Lattice Monte Carlo
A. M. Green; C. Michael; M. E. Sainio
1994-04-11T23:59:59.000Z
Energies of four-quark systems have been extracted in a static quenched SU(2) lattice Monte Carlo calculation for six different geometries, both planar and non-planar, with $\\beta=2.4$ and lattice size $16^3\\times 32$. In all cases, it is found that the binding energy is greatly enhanced when the four quarks can be partitioned in two ways with comparable energies. Also it is shown that the energies of the four-quark states cannot be understood simply in terms of two-quark potentials.
A new approach to hot particle dosimetry using a Monte Carlo transport code
Busche, Donna Marie
1989-01-01T23:59:59.000Z
Ci-hrs. This value assumes a threshold dose of 2000 rads to an area of 0.1 cm{sup 2}, at a depth of 100 {mu}m (NCRP 1988). The purpose of this research was to evaluate the current methods used in industry to assess the doses from hot particles. A Monte Carlo electron... radioactivity being released from the site. Frisking, portal monitors, and step-off pads are important HP areas and should involve overview and supervision. IDENTIFICATION To properly assess the dose from these hot particles, the source strength, type...
Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations
Perfetti, Christopher M [ORNL] [ORNL; Rearden, Bradley T [ORNL] [ORNL
2013-01-01T23:59:59.000Z
The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.
Using Monte Carlo analyses in uptake models for evaluating risks to ecological receptors
Hayse, J.W.; Hlohowskyj, I. [Argonne National Lab., IL (United States). Environmental Assessment Div.
1995-12-31T23:59:59.000Z
A deterministic modeling approach was used to evaluate risks to wildlife receptors at a contaminated site in Maryland. Models to predict daily doses of contaminants to ecological receptors used single point estimates for media contaminant concentrations and for ecological exposure factors. Predicted doses exceeding contaminant- and species-specific dose values were considered to be indicative of adverse risk, and the model results are being used to develop and evaluate remedial alternatives for the site. Risk estimates based on the deterministic approach predicted daily contaminant doses exceeding acceptable dose levels for more than half of the modeled receptors. Ecological risks were also evaluated using a stochastic approach. In this approach the input parameters that most greatly affected the deterministic model outcome were identified using sensitivity analyses. Statistical distributions were assigned to these parameters, and Monte Carlo simulations of the models were conducted to generate probability density functions of contaminant doses. The resulting probability density functions were then used to quantify the probability that contaminant uptake would exceed the acceptable dose values. Models using Monte Carlo analyses identified only a low probability of exceeding the acceptable dose level for most of the contaminants and receptors. The differences in the risks predicted using the deterministic and stochastic models would likely result in the selection of different remediation goals and actions for the same area of contamination. Given the different interpretations that could result from these two modeling approaches, the authors recommend that both techniques be considered for estimating risks to ecological receptors.
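The stochastic half of the comparison can be sketched as follows. The distributions, the acceptable dose, and the conservative point values are invented for illustration, not values from the Maryland site.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100000

# Hypothetical exposure-factor distributions for one receptor/contaminant
# pair (invented for illustration; the study's distributions came from
# sensitivity analyses of the site models).
conc = rng.lognormal(np.log(50.0), 0.6, n)       # media concentration, mg/kg
intake = rng.triangular(0.01, 0.02, 0.04, n)     # ingestion rate, kg/day
bw = rng.normal(1.0, 0.15, n).clip(0.5)          # body weight, kg

dose = conc * intake / bw                        # daily dose, mg/kg-bw/day
acceptable = 2.0                                 # acceptable dose level

# Deterministic point estimate (conservative single values: median
# concentration, maximum intake, minimum body weight) vs the probability
# of exceedance from the Monte Carlo dose distribution.
point = 50.0 * 0.04 / 0.5
p_exceed = float((dose > acceptable).mean())
```

The contrast the abstract describes appears directly: the conservative point estimate exceeds the acceptable dose outright, while the Monte Carlo output quantifies how probable an exceedance actually is, which supports very different remediation decisions.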
The Proton Therapy Nozzles at Samsung Medical Center: A Monte Carlo Simulation Study using TOPAS
Chung, Kwangzoo; Kim, Dae-Hyun; Ahn, Sunghwan; Han, Youngyih
2015-01-01T23:59:59.000Z
To expedite the commissioning process of the proton therapy system at Samsung Medical Center (SMC), we have developed a Monte Carlo simulation model of the proton therapy nozzles using TOPAS. At the SMC proton therapy center, we have two gantry rooms with different types of nozzles: a multi-purpose nozzle and a dedicated scanning nozzle. Each nozzle has been modeled in detail following the geometry information provided by the manufacturer, Sumitomo Heavy Industries, Ltd. For this purpose, novel features of TOPAS, such as the time feature and the ridge filter class, have been used, and the appropriate physics models for proton nozzle simulation were defined. Dosimetric properties such as the percent depth dose curve, spread-out Bragg peak (SOBP), and beam spot size have been simulated and verified against measured beam data. Beyond the Monte Carlo nozzle modeling, we have developed an interface between TOPAS and the treatment planning system (TPS), RayStation. Exported RT plan data from the TPS has been interpreted by th...
An Evaluation of Monte Carlo Simulations of Neutron Multiplicity Measurements of Plutonium Metal
Mattingly, John [North Carolina State University; Miller, Eric [University of Michigan; Solomon, Clell J. Jr. [Los Alamos National Laboratory; Dennis, Ben [University of Michigan; Meldrum, Amy [University of Michigan; Clarke, Shaun [University of Michigan; Pozzi, Sara [University of Michigan
2012-06-21T23:59:59.000Z
In January 2009, Sandia National Laboratories conducted neutron multiplicity measurements of a polyethylene-reflected plutonium metal sphere. Over the past 3 years, those experiments have been collaboratively analyzed using Monte Carlo simulations conducted by University of Michigan (UM), Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and North Carolina State University (NCSU). Monte Carlo simulations of the experiments consistently overpredict the mean and variance of the measured neutron multiplicity distribution. This paper presents a sensitivity study conducted to evaluate the potential sources of the observed errors. MCNPX-PoliMi simulations of plutonium neutron multiplicity measurements exhibited systematic over-prediction of the neutron multiplicity distribution. The over-prediction tended to increase with increasing multiplication. MCNPX-PoliMi had previously been validated against only very low multiplication benchmarks. We conducted sensitivity studies to try to identify the cause(s) of the simulation errors; we eliminated the potential causes we identified, except for Pu-239 {bar {nu}}. A very small change (-1.1%) in the Pu-239 {bar {nu}} dramatically improved the accuracy of the MCNPX-PoliMi simulation for all 6 measurements. This observation is consistent with the trend observed in the bias exhibited by the MCNPX-PoliMi simulations: a very small error in {bar {nu}} is 'magnified' by increasing multiplication. We applied a scalar adjustment to Pu-239 {bar {nu}} (independent of neutron energy); an adjustment that depends on energy is probably more appropriate.
Massively parallel Monte Carlo for many-particle simulations on GPUs
Anderson, Joshua A.; Jankowski, Eric [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)] [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Grubb, Thomas L. [Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)] [Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Engel, Michael [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)] [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Glotzer, Sharon C., E-mail: sglotzer@umich.edu [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)
2013-12-01T23:59:59.000Z
Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
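The serial building block that the GPU scheme parallelizes (while taking care to preserve detailed balance) is the ordinary hard-disk Metropolis move, sketched here at low density; the box size, radius, and step size are arbitrary illustration values.

```python
import random

random.seed(5)
n, box, radius = 16, 8.0, 0.5

# Start from a square lattice, which is overlap-free by construction.
disks = [((i % 4) * 2.0 + 1.0, (i // 4) * 2.0 + 1.0) for i in range(n)]

def dist2(a, b):
    """Squared distance with periodic (minimum-image) boundary conditions."""
    dx = abs(a[0] - b[0]); dx = min(dx, box - dx)
    dy = abs(a[1] - b[1]); dy = min(dy, box - dy)
    return dx * dx + dy * dy

def overlaps(k, pos):
    return any(i != k and dist2(pos, disks[i]) < (2 * radius) ** 2
               for i in range(n))

# Hard-disk Metropolis: the proposal is symmetric, and the Boltzmann factor
# is 0 (overlap) or 1 (no overlap), so detailed balance reduces to
# "accept if and only if the displaced disk overlaps nothing".
accepted = 0
for _ in range(20000):
    k = random.randrange(n)
    x, y = disks[k]
    trial = ((x + random.uniform(-0.3, 0.3)) % box,
             (y + random.uniform(-0.3, 0.3)) % box)
    if not overlaps(k, trial):
        disks[k] = trial
        accepted += 1
```

The paper's contribution is precisely that moves like this can be issued to many GPU threads at once, via a domain decomposition constructed so that the concurrent updates still satisfy the balance conditions that this serial loop satisfies trivially.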
SIMDET - Version 4 A Parametric Monte Carlo for a TESLA Detector
M. Pohl; H. J. Schreiber
2002-06-05T23:59:59.000Z
A new release of the parametric detector Monte Carlo program SIMDET (version 4.01) is now available. We describe the principles of operation and the usage of this program to simulate the response of a detector for the TESLA linear collider. The detector components are implemented according to the TESLA Technical Design Report. All detector component responses are treated in a realistic way using a parametrisation of results from the ab initio Monte Carlo program BRAHMS. Pattern recognition is emulated using a complete cross reference between generated particles and detector response. Also, for charged particles, the covariance matrix and $dE/dx$ information are made available. An idealised energy flow algorithm defines the output of the program, consisting of particles generically classified as electrons, photons, muons, charged and neutral hadrons as well as unresolved clusters. The program parameters adjustable by the user are described in detail. User hooks inside the program and the output data structure are documented.
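The core idea of a parametric (fast) detector simulation, as opposed to an ab initio one, can be sketched in a few lines; the species list and resolution numbers below are invented placeholders, not SIMDET's parametrisation.

```python
import random

random.seed(9)

# Hypothetical per-species energy resolutions (fractions of the true value);
# SIMDET's actual parametrisation is tuned to BRAHMS and is far richer.
RESOLUTION = {"electron": 0.01, "photon": 0.03, "charged_hadron": 0.02}

def smear(species, true_energy):
    """Parametric detector response: replace a full shower simulation by a
    Gaussian smearing of the true energy with a per-species resolution."""
    sigma = RESOLUTION[species] * true_energy
    return max(0.0, random.gauss(true_energy, sigma))

# Smearing many 50-GeV photons reproduces the assumed 3% resolution.
measured = [smear("photon", 50.0) for _ in range(5000)]
```

Replacing shower development by a single draw per particle is what makes a parametric program orders of magnitude faster than a full simulation, at the price of trusting the parametrisation.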
A Monte Carlo Study of Multiplicity Fluctuations in Pb-Pb Collisions at LHC Energies
Ramni Gupta
2015-01-15T23:59:59.000Z
With the large volumes of data available from the LHC, it has become possible to study the multiplicity distributions for the various possible behaviours of multiparticle production in relativistic heavy-ion collisions, where a system of dense and hot partons has been created. In this context it is both important and interesting to check how well Monte Carlo generators can describe the behaviour of multiparticle production processes. One such possible behaviour is self-similarity in particle production, which can be studied through intermittency and further through chaoticity/erraticity analyses in heavy-ion collisions. We analyse the behaviour of the erraticity index in central Pb-Pb collisions at a centre-of-mass energy of 2.76 TeV per nucleon pair using the AMPT Monte Carlo event generator, following the recent proposal by R.C. Hwa and C.B. Yang concerning local multiplicity fluctuations as a signature of critical hadronization in heavy-ion collisions. We report the values of the erraticity index for the two versions of the model with default settings and their dependence on the size of the phase-space region. The results presented here may serve as a reference sample for experimental data from heavy-ion collisions at these energies.
Bias-Variance Techniques for Monte Carlo Optimization: Cross-validation for the CE Method
Rajnarayan, Dev
2008-01-01T23:59:59.000Z
In this paper, we examine the CE method in the broad context of Monte Carlo Optimization (MCO) and Parametric Learning (PL), a type of machine learning. A well-known overarching principle used to improve the performance of many PL algorithms is the bias-variance tradeoff. This tradeoff has been used to improve PL algorithms ranging from Monte Carlo estimation of integrals, to linear estimation, to general statistical estimation. Moreover, as has been described previously, MCO is very closely related to PL. Owing to this similarity, the bias-variance tradeoff affects MCO performance just as it does PL performance. In this article, we exploit the bias-variance tradeoff to enhance the performance of MCO algorithms. We use the technique of cross-validation, a technique based on the bias-variance tradeoff, to significantly improve the performance of the Cross Entropy (CE) method, which is an MCO algorithm. In previous work we have confirmed that other PL techniques improve the performance of other MCO algorithms. We conclude that ...
Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII
McKinney, Gregg W [Los Alamos National Laboratory
2012-07-17T23:59:59.000Z
Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.
Quantum Monte Carlo algorithms for electronic structure at the petascale; the endstation project.
Kim, J; Ceperley, D M; Purwanto, W; Walter, E J; Krakauer, H; Zhang, S W; Kent, P.R. C; Hennig, R G; Umrigar, C; Bajdich, M; Kolorenc, J; Mitas, L; Srinivasan, A
2008-10-01T23:59:59.000Z
Over the past two decades, continuum quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles. By solving the Schrodinger equation through a stochastic projection, it achieves the greatest accuracy and reliability of the methods available for physical systems containing more than a few quantum particles. QMC enjoys favorable scaling compared to quantum chemical methods, with a computational effort that grows with the second or third power of system size. This accuracy and scalability have enabled scientific discovery across a broad spectrum of disciplines. The current methods perform very efficiently at the terascale. The Quantum Monte Carlo Endstation project is a collaborative effort among researchers in the field to develop a new generation of algorithms, and their efficient implementations, which will take advantage of the upcoming petaflop architectures. Some aspects of these developments are discussed here. These tools will expand the accuracy, efficiency and range of applicability of QMC and enable us to tackle challenges which are currently out of reach. The methods will be applied to several important problems including electronic and structural properties of water, transition metal oxides, nanosystems and ultracold atoms.
Monte Carlo Simulations of Globular Cluster Evolution. IV. Direct Integration of Strong Interactions
John M. Fregeau; Frederic A. Rasio
2006-12-06T23:59:59.000Z
We study the dynamical evolution of globular clusters containing populations of primordial binaries, using our newly updated Monte Carlo cluster evolution code with the inclusion of direct integration of binary scattering interactions. We describe the modifications we have made to the code, as well as improvements we have made to the core Monte Carlo method. We present several test calculations to verify the validity of the new code, and perform many comparisons with previous analytical and numerical work in the literature. We simulate the evolution of a large grid of models, with a wide range of initial cluster profiles, and with binary fractions ranging from 0 to 1, and compare with observations of Galactic globular clusters. We find that our code yields very good agreement with direct N-body simulations of clusters with primordial binaries, but yields some results that differ significantly from other approximate methods. Notably, the direct integration of binary interactions reduces their energy generation rate relative to the simple recipes used in Paper III, and yields smaller core radii. Our results for the structural parameters of clusters during the binary-burning phase are now in the tail of the range of parameters for observed clusters, implying that either clusters are born significantly more or less centrally concentrated than has been previously considered, or that there are additional physical processes beyond two-body relaxation and binary interactions that affect the structural characteristics of clusters.
Monte Carlo uncertainty reliability and isotope production calculations for a fast reactor
Miles, T.L.
1992-01-01T23:59:59.000Z
Statistical uncertainties in Monte Carlo calculations are typically determined by the first and second moments of the tally. For certain types of calculations, there is concern that the uncertainty estimate is significantly non-conservative. This is typically seen in reactor eigenvalue problems, where the uncertainty estimate is aggravated by the generation-to-generation correlation of the fission source. It has been speculated that optimization of the random walk through biasing techniques may increase the non-conservative nature of the uncertainty estimate. A series of calculations is documented here which quantifies the reliability of the Monte Carlo Neutron and Photon (MCNP) mean and uncertainty estimates by comparing these estimates to the true mean. These calculations were made with a liquid-metal fast reactor model, but every effort was made to isolate the statistical nature of the uncertainty estimates, so that the analysis of the reliability of the MCNP estimates should be relevant for small thermal reactors as well. Also, preliminary reactor physics calculations for two different special isotope production test assemblies for irradiation in the Fast Flux Test Facility (FFTF) were performed using MCNP and are documented here. The effect of a yttrium-hydride moderator used to tailor the neutron flux incident on the targets, so as to maximize isotope production for different designs in different locations within the reactor, is discussed. These calculations also demonstrate the useful application of MCNP in design iterations by utilizing many of the code's features.
Nakano, Y., E-mail: nakano.yuuji@c.mbox.nagoya-u.ac.jp; Yamazaki, A.; Watanabe, K.; Uritani, A. [Graduate School of Engineering, Nagoya University, Nagoya 464-8603 (Japan); Ogawa, K.; Isobe, M. [National Institute for Fusion Science, Toki-city, GIFU 509-5292 (Japan)
2014-11-15T23:59:59.000Z
Neutron monitoring is important for managing the safety of fusion experiment facilities because neutrons are generated in fusion reactions. Monte Carlo simulations play an important role in evaluating the influence of neutron scattering from various structures and in correcting for differences between deuterium plasma experiments and in situ calibration experiments. We evaluated these influences based on differences between the two experiments at the Large Helical Device using the Monte Carlo simulation code MCNP5. The difference between the two experiments in the absolute detection efficiency of the fission chamber between O-ports is estimated to be the largest of all the monitors. We additionally evaluated correction coefficients for some of the neutron monitors.
Experimental Study and Monte Carlo Modeling of Calcium Borosilicate Glasses Leaching
Arab, Mehdi; Cailleteau, Celine; Angeli, Frederic [CEA/DTCD/SECM/Laboratoire d'etudes du Comportement a Long Terme, CEA Centre Valrho, BP 17171, Bagnols-sur-ceze, 30207 (France); Devreux, Francois [Laboratoire de Physique de la Matiere Condensee, CNRS and Ecole Polytechnique, Palaiseau Cedex, 91128 (France)
2007-07-01T23:59:59.000Z
During aqueous alteration of glass, an alteration layer appears on the glass surface. The properties of this alteration layer are of great importance for understanding and predicting the long-term behavior of high-level radioactive waste glasses. Numerical modeling can be very useful for understanding the impact of the glass composition on its aqueous reactivity and long-term properties, but it is quite difficult to model these complex glasses. In order to identify the effect of the calcium content on glass alteration, seven oxide glass compositions (57SiO{sub 2} 17B{sub 2}O{sub 3} (22-x)Na{sub 2}O xCaO 4ZrO{sub 2}; 0 < x < 11) were investigated and a Monte Carlo model was developed to describe their leaching behavior. The specimens were altered at constant temperature (T = 90 deg. C) at a glass-surface-area-to-solution-volume (SA/V) ratio of 15 cm{sup -1} in a buffered solution (pH 9.2). Under these conditions all the variations observed in the leaching behavior are attributable to composition effects. Increasing the calcium content in the glass appears to be responsible for a sharp drop in the final leached boron fraction. In parallel with this experimental work, a Monte Carlo model was developed to investigate the effect of calcium content on the leaching behavior, especially in the initial stage of alteration. Monte Carlo simulations performed with this model are in good agreement with the experimental results. The dependence of the alteration rate on the calcium content can be described by a quadratic function: fitting the simulated points gives a minimum alteration rate at about 7.7 mol% calcium. This value is consistent with the figure of 8.2 mol% obtained from the experimental work. The model was also used to investigate the role of calcium in the glass structure, and it pointed out that calcium acts preferentially as a network modifier rather than as a charge compensator in this kind of glass. (authors)
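The quadratic dependence quoted at the end of the abstract can be made concrete: given simulated alteration rates r(x) at several calcium contents x, fit r(x) = a x^2 + b x + c and read off the minimum at x* = -b/(2a). The numbers below are invented placeholders (not the paper's data), chosen only so that the vertex lands near the quoted 7.7 mol%.

```python
import numpy as np

# Hypothetical data: synthetic "simulated" alteration rates versus
# calcium content (mol%). These values are NOT from the study; they
# are constructed so the fitted minimum falls near 7.7 mol%.
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 11.0])
rate = 0.02 * (x - 7.7) ** 2 + 0.5

a, b, c = np.polyfit(x, rate, 2)   # quadratic fit r(x) = a x^2 + b x + c
x_min = -b / (2 * a)               # vertex of the parabola
```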
Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel; Fisher, Ryan; Tien, Chris; Simon, Steven L.; Bouville, Andre; Bolch, Wesley E. [Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institute of Health, Bethesda, Maryland 20852 (United States); Department of Nuclear Engineering, Kyung Hee University, Yongin 446-701 (Korea, Republic of); Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, Florida 32611 (United States); Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institute of Health, Bethesda, Maryland 20852 (United States); Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, Florida 32611 (United States)
2011-03-15T23:59:59.000Z
Purpose: To develop a computed tomography (CT) organ dose estimation method designed to readily provide organ doses in a reference adult male and female for different scan ranges, and to investigate the degree to which existing commercial programs can reasonably match organ doses defined in these more anatomically realistic adult hybrid phantoms. Methods: The x-ray fan beam in the SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX2.6. The simulated CT scanner model was validated through comparison with experimentally measured lateral free-in-air dose profiles and computed tomography dose index (CTDI) values. The reference adult male and female hybrid phantoms were coupled with the established CT scanner model, following arm removal, to simulate clinical head and other body region scans. A set of organ dose matrices were calculated for a series of consecutive axial scans ranging from the top of the head to the bottom of the phantoms, with a beam thickness of 10 mm and tube potentials of 80, 100, and 120 kVp. The organ doses for head, chest, and abdomen/pelvis examinations were calculated based on the organ dose matrices and compared to those obtained from two commercial programs, CT-EXPO and CTDOSIMETRY. Organ dose calculations were repeated for an adult stylized phantom by using the same simulation method used for the adult hybrid phantom. Results: Comparisons of both lateral free-in-air dose profiles and CTDI values from experimental measurement with the Monte Carlo simulations showed good agreement to within 9%. Organ doses for head, chest, and abdomen/pelvis scans reported in the commercial programs exceeded those from the Monte Carlo calculations in both the hybrid and stylized phantoms in this study, sometimes by orders of magnitude.
Conclusions: The organ dose estimation method and dose matrices established in this study readily provides organ doses for a reference adult male and female for different CT scan ranges and technical parameters. Organ doses from existing commercial programs do not reasonably match organ doses calculated for the hybrid phantoms due to differences in phantom anatomy, as well as differences in organ dose scaling parameters. The organ dose matrices developed in this study will be extended to cover different technical parameters, CT scanner models, and various age groups.
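The dose-matrix bookkeeping described in this abstract can be sketched simply: precompute the dose delivered to each organ by each single axial slice, and obtain the dose for any scan range by summing the corresponding slice contributions. The organ and slice counts and the dose values below are invented placeholders, not data from the study.

```python
import numpy as np

# Sketch of the dose-matrix idea: one column per axial slice, one row
# per organ. All numbers here are made-up placeholders (mGy per slice).
rng = np.random.default_rng(3)
n_organs, n_slices = 5, 40
dose_matrix = rng.uniform(0.0, 2.0, (n_organs, n_slices))

def scan_dose(first_slice, last_slice):
    """Organ doses (mGy) for an axial scan covering the given slices."""
    return dose_matrix[:, first_slice:last_slice + 1].sum(axis=1)

chest = scan_dose(10, 24)   # hypothetical chest scan range
```

Any scan range, tube potential, or phantom then needs only a lookup-and-sum, which is why the abstract calls the matrices "readily" usable.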
Path-integral Monte Carlo simulation of ν{sub 3} vibrational shifts for CO{sub 2} in (He){sub N} clusters critically tests the He-CO{sub 2} potential energy surface
Hui Li; Nicholas Blinov; Pierre-Nicholas Roy; Robert J. Le Roy
2009 (accepted 20 February 2009; published online 9 April 2009)
Application analysis of Monte Carlo to estimate the capacity of geothermal resources in Lawu Mount
Supriyadi, E-mail: supriyadi-uno@yahoo.co.nz [Physics, Faculty of Mathematics and Natural Sciences, University of Jember, Jl. Kalimantan Kampus Bumi Tegal Boto, Jember 68181 (Indonesia); Srigutomo, Wahyu [Complex system and earth physics, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Munandar, Arif [Kelompok Program Penelitian Panas Bumi, PSDG, Badan Geologi, Kementrian ESDM, Jl. Soekarno Hatta No. 444 Bandung 40254 (Indonesia)
2014-03-24T23:59:59.000Z
Monte Carlo analysis has been applied to the calculation of geothermal resource capacity based on the volumetric method issued by Standar Nasional Indonesia (SNI). A deterministic formula is converted into a stochastic formula to take into account the uncertainties in the input parameters. The method yields a probability range for the potential power stored beneath the Lawu Mount geothermal area. For 10,000 iterations, the capacity of the geothermal resources is in the range of 139.30-218.24 MWe, with a most likely value of 177.77 MWe. The risk of the resource capacity being above 196.19 MWe is less than 10%. The power density of the prospect area, covering 17 km{sup 2}, is 9.41 MWe/km{sup 2} with a probability of 80%.
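A volumetric estimate of this kind can be sketched by replacing each input of the deterministic formula with a probability distribution and propagating by sampling. The distributions, the simplified formula, and all constants below are hypothetical placeholders rather than the study's actual SNI inputs; only the iteration count (10,000) follows the abstract.

```python
import numpy as np

# Stochastic volumetric sketch: sample each input, form the product,
# then read off percentiles of the resulting power distribution.
# Distributions and scales here are invented for illustration.
rng = np.random.default_rng(42)
n = 10_000                                    # iterations, as in the paper

area = rng.triangular(15.0, 17.0, 19.0, n)    # reservoir area, km^2
thickness = rng.triangular(1.5, 2.0, 2.5, n)  # reservoir thickness, km
yield_per_vol = rng.normal(10.0, 1.5, n)      # MWe per km^3, made-up scale

power = area * thickness * yield_per_vol      # MWe per iteration
p10, p50, p90 = np.percentile(power, [10, 50, 90])
```

The "risk of exceeding" figure quoted in the abstract corresponds to reading the distribution the other way, e.g. `np.mean(power > threshold)` for a chosen threshold.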
A new time quantifiable Monte Carlo method in simulating magnetization reversal process
X. Z. Cheng; M. B. A. Jalil; H. K. Lee; Y. Okabe
2005-04-14T23:59:59.000Z
We propose a new time-quantifiable Monte Carlo (MC) method to simulate thermally induced magnetization reversal for an isolated single-domain particle system. The MC method involves the determination of the density of states and the use of a master equation for the time evolution. We derive an analytical factor to convert MC steps into real time intervals. Unlike a previous time-quantified MC method, our method is readily scalable to arbitrarily long time scales and can be repeated for different temperatures with minimal computational effort. Based on the conversion factor, we are able to make a direct comparison between the results obtained from the MC and Langevin dynamics methods, and we find excellent agreement between them. An analytical formula for the magnetization reversal time is also derived, which agrees very well with both the numerical Langevin and the time-quantified MC results over a large temperature range and for parallel and oblique easy-axis orientations.
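The step-to-time conversion idea can be illustrated with a deliberately crude two-state toy: a moment escapes over a barrier E_b with Metropolis acceptance exp(-E_b/kT), and every MC step is assigned a fixed interval dt. Both the model and the value of dt are made up here; the paper derives its conversion factor analytically for a single-domain particle, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)

E_b, kT = 5.0, 1.0   # barrier height and temperature (arbitrary units)
dt = 1e-9            # hypothetical seconds assigned to one MC step

def reversal_steps():
    """MC steps until the first barrier hop (two-state toy model)."""
    steps = 0
    while True:
        steps += 1
        if rng.random() < np.exp(-E_b / kT):   # Metropolis uphill hop
            return steps

# convert step counts to "physical" times with the fixed factor dt
times = dt * np.array([reversal_steps() for _ in range(2000)])
mean_time = times.mean()   # close to the Arrhenius-like dt * exp(E_b / kT)
```

The point of the toy is only that once a steps-to-seconds factor is known, MC reversal statistics become directly comparable to Langevin-dynamics times, which is the comparison the abstract describes.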
Monte Carlo study of Lefschetz thimble structure in one-dimensional Thirring model at finite density
Fujii, Hirotsugu; Kikukawa, Yoshio
2015-01-01T23:59:59.000Z
We consider the one-dimensional massive Thirring model formulated on the lattice with staggered fermions and an auxiliary compact vector (link) field, which is exactly solvable and shows a phase transition as the chemical potential of fermion number increases: a crossover at finite temperature and a first-order transition at zero temperature. We complexify its path integration onto Lefschetz thimbles and examine its phase transition by hybrid Monte Carlo simulations on the single dominant thimble. We observe a discrepancy between the numerical and exact results in the crossover region for small inverse coupling $\beta$ and/or large lattice size $L$, while they are in good agreement in the lower and higher density regions. We also observe that the discrepancy persists in the continuum limit at fixed finite temperature and that it becomes more significant toward the low-temperature limit. This numerical result is consistent with our analytical study of the model's thimble structure. These results imply...
I. B. Bischofs; U. S. Schwarz
2006-01-16T23:59:59.000Z
Compliant environments can mediate interactions between mechanically active cells like fibroblasts. Starting with a phenomenological model for the behaviour of single cells, we use extensive Monte Carlo simulations to predict non-trivial structure formation for cell communities on soft elastic substrates as a function of elastic moduli, cell density, noise and cell position geometry. In general, we find a disordered structure as well as ordered string-like and ring-like structures. The transition between ordered and disordered structures is controlled both by cell density and noise level, while the transition between string- and ring-like ordered structures is controlled by the Poisson ratio. Similar effects are observed in three dimensions. Our results suggest that in regard to elastic effects, healthy connective tissue usually is in a macroscopically disordered state, but can be switched to a macroscopically ordered state by appropriate parameter variations, in a way that is reminiscent of wound contraction or diseased states like contracture.
Mergers of galaxies in clusters: Monte Carlo simulation of mass and angular momentum distribution
D. S. Krivitsky; V. M. Kontorovich
1997-03-04T23:59:59.000Z
A Monte Carlo simulation of mergers in clusters of galaxies is carried out. An ``explosive'' character of the merging process (an analog of a phase transition), suggested earlier by Cavaliere et al. (1991) and Kontorovich et al. (1992), is confirmed. In particular, a giant object similar to a cD galaxy is formed in a comparatively short time as a result of mergers. The mass and angular momentum distribution function for galaxies is calculated. An intermediate asymptotics of the mass function is close to a power law with exponent $\alpha \approx 2$. It may correspond to recent observational data for the steep faint end of the luminosity function. The angular momentum distribution formed by mergers is close to Gaussian, with the rms dimensionless angular momentum $S/(GM^3R)^{1/2}$ approximately independent of mass, in accordance with observational data.
Resonating Valence Bond Quantum Monte Carlo: Application to the ozone molecule
Sam Azadi; Ranber Singh; Thomas D. Kühne
2015-02-24T23:59:59.000Z
We study the potential energy surface of the ozone molecule by means of Quantum Monte Carlo simulations based on the resonating valence bond concept. The trial wave function consists of an antisymmetrized geminal power arranged in a single determinant that is multiplied by a Jastrow correlation factor. Whereas the determinantal part incorporates static correlation effects, the augmented real-space correlation factor accounts for the dynamic electron correlation. The accuracy of this approach is demonstrated by computing the potential energy surface of the ozone molecule along three vibrational modes: symmetric, asymmetric and scissoring. We find that the employed wave function provides a detailed description of rather strongly correlated multi-reference systems, in quantitative agreement with experiment.
Validation of GEANT4 Monte Carlo Models with a Highly Granular Scintillator-Steel Hadron Calorimeter
C. Adloff; J. Blaha; J. -J. Blaising; C. Drancourt; A. Espargilière; R. Gaglione; N. Geffroy; Y. Karyotakis; J. Prast; G. Vouters; K. Francis; J. Repond; J. Schlereth; J. Smith; L. Xia; E. Baldolemar; J. Li; S. T. Park; M. Sosebee; A. P. White; J. Yu; T. Buanes; G. Eigen; Y. Mikami; N. K. Watson; G. Mavromanolakis; M. A. Thomson; D. R. Ward; W. Yan; D. Benchekroun; A. Hoummada; Y. Khoulaki; J. Apostolakis; A. Dotti; G. Folger; V. Ivantchenko; V. Uzhinskiy; M. Benyamna; C. Cârloganu; F. Fehr; P. Gay; S. Manen; L. Royer; G. C. Blazey; A. Dyshkant; J. G. R. Lima; V. Zutshi; J. -Y. Hostachy; L. Morin; U. Cornett; D. David; G. Falley; K. Gadow; P. Göttlicher; C. Günter; B. Hermberg; S. Karstensen; F. Krivan; A. -I. Lucaci-Timoce; S. Lu; B. Lutz; S. Morozov; V. Morgunov; M. Reinecke; F. Sefkow; P. Smirnov; M. Terwort; A. Vargas-Trevino; N. Feege; E. Garutti; I. Marchesinik; M. Ramilli; P. Eckert; T. Harion; A. Kaplan; H. -Ch. Schultz-Coulon; W. Shen; R. Stamen; B. Bilki; E. Norbeck; Y. Onel; G. W. Wilson; K. Kawagoe; P. D. Dauncey; A. -M. Magnan; V. Bartsch; M. Wing; F. Salvatore; E. Calvo Alamillo; M. -C. Fouz; J. Puerta-Pelayo; B. Bobchenko; M. Chadeeva; M. Danilov; A. Epifantsev; O. Markin; R. Mizuk; E. Novikov; V. Popov; V. Rusinov; E. Tarkovsky; N. Kirikova; V. Kozlov; P. Smirnov; Y. Soloviev; P. Buzhan; A. Ilyin; V. Kantserov; V. Kaplin; A. Karakash; E. Popova; V. Tikhomirov; C. Kiesling; K. Seidel; F. Simon; C. Soldner; M. Szalay; M. Tesar; L. Weuste; M. S. Amjad; J. Bonis; S. Callier; S. Conforti di Lorenzo; P. Cornebise; Ph. Doublet; F. Dulucq; J. Fleury; T. Frisson; N. van der Kolk; H. Li; G. Martin-Chassard; F. Richard; Ch. de la Taille; R. Pöschl; L. Raux; J. Rouëné; N. Seguin-Moreau; M. Anduze; V. Boudry; J-C. Brient; D. Jeans; P. Mora de Freitas; G. Musat; M. Reinhard; M. Ruan; H. Videau; B. Bulanek; J. Zacek; J. Cvach; P. Gallus; M. Havranek; M. Janata; J. Kvasnicka; D. Lednicky; M. Marcisovsky; I. Polak; J. Popule; L. Tomasek; M. Tomasek; P. Ruzicka; P. Sicho; J. Smolik; V. Vrba; J. Zalesak; B. Belhorma; H. Ghazlane; T. Takeshita; S. Uozumi; M. Götze; O. Hartbrich; J. Sauer; S. Weber; C. Zeitnitz
2014-06-15T23:59:59.000Z
Calorimeters with a high granularity are a fundamental requirement of the Particle Flow paradigm. This paper focuses on the prototype of a hadron calorimeter with analog readout, consisting of thirty-eight scintillator layers alternating with steel absorber planes. The scintillator plates are finely segmented into tiles individually read out via Silicon Photomultipliers. The presented results are based on data collected with pion beams in the energy range from 8 GeV to 100 GeV. The fine segmentation of the sensitive layers and the high sampling frequency allow for an excellent reconstruction of the spatial development of hadronic showers. A comparison between data and Monte Carlo simulations is presented, concerning both the longitudinal and lateral development of hadronic showers and the global response of the calorimeter. The performance of several GEANT4 physics lists with respect to these observables is evaluated.
Monte-Carlo study of quasiparticle dispersion relation in monolayer graphene
P. V. Buividovich
2013-01-07T23:59:59.000Z
The density of electronic one-particle states in monolayer graphene is studied by performing Hybrid Monte Carlo simulations of the tight-binding model for electrons on the pi orbitals of the carbon atoms which make up the graphene lattice. The density of states is approximated as the derivative of the number of particles with respect to the chemical potential at sufficiently small temperature. Simulations are performed in the partially quenched approximation, in which virtual particles and holes have zero chemical potential. It is found that the Van Hove singularity becomes much sharper than in the free tight-binding model. Simulation results also suggest that the Fermi velocity increases with interaction strength up to the transition to the phase with spontaneously broken chiral symmetry.
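The estimator named in this abstract, the density of states as the chemical-potential derivative of the particle number, can be sketched for the free tight-binding band (the interacting case is what requires the Monte Carlo itself). The lattice size, temperature, and finite-difference step below are illustrative choices.

```python
import numpy as np

t = 1.0   # hopping amplitude
T = 0.05  # temperature used to smooth the Fermi step
L = 60    # k-points per dimension of the Brillouin-zone grid

# free graphene bands: E = +/- t * |1 + e^{i k1} + e^{i k2}|
k1, k2 = np.meshgrid(np.arange(L), np.arange(L), indexing='ij')
phase = 1 + np.exp(2j * np.pi * k1 / L) + np.exp(2j * np.pi * k2 / L)
eps = t * np.abs(phase)
E = np.concatenate([eps.ravel(), -eps.ravel()])

def N(mu):
    """Mean particle number at chemical potential mu (Fermi-Dirac)."""
    return np.sum(1.0 / (np.exp((E - mu) / T) + 1.0))

# density of states as a central finite difference of N over mu
dmu = 0.01
mus = np.arange(0.0, 3.0, 0.05)
dos = np.array([(N(m + dmu) - N(m - dmu)) / (2 * dmu) for m in mus])
# the Van Hove singularity appears as the peak of `dos` near mu = t
```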
Hybrid Monte-Carlo simulation of interacting tight-binding model of graphene
Dominik Smith; Lorenz von Smekal
2013-11-05T23:59:59.000Z
In this work, results are presented of Hybrid Monte Carlo simulations of the tight-binding Hamiltonian of graphene, coupled to an instantaneous long-range two-body potential which is modeled by a Hubbard-Stratonovich auxiliary field. We present an investigation of the spontaneous breaking of the sublattice symmetry, which corresponds to a phase transition from a conducting to an insulating phase and which occurs when the effective fine-structure constant $\alpha$ of the system crosses above a certain threshold $\alpha_C$. Qualitative comparisons to earlier works on the subject (which used larger system sizes and higher statistics) are made, and it is established that $\alpha_C$ is of a plausible magnitude in our simulations. Also, we discuss differences between simulations using compact and non-compact variants of the Hubbard field and present a quantitative comparison of distinct discretization schemes of the Euclidean time-like dimension in the fermion operator.
Introduction to Computational Physics and Monte Carlo Simulations of Matrix Field Theory
Ydri, Badis
2015-01-01T23:59:59.000Z
This book is divided into two parts. In the first part we give an elementary introduction to computational physics consisting of 21 simulations, which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories, which involve finite-dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of t...
Thomas, Robert E; Overy, Catherine; Knowles, Peter J; Alavi, Ali; Booth, George H
2015-01-01T23:59:59.000Z
Unbiased stochastic sampling of the one- and two-body reduced density matrices is achieved in full configuration interaction quantum Monte Carlo with the introduction of a second, "replica" ensemble of walkers, whose population evolves in imaginary time independently from the first, and which entails only modest additional computational overheads. The matrices obtained from this approach are shown to be of full configuration-interaction quality, and hence provide a realistic opportunity to achieve high-quality results for a range of properties whose operators do not necessarily commute with the Hamiltonian. A density-matrix-formulated quasi-variational energy estimator having already been proposed and investigated, the present work extends the scope of the theory to take in studies of analytic nuclear forces, molecular dipole moments and polarisabilities, with extensive comparison to exact results where possible. These new results confirm the suitability of the sampling technique and, where suf...
Monte Carlo and Renormalization Group Effective Potentials in Scalar Field Theories
J. R. Shepard; V. Dmitrašinović; J. A. McNeil
1994-12-29T23:59:59.000Z
We study constraint effective potentials for various strongly interacting $\phi^4$ theories. Renormalization group (RG) equations for these quantities are discussed and a heuristic development of a commonly used RG approximation is presented which stresses the relationships among the loop expansion, the Schwinger-Dyson method and the renormalization group approach. We extend the standard RG treatment to account explicitly for finite lattice effects. Constraint effective potentials are then evaluated using Monte Carlo (MC) techniques and careful comparisons are made with RG calculations. Explicit treatment of finite lattice effects is found to be essential in achieving quantitative agreement with the MC effective potentials. Excellent agreement is demonstrated for $d=3$ and $d=4$, O(1) and O(2) cases in both symmetric and broken phases.
Ab-initio molecular dynamics simulation of liquid water by Quantum Monte Carlo
Andrea Zen; Ye Luo; Guglielmo Mazzola; Leonardo Guidoni; Sandro Sorella
2015-04-21T23:59:59.000Z
Although liquid water is ubiquitous in the chemical reactions at the roots of life and of the climate on earth, the prediction of its properties by high-level ab initio molecular dynamics simulations still represents a formidable task for quantum chemistry. In this article we present a room-temperature simulation of liquid water based on the potential energy surface obtained from a many-body wave function through quantum Monte Carlo (QMC) methods. The simulated properties are in good agreement with recent neutron scattering and X-ray experiments, particularly concerning the position of the oxygen-oxygen peak in the radial distribution function, at variance with previous Density Functional Theory attempts. Given the excellent performance of QMC on large-scale supercomputers, this work opens new perspectives for predictive and reliable ab initio simulations of complex chemical systems.
Update of the MCSANC Monte Carlo Integrator, v.1.20
A. Arbuzov; D. Bardin; S. Bondarenko; P. Christova; L. Kalinovskaya; U. Klein; V. Kolesnikov; R. Sadykov; A. Sapronov; F. Uskov
2015-09-10T23:59:59.000Z
This article presents new features of the MCSANC v.1.20 program, a Monte Carlo tool for calculation of next-to-leading-order electroweak and QCD corrections to various Standard Model processes. The extensions concern the implementation of Drell–Yan-like processes and include a systematic treatment of the photon-induced contribution in proton–proton collisions and electroweak corrections beyond the NLO approximation. There are also technical improvements, such as the calculation of the forward-backward asymmetry for the neutral-current Drell–Yan process. The updated code is suitable for studies of the effects of EW and QCD radiative corrections to Drell–Yan (and several other) processes at the LHC and at forthcoming high-energy proton–proton colliders.
Auxiliary field Monte-Carlo simulation of strong coupling lattice QCD for QCD phase diagram
Terukazu Ichihara; Akira Ohnishi; Takashi Z. Nakano
2014-10-07T23:59:59.000Z
We study the QCD phase diagram in the strong coupling limit, including fluctuation effects, by using the auxiliary field Monte-Carlo method. We apply the chiral angle fixing technique in order to obtain a finite chiral condensate in the chiral limit in finite volume. The behavior of the order parameters suggests that the chiral phase transition is second order or a crossover at low chemical potential and first order at high chemical potential. Compared with the mean-field results, the hadronic phase is suppressed at low chemical potential and extended at high chemical potential, as already suggested by the monomer-dimer-polymer simulations. We find that the sign problem originating from the bosonization procedure is weakened by a phase cancellation mechanism: a complex phase from one site tends to be canceled by the phase from the nearest-neighbor site as long as low-momentum auxiliary field contributions dominate.
The tau leptons theory and experimental data: Monte Carlo, fits, software and systematic errors
Zbigniew Was
2014-12-09T23:59:59.000Z
The status of the tau lepton decay Monte Carlo generator TAUOLA is reviewed. Recent efforts on the development of new hadronic currents are presented. A multitude of new channels for anomalous tau decay modes and a parametrization based on defaults used by the BaBar collaboration are introduced. Parametrizations based on theoretical considerations are presented as an alternative. Lessons from comparisons and fits to the BaBar and Belle data are recalled. It was found that, as in the past (in particular at the time of comparisons with CLEO and ALEPH data), proper fitting to as detailed a representation of the experimental data as possible is essential for appropriate development of models of tau decays. In the latter part of the presentation, the use of the TAUOLA program for the phenomenology of W, Z, and H decays at the LHC is addressed. Some new results relevant for QED bremsstrahlung in such decays are presented as well.
Bianco, Federica B; Oh, Seung Man; Fierroz, David; Liu, Yuqian; Kewley, Lisa; Graur, Or
2015-01-01T23:59:59.000Z
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity scales, based on the original IDL code of Kewley & Dopita (2002) with updates from Kewley & Ellison (2008), and expanded to include more recently developed scales. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo (MC) sampling, better characterizes the statistical reddening-corrected oxygen abundance confidence region. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 13 metallicity scales simultaneously, as well as for E(B-V), and estimates their median values and their 66% confidence regions. In additi...
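The sampling scheme described in this abstract (draw synthetic line fluxes from the measurement uncertainties, push each draw through a strong-line diagnostic, and read off percentiles) can be sketched as follows. This is a minimal illustration, not pyMCZ itself: `toy_diagnostic` is a hypothetical two-line stand-in, whereas the real metallicity scales are polynomial calibrations of several line ratios.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_confidence_region(flux, err, diagnostic, n=10000):
    """Propagate Gaussian flux uncertainties through a diagnostic by
    Monte Carlo sampling; return the median abundance and a 66%
    confidence interval (17th to 83rd percentile)."""
    samples = rng.normal(flux, err, size=(n, len(flux)))  # synthetic flux sets
    z = np.array([diagnostic(s) for s in samples])        # abundance per draw
    lo, med, hi = np.percentile(z, [17, 50, 83])
    return med, (lo, hi)

def toy_diagnostic(f):
    # Hypothetical stand-in: 12 + log(O/H) from a simple flux ratio.
    return 8.69 + np.log10(f[0] / f[1])

med, (lo, hi) = mc_confidence_region([5.0, 2.5], [0.5, 0.25], toy_diagnostic)
```

Given a set of measured fluxes and errors, the same pattern extends to any number of diagnostics evaluated on the same synthetic flux sets.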
Quantum Monte Carlo calculation of the equation of state of neutron matter
Gandolfi, S.; Illarionov, A. Yu.; Schmidt, K. E.; Pederiva, F.; Fantoni, S. [International School for Advanced Studies, SISSA Via Beirut 2/4 I-34014 Trieste (Italy) and INFN, Sezione di Trieste, Trieste (Italy); Department of Physics, Arizona State University, Tempe, Arizona 85287 (United States); Dipartimento di Fisica dell'Universita di Trento, via Sommarive 14, I-38050 Povo, Trento (Italy) and INFN, Gruppo Collegato di Trento, Trento (Italy); International School for Advanced Studies, SISSA Via Beirut 2/4 I-34014 Trieste (Italy); INFN, Sezione di Trieste, Trieste, Italy and INFM DEMOCRITOS National Simulation Center, Via Beirut 2/4 I-34014 Trieste (Italy)
2009-05-15T23:59:59.000Z
We calculated the equation of state of neutron matter at zero temperature by means of the auxiliary field diffusion Monte Carlo (AFDMC) method combined with a fixed-phase approximation. The calculation of the energy was carried out by simulating up to 114 neutrons in a periodic box. Special attention was given to reducing finite-size effects at the energy evaluation by adding to the interaction the effect due to the truncation of the simulation box, and by performing several simulations using different numbers of neutrons. The finite-size effects due to kinetic energy were also checked by employing the twist-averaged boundary conditions. We considered a realistic nuclear Hamiltonian containing modern two- and three-body interactions of the Argonne and Urbana family. The equation of state can be used to compare and calibrate other many-body calculations and to predict properties of neutron stars.
Monte Carlo procedure for protein folding in lattice model. Conformational rigidity
Olivier Collet
1999-07-19T23:59:59.000Z
A rigorous Monte Carlo method for protein folding simulation on a lattice model is introduced. We show that a parameter, which can be seen as the rigidity of the conformations, has to be introduced in order to satisfy the detailed balance condition. Its properties are discussed and its role during the folding process is elucidated. The method is applied to small chains on a two-dimensional lattice. A Bortz-Kalos-Lebowitz-type algorithm, which allows one to study the kinetics of the chains at very low temperature, is implemented within the presented method. We show that the coefficients of the Arrhenius law are in good agreement with the value of the main potential barrier of the system.
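The detailed-balance requirement this abstract turns on can be illustrated with a generic Metropolis-Hastings step. The proposal-probability ratio `q_ratio` below plays a role analogous to the conformational-rigidity factor: it corrects the acceptance probability when the move set is asymmetric. This is a minimal sketch under those assumptions, not the paper's lattice move set.

```python
import math
import random

random.seed(1)

def mh_step(x, energy, propose, T):
    """One Metropolis-Hastings step. `propose` returns (x_new, q_ratio)
    with q_ratio = q(x | x_new) / q(x_new | x); including this ratio in
    the acceptance probability enforces detailed balance even for
    asymmetric proposals."""
    x_new, q_ratio = propose(x)
    dE = energy(x_new) - energy(x)
    if random.random() < min(1.0, q_ratio * math.exp(-dE / T)):
        return x_new
    return x

# Sanity check on a harmonic "energy" E(x) = x^2/2 at T = 1:
# the stationary distribution is a standard normal.
def symmetric_propose(x):
    return x + random.uniform(-1.0, 1.0), 1.0  # symmetric, so q_ratio = 1

samples = []
x = 0.0
for _ in range(50000):
    x = mh_step(x, lambda y: 0.5 * y * y, symmetric_propose, 1.0)
    samples.append(x)
```

Omitting `q_ratio` for an asymmetric move set is exactly the kind of detailed-balance violation the paper's rigidity parameter is introduced to prevent.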
Ground State Calculations of Confined Hydrogen Molecule H_2 Using Variational Monte Carlo Method
Doma, S B; Amer, A A
2015-01-01T23:59:59.000Z
The variational Monte Carlo method is used to evaluate the ground-state energy of the confined hydrogen molecule, H_2. We considered the case of the hydrogen molecule confined by a hard prolate spheroidal cavity with the nuclear positions clamped at the foci (on-focus case). The case of off-focus nuclei, in which the two nuclei are not clamped to the foci, is also studied; this case provides flexibility in the treatment of the molecular properties by allowing an arbitrary size and shape of the confining spheroidal box. An accurate trial wave function depending on many variational parameters is used for this purpose. The obtained results are in good agreement with the most recent results.
MaGe - a Geant4-based Monte Carlo framework for low-background experiments
Yuen-Dat Chan; Jason A. Detwiler; Reyco Henning; Victor M. Gehman; Rob A. Johnson; David V. Jordan; Kareem Kazkaz; Markus Knapp; Kevin Kroninger; Daniel Lenz; Jing Liu; Xiang Liu; Michael G. Marino; Akbar Mokhtarani; Luciano Pandola; Alexis G. Schubert; Claudia Tomei
2008-02-06T23:59:59.000Z
A Monte Carlo framework, MaGe, has been developed based on the Geant4 simulation toolkit. Its purpose is to simulate physics processes in low-energy and low-background radiation detectors, specifically for the Majorana and Gerda $^{76}$Ge neutrinoless double-beta decay experiments. This jointly-developed tool is also used to verify the simulation of physics processes relevant to other low-background experiments in Geant4. The MaGe framework contains simulations of prototype experiments and test stands, and is easily extended to incorporate new geometries and configurations while still using the same verified physics processes, tunings, and code framework. This reduces duplication of efforts and improves the robustness of and confidence in the simulation output.
Monte Carlo simulation of the experiment MAMBO I and possible correction of neutron lifetime result
A. P. Serebrov; A. K. Fomin
2009-04-14T23:59:59.000Z
We discuss the present situation with neutron lifetime measurements. There is a serious discrepancy between the previous experiments and the recent precise experiment [1]. A possible reason for the discrepancy may be quasi-elastic scattering of UCN on the surface of the liquid fomblin that was used in most of the previous experiments. A Monte Carlo simulation of one of the previous experiments [2] shows that its result has to be corrected: instead of the previous value of 887.6 +/- 3 s, a new value of 880.4 +/- 3 s has to be claimed. [1] A.P. Serebrov et al., Phys. Lett. B 605 (2005) 72. [2] W. Mampe et al., Phys. Rev. Lett. 63 (1989) 593.
A bottom collider vertex detector design, Monte-Carlo simulation and analysis package
Lebrun, P.
1990-10-01T23:59:59.000Z
A detailed simulation of the BCD vertex detector is underway. Specifications and global design issues are briefly reviewed. The BCD design, based on double-sided strip detectors, is described in more detail. The GEANT3-based Monte-Carlo program and the analysis package used to estimate detector performance are discussed in detail. The current status of the expected resolution and signal-to-noise ratio for the "golden" CP-violating mode B{sub d} {yields} {pi}{sup +}{pi}{sup {minus}} is presented. These calculations have been done at FNAL energy ({radical}s = 2.0 TeV). Emphasis is placed on design issues, analysis techniques, and related software rather than physics potential. 20 refs., 46 figs.
Monte Carlo Neutrino Transport Through Remnant Disks from Neutron Star Mergers
Richers, S; O'Connor, Evan; Fernandez, Rodrigo; Ott, Christian
2015-01-01T23:59:59.000Z
We present Sedonu, a new open source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the case of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45 degrees from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentiall...
Validation of the Monte Carlo Criticality Program KENO V. a for highly-enriched uranium systems
Knight, J.R.
1984-11-01T23:59:59.000Z
A series of calculations based on critical experiments have been performed using the KENO V.a Monte Carlo Criticality Program for the purpose of validating KENO V.a for use in evaluating Y-12 Plant criticality problems. The experiments were reflected and unreflected systems of single units and arrays containing highly enriched uranium metal or uranium compounds. Various geometrical shapes were used in the experiments. The SCALE control module CSAS25 with the 27-group ENDF/B-4 cross-section library was used to perform the calculations. Some of the experiments were also calculated using the 16-group Hansen-Roach Library. Results are presented in a series of tables and discussed. Results show that the criteria established for the safe application of the KENO IV program may also be used for KENO V.a results.
Pethes, Ildikó
2015-01-01T23:59:59.000Z
Although liquid water has been studied for many decades by (X-ray and neutron) diffraction measurements, new experimental results keep appearing, virtually every year. The reason for this is that neither X-ray, nor neutron diffraction data are trivial to correct and interpret for this essential substance. Since X-rays are somewhat insensitive to hydrogen, neutron diffraction with (most frequently, H/D) isotopic substitution is vital for investigating the most important feature in water: hydrogen bonding. Here, the two very recent sets of neutron diffraction data are considered, both exploiting the contrast between light and heavy hydrogen, $^1$H and $^2$H, in different ways. Reverse Monte Carlo structural modeling is applied for constructing large structural models that are as consistent as possible with all experimental information, both in real and reciprocal space. The method has also proven to be useful for revealing where possible small inconsistencies appear during primary data processing: for one neutr...
SU-E-T-344: Validation and Clinical Experience of Eclipse Electron Monte Carlo Algorithm (EMC)
Pokharel, S [21st Century Oncology, Fort Myers, FL (United States); Rana, S [Procure Proton Therapy Center, Oklahoma City, OK (United States)
2014-06-01T23:59:59.000Z
Purpose: The purpose of this study is to validate the Eclipse Electron Monte Carlo (EMC) algorithm for routine clinical use. Methods: The PTW inhomogeneity phantom (T40037), with different combinations of heterogeneous slabs, was CT-scanned with a Philips Brilliance 16-slice scanner. The phantom contains blocks of Rando Alderson materials mimicking lung, polystyrene (tissue), PTFE (bone), and PMMA. The phantom has a 30×30×2.5 cm base plate with 2 cm recesses to insert inhomogeneities. The detector systems used in this study were diodes, TLDs, and Gafchromic EBT2 film. The diodes and TLDs were included in the CT scans. The CT sets were transferred to the Eclipse treatment planning system, and several plans were created with the Eclipse Monte Carlo (EMC) algorithm 11.0.21. Measurements were carried out on a Varian TrueBeam machine for energies from 6–22 MeV. Results: The measured and calculated doses agreed very well for tissue-like media. The agreement was reasonable in the presence of lung inhomogeneity: point-dose agreement was within 3.5%, and the Gamma passing rate at 3%/3mm was greater than 93% except for 6 MeV (85%). The disagreement can reach as high as 10% in the presence of bone inhomogeneity. This is because Eclipse reports dose to the medium, as opposed to dose to water as in conventional calculation engines. Conclusion: Care must be taken when using the Varian Eclipse EMC algorithm for routine clinical dose calculation. The algorithm does not report dose to water, on which most clinical experience is based; it reports dose to medium directly. In the presence of inhomogeneity such as bone, the dose discrepancy can be 10% or even more, depending on the location of the normalization point or volume. As radiation oncology is an empirical science, care must be taken before using EMC-reported monitor units for clinical use.
SU-E-T-277: Raystation Electron Monte Carlo Commissioning and Clinical Implementation
Allen, C; Sansourekidou, P; Pavord, D [Health-quest, Poughkeepsie, NY (United States)
2014-06-01T23:59:59.000Z
Purpose: To evaluate the Raystation v4.0 Electron Monte Carlo algorithm for an Elekta Infinity linear accelerator and commission for clinical use. Methods: A total of 199 tests were performed (75 Export and Documentation, 20 PDD, 30 Profiles, 4 Obliquity, 10 Inhomogeneity, 55 MU Accuracy, and 5 Grid and Particle History). Export and documentation tests were performed with respect to MOSAIQ (Elekta AB) and RadCalc (Lifeline Software Inc). Mechanical jaw parameters and cutout magnifications were verified. PDD and profiles for open cones and cutouts were extracted and compared with water tank measurements. Obliquity and inhomogeneity for bone and air calculations were compared to film dosimetry. MU calculations for open cones and cutouts were performed and compared to both RadCalc and simple hand calculations. Grid size and particle histories were evaluated per energy for statistical uncertainty performance. Acceptability was categorized as follows: performs as expected, negligible impact on workflow, marginal impact, critical impact or safety concern, and catastrophic impact of safety concern. Results: Overall results are: 88.8% perform as expected, 10.2% negligible, 2.0% marginal, 0% critical and 0% catastrophic. Results per test category are as follows: Export and Documentation: 100% perform as expected, PDD: 100% perform as expected, Profiles: 66.7% perform as expected, 33.3% negligible, Obliquity: 100% marginal, Inhomogeneity 50% perform as expected, 50% negligible, MU Accuracy: 100% perform as expected, Grid and particle histories: 100% negligible. To achieve distributions with satisfactory smoothness level, 5,000,000 particle histories were used. Calculation time was approximately 1 hour. Conclusion: Raystation electron Monte Carlo is acceptable for clinical use. All of the issues encountered have acceptable workarounds. Known issues were reported to Raysearch and will be resolved in upcoming releases.
Sunny, E. E.; Martin, W. R. [University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor MI 48109 (United States)
2013-07-01T23:59:59.000Z
Current Monte Carlo codes use one of three models to model neutron scattering in the epithermal energy range: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S({alpha},{beta}) model, depending on the neutron energy and the specific Monte Carlo code. The free gas scattering model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that using the free gas scattering model in the vicinity of the resonances in the lower epithermal range can under-predict resonance absorption due to the up-scattering phenomenon. Existing methods all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame. In this paper, we will present a new sampling methodology that (1) accounts for the energy-dependent scattering cross sections in the collision analysis and (2) acts in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on-the-fly during the random walk of the neutron. Results for criticality studies on fuel pin and fuel assembly calculations using these methods showed very close comparison to results using the reference Doppler-broadened rejection correction (DBRC) scheme. (authors)
SU-E-T-578: MCEBRT, A Monte Carlo Code for External Beam Treatment Plan Verifications
Chibani, O; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States); Eldib, A [Fox Chase Cancer Center, Philadelphia, PA (United States); Al-Azhar University, Cairo (Egypt)
2014-06-01T23:59:59.000Z
Purpose: To present a new Monte Carlo code (MCEBRT) for patient-specific dose calculations in external beam radiotherapy. The code's MLC model is benchmarked, and real patient plans are re-calculated using MCEBRT and compared with a commercial TPS. Methods: MCEBRT is based on the GEPTS system (Med. Phys. 29 (2002) 835–846). Phase space data generated for Varian linac photon beams (6–15 MV) are used as the source term. MCEBRT uses a realistic MLC model (tongue and groove, rounded ends). Patient CT and DICOM RT files are used to generate a 3D patient phantom and simulate the treatment configuration (gantry, collimator and couch angles; jaw positions; MLC sequences; MUs). MCEBRT dose distributions and DVHs are compared with those from the TPS in absolute terms (Gy). Results: Calculations based on the developed MLC model closely match transmission measurements (pin-point ionization chamber at selected positions and film for the lateral dose profile). See Fig. 1. Dose calculations for two clinical cases (whole-brain irradiation with opposed beams and a lung case with eight fields) were carried out and the outcomes compared with the Eclipse AAA algorithm. Good agreement is observed for the brain case (Figs. 2-3) except at the surface, where the MCEBRT dose can be higher by 20%; this is due to better modeling of electron contamination by MCEBRT. For the lung case an overall good agreement (91% gamma index passing rate with 3%/3mm DTA criterion) is observed (Fig. 4), but dose in lung can be over-estimated by up to 10% by AAA (Fig. 5). CTV and PTV DVHs from the TPS and MCEBRT are nevertheless close (Fig. 6). Conclusion: A new Monte Carlo code has been developed for plan verification. Contrary to phantom-based QA measurements, MCEBRT simulates the exact patient geometry and tissue composition. MCEBRT can be used as an extra verification layer for plans where surface dose and tissue heterogeneity are an issue.
Zhang, Pengfei; Wang, Qiang, E-mail: q.wang@colostate.edu [Department of Chemical and Biological Engineering, Colorado State University, Fort Collins, Colorado 80523-1370 (United States)]
2014-01-28T23:59:59.000Z
Using fast lattice Monte Carlo (FLMC) simulations [Q. Wang, Soft Matter 5, 4564 (2009)] and the corresponding lattice self-consistent field (LSCF) calculations, we studied a model system of grafted homopolymers, in both the brush and mushroom regimes, in an explicit solvent compressed by an impenetrable surface. Direct comparisons between FLMC and LSCF results, both of which are based on the same Hamiltonian (thus without any parameter-fitting between them), unambiguously and quantitatively reveal the fluctuations/correlations neglected by the latter. We studied both the structure (including the canonical-ensemble averages of the height and the mean-square end-to-end distances of grafted polymers) and thermodynamics (including the ensemble-averaged reduced energy density and the related internal energy per chain, the differences in the Helmholtz free energy and entropy per chain from the uncompressed state, and the pressure due to compression) of the system. In particular, we generalized the method for calculating pressure in lattice Monte Carlo simulations proposed by Dickman [J. Chem. Phys. 87, 2246 (1987)], and combined it with the Wang-Landau–Optimized Ensemble sampling [S. Trebst, D. A. Huse, and M. Troyer, Phys. Rev. E 70, 046701 (2004)] to efficiently and accurately calculate the free energy difference and the pressure due to compression. While we mainly examined the effects of the degree of compression, the distance between the nearest-neighbor grafting points, the reduced number of chains grafted at each grafting point, and the system fluctuations/correlations in an athermal solvent, the θ-solvent is also considered in some cases.
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
Arampatzis, Georgios, E-mail: garab@math.uoc.gr [Department of Applied Mathematics, University of Crete (Greece); Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)]; Katsoulakis, Markos A., E-mail: markos@math.umass.edu [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)]
2014-03-28T23:59:59.000Z
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz–Kalos–Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach.
We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
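The variance-reduction idea behind this abstract (simulate the perturbed and unperturbed processes on correlated rather than independent noise) can be demonstrated in miniature with Common Random Numbers, the baseline approach the paper improves upon. The toy model below is an assumption chosen for clarity, not one of the paper's KMC examples.

```python
import numpy as np

rng = np.random.default_rng(42)

def fd_sensitivity(theta, h=0.1, n=20000, coupled=True):
    """Finite-difference estimate of d/d(theta) of E[f(X)] for the toy
    model X = theta + Z, Z ~ N(0, 1), f(x) = x**2 (true derivative is
    2*theta). With coupled=True the perturbed and unperturbed runs share
    the same noise (Common Random Numbers), which drastically reduces
    the variance of the difference estimator."""
    z_pert = rng.standard_normal(n)
    z_base = z_pert if coupled else rng.standard_normal(n)
    diffs = ((theta + h + z_pert) ** 2 - (theta + z_base) ** 2) / h
    return diffs.mean(), diffs.var()

mean_c, var_c = fd_sensitivity(1.0, coupled=True)
mean_u, var_u = fd_sensitivity(1.0, coupled=False)
```

For this model the coupled per-sample variance is exactly 4, while the uncoupled one is inflated by the 1/h**2 factor; the goal-oriented couplings of the paper push the same idea further by tailoring the coupling to the observable.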
Taylor, Michael, E-mail: michael.taylor@rmit.edu.au [School of Applied Sciences, College of Science, Engineering and Health, RMIT University, Melbourne, Victoria (Australia); Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Dunn, Leon; Kron, Tomas; Height, Felicity; Franich, Rick [School of Applied Sciences, College of Science, Engineering and Health, RMIT University, Melbourne, Victoria (Australia); Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia)
2012-04-01T23:59:59.000Z
Prediction of dose distributions in close proximity to interfaces is difficult. In the context of radiotherapy of lung tumors, this may affect the minimum dose received by lesions and is particularly important when prescribing dose to covering isodoses. The objective of this work is to quantify underdosage in key regions around a hypothetical target using Monte Carlo dose calculation methods, and to develop a factor for clinical estimation of such underdosage. A systematic set of calculations are undertaken using 2 Monte Carlo radiation transport codes (EGSnrc and GEANT4). Discrepancies in dose are determined for a number of parameters, including beam energy, tumor size, field size, and distance from chest wall. Calculations were performed for 1-mm{sup 3} regions at proximal, distal, and lateral aspects of a spherical tumor, determined for a 6-MV and a 15-MV photon beam. The simulations indicate regions of tumor underdose at the tumor-lung interface. Results are presented as ratios of the dose at key peripheral regions to the dose at the center of the tumor, a point at which the treatment planning system (TPS) predicts the dose more reliably. Comparison with TPS data (pencil-beam convolution) indicates such underdosage would not have been predicted accurately in the clinic. We define a dose reduction factor (DRF) as the average of the dose in the periphery in the 6 cardinal directions divided by the central dose in the target, the mean of which is 0.97 and 0.95 for a 6-MV and 15-MV beam, respectively. The DRF can assist clinicians in the estimation of the magnitude of potential discrepancies between prescribed and delivered dose distributions as a function of tumor size and location. Calculation for a systematic set of 'generic' tumors allows application to many classes of patient case, and is particularly useful for interpreting clinical trial data.
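The DRF defined in this abstract is a one-line computation; a minimal sketch (the function name and inputs are illustrative, not taken from the paper's code):

```python
def dose_reduction_factor(peripheral_doses, central_dose):
    """DRF per the definition above: the mean of the doses at the six
    cardinal peripheral points of the tumor divided by the dose at the
    tumor center (the point the TPS predicts most reliably)."""
    if len(peripheral_doses) != 6:
        raise ValueError("expected doses at the 6 cardinal directions")
    return sum(peripheral_doses) / (6.0 * central_dose)

# A uniformly underdosed periphery at 97% of a 2 Gy central dose:
drf = dose_reduction_factor([1.94] * 6, 2.0)
```

A DRF of 0.97 or 0.95 then reads directly as the average peripheral underdosage for a 6-MV or 15-MV beam, respectively.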
Crawford, John R.
Testing for Suspected Impairments and Dissociations in Single-Case Studies in Neuropsychology: Evaluation of Alternatives Using Monte Carlo Simulations and Revised Tests for Dissociations
In such single-case studies, a patient is compared with a small control sample. Methods of testing for a deficit on Task X...
A Fast and Accurate Monte Carlo EAS Simulation Scheme (30th International Cosmic Ray Conference)
...a full and quasi-full MC simulation with a particle energy threshold of 500 keV (the primary energy is set by the user). Apart from thinning, a number of papers treat techniques to simulate ultra-high-energy showers...
Monte Carlo data-driven tight frame for seismic data recovery
Shiwei Yu; Jianwei Ma; Stanley Osher
Ferguson, Thomas S.
In seismic exploration, noise and missing traces are unavoidable; the noise comes from various sources. The Monte Carlo data-driven tight frame has been introduced for seismic data denoising and interpolation, and its adaptability to seismic data...
Monte Carlo study of the CO-poisoning dynamics in a model for the catalytic oxidation of CO
Marro, Joaquín
The poisoning dynamics of the Ziff–Gulari–Barshad model [Phys. Rev. Lett. 56, 2553 (1986)] are studied for a monomer absorbing state close to the coexistence point. Analysis of the average poisoning time (τ_p) allows us...
Tafreshi, Hooman Vahedi
Analytical Monte Carlo Ray Tracing simulation of steady-state radiative heat transfer through bimodal fibrous insulation materials. The simulations are conducted in 3-D disordered media, assuming radiation and conduction to be the only modes of heat transfer in fibrous insulation materials.
Boas, David
Perturbation Monte Carlo methods [Optics Letters, Vol. 26, No. 17, p. 1335, September 1, 2001]: derivatives are computed with respect to perturbations in background tissue optical properties. We then feed this derivative information to a nonlinear optimization algorithm to determine the optical properties of the tissue heterogeneity under...
A Monte Carlo Study of the Specific Heat [Int. J. Mod. Phys. C (1999), accepted for publication]
Usadel, K. D.
1999-01-01T23:59:59.000Z
...is suppressed in the FC case. The specific heat shows a noncritical broad maximum above the transition, whereas our interpretation of the data is different. Keywords: critical-point effects, specific heats
Vrugt, Jasper A.
...introduce considerable uncertainty in the model parameters and predictions. Markov chain Monte Carlo (MCMC) methods are increasingly popular for aquifer and reservoir characterization, and for parameter estimation and statistical analysis of model predictive uncertainty [Kennedy and O'Hagan, 2001]...
Lutzoni, François M.
Bayes or Bootstrap? A Simulation Study Comparing the Performance of Bayesian Markov Chain Monte Carlo Sampling and Bootstrapping in Assessing Phylogenetic Confidence
Michael E. Alfaro; Stefan Zoller
...measures of confidence, and the most commonly used confidence measure in phylogenetics, the nonparametric bootstrap...
Ramos-Mendez, Jose [Benemerita Universidad Autonoma de Puebla, 18 Sur and San Claudio Avenue, Puebla, Puebla 72750 (Mexico); Perl, Joseph [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025 (United States); Faddegon, Bruce [Department of Radiation Oncology, University of California at San Francisco, California 94143 (United States); Schuemann, Jan; Paganetti, Harald [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)
2013-04-15T23:59:59.000Z
Purpose: To present the implementation and validation of a geometrical based variance reduction technique for the calculation of phase space data for proton therapy dose calculation. Methods: The treatment heads at the Francis H Burr Proton Therapy Center were modeled with a new Monte Carlo tool (TOPAS based on Geant4). For variance reduction purposes, two particle-splitting planes were implemented. First, the particles were split upstream of the second scatterer or at the second ionization chamber. Then, particles reaching another plane immediately upstream of the field specific aperture were split again. In each case, particles were split by a factor of 8. At the second ionization chamber and at the latter plane, the cylindrical symmetry of the proton beam was exploited to position the split particles at randomly spaced locations rotated around the beam axis. Phase space data in IAEA format were recorded at the treatment head exit and the computational efficiency was calculated. Depth-dose curves and beam profiles were analyzed. Dose distributions were compared for a voxelized water phantom for different treatment fields for both the reference and optimized simulations. In addition, dose in two patients was simulated with and without particle splitting to compare the efficiency and accuracy of the technique. Results: A normalized computational efficiency gain of a factor of 10-20.3 was reached for phase space calculations for the different treatment head options simulated. Depth-dose curves and beam profiles were in reasonable agreement with the simulation done without splitting: within 1% for depth-dose with an average difference of (0.2 {+-} 0.4)%, 1 standard deviation, and a 0.3% statistical uncertainty of the simulations in the high dose region; 1.6% for planar fluence with an average difference of (0.4 {+-} 0.5)% and a statistical uncertainty of 0.3% in the high fluence region. 
The percentage differences between dose distributions in water for simulations done with and without particle splitting were within the accepted clinical tolerance of 2%, with a 0.4% statistical uncertainty. For the two patient geometries considered, head and prostate, the efficiency gain was 20.9 and 14.7, respectively, with the percentages of voxels with gamma indices lower than unity 98.9% and 99.7%, respectively, using 2% and 2 mm criteria. Conclusions: The authors have implemented an efficient variance reduction technique with significant speed improvements for proton Monte Carlo simulations. The method can be transferred to other codes and other treatment heads.
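The azimuthal part of the splitting scheme can be sketched as follows. This is a minimal illustration of the idea, not the TOPAS implementation; the function name, the flat weight division, and the choice of a fresh random angle per copy are assumptions.

```python
import math
import random

def split_particle(x, y, weight, n_split=8):
    """Split one particle crossing a splitting plane into n_split copies,
    each placed at a randomly rotated azimuthal position around the beam
    (z) axis, exploiting the beam's cylindrical symmetry. The statistical
    weight is divided so the expected contribution is unchanged."""
    r = math.hypot(x, y)  # radial distance from the beam axis is preserved
    copies = []
    for _ in range(n_split):
        phi = random.uniform(0.0, 2.0 * math.pi)
        copies.append((r * math.cos(phi), r * math.sin(phi), weight / n_split))
    return copies
```

Each copy keeps the parent's radius, so radially symmetric quantities (depth-dose, planar fluence) are unbiased while the effective particle count grows by the split factor.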
A Monte Carlo Analysis of Gas Centrifuge Enrichment Plant Process Load Cell Data
Garner, James R. [ORNL]; Whitaker, J. Michael [ORNL]
2013-01-01T23:59:59.000Z
As uranium enrichment plants increase in number, capacity, and types of separative technology deployed (e.g., gas centrifuge, laser, etc.), more automated safeguards measures are needed to enable the IAEA to maintain safeguards effectiveness in a fiscally constrained environment. Monitoring load cell data can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards objective of confirming operations as declared (i.e., no undeclared activities), but care must be taken to fully protect the operator's proprietary and classified information related to operations. Staff at ORNL, LANL, JRC/ISPRA, and the University of Glasgow are investigating monitoring the process load cells at feed and withdrawal (F/W) stations to improve international safeguards at enrichment plants. A key question that must be resolved is the necessary frequency of recording data from the process F/W stations. Several studies have analyzed data collected at a fixed frequency. This paper contributes to load cell process monitoring research by presenting an analysis of Monte Carlo simulations to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.
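The effect of sampling frequency on an integrated balance can be illustrated with a toy Monte Carlo experiment. All numbers below are hypothetical: a sinusoidal feed-rate fluctuation with a random phase stands in for real F/W station dynamics, and the estimate uses a simple rectangle rule over the samples.

```python
import math
import random

def sampling_error_rms(sample_dt, duration=100.0, trials=500):
    """Toy Monte Carlo sketch: a sinusoidal feed-rate fluctuation with a
    random phase is integrated exactly and compared with a rectangle-rule
    integral built from load-cell samples taken every sample_dt time units.
    Returns the RMS error of the sampled mass balance over many trials."""
    omega, amp = 0.37, 0.2  # fluctuation frequency and amplitude (arbitrary)
    errors = []
    for _ in range(trials):
        phase = random.uniform(0.0, 2.0 * math.pi)
        rate = lambda t: 1.0 + amp * math.sin(omega * t + phase)
        # exact integral of the rate over [0, duration]
        true_mass = duration + (amp / omega) * (
            math.cos(phase) - math.cos(omega * duration + phase))
        n = int(round(duration / sample_dt))
        est = sum(rate(i * sample_dt) * sample_dt for i in range(n))
        errors.append(est - true_mass)
    return (sum(e * e for e in errors) / trials) ** 0.5
```

Running this for a coarse and a fine sampling interval shows the expected trend: the RMS balance error grows sharply once the sample spacing becomes comparable to the fluctuation period.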
A high-fidelity Monte Carlo evaluation of CANDU-6 safety parameters
Kim, Y.; Hartanto, D. [Korea Advanced Inst. of Science and Technology KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 305-701 (Korea, Republic of)
2012-07-01T23:59:59.000Z
Important safety parameters such as the fuel temperature coefficient (FTC) and the power coefficient of reactivity (PCR) of the CANDU-6 (CANada Deuterium Uranium) reactor have been evaluated by using a modified MCNPX code. For accurate analysis of the parameters, the DBRC (Doppler Broadening Rejection Correction) scheme was implemented in MCNPX in order to account for the thermal motion of the heavy uranium nucleus in neutron-U scattering reactions. In this work, a standard fuel lattice has been modeled, the fuel is depleted by using MCNPX, and the FTC value is evaluated for several burnup points including the mid-burnup representing a near-equilibrium core. The Doppler effect has been evaluated by using several cross section libraries such as ENDF/B-VI, ENDF/B-VII, JEFF, and JENDL. The PCR value is also evaluated at mid-burnup conditions to characterize the safety features of the equilibrium CANDU-6 reactor. To improve the reliability of the Monte Carlo calculations, a huge number of neutron histories is considered in this work and the standard deviation of the k-inf values is only 0.5-1 pcm. It has been found that the FTC is significantly enhanced by accounting for the Doppler broadening of scattering resonances and the PCR is clearly improved. (authors)
Ildikó Pethes; László Pusztai
2015-08-25T23:59:59.000Z
Although liquid water has been studied for many decades by (X-ray and neutron) diffraction measurements, new experimental results keep appearing, virtually every year. The reason for this is that neither X-ray, nor neutron diffraction data are trivial to correct and interpret for this essential substance. Since X-rays are somewhat insensitive to hydrogen, neutron diffraction with (most frequently, H/D) isotopic substitution is vital for investigating the most important feature in water: hydrogen bonding. Here, the two very recent sets of neutron diffraction data are considered, both exploiting the contrast between light and heavy hydrogen, $^1$H and $^2$H, in different ways. Reverse Monte Carlo structural modeling is applied for constructing large structural models that are as consistent as possible with all experimental information, both in real and reciprocal space. The method has also proven to be useful for revealing where possible small inconsistencies appear during primary data processing: for one neutron data set, it is the molecular geometry that may not be maintained within reasonable limits, whereas for the other set, it is one of the (composite) radial distribution functions that cannot be modeled at the same (high) level as the other three functions. Nevertheless, details of the local structure around the hydrogen bonds appear very much the same for both data sets: the most probable hydrogen bond angle is straight, and the nearest oxygen neighbours of a central oxygen atom occupy approximately tetrahedral positions.
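The move-acceptance rule at the heart of Reverse Monte Carlo modeling takes a simple form: a random particle move is accepted whenever it lowers the chi-squared misfit to the experimental data, and otherwise with a Boltzmann-like probability. The function below is an illustrative sketch of that standard rule, not the authors' code.

```python
import math
import random

def rmc_accept(chi2_old, chi2_new):
    """Standard Reverse Monte Carlo acceptance: always accept a move that
    reduces the chi-squared misfit between the model's computed structure
    factors and the measured data; otherwise accept with probability
    exp(-(chi2_new - chi2_old) / 2)."""
    if chi2_new <= chi2_old:
        return True
    return random.random() < math.exp(-(chi2_new - chi2_old) / 2.0)
```

In a full RMC run this test is applied after recomputing the model's radial distribution functions or structure factors for the trial move, with hard constraints (such as the molecular geometry limits mentioned above) enforced separately.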
Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons
Mei, S., E-mail: smei4@wisc.edu; Knezevic, I., E-mail: knezevic@engr.wisc.edu [Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Maurer, L. N. [Department of Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Aksamija, Z. [Department of Electrical and Computer Engineering, University of Massachusetts-Amherst, Amherst, Massachusetts 01003 (United States)
2014-10-28T23:59:59.000Z
We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
Abdel-Khalik, Hany S.; Gardner, Robin; Mattingly, John; Sood, Avneet
2014-05-20T23:59:59.000Z
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10-10 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Tian, Zhen; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-01-01T23:59:59.000Z
Monte Carlo (MC) simulation is considered the most accurate method for radiation dose calculations. The accuracy of a source model for a linear accelerator is critical for the overall dose calculation accuracy. In this paper, we present an analytical source model that we recently developed for GPU-based MC dose calculations. A key concept called the phase-space-ring (PSR) was proposed. It contains a group of particles that are of the same type and close in energy and radial distance to the center of the phase-space plane. The model parameterizes the probability densities of particle location, direction, and energy for each primary photon PSR, scattered photon PSR, and electron PSR. For a primary photon PSR, the particle direction is assumed to be from the beam spot. A finite spot size is modeled with a 2D Gaussian distribution. For a scattered photon PSR, multiple Gaussian components were used to model the particle direction. The direction distribution of an electron PSR was also modeled as a 2D Gaussian distributi...
A kinetic Monte Carlo method for the simulation of massive phase transformations
Bos, C.; Sommer, F.; Mittemeijer, E. J.
2004-07-12T23:59:59.000Z
A multi-lattice kinetic Monte Carlo method has been developed for the atomistic simulation of massive phase transformations. Besides sites on the crystal lattices of the parent and product phases, randomly placed sites are incorporated as possible positions. These random sites allow the atoms to take favourable intermediate positions, essential for a realistic description of transformation interfaces. The transformation from fcc to bcc starting from a flat interface with the fcc(1 1 1)//bcc(1 1 0) and fcc[1 1 1-bar]//bcc[0 0 1-bar] orientation in a single-component system has been simulated. Growth occurs in two different modes depending on the chosen values of the bond energies. For larger fcc-bcc energy differences, continuous growth is observed with a rough transformation front. For smaller energy differences, plane-by-plane growth is observed. In this growth mode two-dimensional nucleation is required in the next fcc plane after completion of the transformation of the previous fcc plane.
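A single step of a rejection-free kinetic Monte Carlo scheme of this kind can be sketched as follows. This is the generic BKL-style event selection with an exponentially distributed clock advance, not the authors' multi-lattice implementation; the rate list is an assumption standing in for the catalog of possible atomic jumps.

```python
import math
import random

def kmc_step(rates):
    """One rejection-free (BKL) kinetic Monte Carlo step: choose an event
    with probability proportional to its rate, then advance the clock by
    an exponentially distributed waiting time with mean 1/total_rate."""
    total = sum(rates)
    r = total * (1.0 - random.random())  # in (0, total]: zero-rate events never chosen
    chosen = len(rates) - 1  # fallback guards against float rounding
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r <= acc:
            chosen = i
            break
    dt = -math.log(1.0 - random.random()) / total
    return chosen, dt
```

Repeating this step while updating the rate catalog after each executed event yields the simulated trajectory of the transformation front.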
Boscoboinik, A. M.; Manzi, S. J.; Tysoe, W. T.; Pereyra, V. D.; Boscoboinik, J. A.
2015-09-10T23:59:59.000Z
The influence of directing agents in the self-assembly of molecular wires to produce two-dimensional electronic nanoarchitectures is studied here using a Monte Carlo approach to simulate the effect of arbitrarily locating nodal points on a surface, from which the growth of self-assembled molecular wires can be nucleated. This is compared to experimental results reported for the self-assembly of molecular wires when 1,4-phenylenediisocyanide (PDI) is adsorbed on Au(111). The latter results in the formation of (Au-PDI)n organometallic chains, which were shown to be conductive when linked between gold nanoparticles on an insulating substrate. The present study analyzes, by means of stochastic methods, the influence of variables that affect the growth and design of self-assembled conductive nanoarchitectures, such as the distance between nodes, the coverage of the monomeric units that leads to the formation of the desired architectures, and the interaction between the monomeric units. This study proposes an approach and sets the stage for the production of complex 2D nanoarchitectures using a bottom-up strategy but including the use of current state-of-the-art top-down technology as an integral part of the self-assembly strategy.
Investigating the rotational evolution of young, low mass stars using Monte Carlo simulations
Vasconcelos, M J
2015-01-01T23:59:59.000Z
We investigate the rotational evolution of young stars through Monte Carlo simulations. We simulate 280,000 stars, each of which is assigned a mass, a rotational period, and a mass accretion rate. The mass accretion rate depends on mass and time, following power laws with indices 1.4 and -1.5, respectively. A mass-dependent accretion threshold is defined below which a star is considered diskless, which results in a distribution of disk lifetimes that matches observations. Stars are evolved at constant angular spin rate while accreting and at constant angular momentum when they become diskless. We recover the bimodal period distribution seen in several young clusters. The short-period peak consists mostly of diskless stars and the long-period one is mainly populated by accreting stars. Both distributions present a long tail towards long periods, and a population of slowly rotating diskless stars is observed at all ages. We reproduce the observed correlations between disk fraction and spin rate, as well as between...
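The two spin regimes described above amount to a simple update rule. The sketch below assumes a moment of inertia proportional to MR² at constant mass, so conserving angular momentum J ∝ R²/P makes the period scale as R²; the function name and arguments are illustrative, not taken from the paper.

```python
def evolve_period(period0, radius0, radius1, accreting):
    """Sketch of the two spin regimes: while accreting ("disk-locked")
    the star is held at constant angular spin rate, so the period is
    unchanged; once diskless, angular momentum J ~ R^2 / P is conserved
    (assuming I ~ M R^2, constant mass), so P scales as R^2."""
    if accreting:
        return period0
    return period0 * (radius1 / radius0) ** 2
```

A contracting diskless star that halves its radius therefore spins up by a factor of four, which is the mechanism populating the short-period peak with diskless stars.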
An excited-state approach within full configuration interaction quantum Monte Carlo
Blunt, N S; Booth, George H; Alavi, Ali
2015-01-01T23:59:59.000Z
We present a new approach to calculate excited states with the full configuration interaction quantum Monte Carlo (FCIQMC) method. The approach uses a Gram-Schmidt procedure, instantaneously applied to the stochastically evolving distributions of walkers, to orthogonalize higher energy states against lower energy ones. It can thus be used to study several of the lowest-energy states of a system within the same symmetry. This additional step is particularly simple and computationally inexpensive, requiring only a small change to the underlying FCIQMC algorithm. No trial wave functions or partitioning of the space is needed. The approach should allow excited states to be studied for systems similar to those accessible to the ground-state method, since the computational cost is comparable and the effort required to converge the excited states shows a similar sub-linear scaling with system size. As a first application we consider the carbon dimer in basis sets up to quadruple-zeta quality, and compare to exis...
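On dense coefficient vectors the orthogonalization step is ordinary Gram-Schmidt, as sketched below; in FCIQMC it acts on stochastic walker populations (with the projection applied instantaneously during propagation), which this deterministic sketch does not capture.

```python
def gram_schmidt(vectors):
    """Plain Gram-Schmidt: each state is projected against all earlier
    (lower-energy) states in turn, so state i ends up orthogonal to
    states 0..i-1, mirroring the scheme of orthogonalizing higher
    states against lower-energy ones."""
    ortho = []
    for v in vectors:
        w = list(v)
        for u in ortho:
            overlap = sum(a * b for a, b in zip(u, w))
            norm2 = sum(a * a for a in u)
            w = [wi - (overlap / norm2) * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    return ortho
```

Because only inner products and vector updates are involved, the extra cost per iteration is small compared with the propagation step itself.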
Saha, Krishnendu [Ohio Medical Physics Consulting, Dublin, Ohio 43017 (United States); Straus, Kenneth J.; Glick, Stephen J. [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States); Chen, Yu. [Department of Radiation Oncology, Columbia University, New York, New York 10032 (United States)
2014-08-28T23:59:59.000Z
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
Äkäslompolo, Simppa; Tardini, Giovanni; Kurki-Suonio, Taina
2015-01-01T23:59:59.000Z
The activation probe is a robust tool to measure the flux of fusion products from a magnetically confined plasma. A carefully chosen solid sample is exposed to the flux, and the impinging ions transmute the material, making it radioactive. Ultra-low-level gamma-ray spectroscopy is used post mortem to measure the activity and, thus, the number of fusion products. This contribution presents the numerical analysis of the first measurement in the ASDEX Upgrade tokamak, which was also the first experiment to measure the flux from a single discharge. The ASCOT suite of codes was used to perform adjoint/reverse Monte-Carlo calculations of the fusion products. The analysis facilitated, for the first time, a comparison of numerical and experimental values for absolutely calibrated flux. The results agree to within 40%, which can be considered remarkable given that not all features of the plasma can be accounted for in the simulations. Also an alternative probe orientation was studied. The results suggest that a better optimized...
Intra-Globular Structures in Multiblock Copolymer Chains from a Monte Carlo Simulation
Krzysztof Lewandowski; Michal Banaszak
2014-10-16T23:59:59.000Z
Multiblock copolymer chains in implicit nonselective solvents are studied by a Monte Carlo method which employs a parallel tempering algorithm. Chains consisting of 120 $A$ and 120 $B$ monomers, arranged in three distinct microarchitectures: $(10-10)_{12}$, $(6-6)_{20}$, and $(3-3)_{40}$, collapse to globular states upon cooling, as expected. By varying both the reduced temperature $T^*$ and the compatibility between monomers $\omega$, numerous intra-globular structures are obtained: diclusters (handshake, spiral, torus with a core, etc.), triclusters, and $n$-clusters with $n>3$ (lamellar and other), which are reminiscent of the block copolymer nanophases for spherically confined geometries. Phase diagrams for various chains in the $(T^*, \omega)$-space are mapped. The structure factor $S(k)$, for a selected microarchitecture and $\omega$, is calculated. Since $S(k)$ can be measured in scattering experiments, it can be used to relate simulation results to an experiment. Self-assembly in these systems is interpreted in terms of competition between minimization of the interfacial area separating different types of monomers and minimization of contacts between chain and solvent. Finally, the relevance of this model to protein folding is addressed.
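The replica-swap move at the core of parallel tempering follows the standard Metropolis criterion; the function below is a generic sketch of that criterion, not the authors' code.

```python
import math
import random

def swap_accept(beta_i, beta_j, energy_i, energy_j):
    """Standard parallel tempering (replica exchange) swap criterion:
    configurations at inverse temperatures beta_i and beta_j are
    exchanged with probability
    min(1, exp((beta_i - beta_j) * (energy_i - energy_j)))."""
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0.0 or random.random() < math.exp(delta)
```

Swaps between neighboring temperatures let low-temperature replicas escape metastable globular structures by borrowing configurations that decorrelated at high temperature.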
Introduction to Computational Physics and Monte Carlo Simulations of Matrix Field Theory
Badis Ydri
2015-06-05T23:59:59.000Z
This book is divided into two parts. In the first part we give an elementary introduction to computational physics consisting of 21 simulations which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories, which involve finite dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of the first part is added at the end of the book.
Composition PDF/photon Monte Carlo modeling of moderately sooting turbulent jet flames
Mehta, R.S.; Haworth, D.C.; Modest, M.F. [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA 16802 (United States)
2010-05-15T23:59:59.000Z
A comprehensive model for luminous turbulent flames is presented. The model features detailed chemistry, radiation and soot models and state-of-the-art closures for turbulence-chemistry interactions and turbulence-radiation interactions. A transported probability density function (PDF) method is used to capture the effects of turbulent fluctuations in composition and temperature. The PDF method is extended to include soot formation. Spectral gas and soot radiation is modeled using a (particle-based) photon Monte Carlo method coupled with the PDF method, thereby capturing both emission and absorption turbulence-radiation interactions. An important element of this work is that the gas-phase chemistry and soot models that have been thoroughly validated across a wide range of laminar flames are used in turbulent flame simulations without modification. Six turbulent jet flames are simulated with Reynolds numbers varying from 6700 to 15,000, two fuel types (pure ethylene, 90% methane-10% ethylene blend) and different oxygen concentrations in the oxidizer stream (from 21% O{sub 2} to 55% O{sub 2}). All simulations are carried out with a single set of physical and numerical parameters (model constants). Uniformly good agreement between measured and computed mean temperatures, mean soot volume fractions and (where available) radiative fluxes is found across all flames. This demonstrates that with the combination of a systematic approach and state-of-the-art physical models and numerical algorithms, it is possible to simulate a broad range of luminous turbulent flames with a single model. (author)
Byun, H. S.; Pirbadian, S.; Nakano, Aiichiro; Shi, Liang; El-Naggar, Mohamed Y.
2014-09-05T23:59:59.000Z
Microorganisms overcome the considerable hurdle of respiring extracellular solid substrates by deploying large multiheme cytochrome complexes that form 20 nanometer conduits to traffic electrons through the periplasm and across the cellular outer membrane. Here we report the first kinetic Monte Carlo simulations and single-molecule scanning tunneling microscopy (STM) measurements of the Shewanella oneidensis MR-1 outer membrane decaheme cytochrome MtrF, which can perform the final electron transfer step from cells to minerals and microbial fuel cell anodes. We find that the calculated electron transport rate through MtrF is consistent with previously reported in vitro measurements of the Shewanella Mtr complex, as well as in vivo respiration rates on electrode surfaces assuming a reasonable (experimentally verified) coverage of cytochromes on the cell surface. The simulations also reveal a rich phase diagram in the overall electron occupation density of the hemes as a function of electron injection and ejection rates. Single molecule tunneling spectroscopy confirms MtrF's ability to mediate electron transport between an STM tip and an underlying Au(111) surface, but at rates higher than expected from previously calculated heme-heme electron transfer rates for solvated molecules.
Tushar Kanti Bose; Jayashree Saha
2015-03-06T23:59:59.000Z
The realization of a spontaneous macroscopic ferroelectric order in fluids of anisotropic mesogens is a topic of both fundamental and technological interest. Recently, we demonstrated that a system of dipolar achiral disklike ellipsoids can exhibit long-searched ferroelectric liquid crystalline phases of dipolar origin. In the present work, extensive off-lattice Monte Carlo simulations are used to investigate the phase behavior of the system under the influences of the electrostatic boundary conditions that restrict any global polarization. We find that the system develops strongly ferroelectric slablike domains periodically arranged in an antiferroelectric fashion. Exploring the phase behavior at different dipole strengths, we find existence of the ferroelectric nematic and ferroelectric columnar order inside the domains. For higher dipole strengths, a biaxial phase is also obtained with a similar periodic array of ferroelectric slabs of antiparallel polarizations. We have studied the depolarizing effects by using both the Ewald summation and the spherical cut-off techniques. We present and compare the results of the two different approaches of considering the depolarizing effects in this anisotropic system. It is explicitly shown that the domain size increases with the system size as a result of considering longer range of dipolar interactions. The system exhibits pronounced system size effects for stronger dipolar interactions. The results provide strong evidence to the novel understanding that the dipolar interactions are indeed sufficient to produce long range ferroelectric order in anisotropic fluids.
Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study
Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.; Satogata, Todd J.; Williams, David C.; Schulte, Reinhard W. [Departments of Radiology, Computer Science, and Physics and Astronomy, State University of New York at Stony Brook, Stony Brook, New York 11794 (United States); Department of Physics, Brookhaven National Laboratory, Upton, New York 11973 (United States); Santa Cruz Institute for Particle Physics, University of California at Santa Cruz, Santa Cruz, California 95064 (United States); Department of Radiation Medicine, Loma Linda University Medical Center, Loma Linda, California 92354 (United States)
2006-03-15T23:59:59.000Z
Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm{sup -1}] to the curved CSP and MLP path estimates (5 lp cm{sup -1}). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.
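Each ART iteration is a relaxed Kaczmarz projection onto one proton's path-integral equation a·x = p, where a holds the estimated path lengths of the chosen trajectory (SLP, CSP, or MLP) through the image voxels. The sketch below shows the generic update; the function name, arguments, and relaxation value are assumptions, not the paper's notation.

```python
def art_update(x, a, p, relax=0.5):
    """One relaxed Kaczmarz/ART update: move the current image estimate x
    toward the hyperplane a . x = p defined by one proton's measured path
    integral p and its estimated voxel path-length vector a."""
    residual = p - sum(ai * xi for ai, xi in zip(a, x))
    norm2 = sum(ai * ai for ai in a)
    c = relax * residual / norm2
    return [xi + c * ai for xi, ai in zip(x, a)]
```

Cycling this update over all recorded protons, with the curved CSP or MLP estimate supplying the vector a, is what yields the resolution gain over the straight-line path.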
Chatterjee, Abhijit [Los Alamos National Laboratory; Voter, Arthur [Los Alamos National Laboratory
2009-01-01T23:59:59.000Z
We develop a variation of the temperature accelerated dynamics (TAD) method, called the p-TAD method, that efficiently generates an on-the-fly kinetic Monte Carlo (KMC) process catalog with control over the accuracy of the catalog. It is assumed that transition state theory is valid. The p-TAD method guarantees that processes relevant at the timescales of interest to the simulation are present in the catalog with a chosen confidence. A confidence measure associated with the process catalog is derived. The dynamics is then studied using the process catalog with the KMC method. Effective accuracy of a p-TAD calculation is derived when a KMC catalog is reused for conditions different from those the catalog was originally generated for. Different KMC catalog generation strategies that exploit the features of the p-TAD method and ensure higher accuracy and/or computational efficiency are presented. The accuracy and the computational requirements of the p-TAD method are assessed. Comparisons to the original TAD method are made. As an example, we study dynamics in sub-monolayer Ag/Cu(110) at the time scale of seconds using the p-TAD method. It is demonstrated that the p-TAD method overcomes several challenges plaguing the conventional KMC method.
Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem
Du, X.; Liu, T.; Ji, W.; Xu, X. G. [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Brown, F. B. [Monte Carlo Codes Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2013-07-01T23:59:59.000Z
Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', which means that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely affects the performance of GPUs. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - was used for this study. ARCHER facilitates the development and testing of a MC code based on the vectorized MC algorithm implemented on GPUs by using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER{sub GPU} code was designed to solve a neutron eigenvalue problem and was tested on a NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances the warp execution efficiency, the overall simulation speed is ten times slower than the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large amount of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed. (authors)
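The contrast between history-based and event-based organization can be sketched as follows: instead of following one history to completion, particles are banked by their next event type so that each bank runs the same code path. This is a schematic of event banking only; real GPU codes such as ARCHER involve far more machinery, and the `particles`/`process_event` interfaces are hypothetical.

```python
def event_based_sweep(particles, process_event):
    """Event-based organization: bank particles by their next event type
    so every bank executes the same code path. On a GPU, each bank maps
    to a warp-coherent kernel launch, avoiding the thread divergence that
    history-based tracking causes when neighboring threads take different
    branches."""
    banks = {}
    for particle in particles:
        banks.setdefault(particle["next_event"], []).append(particle)
    results = []
    for event_type, bank in banks.items():
        # one uniform pass per event type (e.g. "scatter", "absorb")
        results.extend(process_event(event_type, bank))
    return results
```

As the abstract notes, removing divergence is not sufficient by itself: the gather/scatter traffic this reorganization generates can dominate, which is the memory-latency issue ARCHER's profiling exposed.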
MONTE CARLO SIMULATIONS OF THE PHOTOSPHERIC EMISSION IN GAMMA-RAY BURSTS
Begue, D.; Siutsou, I. A.; Vereshchagin, G. V. [University of Roma ''Sapienza'', I-00185, p.le A. Moro 5, Rome (Italy)
2013-04-20T23:59:59.000Z
We studied the decoupling of photons from ultra-relativistic spherically symmetric outflows expanding with constant velocity by means of Monte Carlo simulations. For outflows with finite widths we confirm the existence of two regimes: photon-thick and photon-thin, introduced recently by Ruffini et al. (RSV). The probability density function of the last scattering of photons is shown to be very different in these two cases. We also obtained spectra as well as light curves. In the photon-thick case, the time-integrated spectrum is much broader than the Planck function and its shape is well described by the fuzzy photosphere approximation introduced by RSV. In the photon-thin case, we confirm the crucial role of photon diffusion: the probability density of decoupling has a maximum near the diffusion radius, well below the photosphere. The time-integrated spectrum of the photon-thin case has a Band shape that is produced while the outflow is optically thick, with its peak formed at the diffusion radius.
MONTE CARLO SIMULATIONS OF NONLINEAR PARTICLE ACCELERATION IN PARALLEL TRANS-RELATIVISTIC SHOCKS
Ellison, Donald C.; Warren, Donald C. [Physics Department, North Carolina State University, Box 8202, Raleigh, NC 27695 (United States); Bykov, Andrei M., E-mail: don_ellison@ncsu.edu, E-mail: ambykov@yahoo.com [Ioffe Institute for Physics and Technology, 194021 St. Petersburg (Russian Federation)
2013-10-10T23:59:59.000Z
We present results from a Monte Carlo simulation of a parallel collisionless shock undergoing particle acceleration. Our simulation, which contains parameterized scattering and a particular thermal leakage injection model, calculates the feedback between accelerated particles ahead of the shock, which influence the shock precursor and 'smooth' the shock, and thermal particle injection. We show that there is a transition between nonrelativistic shocks, where the acceleration efficiency can be extremely high and the nonlinear compression ratio can be substantially greater than the Rankine-Hugoniot value, and fully relativistic shocks, where diffusive shock acceleration is less efficient and the compression ratio remains at the Rankine-Hugoniot value. This transition occurs in the trans-relativistic regime and, for the particular parameters we use, occurs around a shock Lorentz factor γ{sub 0} = 1.5. We also find that nonlinear shock smoothing dramatically reduces the acceleration efficiency presumed to occur with large-angle scattering in ultra-relativistic shocks. Our ability to seamlessly treat the transition from ultra-relativistic to trans-relativistic to nonrelativistic shocks may be important for evolving relativistic systems, such as gamma-ray bursts and Type Ibc supernovae. We expect a substantial evolution of shock accelerated spectra during this transition from soft early on to much harder when the blast-wave shock becomes nonrelativistic.
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
Harrisson, G.; Marleau, G. [Inst. of Nuclear Engineering, Ecole Polytechnique de Montreal (Canada)
2012-07-01T23:59:59.000Z
The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06, as well as different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file, have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option for this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the results most consistent with those of SERPENT. (authors)
Collapse transitions in thermosensitive multi-block copolymers: A Monte Carlo study
Rissanou, Anastassia N., E-mail: rissanou@tem.uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, GR-71003 Heraklion Crete, Greece and Archimedes Center for Analysis, Modeling and Computation, University of Crete, P.O. Box 2208, GR-71003 Heraklion Crete (Greece); Tzeli, Despoina S. [Department of Materials Science and Technology, University of Crete, GR-71003 Heraklion Crete (Greece); Anastasiadis, Spiros H. [Department of Chemistry, University of Crete, P.O. Box 2208, 710 03 Heraklion Crete (Greece); Institute of Electronic Structure and Laser, Foundation for Research and Technology-Hellas, GR-71110 Heraklion Crete (Greece); Bitsanis, Ioannis A. [Institute of Electronic Structure and Laser, Foundation for Research and Technology-Hellas, GR-71110 Heraklion Crete (Greece)
2014-05-28T23:59:59.000Z
Monte Carlo simulations are performed on a simple cubic lattice to investigate the behavior of a single linear multiblock copolymer chain of various lengths N. The chain of type (A{sub n}B{sub n}){sub m} consists of alternating A and B blocks, where A is solvophilic and B solvophobic and N = 2nm. The conformations are classified into five cases of globule formation by the solvophobic blocks of the chain. The dependence of globule characteristics on the molecular weight and on the number of blocks that participate in their formation is examined. The focus is on relatively high molecular weight blocks (i.e., N in the range of 500–5000 units) and very different energetic conditions for the two blocks (a very good, almost athermal, solvent for A and a bad solvent for B). A rich phase behavior is observed as a result of the alternating architecture of the multiblock copolymer chain. We are confident that thermodynamic equilibrium has been reached for chains of N up to 2000 units; for longer chains, however, kinetic entrapments are observed. The comparison among equivalent globules consisting of different numbers of B-blocks shows that the more solvophobic blocks constitute the globule, the larger its radius of gyration and the looser its structure. Comparisons between globules formed by the solvophobic blocks of the multiblock copolymer chain and their homopolymer analogs highlight the important role of the solvophilic A-blocks.
Comparison of hybrid and pure Monte Carlo shower generators on an event by event basis
Jeff Allen; Hans-Joachim Drescher; Glennys Farrar
2007-08-21T23:59:59.000Z
SENECA is a hybrid air shower simulation written by H. Drescher that utilizes both Monte Carlo simulation and cascade equations. By using the cascade equations only in the high energy portion of the shower, where the shower is inherently one-dimensional, SENECA exploits the speed advantage of the cascade equations yet still produces complete, three-dimensional particle distributions at ground level that capture the shower-to-shower variations coming from the early interactions. We present a comparison, on an event by event basis, of SENECA and CORSIKA, a well-trusted MC simulation code. By using the same first interaction in both SENECA and CORSIKA, the effect of the cascade equations can be studied within a single shower, rather than averaged over many showers. Our study shows that for showers produced in this manner, SENECA agrees with CORSIKA to very high accuracy with respect to densities, energies, and timing information for individual species of ground-level particles from both iron and proton primaries with energies between 1 EeV and 100 EeV. Used properly, SENECA produces ground particle distributions virtually indistinguishable from those of CORSIKA in a fraction of the time. For example, for a shower induced by a 10 EeV proton, SENECA is 10 times faster than CORSIKA, with comparable accuracy.
Auxiliary-field quantum Monte Carlo calculations of molecular systems with a Gaussian basis
Al-Saidi, W.A.; Zhang Shiwei; Krakauer, Henry [Department of Physics, College of William and Mary, Williamsburg, Virginia 23187-8795 (United States)
2006-06-14T23:59:59.000Z
We extend the recently introduced phaseless auxiliary-field quantum Monte Carlo (QMC) approach to any single-particle basis and apply it to molecular systems with Gaussian basis sets. QMC methods in general scale favorably with the system size, as a low power. A QMC approach with auxiliary fields, in principle, allows an exact solution of the Schrödinger equation in the chosen basis. However, the well-known sign/phase problem causes the statistical noise to increase exponentially. The phaseless method controls this problem by constraining the paths in the auxiliary-field path integrals with an approximate phase condition that depends on a trial wave function. In the present calculations, the trial wave function is a single Slater determinant from a Hartree-Fock calculation. The calculated all-electron total energies show typical systematic errors of no more than a few millihartrees compared to exact results. At equilibrium geometries in the molecules we studied, this accuracy is roughly comparable to that of coupled cluster with single and double excitations and with noniterative triples [CCSD(T)]. For stretched bonds in H{sub 2}O, our method exhibits better overall accuracy and more uniform behavior than CCSD(T).
Hsiao-Ping Hsu; Bernd A. Berg; Peter Grassberger
2004-08-26T23:59:59.000Z
Treating the ambient water realistically is one of the main difficulties in applying Monte Carlo methods to protein folding. The solvent-accessible area method, a popular method for treating water implicitly, is investigated by means of Metropolis simulations of the brain peptide Met-Enkephalin. For the phenomenological energy function ECEPP/2, nine atomic solvation parameter (ASP) sets proposed by previous authors are studied. The simulations are compared with each other, with simulations using a distance dependent electrostatic permittivity $\\epsilon (r)$, and with vacuum simulations ($\\epsilon =2$). Parallel tempering and a recently proposed biased Metropolis technique are employed and their performances are evaluated. The measured observables include energy and dihedral probability densities (pds), integrated autocorrelation times, and acceptance rates. Two of the ASP sets turn out to be unsuitable for these simulations. For all other sets, selected configurations are minimized in search of the global energy minima. Unique minima are found for the vacuum and the $\\epsilon(r)$ system, but for none of the ASP models. Other observables show a remarkable dependence on the ASPs. In particular, autocorrelation times vary dramatically with the ASP parameters. Three ASP sets have much smaller autocorrelations at 300 K than the vacuum simulations, opening the possibility that simulations can be sped up vastly by judiciously choosing details of the force field.
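The parallel tempering employed here rests on a simple replica-exchange criterion; the following is a minimal sketch of the standard swap rule, with inverse temperatures and energies as arbitrary illustrative values rather than anything from the Met-Enkephalin runs.

```python
import math
import random

def attempt_swap(beta_i, beta_j, e_i, e_j, rng):
    """Replica-exchange (parallel tempering) acceptance test: swap the
    configurations of two replicas with probability
    min(1, exp[(beta_i - beta_j) * (e_i - e_j)]),
    which preserves detailed balance across the temperature ladder."""
    delta = (beta_i - beta_j) * (e_i - e_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)
```

Swaps that move the lower-energy configuration to the colder replica are always accepted; the reverse move is accepted with Boltzmann-suppressed probability.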
Feasibility Study of Neutron Dose for Real Time Image Guided Proton Therapy: A Monte Carlo Study
Kim, Jin Sung; Kim, Daehyun; Shin, EunHyuk; Chung, Kwangzoo; Cho, Sungkoo; Ahn, Sung Hwan; Ju, Sanggyu; Chung, Yoonsun; Jung, Sang Hoon; Han, Youngyih
2015-01-01T23:59:59.000Z
Two full rotating gantries with different nozzles (a multipurpose nozzle with MLC and a scanning dedicated nozzle), together with a conventional cyclotron system, are installed and under commissioning for various proton treatment options at Samsung Medical Center in Korea. The purpose of this study is to investigate the neutron dose equivalent per therapeutic dose, H/D, to x-ray imaging equipment under various treatment conditions with Monte Carlo simulation. First, we investigated H/D for various modifications of the beam line devices (scattering, scanning, multi-leaf collimator, aperture, compensator) at the isocenter and at 20, 40, and 60 cm from the isocenter, and compared the results with those of other research groups. Next, we investigated the neutron dose at the x-ray equipment used for real-time imaging under various treatment conditions. Our investigation showed 0.07 ~ 0.19 mSv/Gy at the x-ray imaging equipment, depending on treatment options, and, interestingly, a 50% neutron dose reduction effect of the flat panel detector was observed due to the multi-lea...
Thermodynamics and quark susceptibilities: a Monte-Carlo approach to the PNJL model
M. Cristoforetti; T. Hell; B. Klein; W. Weise
2010-02-11T23:59:59.000Z
The Monte-Carlo method is applied to the Polyakov-loop extended Nambu--Jona-Lasinio (PNJL) model. This leads beyond the saddle-point approximation in a mean-field calculation and introduces fluctuations around the mean fields. We study the impact of fluctuations on the thermodynamics of the model, both in the case of pure gauge theory and including two quark flavors. In the two-flavor case, we calculate the second-order Taylor expansion coefficients of the thermodynamic grand canonical partition function with respect to the quark chemical potential and present a comparison with extrapolations from lattice QCD. We show that the introduction of fluctuations produces only small changes in the behavior of the order parameters for chiral symmetry restoration and the deconfinement transition. On the other hand, we find that fluctuations are necessary in order to reproduce lattice data for the flavor non-diagonal quark susceptibilities. Of particular importance are pion fields, the contribution of which is strictly zero in the saddle-point approximation.
A Monte Carlo simulation study on the wetting behavior of water on graphite surface
Xiongce Zhao
2012-09-20T23:59:59.000Z
This paper is an expanded edition of a rapid communication published several years ago by the author (Phys. Rev. B, v76, 041402(R), 2007) on the simulation of the wetting transition of water on graphite; it provides more details on the methodology, parameters, and results of the study that might be of interest to certain readers. We calculate adsorption isotherms of water on graphite using grand canonical Monte Carlo simulations combined with multiple histogram reweighting, based on the empirical SPC/E potential for water, the 10-4-3 van der Waals model, and a recently developed induction and multipolar potential for water and graphite. Our results show that the wetting transition of water on graphite occurs at 475-480 K, and the prewetting critical temperature lies in the range of 505-510 K. The calculated wetting transition temperature agrees quantitatively with a value previously predicted using a simple model. The observation of the coexistence of stable and metastable states at temperatures between the wetting transition temperature and the prewetting critical temperature indicates that the transition is first order.
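Grand canonical moves of the kind behind such adsorption isotherms use the textbook particle-insertion acceptance rule. A generic sketch follows; the symbols are the standard GCMC quantities, not the paper's specific water/graphite potentials, and the companion deletion move and histogram reweighting are not shown.

```python
import math
import random

def accept_insertion(n, volume, beta, mu, delta_u, lambda3, rng):
    """GCMC particle-insertion test (Metropolis form): accept with
    probability min(1, V / (Lambda^3 (N+1)) * exp(beta * (mu - dU))),
    where dU is the energy change caused by the trial particle."""
    ratio = volume / (lambda3 * (n + 1)) * math.exp(beta * (mu - delta_u))
    return rng.random() < min(1.0, ratio)
```

Sweeping the chemical potential mu at fixed temperature and recording the average particle number maps out one adsorption isotherm.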
Asadi, Somayeh; Masoudi, S Farhad; Rahmani, Faezeh
2014-01-01T23:59:59.000Z
Materials of high atomic number, such as gold, provide a high probability of photon interaction via the photoelectric effect during radiation therapy. In cancer therapy, the object of brachytherapy, as a kind of radiotherapy, is to deliver an adequate radiation dose to the tumor while sparing surrounding healthy tissue. Several studies have demonstrated that the preferential accumulation of gold nanoparticles (GNPs) within the tumor can enhance the dose absorbed by the tumor without increasing the radiation dose delivered externally. Accordingly, the time required for tumor irradiation decreases, as the estimated adequate radiation dose for the tumor is provided by this method. The dose delivered to healthy tissue is reduced when the irradiation time is decreased. Here, the effects of GNPs on choroidal melanoma dosimetry are discussed in a Monte Carlo study. Ophthalmic brachytherapy dosimetry is usually studied by Monte Carlo simulation of a water phantom. Considering the composition and density of eye material instead of water in thes...
Bennett, C.M. [Los Alamos National Lab., NM (United States). Theoretical Div.]|[Oklahoma State Univ., Stillwater, OK (United States). Dept. of Chemistry; Sewell, T.D. [Los Alamos National Lab., NM (United States). Theoretical Div.
1998-12-31T23:59:59.000Z
Isothermal-isobaric Monte Carlo calculations are used in conjunction with an expression that relates the elastic stiffness tensor to the mean-square fluctuations of the strain tensor to obtain first principles predictions of the Young's moduli, shear moduli, and Poisson's ratios for room-temperature crystalline RDX. The results are based on numerical data obtained during previously reported calculations of the hydrostatic compression of RDX over the pressure domain 0 GPa ≤ p ≤ 4 GPa. Although there are no experimental data available for comparison, the predicted values of the engineering coefficients are in accord with general expectations for brittle molecular crystals. The calculations reported here are preliminary: more extensive Monte Carlo realizations are needed to yield well-converged predictions; these are underway for RDX and {beta}-HMX.
Shulenburger, Luke; Desjarlais, M P
2015-01-01T23:59:59.000Z
Motivated by the disagreement between recent diffusion Monte Carlo calculations and experiments on the phase transition pressure between the ambient and beta-Sn phases of silicon, we present a study of the HCP to BCC phase transition in beryllium. This lighter element provides an opportunity for directly testing many of the approximations required for calculations on silicon and may suggest a path towards increasing the practical accuracy of diffusion Monte Carlo calculations of solids in general. We demonstrate that the single largest approximation in these calculations is the pseudopotential approximation. After removing this we find excellent agreement with experiment for the ambient HCP phase and results similar to careful calculations using density functional theory for the phase transition pressure.
V. Dorvilien; C. N. Patra; L. B. Bhuiyan; C. W. Outhwaite
2013-12-17T23:59:59.000Z
The structure of cylindrical double layers is studied using a modified Poisson-Boltzmann theory and the density functional approach. In the model double layer, the electrode is a cylindrical polyion that is infinitely long, impenetrable, and uniformly charged. The polyion is immersed in a sea of equi-sized rigid ions embedded in a dielectric continuum. An in-depth comparison of the theoretically predicted zeta potentials, mean electrostatic potentials, and electrode-ion singlet density distributions is made with the corresponding Monte Carlo simulation data. The theories are consistent in their predictions across variations in ionic diameters, electrolyte concentrations, and electrode surface charge densities, and are also capable of reproducing well some new and existing Monte Carlo results.
Müller, Florian, E-mail: florian.mueller@sam.math.ethz.ch; Jenny, Patrick, E-mail: jenny@ifd.mavt.ethz.ch; Meyer, Daniel W., E-mail: meyerda@ethz.ch
2013-10-01T23:59:59.000Z
Monte Carlo (MC) is a well known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations, and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
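The telescoping estimator at the heart of MLMC can be sketched in a few lines. Here `levels[l]` is a hypothetical coupled sampler returning the fine/coarse pair (P_l, P_{l-1}) for one random realization, with P_{-1} taken as 0; the flow solver itself is of course not included.

```python
import random

def mlmc_estimate(levels, samples_per_level, rng):
    """Multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    Each level correction is averaged independently; coarse levels get
    many cheap samples, fine levels only a few expensive ones."""
    estimate = 0.0
    for l, n in enumerate(samples_per_level):
        correction = 0.0
        for _ in range(n):
            fine, coarse = levels[l](rng)  # coupled pair from level l
            correction += fine - coarse
        estimate += correction / n
    return estimate
```

With deterministic toy levels returning (1.0, 0.0) and (1.5, 1.0), the telescoping sum recovers 1.5, the level-1 value; the speedup in practice comes from the shrinking variance of the corrections.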
Williams, M. L.; Gehin, J. C.; Clarno, K. T. [Oak Ridge National Laboratory, Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)
2006-07-01T23:59:59.000Z
The TSUNAMI computational sequences currently in the SCALE 5 code system provide an automated approach to performing sensitivity and uncertainty analysis for eigenvalue responses, using either one-dimensional discrete ordinates or three-dimensional Monte Carlo methods. This capability has recently been expanded to address eigenvalue-difference responses such as reactivity changes. This paper describes the methodology and presents results obtained for an example advanced CANDU reactor design. (authors)
Approaching the Ground State of a Quantum Spin Glass using a Zero-Temperature Quantum Monte Carlo
Arnab Das; Bikas K. Chakrabarti
2008-03-31T23:59:59.000Z
Here we discuss the annealing behavior of an infinite-range $\\pm J$ Ising spin glass in the presence of a transverse field using a zero-temperature quantum Monte Carlo. Within the simulation scheme, we demonstrate that quantum annealing not only helps find the ground state of a classical spin glass, but can also help simulate the ground state of a quantum spin glass much more efficiently, particularly when the transverse field is low.
The two-phase issue in the O(n) non-linear $\\sigma$-model: A Monte Carlo study
B. Alles; A. Buonanno; G. Cella
1996-08-01T23:59:59.000Z
We have performed a high statistics Monte Carlo simulation to investigate whether the two-dimensional O(n) non-linear sigma models are asymptotically free or show a Kosterlitz-Thouless-like phase transition. We have calculated the mass gap and the magnetic susceptibility in the O(8) model with standard action and the O(3) model with Symanzik action. Our results for O(8) support the asymptotic freedom scenario.
Doma, S B; Amer, A A
2015-01-01T23:59:59.000Z
The ground state energy of the hydrogen molecular ion H{sub 2}{sup +} confined by a hard prolate spheroidal cavity is calculated. The case in which the nuclear positions are clamped at the foci is considered. Our calculations are based on the variational Monte Carlo method with an accurate trial wave function depending on many variational parameters. The calculations were also extended to include the HeH{sup ++} molecular ion. The obtained results are in good agreement with recent results.
Çatlı, Serap, E-mail: serapcatli@hotmail.com [Gazi University, Faculty of Sciences, 06500 Teknikokullar, Ankara (Turkey); Tanır, Güneş [Gazi University, Faculty of Sciences, 06500 Teknikokullar, Ankara (Turkey)
2013-10-01T23:59:59.000Z
The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS), at 6 and 18 MV photon energies. The pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and to ionization chamber measurements. The present findings show that if high-Z material is used in a prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study depended on material type, density, and atomic number, as well as on photon energy; as photon energy increased, backscattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method. The findings show that for accurate dose calculation a Monte Carlo-based TPS should be used in patients with hip prostheses.
Kim, Beop-Min
1991-01-01T23:59:59.000Z
of the computer time is the Monte-Carlo algorithm. Because the scattering coefficient used in this study is large compared to the absorption coefficient, a photon launched into the tissue experiences more absorption-scattering events. Hence more calculations are needed, which results in larger computer time. Also, as time passes, because the increment of the scattering coefficient is dominant, the time needed for running one Monte-Carlo algorithm gradually increases. Due to the large number of calculations needed...
Gu, Heng
2010-01-14T23:59:59.000Z
A NUMERICAL SIMULATION OF THERMAL AND ELECTRICAL PROPERTIES OF NANO-FIBER NETWORK POLYMER COMPOSITES USING PERCOLATION THEORY AND MONTE CARLO METHOD. A Thesis by HENG GU, Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE. Approved by...
Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods
Henzen, D., E-mail: henzen@ams.unibe.ch; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K. [Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, CH-3010 Berne (Switzerland)]; Neuenschwander, H. [Clinic for Radiation-Oncology, Lindenhofspital Bern, CH-3012 Berne (Switzerland)]; Stampanoni, M. F. M. [Institute for Biomedical Engineering, ETH Zürich and Paul Scherrer Institut, CH-5234 Villigen (Switzerland)]
2014-03-15T23:59:59.000Z
Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multi-leaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in the beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and to treatment with a bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single segment electron treatment plan in terms of organs at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V{sub 95%} increased from 90% to 96% and V{sub 107%} decreased from 8% to nearly 0%. 
For a clinical breast boost irradiation, the MERT plan led to a similar homogeneity in the PTV compared to the standard treatment plan, while the mean body dose was lower for the MERT plan. Regarding the second clinical case, a whole breast treatment, MERT resulted in a reduction of the lung volume receiving more than 45% of the prescribed dose when compared to the standard plan. On the other hand, the MERT plan led to a larger low-dose lung volume and a degraded dose homogeneity in the PTV. For the clinical cases evaluated in this work, treatment plans using the BolusECT technique resulted in more homogeneous PTV and CTV coverage but higher doses to the OARs than the MERT plans. Conclusions: MERT treatments were successfully planned for phantom and clinical cases, applying a newly developed, intuitive, and efficient forward planning strategy that employs an MC-based electron beam model for pMLC shaped electron beams. It is shown that MERT can lead to a dose reduction in OARs compared to other methods. The process of feathering MERT segments results in an improvement of the dose homogeneity in the PTV.
Minibeam radiation therapy for the management of osteosarcomas: A Monte Carlo study
Martínez-Rovira, I.; Prezado, Y., E-mail: prezado@gmail.com [Laboratoire d’Imagerie et Modélisation en Neurobiologie et Cancérologie (IMNC), Centre National de la Recherche Scientifique (CNRS), Campus universitaire, Bât. 440, 1er étage, 15 rue Georges Clemenceau, 91406 Orsay cedex (France)
2014-06-15T23:59:59.000Z
Purpose: Minibeam radiation therapy (MBRT) exploits the well-established tissue-sparing effect provided by the combination of submillimetric field sizes and a spatial fractionation of the dose. The aim of this work is to evaluate the feasibility and potential therapeutic gain of MBRT, in comparison with conventional radiotherapy, for osteosarcoma treatments. Methods: Monte Carlo simulations (PENELOPE/PENEASY code) were used to study the dose distributions resulting from MBRT irradiations of a rat femur phantom and a realistic human femur phantom. As figures of merit, peak and valley doses and peak-to-valley dose ratios (PVDR) were assessed. Conversion of absorbed dose to normalized total dose (NTD) was performed in the human case. Several field sizes and irradiation geometries were evaluated. Results: It is feasible to deliver a uniform dose distribution in the target while the healthy tissue benefits from a spatial fractionation of the dose. Very high PVDR values (∼20) were achieved in the entrance beam path in the rat case. PVDR values ranged from 2 to 9 in the human phantom. NTD{sub 2.0} of 87 Gy might be reached in the tumor in the human femur while the healthy tissues might receive valley NTD{sub 2.0} lower than 20 Gy. The doses in the tumor and healthy tissues might thus be significantly higher and lower, respectively, than those commonly delivered in conventional radiotherapy. Conclusions: The obtained dose distributions indicate that a gain in normal tissue sparing might be expected. This would allow the use of higher (and potentially curative) doses in the tumor. Biological experiments are warranted.
Local and chain dynamics in miscible polymer blends: A Monte Carlo simulation study
Jutta Luettmer-Strathmann; Manjeera Mantina
2005-11-07T23:59:59.000Z
Local chain structure and local environment play an important role in the dynamics of polymer chains in miscible blends. In general, the friction coefficients that describe the segmental dynamics of the two components in a blend differ from each other and from those of the pure melts. In this work, we investigate polymer blend dynamics with Monte Carlo simulations of a generalized bond-fluctuation model, where differences in the interaction energies between non-bonded nearest neighbors distinguish the two components of a blend. Simulations employing only local moves and respecting a non-bond crossing condition were carried out for blends with a range of compositions, densities, and chain lengths. The blends investigated here have long-chain dynamics in the crossover region between Rouse and entangled behavior. In order to investigate the scaling of the self-diffusion coefficients, characteristic chain lengths $N_\\mathrm{c}$ are calculated from the packing length of the chains. These are combined with a local mobility $\\mu$ determined from the acceptance rate and the effective bond length to yield characteristic self-diffusion coefficients $D_\\mathrm{c}=\\mu/N_\\mathrm{c}$. We find that the data for both melts and blends collapse onto a common line in a graph of reduced diffusion coefficients $D/D_\\mathrm{c}$ as a function of reduced chain length $N/N_\\mathrm{c}$. The composition dependence of dynamic properties is investigated in detail for melts and blends with chains of length twenty at three different densities. For these blends, we calculate friction coefficients from the local mobilities and consider their composition and pressure dependence. The friction coefficients determined in this way show many of the characteristics observed in experiments on miscible blends.
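The collapse described in the abstract amounts to rescaling each measured pair (N, D) by the characteristic values. A sketch of that reduction follows; the numbers used in the example are hypothetical measurements, not values from the paper.

```python
def reduce_diffusion(n, d, mobility, n_c):
    """Rescale chain length and self-diffusion coefficient by the
    characteristic values of the abstract: N_c from the packing length
    and D_c = mu / N_c. The reduced pairs (N/N_c, D/D_c) from melts and
    blends should then fall on a common master curve."""
    d_c = mobility / n_c
    return n / n_c, d / d_c
```

For instance, with a local mobility mu = 2.0 and N_c = 20, a chain of length N = 40 with measured D = 0.5 reduces to the point (2.0, 5.0) on the master plot.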
Adsorption of branched and dendritic polymers onto flat surfaces: A Monte Carlo study
Sommer, J.-U. [Leibniz Institute of Polymer Research Dresden e. V., 01069 Dresden (Germany); Institute for Theoretical Physics, Technische Universität Dresden, 01069 Dresden (Germany)]; Kłos, J. S. [Leibniz Institute of Polymer Research Dresden e. V., 01069 Dresden (Germany); Faculty of Physics, A. Mickiewicz University, Umultowska 85, 61-614 Poznań (Poland)]; Mironova, O. N. [Leibniz Institute of Polymer Research Dresden e. V., 01069 Dresden (Germany)]
2013-12-28T23:59:59.000Z
Using Monte Carlo simulations based on the bond fluctuation model we study the adsorption of starburst dendrimers with flexible spacers onto a flat surface. The calculations are performed for various generation numbers G and spacer lengths S in a wide range of the reduced temperature ε, the measure of the interaction strength between the monomers and the surface. Our simulations indicate a two-step adsorption scenario. Below the critical point of adsorption, ε{sub c}, a weakly adsorbed state of the dendrimer is found. Here, the dendrimer retains its shape but sticks to the surface by adsorbed spacers. By lowering the temperature below a spacer-length dependent value, ε*(S) < ε{sub c}, a step-like transition into a strongly adsorbed state takes place. In the flatly adsorbed state the shape of the dendrimer is well described by a mean field model of a dendrimer in two dimensions. We also performed simulations of star polymers, which display a simple crossover behavior in full analogy to linear chains. By analyzing the order parameter of the adsorption transition, we determine the critical point of adsorption of the dendrimers, which is located close to the critical point of adsorption for star polymers. While the order parameter for the adsorbed spacers displays a critical crossover scaling, the overall order parameter, which combines both critical and discontinuous transition effects, does not display simple scaling. The step-like transition from the weakly into the strongly adsorbed regime is confirmed by analyzing the shape anisotropy of the dendrimers. We present a mean-field model based on the concept of spacer adsorption which predicts a discontinuous transition of dendrimers due to an excluded volume barrier. The latter results from an increased density of the dendrimer in the flatly adsorbed state which has to be overcome before this state is thermodynamically stable.
Lombardo, S.J. (California Inst. of Tech., Pasadena, CA (USA). Dept. of Chemical Engineering Lawrence Berkeley Lab., CA (USA))
1990-08-01T23:59:59.000Z
The kinetics of temperature-programmed and isothermal desorption have been simulated with a Monte Carlo model. Included in the model are the elementary steps of adsorption, surface diffusion, and desorption. Interactions between adsorbates and the metal, as well as interactions between the adsorbates, are taken into account with the Bond-Order-Conservation-Morse-Potential method. The shape, number, and location of the TPD peaks predicted by the simulations are shown to be sensitive to the binding energy, coverage, and coordination of the adsorbates. In addition, the occurrence of lateral interactions between adsorbates is seen to strongly affect the distribution of adsorbates on the surface. Temperature-programmed desorption spectra of a single type of adsorbate have been simulated for the following adsorbate-metal systems: CO on Pd(100); H{sub 2} on Mo(100); and H{sub 2} on Ni(111). The model predictions are in good agreement with experimental observation. TPD spectra have also been simulated for two species coadsorbed on a surface; the model predictions are in qualitative agreement with the experimental results for H{sub 2} coadsorbed with strongly bound atomic species on Mo(100) and Fe(100) surfaces as well as for CO and H{sub 2} coadsorbed on Ni(100) and Rh(100) surfaces. Finally, the desorption kinetics of CO from Pd(100) and Ni(100) in the presence of gas-phase CO have been examined. The effect of pressure is seen to lead to an increase in the rate of desorption relative to the rate observed in the absence of gas-phase CO. This increase arises as a consequence of higher coverages and therefore stronger lateral interactions between the adsorbed CO molecules.
SU-E-T-238: Monte Carlo Estimation of Cerenkov Dose for Photo-Dynamic Radiotherapy
Chibani, O; Price, R; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States); Eldib, A [Fox Chase Cancer Center, Philadelphia, PA (United States); University Cairo (Egypt); Mora, G [de Lisboa, Codex, Lisboa (Portugal)
2014-06-01T23:59:59.000Z
Purpose: Estimation of the Cerenkov dose from high-energy megavoltage photon and electron beams in tissue and its impact on radiosensitization using Protoporphyrin IX (PpIX) for tumor targeting enhancement in radiotherapy. Methods: The GEPTS Monte Carlo code is used to generate dose distributions from an 18 MV Varian photon beam and generic high-energy (45-MV) photon and (45-MeV) electron beams in a voxel-based tissue-equivalent phantom. In addition to calculating the ionization dose, the code scores the Cerenkov energy released in the wavelength range 375–425 nm, corresponding to the peak of the PpIX absorption spectrum (Fig. 1), using the Frank-Tamm formula. Results: The simulations show that the produced Cerenkov dose suitable for activating PpIX is 4000 to 5500 times lower than the overall radiation dose for all considered beams (18 MV, 45 MV, and 45 MeV). These results contradict the recent experimental studies by Axelsson et al. (Med. Phys. 38 (2011) p 4127), where the Cerenkov dose was reported to be only two orders of magnitude lower than the radiation dose. Note that our simulation results can be corroborated by a simple model where the Frank-Tamm formula is applied for electrons with a 2 MeV/cm stopping power generating Cerenkov photons in the 375–425 nm range, assuming these photons have less than 1 mm penetration in tissue. Conclusion: The Cerenkov dose generated by high-energy photon and electron beams may produce minimal clinical effect in comparison with the photon fluence (or dose) commonly used for photo-dynamic therapy. At the present time, it is unclear whether Cerenkov radiation is a significant contributor to the recently observed tumor regression for patients receiving radiotherapy and PpIX versus patients receiving radiotherapy only. The ongoing study will include animal experimentation and investigation of dose rate effects on PpIX response.
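The order-of-magnitude argument above can be reproduced with the Frank-Tamm photon yield integrated over the 375–425 nm PpIX activation band. This is a sketch, not the GEPTS implementation; it assumes relativistic electrons (β ≈ 1) in a water-like medium with refractive index n ≈ 1.33:

```python
import math

def cerenkov_photons_per_cm(wl_lo_nm, wl_hi_nm, n=1.33, beta=1.0):
    """Frank-Tamm Cerenkov photon yield per cm of charged-particle track:
    dN/dx = 2*pi*alpha * (1 - 1/(beta^2 * n^2)) * (1/wl_lo - 1/wl_hi),
    integrated over the wavelength band [wl_lo, wl_hi]."""
    alpha = 1.0 / 137.036  # fine-structure constant
    band = 1.0 / (wl_lo_nm * 1e-7) - 1.0 / (wl_hi_nm * 1e-7)  # 1/cm
    return 2.0 * math.pi * alpha * (1.0 - 1.0 / (beta**2 * n**2)) * band

# Tens of photons per cm of track in the 375-425 nm band
yield_per_cm = cerenkov_photons_per_cm(375.0, 425.0)
```

Multiplying such a yield by the mean photon energy and comparing with the ~2 MeV/cm collision stopping power quoted above makes the large ratio between ionization dose and Cerenkov "dose" plausible.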
Singh, Jayant K.
[Fragmentary abstract] Gibbs ensemble Monte Carlo (GEMC), introduced by Panagiotopoulos, greatly enhanced the ability to predict phase behavior; the work reports densities and vapor pressures of select n-alkanes, and surface tension values for butane, hexane, and octane.
Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method
Basire, M.; Soudan, J.-M.; Angelié, C., E-mail: christian.angelie@cea.fr [Laboratoire Francis Perrin, CNRS-URA 2453, CEA/DSM/IRAMIS/LIDyL, F-91191 Gif-sur-Yvette Cedex (France)
2014-09-14T23:59:59.000Z
The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotube growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the ?-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, g{sub p}(E{sub p}), in the configurational space, in terms of the potential energy of the system, with good and well-controlled convergence properties, particularly in the melting phase transition zone which is of interest in this work. To describe the system, an iron potential has been implemented, called “corrected EAM” (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at and a computational speed five times larger. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients S{sub ij} are exactly computed with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone, but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature T{sub m} is plotted in terms of the cluster atom number N{sub at}. The standard N{sub at}{sup −1/3} linear dependence (Pawlow law) is observed for N{sub at} > 300, allowing an extrapolation up to the bulk metal at 1940 ± 50 K. For N{sub at} < 150, a strong divergence from the Pawlow law is observed. The melting transition, which begins at the surface, is characterized by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I.
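The bulk-limit extrapolation described above, with T_m linear in N_at^(−1/3) (Pawlow law), can be sketched as an ordinary least-squares fit. The data points below are synthetic, chosen only to illustrate the extrapolation, not the paper's computed melting temperatures:

```python
def pawlow_extrapolate(n_atoms, t_melt):
    """Fit T_m = T_bulk - c * N^(-1/3) by least squares in the variable
    x = N^(-1/3), and return the intercept T_bulk (the x -> 0 limit,
    i.e., the extrapolated bulk melting temperature)."""
    xs = [n ** (-1.0 / 3.0) for n in n_atoms]
    m = len(xs)
    mx = sum(xs) / m
    my = sum(t_melt) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, t_melt))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx  # intercept = T_bulk

# Synthetic points lying exactly on T_m = 1940 - 2000 * N^(-1/3)
sizes = [300, 600, 1200, 2452]
temps = [1940.0 - 2000.0 * n ** (-1.0 / 3.0) for n in sizes]
```

Restricting the fit to N_at > 300, as the paper does, matters: the abstract notes a strong divergence from this linear law for the smallest clusters.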
SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations
Ono, T; Araki, F [Faculty of Life Sciences, Kumamoto University, Kumamoto (Japan)
2014-06-01T23:59:59.000Z
Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using the GMctdospp (IMPS, Germany) based on the EGSnrc user code. The X-ray spectra and a bowtie filter for MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profile in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), the mean doses for kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses were considerably different from CTDIvol values. Conclusion: MC dose distributions calculated by using patient CT images are useful to evaluate organ doses absorbed by individual patients.
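Evaluating a mean organ dose from a dose-volume histogram, as done above, reduces to a volume-weighted average over dose bins. This is a generic sketch of that step, not the GMctdospp implementation; the bin values are hypothetical:

```python
def mean_organ_dose(dose_bins, volumes):
    """Mean dose over an organ from a differential DVH:
    sum(d_i * v_i) / sum(v_i), where v_i is the organ volume
    (absolute or fractional) receiving dose d_i (bin center)."""
    total_volume = sum(volumes)
    return sum(d * v for d, v in zip(dose_bins, volumes)) / total_volume

# Hypothetical DVH: half the organ at 20 mGy, half at 26 mGy -> 23 mGy mean
brain_stem_mean = mean_organ_dose([20.0, 26.0], [0.5, 0.5])
```

In practice the organ contour on the patient CT defines which voxels contribute to each v_i, which is why the resulting organ doses can differ substantially from the scanner-reported CTDIvol.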
Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy
Henzen, D., E-mail: henzen@ams.unibe.ch; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K. [Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, CH-3010 Berne (Switzerland)] [Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, CH-3010 Berne (Switzerland); Neuenschwander, H. [Clinic for Radiation-Oncology, Lindenhofspital Bern, CH-3012 Berne (Switzerland)] [Clinic for Radiation-Oncology, Lindenhofspital Bern, CH-3012 Berne (Switzerland); Stampanoni, M. F. M. [Institute for Biomedical Engineering, ETH Zürich and Paul Scherrer Institut, CH-5234 Villigen (Switzerland)] [Institute for Biomedical Engineering, ETH Zürich and Paul Scherrer Institut, CH-5234 Villigen (Switzerland)
2014-02-15T23:59:59.000Z
Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm{sup 2} fields differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies. 
For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. Conclusions: The results of the dose comparisons suggest that the developed beam model is suitable to accurately reconstruct photon MLC shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.
Search for New Heavy Higgs Boson in B-L model at the LHC using Monte Carlo Simulation
Hesham Mansour; Nady Bakhet
2013-04-24T23:59:59.000Z
The aim of this work is to search for a new heavy Higgs boson in the B-L extension of the Standard Model at the LHC, using data produced from simulated proton-proton collisions at different center-of-mass energies by Monte Carlo event generator programs, in order to find new Higgs boson signatures at the LHC. We also study the production and decay channels of the Higgs boson in this model and its interactions with the other new particles of the model, namely the new massive neutral gauge boson and the new fermionic right-handed heavy neutrinos.
Qin, Z.; Shoesmith, D.W. [The University of Western Ontario, London, Ontario, N6A 5B7 (Canada)
2007-07-01T23:59:59.000Z
Based on a previously proposed probabilistic model, a Monte Carlo simulation code (EBSPA) has been developed to predict the lifetime of the engineered barrier system within the Yucca Mountain nuclear waste repository. The degradation modes considered in the EBSPA are general passive corrosion and hydrogen-induced cracking for the drip shield; and general passive corrosion, crevice corrosion, and stress corrosion cracking for the waste package. Two scenarios have been simulated using the EBSPA code: (a) a conservative scenario for the conditions thought likely to prevail in the repository, and (b) an aggressive scenario in which the impact of the degradation processes is overstated. (authors)
Leman, S.W.; McCarthy, K.A.; /MIT, MKI; Brink, P.L.; Cabrera, B.; Cherry, M.; /Stanford U., Phys. Dept.; Silva, E.Do Couto E; /SLAC; Figueroa-Feliciano, E.; /MIT, MKI; Kim, P.; /SLAC; Mirabolfathi, N.; /UC, Berkeley; Pyle, M.; /Stanford U., Phys. Dept.; Resch, R.; /SLAC; Sadoulet, B.; Serfass, B.; Sundqvist, K.M.; /UC, Berkeley; Tomada, A.; /Stanford U., Phys. Dept.; Young, B.A.; /Santa Clara U.
2012-06-05T23:59:59.000Z
We present results on phonon quasidiffusion and Transition Edge Sensor (TES) studies in a large, 3-inch diameter, 1-inch thick [100] high purity germanium crystal, cooled to 50 mK in the vacuum of a dilution refrigerator and exposed to 59.5 keV gamma-rays from an Am-241 calibration source. We compare calibration data with results from a Monte Carlo simulation which includes phonon quasidiffusion and the generation of phonons created by charge carriers as they are drifted across the detector by the ionization readout channels. The phonon energy is then parsed into TES based phonon readout channels and input into a TES simulator.
Montoya, M; Rojas, J
2007-01-01T23:59:59.000Z
The mass and kinetic energy distributions of nuclear fragments from thermal neutron induced fission of 235U have been studied using a Monte Carlo simulation. Besides reproducing the pronounced broadening in the standard deviation of the final fragment kinetic energy distribution $\\sigma_{e}(m)$ around the mass number m = 109, our simulation also produces a second broadening around m = 125, which is in agreement with the experimental data obtained by Belhafaf et al. These results are a consequence of the characteristics of the neutron emission, the variation in the primary fragment mean kinetic energy, and the yield as a function of the mass.
M. V. Ulybyshev; M. I. Katsnelson
2015-02-04T23:59:59.000Z
We study the electronic properties of graphene with a finite concentration of vacancies or other resonant scatterers by straightforward lattice Quantum Monte Carlo calculations. Taking into account the realistic long-range Coulomb interaction, we calculate the distribution of spin density associated with midgap states and demonstrate antiferromagnetic ordering. Energy gaps open due to the interaction effects, both in the bare graphene spectrum and in the vacancy/impurity bands. In the case of a 5% concentration of resonant scatterers the latter gap is estimated as 0.7 eV and 1.1 eV for graphene on boron nitride and freely suspended graphene, respectively.
Demchik, Vadim
2013-01-01T23:59:59.000Z
The multi-GPU open-source package QCDGPU for lattice Monte Carlo simulations of pure SU(N) gluodynamics in an external magnetic field at finite temperature, and of the O(N) model, is developed. The code is implemented in OpenCL, tested on AMD and NVIDIA GPUs and on AMD and Intel CPUs, and may run on other OpenCL-compatible devices. The package contains minimal external library dependencies and is OS platform-independent. It is optimized for heterogeneous computing due to the possibility of dividing the lattice into non-equivalent parts to hide the difference in performance of the devices used. QCDGPU has a client-server part for distributed simulations. The package is designed to produce lattice gauge configurations as well as to analyze previously generated ones. QCDGPU may be executed in fault-tolerant mode. The Monte Carlo procedure core is based on the PRNGCL library for pseudo-random number generation on OpenCL-compatible devices, which contains several of the most popular pseudo-random number generators.
Biondo, Elliott D [ORNL; Ibrahim, Ahmad M [ORNL; Mosher, Scott W [ORNL; Grove, Robert E [ORNL
2015-01-01T23:59:59.000Z
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Dominik Smith; Lorenz von Smekal
2014-03-14T23:59:59.000Z
We report on Hybrid-Monte-Carlo simulations of the tight-binding model with long-range Coulomb interactions for the electronic properties of graphene. We investigate the spontaneous breaking of sublattice symmetry corresponding to a transition from the semimetal to an antiferromagnetic insulating phase. Our short-range interactions thereby include the partial screening due to electrons in higher energy states from ab initio calculations based on the constrained random phase approximation [T.O.Wehling {\\it et al.}, Phys.Rev.Lett.{\\bf 106}, 236805 (2011)]. In contrast to a similar previous Monte-Carlo study [M.V.Ulybyshev {\\it et al.}, Phys.Rev.Lett.{\\bf 111}, 056801 (2013)] we also include a phenomenological model which describes the transition to the unscreened bare Coulomb interactions of graphene at half filling in the long-wavelength limit. Our results show, however, that the critical coupling for the antiferromagnetic Mott transition is largely insensitive to the strength of these long-range Coulomb tails. They hence confirm the prediction that suspended graphene remains in the semimetal phase when a realistic static screening of the Coulomb interactions is included.
Kawano, T; Weidenmüller, H A
2015-01-01T23:59:59.000Z
Using a random-matrix approach and Monte-Carlo simulations, we generate scattering matrices and cross sections for compound-nucleus reactions. In the absence of direct reactions we compare the average cross sections with the analytic solution given by the Gaussian Orthogonal Ensemble (GOE) triple integral, and with predictions of statistical approaches such as the ones due to Moldauer, to Hofmann, Richert, Tepel, and Weidenm\\"{u}ller, and to Kawai, Kerman, and McVoy. We find perfect agreement with the GOE triple integral and display the limits of validity of the latter approaches. We establish a criterion for the width of the energy-averaging interval such that the relative difference between the ensemble-averaged and the energy-averaged scattering matrices lies below a given bound. Direct reactions are simulated in terms of an energy-independent background matrix. In that case, cross sections averaged over the ensemble of Monte-Carlo simulations fully agree with results from the Engelbrecht-Weidenm\\"{u}ller ...
Matthew G. Baring; Keith Ogilvie; Donald Ellison; Robert Forsyth
1996-10-02T23:59:59.000Z
The most stringent test of theoretical models of the first-order Fermi mechanism at collisionless astrophysical shocks is a comparison of the theoretical predictions with observational data on particle populations. Such comparisons have yielded good agreement between observations at the quasi-parallel portion of the Earth's bow shock and three theoretical approaches, including Monte Carlo kinetic simulations. This paper extends such model testing to the realm of oblique interplanetary shocks: here observations of proton and alpha particle distributions made by the SWICS ion mass spectrometer on Ulysses at nearby interplanetary shocks are compared with test particle Monte Carlo simulation predictions of accelerated populations. The plasma parameters used in the simulation are obtained from measurements of solar wind particles and the magnetic field upstream of individual shocks. Good agreement between downstream spectral measurements and the simulation predictions is obtained for two shocks by allowing the ratio of the mean free scattering length to the ionic gyroradius to vary in an optimization of the fit to the data. Generally small values of this ratio are obtained, corresponding to the case of strong scattering. The acceleration process appears to be roughly independent of the mass or charge of the species.
Axel Hoefer; Oliver Buss; Maik Hennebach; Michael Schmid; Dieter Porsch
2014-11-12T23:59:59.000Z
MOCABA is a combination of Monte Carlo sampling and Bayesian updating algorithms for the prediction of integral functions of nuclear data, such as reactor power distributions or neutron multiplication factors. Similarly to the established Generalized Linear Least Squares (GLLS) methodology, MOCABA offers the capability to utilize integral experimental data to reduce the prior uncertainty of integral observables. The MOCABA approach, however, does not involve any series expansions and, therefore, does not suffer from the breakdown of first-order perturbation theory for large nuclear data uncertainties. This is related to the fact that, in contrast to the GLLS method, the updating mechanism within MOCABA is applied directly to the integral observables without having to "adjust" any nuclear data. A central part of MOCABA is the nuclear data Monte Carlo program NUDUNA, which performs random sampling of nuclear data evaluations according to their covariance information and converts them into libraries for transport code systems like MCNP or SCALE. What is special about MOCABA is that it can be applied to any integral function of nuclear data, and any integral measurement can be taken into account to improve the prediction of an integral observable of interest. In this paper we present two example applications of the MOCABA framework: the prediction of the neutron multiplication factor of a water-moderated PWR fuel assembly based on 21 criticality safety benchmark experiments and the prediction of the power distribution within a toy model reactor containing 100 fuel assemblies.
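The updating mechanism described above, which acts directly on integral observables rather than adjusting nuclear data, reduces for jointly Gaussian observables to a standard conditional-normal (Bayesian) update. The sketch below illustrates that idea under the Gaussian assumption, with made-up numbers; in MOCABA the means and covariances would come from NUDUNA-sampled transport calculations:

```python
def bayesian_update(mu_a, var_a, mu_b, var_b, cov_ab, y_meas, var_meas):
    """Update an application observable A (e.g. a k_eff of interest)
    given a measurement y_meas of a correlated benchmark observable B,
    assuming (A, B) are jointly Gaussian. Returns posterior mean and
    variance of A."""
    s = var_b + var_meas          # total variance of the benchmark comparison
    gain = cov_ab / s             # regression of A on the benchmark residual
    mu_post = mu_a + gain * (y_meas - mu_b)
    var_post = var_a - gain * cov_ab
    return mu_post, var_post

# Hypothetical: prior k_eff 1.010 +/- 0.005, correlated benchmark
# (correlation 0.8) calculated at 1.005 +/- 0.004, measured at 1.000 +/- 0.001
mu, var = bayesian_update(1.010, 0.005**2, 1.005, 0.004**2,
                          0.8 * 0.005 * 0.004, 1.000, 0.001**2)
```

The posterior variance is always at most the prior variance, which is the sense in which integral experiments "reduce the prior uncertainty" of the application observable; with many benchmarks the scalars above become vectors and covariance matrices.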
Griesheimer, D. P. [Bettis Atomic Power Laboratory, P.O. Box 79, West Mifflin, PA 15122 (United States); Stedry, M. H. [Knolls Atomic Power Laboratory, P.O. Box 1072, Schenectady, NY 12301 (United States)
2013-07-01T23:59:59.000Z
A rigorous treatment of energy deposition in a Monte Carlo transport calculation, including coupled transport of all secondary and tertiary radiations, increases the computational cost of a simulation dramatically, making fully-coupled heating impractical for many large calculations, such as 3-D analysis of nuclear reactor cores. However, in some cases, the added benefit from a full-fidelity energy-deposition treatment is negligible, especially considering the increased simulation run time. In this paper we present a generalized framework for the in-line calculation of energy deposition during steady-state Monte Carlo transport simulations. This framework gives users the ability to select among several energy-deposition approximations with varying levels of fidelity. The paper describes the computational framework, along with derivations of four energy-deposition treatments. Each treatment uses a unique set of self-consistent approximations, which ensure that energy balance is preserved over the entire problem. By providing several energy-deposition treatments, each with different approximations for neglecting the energy transport of certain secondary radiations, the proposed framework provides users the flexibility to choose between accuracy and computational efficiency. Numerical results are presented, comparing heating results among the four energy-deposition treatments for a simple reactor/compound shielding problem. The results illustrate the limitations and computational expense of each of the four energy-deposition treatments. (authors)
Paris-Sud XI, Université de
[Fragmentary abstract] In Single Photon Emission Computed Tomography (SPECT), the qualitative and quantitative accuracy of images is degraded by physical effects, namely photon attenuation; Monte Carlo datasets are currently under investigation. Keywords: single photon emission computed tomography; Monte Carlo.
Monte Carlo calculations of electron beam quality conversion factors for several ion chamber types
Muir, B. R., E-mail: Bryan.Muir@nrc-cnrc.gc.ca [Measurement Science and Standards, National Research Council Canada, 1200 Montreal Road, Ottawa, Ontario K1A 0R6 (Canada); Rogers, D. W. O., E-mail: drogers@physics.carleton.ca [Carleton Laboratory for Radiotherapy Physics, Physics Department, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada)
2014-11-01T23:59:59.000Z
Purpose: To provide a comprehensive investigation of electron beam reference dosimetry using Monte Carlo simulations of the response of 10 plane-parallel and 18 cylindrical ion chamber types. Specific emphasis is placed on the determination of the optimal shift of the chambers’ effective point of measurement (EPOM) and beam quality conversion factors. Methods: The EGSnrc system is used for calculations of the absorbed dose to gas in ion chamber models and the absorbed dose to water as a function of depth in a water phantom on which cobalt-60 and several electron beam source models are incident. The optimal EPOM shifts of the ion chambers are determined by comparing calculations of R{sub 50} converted from I{sub 50} (calculated using ion chamber simulations in phantom) to R{sub 50} calculated using simulations of the absorbed dose to water vs depth in water. Beam quality conversion factors are determined as the calculated ratio of the absorbed dose to water to the absorbed dose to air in the ion chamber at the reference depth in a cobalt-60 beam to that in electron beams. Results: For most plane-parallel chambers, the optimal EPOM shift is inside of the active cavity but different from the shift determined with water-equivalent scaling of the front window of the chamber. These optimal shifts for plane-parallel chambers also reduce the scatter of beam quality conversion factors, k{sub Q}, as a function of R{sub 50}. The optimal shift of cylindrical chambers is found to be less than the 0.5 r{sub cav} recommended by current dosimetry protocols. In most cases, the values of the optimal shift are close to 0.3 r{sub cav}. Values of k{sub ecal} are calculated and compared to those from the TG-51 protocol and differences are explained using accurate individual correction factors for a subset of ion chambers investigated. High-precision fits to beam quality conversion factors normalized to unity in a beam with R{sub 50} = 7.5 cm (k{sub Q}{sup ′}) are provided.
These factors avoid the use of gradient correction factors as used in the TG-51 protocol although a chamber dependent optimal shift in the EPOM is required when using plane-parallel chambers while no shift is needed with cylindrical chambers. The sensitivity of these results to parameters used to model the ion chambers is discussed and the uncertainty related to the practical use of these results is evaluated. Conclusions: These results will prove useful as electron beam reference dosimetry protocols are being updated. The analysis of this work indicates that cylindrical ion chambers may be appropriate for use in low-energy electron beams but measurements are required to characterize their use in these beams.
Zen, Andrea; Sorella, Sandro; Guidoni, Leonardo
2013-01-01T23:59:59.000Z
Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with the system size and their efficient parallelization, particularly suited to modern high performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely: the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported.
Zhou, X. W., E-mail: xzhou@sandia.gov [Mechanics of Materials Department, Sandia National Laboratories, Livermore, California 94550 (United States); Yang, N. Y. C. [Energy Nanomaterials Department, Sandia National Laboratories, Livermore, California 94550 (United States)
2014-03-14T23:59:59.000Z
Electronic properties of semiconductor devices are sensitive to defects such as second phase precipitates, grain sizes, and voids. These defects can evolve over time, especially under oxidizing environments, and it is therefore important to understand the resulting aging behavior for the reliable application of devices. In this paper, we propose a kinetic Monte Carlo framework capable of simultaneously simulating the evolution of second phase precipitates, grain sizes, and voids in complicated systems involving many species, including oxygen. This kinetic Monte Carlo model calculates the energy barriers of the various events directly from experimental data. As a first step of our model implementation, we incorporate the second phase formation module in the parallel kinetic Monte Carlo code SPPARKS. Selected aging simulations are performed to examine the formation of second phase precipitates at the electroplated Au/Bi{sub 2}Te{sub 3} interface under oxygen and oxygen-free environments, and the results are compared with the corresponding experiments.
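The event-selection core of a barrier-based kinetic Monte Carlo model can be sketched generically. The snippet below is a minimal rejection-free (BKL/Gillespie-style) step, not the SPPARKS implementation; the attempt frequency and barrier values are illustrative assumptions.

```python
import math
import random

def kmc_step(events, kT, rng=random.random):
    """One rejection-free kinetic Monte Carlo step: choose an event with
    probability proportional to its Arrhenius rate, then advance the
    clock by an exponentially distributed residence time."""
    rates = [nu0 * math.exp(-barrier / kT) for nu0, barrier in events]
    total = sum(rates)
    r = rng() * total
    acc, chosen = 0.0, len(rates) - 1
    for i, rate in enumerate(rates):
        acc += rate
        if r <= acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng()) / total  # mean residence time is 1/total
    return chosen, dt

# illustrative event list: (attempt frequency [1/s], barrier [eV])
events = [(1e13, 0.5), (1e13, 0.8), (1e13, 1.1)]
kT = 0.0259  # eV, roughly room temperature
step, dt = kmc_step(events, kT)
```

At this temperature the lowest-barrier event dominates the rate sum, so it is selected almost every step, while the simulated clock still advances by the correct total-rate time increment.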
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
In computational finance, stochastic differential equations (SDEs) are used to model the behaviour of stocks, interest rates, exchange rates, weather, electricity/gas demand, crude oil prices, and more. A standard example is geometric Brownian motion, the Black-Scholes model for stock prices.
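The geometric Brownian motion example can be made concrete with a minimal Monte Carlo estimator; the parameter values below are arbitrary, and the exact log-normal solution of the SDE is used rather than time-stepping.

```python
import math
import random

def gbm_mean_estimate(s0, r, sigma, T, n_paths, seed=1):
    """Monte Carlo estimate of E[S_T] for geometric Brownian motion
    dS = r*S dt + sigma*S dW, sampled with the exact log-normal
    solution S_T = S_0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)  # standard normal increment
        total += s0 * math.exp((r - 0.5 * sigma ** 2) * T
                               + sigma * math.sqrt(T) * z)
    return total / n_paths

# illustrative parameters; the analytic mean is s0 * exp(r * T)
est = gbm_mean_estimate(s0=100.0, r=0.05, sigma=0.2, T=1.0, n_paths=200_000)
```

Comparing the estimate against the analytic mean s0·exp(rT) is the usual sanity check before moving to payoffs with no closed form.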
Monte Carlo Simulation for Elastic Energy Loss of High-Energy Partons in Quark-Gluon Plasma
Jussi Auvinen; Kari J. Eskola; Hannu Holopainen; Thorsten Renk
2011-06-13T23:59:59.000Z
We examine the significance of $2 \\rightarrow 2$ partonic collisions as the suppression mechanism of high-energy partons in the strongly interacting medium formed in ultrarelativistic heavy ion collisions. For this purpose, we have developed a Monte Carlo simulation describing the interactions of perturbatively produced, non-eikonally propagating high-energy partons with the quarks and gluons of the expanding QCD medium. The partonic collision rates are computed in leading-order perturbative QCD (pQCD), while three different hydrodynamical scenarios are used to model the medium. We compare our results with the suppression observed in $\\sqrt{s_{NN}}=200$ GeV Au+Au collisions at the BNL-RHIC. We find that the incoherent nature of elastic energy loss is incompatible with the measured data, and that the effect of initial-state fluctuations is small.
Sailor, W.C.; Byrd, R.C.; Yariv, Y.
1988-10-01T23:59:59.000Z
The response of organic scintillators to monoenergetic neutrons has been calculated using a Monte Carlo approach. The code TRACE is largely based on the well-tested code of Stanton, except that multi-element capabilities, energy-dependent reaction kinematics, and photon loss through attenuation and reflection are introduced. The modeling assumptions and historical development of the Stanton code are first discussed. Pulse height distributions calculated with this code are given and used to explain the roles of various reaction channels and multiple scattering in determining the detector efficiency. Changes introduced into the code in developing TRACE are summarized. Pulse height spectra and total efficiencies for single-element detectors are calculated with both the Stanton code and with TRACE in the energy range 28 < E{sub n} < 200 MeV, and the results are compared to experimental data obtained with the {sup 7}Li(p,n){sup 7}Be reaction. 68 refs., 25 figs., 3 tabs.
Garain, Sudip K; Chakrabarti, Sandip K
2013-01-01T23:59:59.000Z
Low and intermediate frequency quasi-periodic oscillations (QPOs) in black hole candidates are believed to be due to oscillations of the Comptonizing regions in an accretion flow. Assuming that the general structure of an accretion disk is a Two Component Advective Flow (TCAF), we numerically simulate the light curves emitted from an accretion disk for different accretion rates and find how the QPO frequencies vary. We use a standard Keplerian disk residing at the equatorial plane as a source of soft photons. These soft photons, after suffering multiple scatterings with the hot electrons of the low angular momentum, sub-Keplerian flow, emerge as hard radiation. The hydrodynamic and thermal properties of the electron cloud are simulated using a Total Variation Diminishing (TVD) code. The TVD code is then coupled with a radiative transfer code which simulates the energy exchange between the electrons and the radiation using a Monte Carlo technique. The resulting localized heating and cooling are also included.
Study of two- and three-meson decay modes of tau-lepton with Monte Carlo generator TAUOLA
Shekhovtsova, Olga
2015-01-01T23:59:59.000Z
The study of $\\tau$-lepton decays into hadrons has contributed to a better understanding of non-perturbative QCD and light-quark meson spectroscopy, as well as to the search for new physics beyond the Standard Model. The two- and three-meson decay modes, considering only those permitted by the Standard Model, are the predominant decays and, together with the one-pion mode, compose more than $85\\%$ of the hadronic $\\tau$-lepton decay width. In this note we review the theoretical results for these modes implemented in the Monte Carlo event generator TAUOLA and compare them with the Belle Collaboration data for the two-pion decay mode and with the preliminary BaBar data for the three-pion decay mode, as well as for the decay mode into two kaons and one pion.
Tattersall, W J; Boyle, G J; White, R D
2015-01-01T23:59:59.000Z
We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly non-equilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 3--4 (1992)], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially-varying electric fields. All of the results are found to be in excellent agreement with an independent multi-term Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems.
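The coherent-scattering ingredient of such a Monte Carlo model can be illustrated with a simple rejection sampler that biases the scattering angle by the static structure factor. The S(q) below is a made-up placeholder that suppresses forward (small-q) scattering, not the Percus-Yevick result, and the wavenumber is arbitrary.

```python
import math
import random

def sample_cos_theta(k, S, s_max, rng):
    """Rejection-sample cos(theta) for one coherent elastic scattering
    event, with density proportional to the static structure factor
    S(q), where the momentum transfer is q = 2*k*sin(theta/2)."""
    while True:
        c = rng.uniform(-1.0, 1.0)                 # trial cos(theta)
        q = 2.0 * k * math.sqrt((1.0 - c) / 2.0)   # momentum transfer
        if rng.random() * s_max <= S(q):
            return c

# placeholder structure factor suppressing small-q (forward) scattering;
# an assumption for illustration, not a real liquid S(q)
S = lambda q: 1.0 - math.exp(-q * q)
rng = random.Random(2)
samples = [sample_cos_theta(1.0, S, 1.0, rng) for _ in range(5000)]
```

Because small-angle (small-q) events are suppressed by this S(q), the sampled angular distribution is pushed toward backscattering, which is exactly the altered anisotropy the abstract refers to.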
Monte Carlo calculation of the collision density of superthermal produced H atoms in thermal H2 gas
Panarese, A
2011-01-01T23:59:59.000Z
We propose a simple and reliable method to study the collision density of H atoms following their production by chemical mechanisms. The problem is relevant to PDRs, shocks, photospheres, and atmospheric entry problems. We show that the thermalization of H atoms can be conveniently studied by a simple method, and we set the basis for further investigations. We also aim to review the theoretical basis and the limitations of simpler approaches, and to address the analogous problems in neutronics. The method adopted is a Monte Carlo method including the thermal distribution of the background molecules. The transport cross section is determined by the inversion of transport data. Plots of the collision density of H atoms in H2 gas are calculated and discussed, also in the context of simple theories. The application of the results to astrophysical problems is outlined.
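The slowing-down part of such a calculation can be sketched with textbook elastic-collision kinematics: isotropic center-of-mass scattering off stationary targets. The thermal motion of the H2 background, which the method above includes, is neglected in this sketch, and the energies are illustrative.

```python
import random

def collisions_to_slow(e0, e_final, a, rng):
    """Count elastic collisions needed to slow a projectile from energy
    e0 to e_final, assuming isotropic center-of-mass scattering off
    stationary targets with mass ratio a = M_target / m_projectile."""
    e, n = e0, 0
    while e > e_final:
        c = rng.uniform(-1.0, 1.0)  # cosine of the CM scattering angle
        e *= (1.0 + a * a + 2.0 * a * c) / (1.0 + a) ** 2
        n += 1
    return n

rng = random.Random(3)
# H atom (mass 1) born at 2 eV slowing down in H2 (mass ratio a = 2);
# the thermal motion of the background gas is neglected here
counts = [collisions_to_slow(2.0, 0.039, 2.0, rng) for _ in range(2000)]
mean_collisions = sum(counts) / len(counts)
```

The expected count follows the neutron-moderation estimate ln(E0/E)/ξ, with ξ the mean logarithmic energy decrement, which is the same bookkeeping used in the neutronics analogy mentioned above.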
Structure of Cu64.5Zr35.5 Metallic glass by reverse Monte Carlo simulations
Fang, Xikui W. [Ames Laboratory; Huang, Li [Ames Laboratory; Wang, Cai-Zhuang [Ames Laboratory; Ho, Kai-Ming [Ames Laboratory; Ding, Z. J. [University of Science and Technology of China
2014-02-07T23:59:59.000Z
Reverse Monte Carlo (RMC) simulations have been widely used to generate three-dimensional (3D) atomistic models of glass systems. To examine the reliability of the method for metallic glass, we use RMC to predict the atomic configurations of a “known” structure from molecular dynamics (MD) simulations, and then compare the structure obtained from the RMC with the target structure from MD. We show that when the structure factors and partial pair correlation functions from the MD simulations are used as inputs for RMC simulations, the 3D atomistic structure of the glass obtained from the RMC gives short- and medium-range order in good agreement with the target structure from the MD simulation. These results suggest that the 3D atomistic structure of metallic glass alloys can be reasonably well reproduced by the RMC method with a proper choice of input constraints.
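A toy version of the RMC procedure, fitting atom positions to a target pair-distance histogram in a one-dimensional periodic box, might look as follows. Greedy acceptance is used instead of the usual Metropolis criterion, and all sizes are arbitrary.

```python
import random

def pair_histogram(xs, bins, box):
    """Histogram of minimum-image pair distances in a periodic 1D box."""
    h = [0] * bins
    half = box / 2.0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            d = abs(xs[i] - xs[j])
            d = min(d, box - d)  # minimum-image convention
            h[min(int(d / half * bins), bins - 1)] += 1
    return h

def chi2(h, target):
    return sum((a - b) ** 2 for a, b in zip(h, target))

def rmc(xs, target, box, bins, steps, delta, rng):
    """Reverse Monte Carlo: random single-atom moves, kept only when
    they do not worsen the mismatch with the target histogram."""
    cost = chi2(pair_histogram(xs, bins, box), target)
    for _ in range(steps):
        i = rng.randrange(len(xs))
        old = xs[i]
        xs[i] = (old + rng.uniform(-delta, delta)) % box
        new_cost = chi2(pair_histogram(xs, bins, box), target)
        if new_cost <= cost:
            cost = new_cost  # accept the move
        else:
            xs[i] = old      # reject the move
    return cost

rng = random.Random(4)
box, bins, n = 10.0, 10, 12
known = [box * k / n for k in range(n)]         # the "known" structure
target = pair_histogram(known, bins, box)
xs = [rng.uniform(0.0, box) for _ in range(n)]  # random starting model
start = chi2(pair_histogram(xs, bins, box), target)
final = rmc(xs, target, box, bins, steps=2000, delta=0.5, rng=rng)
```

Real RMC codes fit experimental structure factors and partial pair correlation functions rather than a raw distance histogram, but the move/evaluate/accept loop is the same.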
Nicolas Puech; Serge Mora; Ty Phou; Gregoire Porte; Jacques Jestin; Julian Oberdisse
2010-12-04T23:59:59.000Z
The effect of silica nanoparticles on transient microemulsion networks made of microemulsion droplets and telechelic copolymer molecules in water is studied, as a function of droplet size and concentration, amount of copolymer, and nanoparticle volume fraction. The phase diagram is found to be affected, and in particular the percolation threshold characterized by rheology is shifted upon addition of nanoparticles, suggesting participation of the particles in the network. This leads to a peculiar reinforcement behaviour of such microemulsion nanocomposites, the silica influencing both the modulus and the relaxation time. The reinforcement is modelled based on nanoparticles connected to the network via droplet adsorption. Contrast-variation Small Angle Neutron Scattering coupled to a reverse Monte Carlo approach is used to analyse the microstructure. The rather surprising intensity curves are shown to be in good agreement with the adsorption of droplets on the nanoparticle surface.
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
Pecchia, M.; D'Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy); Mazzantini, O. [Nucleoeléctrica Argentina Sociedad Anónima NA-SA, Buenos Aires (Argentina)
2012-07-01T23:59:59.000Z
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of an obliquely inserted control rod on the neutron flux, in order to validate the RELAP5-3D{sup C}/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 FSAR of Atucha-2. (authors)
Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C. [Institute of Atomic and Molecular Sciences, Academia Sinica, Taipei 106, Taiwan (China); Lim, T.-S. [Department of Physics, Tunghai University, Taichung 407, Taiwan (China); Fann Wunshain [Institute of Atomic and Molecular Sciences, Academia Sinica, Taipei 106, Taiwan (China); Department of Physics, National Taiwan University, Taipei 106, Taiwan (China)
2009-01-05T23:59:59.000Z
The number of negatively charged nitrogen-vacancy centers (N-V){sup -} in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single particle level. By taking into account the random dipole orientations of the multiple (N-V){sup -} fluorophores and simulating the probability distribution of their effective numbers (N{sub e}), we found that the actual number (N{sub a}) of the fluorophores is in linear correlation with N{sub e}, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized light, respectively. We determined N{sub a}=8{+-}1 for 28 nm FND particles prepared by 3 MeV proton irradiation.
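The orientation-averaging step behind those correction factors can be reproduced with a short simulation. The sketch below draws isotropic dipole orientations, takes single-emitter detection efficiencies proportional to cos²θ (linear polarization) or sin²θ (circular), and estimates the mean ratio N{sub a}/N{sub e}; the emitter count and number of trials are arbitrary choices, not values from the paper.

```python
import random

def mean_na_over_ne(n_dipoles, trials, circular=False, seed=5):
    """MC estimate of N_a / N_e for n_dipoles fluorophores with random
    dipole orientations; N_e = (sum I)^2 / sum(I^2), with single-emitter
    intensity I ~ cos^2(theta) (linear) or sin^2(theta) (circular)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # u = cos(theta) is uniform on [-1, 1] for isotropic dipoles
        us = [rng.uniform(-1.0, 1.0) for _ in range(n_dipoles)]
        inten = [(1.0 - u * u) if circular else u * u for u in us]
        s1 = sum(inten)
        s2 = sum(v * v for v in inten)
        total += n_dipoles * s2 / (s1 * s1)  # N_a / N_e for this draw
    return total / trials

r_lin = mean_na_over_ne(8, 4000, circular=False)
r_circ = mean_na_over_ne(8, 4000, circular=True)
```

Values close to the quoted correction factors of about 1.8 (linear) and 1.2 (circular) should emerge from this kind of average, since cos²θ weighting spreads the per-emitter intensities much more than sin²θ weighting does.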
Basden, Alastair
2015-01-01T23:59:59.000Z
The performance of a wide-field adaptive optics system depends on its design parameters. Here we investigate the performance of a multi-conjugate adaptive optics system design for the European Extremely Large Telescope, using an end-to-end Monte Carlo adaptive optics simulation tool, DASP. We consider parameters such as the number of laser guide stars, sodium layer depth, wavefront sensor pixel scale, number of deformable mirrors, mirror conjugation, and actuator pitch. We identify potential areas where cost savings can be made, and investigate trade-offs between performance and cost. We conclude that a six laser guide star system using three DMs appears to be a sweet spot in the compromise between performance and cost.
Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K. [Department of Diagnostic and Interventional Imaging, University of Texas Medical School at Houston, Houston, Texas 77030 (United States); Brateman, Libby F. [Department of Radiology, University of Florida, Gainesville, Florida 32610 (United States)
2014-11-01T23:59:59.000Z
Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28–0.51, with absolute differences averaging −0.01.
The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the physical measurements published previously and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of a scatter-correction software program can be simplified by ignoring the variations in density among breast tissues.
Sarrut, David, E-mail: david.sarrut@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Centre Léon Bérard (France)]; Bardiès, Manuel; Marcatili, Sara; Mauxion, Thibault [Inserm, UMR1037 CRCT, F-31000 Toulouse, France and Université Toulouse III-Paul Sabatier, UMR1037 CRCT, F-31000 Toulouse (France)]; Boussion, Nicolas [INSERM, UMR 1101, LaTIM, CHU Morvan, 29609 Brest (France)]; Freud, Nicolas; Létang, Jean-Michel [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, 69008 Lyon (France)]; Jan, Sébastien [CEA/DSV/I2BM/SHFJ, Orsay 91401 (France)]; Loudos, George [Department of Medical Instruments Technology, Technological Educational Institute of Athens, Athens 12210 (Greece)]; Maigne, Lydia; Perrot, Yann [UMR 6533 CNRS/IN2P3, Université Blaise Pascal, 63171 Aubière (France)]; Papadimitroulas, Panagiotis [Department of Biomedical Engineering, Technological Educational Institute of Athens, 12210, Athens (Greece)]; Pietrzyk, Uwe [Institut für Neurowissenschaften und Medizin, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany and Fachbereich für Mathematik und Naturwissenschaften, Bergische Universität Wuppertal, 42097 Wuppertal (Germany)]; Robert, Charlotte [IMNC, UMR 8165 CNRS, Universités Paris 7 et Paris 11, Orsay 91406 (France)]; and others
2014-06-15T23:59:59.000Z
In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform, based on the GEANT4 toolkit, for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described, including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. The very practical feature of GATE, making it easy to model both a treatment and an imaging acquisition within the same framework, is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities.
Zink, K., E-mail: klemens.zink@kmub.thm.de [Institute of Medical Physics and Radiation Protection (IMPS), University of Applied Sciences Giessen, Giessen D-35390, Germany and Department of Radiotherapy and Radiooncology, University Medical Center Giessen-Marburg, Marburg D-35043 (Germany); Czarnecki, D.; Voigts-Rhetz, P. von [Institute of Medical Physics and Radiation Protection (IMPS), University of Applied Sciences Giessen, Giessen D-35390 (Germany); Looe, H. K. [Clinic for Radiation Therapy, Pius-Hospital, Oldenburg D-26129, Germany and WG Medical Radiation Physics, Carl von Ossietzky University, Oldenburg D-26129 (Germany); Harder, D. [Prof. em., Medical Physics and Biophysics, Georg August University, Göttingen D-37073 (Germany)
2014-11-01T23:59:59.000Z
Purpose: The electron fluence inside a parallel-plate ionization chamber positioned in a water phantom and exposed to a clinical electron beam deviates from the unperturbed fluence in water in the absence of the chamber. One reason for the fluence perturbation is the well-known “inscattering effect,” whose physical cause is the lack of electron scattering in the gas-filled cavity. Correction factors determined to correct for this effect have long been recommended. However, more recent Monte Carlo calculations have led to some doubt about the range of validity of these corrections. Therefore, the aim of the present study is to reanalyze the development of the fluence perturbation with depth and to review the function of the guard rings. Methods: Spatially resolved Monte Carlo simulations of the dose profiles within gas-filled cavities with various radii in clinical electron beams have been performed in order to determine the radial variation of the fluence perturbation in a coin-shaped cavity, to study the influences of the radius of the collecting electrode and of the width of the guard ring upon the indicated value of the ionization chamber formed by the cavity, and to investigate the development of the perturbation as a function of the depth in an electron-irradiated phantom. The simulations were performed for a primary electron energy of 6 MeV. Results: The Monte Carlo simulations clearly demonstrated a surprisingly large in- and outward electron transport across the lateral cavity boundary. This results in a strong influence of the depth-dependent development of the electron field in the surrounding medium upon the chamber reading. In the buildup region of the depth-dose curve, the in–out balance of the electron fluence is positive and shows the well-known dose oscillation near the cavity/water boundary. At the depth of the dose maximum the in–out balance is equilibrated, and in the falling part of the depth-dose curve it is negative, as shown here for the first time.
The influences of both the collecting electrode radius and the width of the guard ring reflect the deep radial penetration of the electron transport processes into the gas-filled cavities and the need for appropriate corrections of the chamber reading. New values for these corrections have been established in two forms, one converting the indicated value into the absorbed dose to water in the front plane of the chamber, the other converting it into the absorbed dose to water at the depth of the effective point of measurement of the chamber. In the Appendix, the in–out imbalance of electron transport across the lateral cavity boundary is demonstrated in the approximation of classical small-angle multiple scattering theory. Conclusions: The in–out electron transport imbalance at the lateral boundaries of parallel-plate chambers in electron beams has been studied with Monte Carlo simulation over a range of depths in water, and new correction factors, covering all depths and implementing the effective point of measurement concept, have been developed.
Andrea Bianconi
2008-06-05T23:59:59.000Z
In this note I report and discuss the physical scheme and the main approximations used by the event generator code DY\\_AB. This Monte Carlo code is aimed at preliminary simulation, during the stage of apparatus planning, of Drell-Yan events characterized by azimuthal asymmetries, in experiments with moderate center of mass energy $\\sqrt{s} \\ll 100$ GeV.
Wilkins, John
Comparison of screened hybrid density functional theory to diffusion Monte Carlo in calculations of total energies of silicon phases and defects. Enrique R. Batista, Jochen Heyd, Richard G. Hennig, et al. The work concerns the prediction of defect properties using the Heyd-Scuseria-Ernzerhof (HSE) screened-exchange hybrid functional.
Guidoni, Leonardo
Reaction pathways by quantum Monte Carlo: Insight on the torsion barrier of 1,3-butadiene, and the conrotatory ring opening of cyclobutene. Quantum Monte Carlo methods are used to investigate the intramolecular reaction pathways of 1,3-butadiene.
Asher, Sanford A.
Melting of colloidal crystals: A Monte Carlo study. James C. Zahorchak, R. Kesavamoorthy, et al. Electrostatically stabilized colloidal crystals show phase transitions into liquid and gaslike states as the ionic strength is varied; four colloidal crystals (two fcc crystals and two bcc crystals) are examined.
Danon, Yaron
2011-01-01T23:59:59.000Z
Advanced Monte Carlo modeling of prompt fission neutrons for thermal and fast neutron-induced fission reactions on {sup 239}Pu. P. Talou, B. Becker, T. Kawano, et al., Physical Review C 83, 064612 (2011). ... owing to the multiple scattering from ambient neutrons and from energy cuts in the detection efficiency.
Meirovitch, Hagai
Lower and upper bounds for the absolute free energy by the hypothetical scanning Monte Carlo method. The hypothetical scanning (HS) method is a general approach for calculating the absolute entropy S and free energy F, providing the free energy through the analysis of a single configuration. © 2004 American Institute of Physics.
The ATLAS collaboration
2015-01-01T23:59:59.000Z
This note summarizes some of the latest Monte Carlo generator studies using ttbar events in ATLAS. Variations of the h_damp parameter and of the PDFs in the Powheg+Pythia8 setup are compared to ATLAS measurements of ttbar production. In addition, Powheg+Pythia6, Powheg+Herwig++, and Sherpa MEPS@NLO are also compared to the same measurements.
Benmakhlouf, Hamza, E-mail: hamza.benmakhlouf@karolinska.se [Department of Medical Physics, Karolinska University Hospital, SE-171 76 Stockholm, Sweden, and Department of Physics, Medical Radiation Physics, Stockholm University and Karolinska Institute, SE-171 76 Stockholm (Sweden)] [Department of Medical Physics, Karolinska University Hospital, SE-171 76 Stockholm, Sweden, and Department of Physics, Medical Radiation Physics, Stockholm University and Karolinska Institute, SE-171 76 Stockholm (Sweden); Sempau, Josep [Institut de Tècniques Energètiques, Universitat Politècnica de Catalunya, Diagonal 647, E-08028, Barcelona (Spain)] [Institut de Tècniques Energètiques, Universitat Politècnica de Catalunya, Diagonal 647, E-08028, Barcelona (Spain); Andreo, Pedro [Department of Physics, Medical Radiation Physics, Stockholm University and Karolinska Institute, SE-171 76 Stockholm (Sweden)] [Department of Physics, Medical Radiation Physics, Stockholm University and Karolinska Institute, SE-171 76 Stockholm (Sweden)
2014-04-15T23:59:59.000Z
Purpose: To determine detector-specific output correction factors, k{sub Q{sub clin},Q{sub msr}}{sup f{sub clin},f{sub msr}}, in 6 MV small photon beams for air and liquid ionization chambers, silicon diodes, and diamond detectors from two manufacturers. Methods: Field output factors, defined according to the international formalism published by Alfonso et al. [Med. Phys. 35, 5179–5186 (2008)], relate the dosimetry of small photon beams to that of the machine-specific reference field; they include a correction to measured ratios of detector readings, conventionally used as output factors in broad beams. Output correction factors were calculated with the PENELOPE Monte Carlo (MC) system with a statistical uncertainty (type-A) of 0.15% or lower. The geometries of the detectors were coded using blueprints provided by the manufacturers, and phase-space files for field sizes between 0.5 × 0.5 cm{sup 2} and 10 × 10 cm{sup 2} from a Varian Clinac iX 6 MV linac were used as sources. The output correction factors were determined by scoring the absorbed dose within a detector and within a small water volume in the absence of the detector, both at a depth of 10 cm, for each small field and for the reference beam of 10 × 10 cm{sup 2}. Results: The Monte Carlo calculated output correction factors for the liquid ionization chamber and the diamond detector were within about ±1% of unity even for the smallest field sizes. Corrections were found to be significant for small air ionization chambers due to their cavity dimensions, as expected. The correction factors for silicon diodes varied with the detector type (shielded or unshielded), confirming the findings of other authors; different corrections for the detectors from the two manufacturers were obtained.
The differences in the calculated factors for the various detectors were analyzed thoroughly and whenever possible the results were compared to published data, often calculated for different accelerators and using the EGSnrc MC system. The differences were used to estimate a type-B uncertainty for the correction factors. Together with the type-A uncertainty from the Monte Carlo calculations, an estimation of the combined standard uncertainty was made, assigned to the mean correction factors from various estimates. Conclusions: The present work provides a consistent and specific set of data for the output correction factors of a broad set of detectors in a Varian Clinac iX 6 MV accelerator and contributes to improving the understanding of the physics of small photon beams. The correction factors cannot in general be neglected for any detector and, as expected, their magnitude increases with decreasing field size. Due to the reduced number of clinical accelerator types currently available, it is suggested that detector output correction factors be given specifically for linac models and field sizes, rather than for a beam quality specifier that necessarily varies with the accelerator type and field size due to the different electron spot dimensions and photon collimation systems used by each accelerator model.
Chibani, Omar, E-mail: omar.chibani@fccc.edu; C-M Ma, Charlie [Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States)]
2014-05-15T23:59:59.000Z
Purpose: To present a new accelerated Monte Carlo code for CT-based dose calculations in high dose rate (HDR) brachytherapy. The new code (HDRMC) accounts for both tissue and nontissue heterogeneities (applicator and contrast medium). Methods: HDRMC uses a fast ray-tracing technique and detailed physics algorithms to transport photons through a 3D mesh of voxels representing the patient anatomy with applicator and contrast medium included. A precalculated phase space file for the {sup 192}Ir source is used as the source term. HDRMC is calibrated to calculate absolute dose for real plans. A postprocessing technique is used to include the exact density and composition of nontissue heterogeneities in the 3D phantom. Dwell positions and angular orientations of the source are reconstructed using data from the treatment planning system (TPS). Structure contours are also imported from the TPS to recalculate dose-volume histograms. Results: HDRMC was first benchmarked against the MCNP5 code for a single source in homogeneous water and for a loaded gynecologic applicator in water. The accuracy of the voxel-based applicator model used in HDRMC was also verified by comparing 3D dose distributions and dose-volume parameters obtained using 1-mm{sup 3} versus 2-mm{sup 3} phantom resolutions. HDRMC can calculate the 3D dose distribution for a typical HDR cervix case with 2-mm resolution in 5 min on a single CPU. Examples of heterogeneity effects for two clinical cases (cervix and esophagus) were demonstrated using HDRMC. Neglecting tissue heterogeneity for the esophageal case leads to overestimation of CTV D90, CTV D100, and spinal cord maximum dose by 3.2%, 3.9%, and 3.6%, respectively. Conclusions: A fast Monte Carlo code for CT-based dose calculations which does not require a prebuilt applicator model is developed for those HDR brachytherapy treatments that use CT-compatible applicators.
Tissue and nontissue heterogeneities should be taken into account in modern HDR brachytherapy planning.
Torres, Javier; Buades, Manuel J.; Almansa, Julio F.; Guerrero, Rafael; Lallena, Antonio M.
2003-01-01T23:59:59.000Z
Monte Carlo calculations using the codes PENELOPE and GEANT4 have been performed to characterize the dosimetric parameters of the new 20 mm long catheter-based $^{32}$P beta source manufactured by Guidant Corporation. The dose distribution along the transverse axis and the two-dimensional dose rate table have been calculated. Also, the dose rate at the reference point, the radial dose function and the anisotropy function were evaluated according to the adapted TG-60 formalism for cylindrical sources. The PENELOPE and GEANT4 codes were first verified against previous results corresponding to the old 27 mm Guidant $^{32}$P beta source. The dose rate at the reference point for the unsheathed 27 mm source in water was calculated to be $0.215 \pm 0.001$ cGy s$^{-1}$ mCi$^{-1}$ for PENELOPE, and $0.2312 \pm 0.0008$ cGy s$^{-1}$ mCi$^{-1}$ for GEANT4. For the unsheathed 20 mm source these values were $0.2908 \pm 0.0009$ cGy s$^{-1}$ mCi$^{-1}$ and $0.311 \pm 0.001$ cGy s$^{-1}$ mCi$^{-1}$, respectively. Also, a compar...
Kuss, M.; Markel, T.; Kramer, W.
2011-01-01T23:59:59.000Z
Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service-life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis will review distribution transformer load models developed in the IEC 60076 standard and apply the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy and thus increasing costs to utilities and consumers. A Monte-Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate will be developed, based on the energy needs of plug-in vehicles loading a residential transformer.
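The Monte Carlo sweep described above can be sketched as follows. The household load profile, the thermal model, and the aging constants below are illustrative stand-ins (loosely in the style of the IEC 60076-7 aging-acceleration factor), not the study's actual models:

```python
import random

def daily_loss_of_life(n_vehicles, kva_rating, charge_kw, trials=100, seed=1):
    """Monte Carlo estimate of mean relative transformer aging for one day.

    Simplified sketch: the hot-spot temperature is approximated from the
    per-unit load, and the aging-acceleration factor doubles every 6 K
    above a 98 C reference (IEC 60076-7 style). All constants illustrative.
    """
    rng = random.Random(seed)
    base_load_kw = [2.0 + 1.5 * abs(12 - h) / 12 for h in range(24)]  # toy household profile
    total_aging = 0.0
    for _ in range(trials):
        load = base_load_kw[:]
        # Each vehicle starts charging at a random evening hour for 4 hours.
        for _ in range(n_vehicles):
            start = rng.randint(18, 23)
            for h in range(start, start + 4):
                load[h % 24] += charge_kw
        day_aging = 0.0
        for kw in load:
            per_unit = kw / kva_rating
            t_hotspot = 30.0 + 80.0 * per_unit ** 2     # crude thermal model, deg C
            day_aging += 2.0 ** ((t_hotspot - 98.0) / 6.0) / 24.0
        total_aging += day_aging
    return total_aging / trials

# More vehicles on the same 25 kVA transformer -> faster relative aging.
print(daily_loss_of_life(1, 25.0, 6.6) < daily_loss_of_life(4, 25.0, 6.6))
```

Sweeping `n_vehicles`, `kva_rating`, and `charge_kw` over a grid reproduces the structure of the three-variable sweep mentioned in the abstract.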
Lin, J. Y. Y. [California Institute of Technology, Pasadena]; Aczel, Adam A. [ORNL]; Abernathy, Douglas L. [ORNL]; Nagler, Stephen E. [ORNL]; Buyers, W. J. L. [National Research Council of Canada]; Granroth, Garrett E. [ORNL]
2014-01-01T23:59:59.000Z
Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A.A. Aczel et al, Nature Communications 3, 1124 (2012)]. These modes are well described by 3D isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for the nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states (PDOS), and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximate Q-independent background to the spectrum at the oscillator mode positions. Temperature dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T-dependence of the scattering from these modes is strongly influenced by the uranium lattice.
Wes Armour; Simon Hands; Costas Strouthos
2013-02-07T23:59:59.000Z
We formulate a model of N_f=4 flavors of relativistic fermion in 2+1d in the presence of a chemical potential mu coupled to two flavor doublets with opposite sign, akin to isospin chemical potential in QCD. This is argued to be an effective theory for low energy electronic excitations in bilayer graphene, in which an applied voltage between the layers ensures equal populations of particles on one layer and holes on the other. The model is then reformulated on a spacetime lattice using staggered fermions, and in the absence of a sign problem, simulated using an orthodox hybrid Monte Carlo algorithm. With the coupling strength chosen to be close to a quantum critical point believed to exist for N_f
Y. Ishisaki; Y. Maeda; R. Fujimoto; M. Ozaki; K. Ebisawa; T. Takahashi; Y. Ueda; Y. Ogasaka; A. Ptak; K. Mukai; K. Hamaguchi; M. Hirayama; T. Kotani; H. Kubo; R. Shibata; M. Ebara; A. Furuzawa; R. Iizuka; H. Inoue; H. Mori; S. Okada; Y. Yokoyama; H. Matsumoto; H. Nakajima; H. Yamaguchi; N. Anabuki; N. Tawa; M. Nagai; S. Katsuda; K. Hayashida; A. Bamba; E. D. Miller; K. Sato; N. Y. Yamasaki
2006-10-04T23:59:59.000Z
We have developed a framework for the Monte-Carlo simulation of the X-Ray Telescopes (XRT) and the X-ray Imaging Spectrometers (XIS) onboard Suzaku, mainly for the scientific analysis of spatially and spectroscopically complex celestial sources. A photon-by-photon instrumental simulator is built on the ANL platform, which has been successfully used in ASCA data analysis. The simulator has a modular structure, in which the XRT simulation is based on a ray-tracing library, while the XIS simulation utilizes a spectral "Redistribution Matrix File" (RMF), generated separately by other tools. Instrumental characteristics and calibration results, e.g., XRT geometry, reflectivity, mutual alignments, thermal shield transmission, build-up of the contamination on the XIS optical blocking filters (OBF), are incorporated as completely as possible. Most of this information is available in the form of the FITS (Flexible Image Transport System) files in the standard calibration database (CALDB). This simulator can also be utilized to generate an "Ancillary Response File" (ARF), which describes the XRT response and the amount of OBF contamination. The ARF is dependent on the spatial distribution of the celestial target and the photon accumulation region on the detector, as well as observing conditions such as the observation date and satellite attitude. We describe principles of the simulator and the ARF generator, and demonstrate their performance in comparison with in-flight data.
Sarkadi, L
2015-01-01T23:59:59.000Z
The three-body dynamics of the ionization of atomic hydrogen by 30 keV antiproton impact has been investigated by calculating fully differential cross sections (FDCS) using the classical trajectory Monte Carlo (CTMC) method. The results of the calculations are compared with the predictions of quantum mechanical descriptions: the semi-classical time-dependent close-coupling theory, the fully quantal, time-independent close-coupling theory, and the continuum-distorted-wave-eikonal-initial-state model. In the analysis, particular emphasis was put on the role played by the nucleus-nucleus (NN) interaction in the ionization process. For low-energy electron ejection, CTMC predicts a large NN interaction effect on the FDCS, in agreement with the quantum mechanical descriptions. By examining individual particle trajectories it was found that the relative motion between the electron and the nuclei is coupled very weakly with that between the nuclei; consequently, the two motions can be treated independently. A simple ...
Duan, Zhe; Barber, Desmond P; Qin, Qing
2015-01-01T23:59:59.000Z
With the recently emerging global interest in building a next generation of circular electron-positron colliders to study the properties of the Higgs boson, and other important topics in particle physics at ultra-high beam energies, it is also important to pursue the possibility of implementing polarized beams at this energy scale. It is therefore necessary to set up simulation tools to evaluate the beam polarization at these ultra-high beam energies. In this paper, a Monte-Carlo simulation of the equilibrium beam polarization based on the Polymorphic Tracking Code (PTC) is described. The simulations are for a model storage ring with parameters similar to those of proposed circular colliders in this energy range, and they are compared with the suggestion that there are different regimes for the spin dynamics underlying the polarization of a beam in the presence of synchrotron radiation at ultra-high beam energies. In particular, it has been suggested that the so-called "correlated" crossing of spin resonances ...
Asadi, S; Vahidian, M; Marghchouei, M; Masoudi, S Farhad
2015-01-01T23:59:59.000Z
The aim of the present Monte Carlo study is to evaluate the variation of energy deposition in healthy tissues of the human eye irradiated by brachytherapy sources, in comparison with the resultant dose increase in the gold nanoparticle (GNP)-loaded choroidal melanoma. The effects of these nanoparticles on normal tissues are compared between 103Pd and 125I as two ophthalmic brachytherapy sources. Dose distributions in the tumor and healthy tissues have been taken into account for both mentioned brachytherapy sources. Also, at a certain point of the eye, the ratio of the dose absorbed by the normal tissue in the presence of GNPs to the dose absorbed at the same point in the absence of GNPs has been calculated. In addition, differences observed in the comparison of a simple water phantom and the actual simulated human eye in the presence of GNPs are also a matter of interest that has been considered in the present work. The results show that the calculated dose enhancement factor in the tumor for 125I is higher tha...
Fabio L. Pedrocchi; N. E. Bonesteel; David P. DiVincenzo
2015-07-03T23:59:59.000Z
The Majorana code is an example of a stabilizer code where the quantum information is stored in a system supporting well-separated Majorana Bound States (MBSs). We focus on one-dimensional realizations of the Majorana code, as well as networks of such structures, and investigate their lifetime when coupled to a parity-preserving thermal environment. We apply the Davies prescription, a standard method that describes the basic aspects of a thermal environment, and derive a master equation in the Born-Markov limit. We first focus on a single wire with immobile MBSs and perform error correction to annihilate thermal excitations. In the high-temperature limit, we show both analytically and numerically that the lifetime of the Majorana qubit grows logarithmically with the size of the wire. We then study a trijunction with four MBSs when braiding is executed. We study the occurrence of dangerous error processes that prevent the lifetime of the Majorana code from growing with the size of the trijunction. The origin of the dangerous processes is the braiding itself, which separates pairs of excitations and renders the noise nonlocal; these processes arise from the basic constraints of moving MBSs in 1D structures. We confirm our predictions with Monte Carlo simulations in the low-temperature regime, i.e. the regime of practical relevance. Our results put a restriction on the degree of self-correction of this particular 1D topological quantum computing architecture.
Weinman, J.P. [Lockheed Martin Corp., Schenectady, NY (United States)
1998-06-01T23:59:59.000Z
The purpose of this study is to investigate the eigenvalue sensitivity to new {sup 235}U, hydrogen, and oxygen cross section data sets by comparing RACER Monte Carlo calculations for several thermal and intermediate spectrum critical experiments. The new {sup 235}U library (Version 107) was derived by L. Leal and H. Derrien by fitting differential experimental data for {sup 235}U while constraining the fit to match experimental capture and fission resonance integrals and Maxwellian averaged thermal K1 (v fission minus absorption). The new hydrogen library (Version 45) consists of the ENDF/B-VI release 3 data with a 332.0 mb 2,200 m/s cross section which replaces the value of 332.6 mb in the current library. The new oxygen library (Version 39) is based on a recent evaluation of {sup 16}O by E. Caro. Nineteen Oak Ridge and Rocky Flats thermal solution benchmark critical assemblies that span a range of hydrogen-to-{sup 235}U (H/U) concentrations (2,052 to 27.1) and above-thermal neutron leakage fractions (0.555 to 0.011) were analyzed. In addition, three intermediate spectrum critical assemblies (UH3-UR, UH3-NI, and HISS-HUG) were studied.
Kim, Sung Jin; Kim, Sung Kyu
2015-01-01T23:59:59.000Z
Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms provided by our planning systems, pencil beam (PB), collapsed cone (CC), and Monte Carlo (MC), were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated using the PB, CC, and MC algorithms. Planning target volume and organs at risk (OAR) delineation was performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3 to 0.5 cm computed tomography slices taken under normal respiration conditions. Four intensity-modulated radiation therapy plans were calculated according to each algorithm for each patient. The plans were conducted on the Oncentra MasterPlan and CMS Monaco treatment planning systems, for 6 MV. The plans were compared in terms of the dose distribution in target, OAR volumes, and...
Kolbe, E.; Vasiliev, A.; Zimmermann, M. A. [Laboratory for Reactor Physics and Systems Behaviour, Paul Scherrer Institut, CH 5232 Villigen PSI (Switzerland)
2006-07-01T23:59:59.000Z
This study addresses the assessment of standard continuous-energy neutron data libraries using the Monte Carlo radiation transport code MCNPX for light water reactor criticality safety applications based on a suite of low-enriched, thermal, compound uranium benchmarks and represents a continuation of previously performed analysis using the JEF-2.2 and JENDL-3.3 nuclear data libraries. The new work enhancing the previous study includes the application of the ENDF/B-6.8 neutron data library and employs the most recent official release of the code (MCNPX-2.5.0) with an improved S({alpha}, {beta}) thermal neutron scattering treatment. Particular attention is paid to the analysis of the spectrum-related characteristics of the modeled critical experimental configurations to define the range of applicability of the reported estimates of lower tolerance bounds for k{sub eff}. Inspection of trends in k{sub eff} versus the spectrum-related characteristics or design parameters has also been performed. (authors)
Al-Dweri, Feras M.O.; Almansa, Julio F.; Anguiano, M.; Guerrero, Rafael; Lallena, A. M.
2006-01-01T23:59:59.000Z
Monte Carlo calculations using the codes PENELOPE and GEANT4 have been performed to characterize the dosimetric properties of monoenergetic photon point sources in water. The dose rate in water has been calculated for energies of interest in brachytherapy, ranging between 10 keV and 2 MeV. A comparison of the results obtained using the two codes with the available data calculated with other Monte Carlo codes is carried out. A chi2-like statistical test is proposed for these comparisons. PENELOPE and GEANT4 show a reasonable agreement for all energies analyzed and distances to the source larger than 1 cm. Significant differences are found at distances from the source up to 1 cm. A similar situation occurs between PENELOPE and EGS4.
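A chi2-like figure of merit for comparing two Monte Carlo dose curves with their type-A uncertainties, in the spirit of the test proposed above, can be sketched as follows; the function name, data points, and uncertainties are illustrative, not the authors' exact statistic:

```python
def chi2_per_point(d1, s1, d2, s2):
    """Reduced chi-squared-like figure of merit for two MC data sets
    d1 +/- s1 and d2 +/- s2 evaluated at the same distances from the source.
    Values near 1 indicate agreement within the combined statistical
    uncertainties; values much larger than 1 flag significant differences."""
    terms = [(a - b) ** 2 / (sa ** 2 + sb ** 2)
             for a, b, sa, sb in zip(d1, d2, s1, s2)]
    return sum(terms) / len(terms)

# Two hypothetical dose-rate curves that agree within about one sigma.
penelope = [10.0, 4.9, 2.51, 1.22]
geant4   = [10.1, 5.0, 2.49, 1.25]
sigma_p  = [0.1, 0.05, 0.03, 0.02]
sigma_g  = [0.1, 0.05, 0.03, 0.02]
print(chi2_per_point(penelope, sigma_p, geant4, sigma_g))
```

Applied point by point along the radial dose curve, such a statistic localizes where two codes disagree, e.g. at distances below 1 cm as reported above.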
Kim, Jeongnim [ORNL] [ORNL; Reboredo, Fernando A [ORNL] [ORNL
2014-01-01T23:59:59.000Z
The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. {\bf 136}, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. {\bf 89}, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the subspace spanned by the lower-energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest-energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
Reboredo, Fernando A.; Kim, Jeongnim [Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]
2014-02-21T23:59:59.000Z
A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
Andrea Zen; Ye Luo; Sandro Sorella; Leonardo Guidoni
2013-09-02T23:59:59.000Z
Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with system size and their efficient parallelization, particularly suited to modern high-performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely: the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal Power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets.
Klaus M. Pontoppidan; Cornelis P. Dullemond; Ewine F. van Dishoeck; Geoffrey A. Blake; Adwin C. A. Boogert; Neal J. Evans II; Jacqueline E. Kessler-Silacci; Fred Lahuis
2004-11-13T23:59:59.000Z
We present 5.2-37.2 micron spectroscopy of the edge-on circumstellar disk CRBR 2422.8-3423 obtained using the InfraRed Spectrograph (IRS) of the Spitzer Space Telescope. The IRS spectrum is combined with ground-based 3-5 micron spectroscopy to obtain a complete inventory of solid state material present along the line of sight toward the source. We model the object with a 2D axisymmetric (effectively 3D) Monte Carlo radiative transfer code. It is found that the model disk, assuming a standard flaring structure, is too warm to contain the very large observed column density of pure CO ice, but is possibly responsible for up to 50% of the water, CO2 and minor ice species. In particular the 6.85 micron band, tentatively due to NH4+, exhibits a prominent red wing, indicating a significant contribution from warm ice in the disk. It is argued that the pure CO ice is located in the dense core Oph-F in front of the source seen in the submillimeter imaging, with the CO gas in the core highly depleted. The model is used to predict which circumstances are most favourable for direct observations of ices in edge-on circumstellar disks. Ice bands will in general be deepest for inclinations similar to the disk opening angle, i.e. ~70 degrees. Due to the high optical depths of typical disk mid-planes, ice absorption bands will often probe warmer ice located in the upper layers of nearly edge-on disks. The ratios between different ice bands are found to vary by up to an order of magnitude depending on disk inclination due to radiative transfer effects caused by the 2D structure of the disk. Ratios between ice bands of the same species can therefore be used to constrain the location of the ices in a circumstellar disk. [Abstract abridged
Watanabe, Y; Dahlman, E [University of Minnesota, Minneapolis, MN (United States)
2014-06-01T23:59:59.000Z
Purpose: To evaluate the analytic formula of the cell death probability after a single fraction dose. Methods: Cancer cells divide endlessly, but radiation causes the cancer cells to die. Not all cells die right away after irradiation. Instead, they continue dividing for the next few cell cycles before they stop dividing and die. At the end of every cell cycle, the cell decides whether it undertakes the mitotic process with a certain probability, Pdiv, which is altered by the radiation. Previously, by using a simple analytic model of radiobiology experiments, we obtained a formula for Pdeath (= 1 − Pdiv). A question is whether the proposed probability can reproduce the well-known survival data of the LQ model. In this study, we evaluated the formula by performing a Monte Carlo simulation of the cell proliferation process. Starting with Ns seed cells, the cell proliferation process was simulated for N generations or until all cells died. We counted the number of living cells at the end. Assuming that the cell colony survived when more than Nc cells were still alive, the surviving fraction S was estimated. We compared the S vs. dose, or S-D, curve with the LQ model. Results: The results indicated that our formula does not reproduce the experimentally observed S-D curve without selecting appropriate α and α/β. With parameter optimization, there was fair agreement between the MC result and the LQ curve for doses lower than 20 Gy. However, the survival fraction from MC decreased much faster than the LQ data for doses higher than 20 Gy. Conclusion: This study showed that the previously derived probability of cell death per cell cycle is not sufficiently accurate to replicate common radiobiological experiments. The formula must be modified by considering its cell cycle dependence and some other unknown effects.
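The colony-survival simulation described in the Methods can be sketched as follows. The per-cycle death probability is taken here as a free parameter (in the study it is a dose-dependent formula), and the seed, generation, and colony-size constants are illustrative:

```python
import random

def colony_survives(p_death, n_seed=10, n_generations=8, n_colony=50, rng=None):
    """Follow one colony for n_generations; at the end of each cycle every
    live cell dies with probability p_death, otherwise divides into two.
    The colony 'survives' if more than n_colony cells remain alive."""
    rng = rng or random.Random()
    cells = n_seed
    for _ in range(n_generations):
        if cells == 0:
            break
        cells = sum(2 for _ in range(cells) if rng.random() >= p_death)
    return cells > n_colony

def surviving_fraction(p_death, trials=200, seed=7):
    """Monte Carlo estimate of the surviving fraction S over many colonies."""
    rng = random.Random(seed)
    return sum(colony_survives(p_death, rng=rng) for _ in range(trials)) / trials

# Higher per-cycle death probability -> lower surviving fraction, which is
# how dose enters the S-D curve (dose only through p_death in this sketch).
print(surviving_fraction(0.1), surviving_fraction(0.6))
```

Mapping dose to `p_death` through the analytic formula and plotting `surviving_fraction` against dose reproduces the structure of the S-D comparison with the LQ model.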
Interpretation of 3D void measurements with Tripoli4.6/JEFF3.1.1 Monte Carlo code
Blaise, P.; Colomba, A. [CEA, DEN, DER/SPRC/LEPh, F-13108 Saint Paul-Lez-Durance (France)
2012-07-01T23:59:59.000Z
The present work details the first analysis of the 3D void phase conducted during the EPICURE/UM17x17/7% mixed UOX/MOX configuration. This configuration is composed of a homogeneous central 17x17 MOX-7% assembly, surrounded by portions of 17x17 UO2 assemblies with guide tubes. The void bubble is modelled by a small waterproof parallelepiped box, 11 cm high and covering 5x5 fuel pins, placed in the centre of the MOX assembly. This bubble, initially placed at the core mid-plane, is then moved to different axial positions to study the evolution of the axial perturbation in the core. Then, to simulate the growth of this bubble and to understand the effects of increased void fraction along the fuel pin, 3 and 5 bubbles have been stacked axially from the core mid-plane. The C/E comparisons obtained with the Monte Carlo code Tripoli4 for both radial and axial fission rate distributions, and in particular the reproduction of the very steep flux gradients at the void/water interfaces as the bubble is displaced along the z-axis, are very satisfactory. This demonstrates both the capability of the code and its library to reproduce this kind of situation and the very good quality of the experimental results, confirming UM-17x17 as an excellent experimental benchmark for 3D code validation. This work has been performed within the frame of the V and V program for the future APOLLO3 deterministic code of CEA, starting in 2012, and its V and V benchmarking database. (authors)
SU-D-19A-04: Parameter Characterization of Electron Beam Monte Carlo Phase Space of TrueBeam Linacs
Rodrigues, A; Yin, F; Wu, Q [Duke University Medical Center, Durham, NC (United States); Medical Physics Graduate Program, Duke University Medical Center, Durham, NC (United States); Sawkey, D [Varian Medical Systems, Palo Alto, CA (United States)
2014-06-01T23:59:59.000Z
Purpose: For TrueBeam Monte Carlo simulations, Varian does not distribute linac head geometry and material compositions, instead providing a phase space file (PSF) for the users. The PSF has a finite number of particle histories and can have a very large file size, yet still contains inherent statistical noise. The purpose of this study is to characterize the electron beam PSF with parameters. Methods: The PSF is a snapshot of all particles' information at a given plane above the jaws, including type, energy, position, and direction. This study utilized a preliminary TrueBeam PSF, whose validation against measurement is presented in another study. To characterize the PSF, distributions of energy, position, and direction of all particles are analyzed as piece-wise parameterized functions of radius and polar angle. Subsequently, a pseudo PSF was generated based on this characterization. Validation was assessed by directly comparing the true and pseudo PSFs, and by using both PSFs in the downstream MC simulations (BEAMnrc/DOSXYZnrc) and comparing dose distributions for 3 applicators at 15 MeV. Statistical uncertainty of 4% was limited by the number of histories in the original PSF. Percent depth dose (PDD) and orthogonal (PRF) profiles at various depths were evaluated. Results: Preliminary results showed that this PSF parameterization was accurate, with no visible differences between original and pseudo PSFs except at the edge (6 cm off axis), which did not impact dose distributions in phantom. PDD differences were within 1 mm for R{sub 70}, R{sub 50}, R{sub 30}, and R{sub 10}, and PRF field size and penumbras were within 2 mm. Conclusion: A PSF can be successfully characterized by distributions for energy, position, and direction as parameterized functions of radius and polar angle; this facilitates generating sufficient particles at any statistical precision.
Analyses for all other electron energies are under way and results will be included in the presentation.
Su, L.; Du, X.; Liu, T.; Xu, X. G. [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States)
2013-07-01T23:59:59.000Z
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energy considered ranges from a few hundred keV to 30 MeV. Moller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the Continuously Slowing Down Approximation (CSDA). Photon transport was dealt with using the delta tracking method. Photoelectric effect, Compton scattering and pair production were modeled. Voxelised geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well tested MC codes. Using six GPU cards, 6x10{sup 6} histories of electrons were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a CPU with a single core used. (authors)
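The delta tracking (Woodcock tracking) method mentioned above avoids ray tracing through voxel boundaries by sampling flight distances from a majorant cross section and accepting each tentative collision with probability equal to the local-to-majorant cross-section ratio. A minimal 1D sketch with made-up cross sections (not ARCHER's implementation):

```python
import math
import random

def distance_to_first_real_collision(sigma_of_x, sigma_maj, x0=0.0, rng=None):
    """Woodcock delta tracking in 1D: sample tentative flight distances from
    the majorant cross section sigma_maj, then accept each tentative collision
    as real with probability sigma_of_x(x) / sigma_maj. Rejected ('virtual')
    collisions leave the flight unchanged, so no boundary crossing is needed."""
    rng = rng or random.Random()
    x = x0
    while True:
        x += -math.log(1.0 - rng.random()) / sigma_maj   # exponential free path
        if rng.random() < sigma_of_x(x) / sigma_maj:
            return x                                      # real collision

# Heterogeneous slab: water-like for x < 2, aluminum-like beyond (made-up values, cm^-1).
sigma = lambda x: 0.2 if x < 2.0 else 0.5
rng = random.Random(42)
samples = [distance_to_first_real_collision(sigma, sigma_maj=0.5, rng=rng)
           for _ in range(20000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))
```

Because the rejection step needs only the local cross section at a point, the inner loop is branch-light and maps well onto GPU threads, which is one reason delta tracking is popular in GPU transport codes.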
Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)
2012-07-01T23:59:59.000Z
The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to the GPU is usually very straightforward due to the 'embarrassingly parallel' nature of MC codes. However, the situation is different for eigenvalue calculations, which proceed on a generation-by-generation basis so that thread coordination must be handled explicitly. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of ~2 on a dual GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)
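The generation-by-generation structure that complicates GPU porting can be seen in a toy CPU sketch of the eigenvalue loop: each cycle transports a fixed bank of source neutrons, tallies fission births, and renormalizes the population for the next cycle. All physics constants here are invented for illustration:

```python
import random

random.seed(2)

NU = 2.5          # neutrons per fission (invented)
P_FISSION = 0.35  # per-history fission probability (invented)
N = 10_000        # source neutrons per cycle

def run_cycle(n_source):
    """Transport one generation; return the number of fission neutrons banked."""
    births = 0
    for _ in range(n_source):
        if random.random() < P_FISSION:
            # Sample nu = 2 or 3 so that the mean is NU = 2.5.
            births += int(NU) + (random.random() < NU - int(NU))
    return births

k_estimates = []
for cycle in range(30):
    k_estimates.append(run_cycle(N) / N)   # cycle k = births / starts
    # Rendez-vous point: here a real code gathers the fission bank and
    # renormalizes it to N sites for the next cycle (population control).

k_eff = sum(k_estimates[10:]) / 20         # average over active cycles only
print(round(k_eff, 3))                     # near NU * P_FISSION = 0.875
```

The per-cycle synchronization (the comment above) is exactly the point where threads must coordinate on a GPU.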
Talamo, A.; Gohar, Y. [Nuclear Engineering Division]
2011-05-12T23:59:59.000Z
This study investigates the performance of the YALINA Booster subcritical assembly, located in Belarus, during operation with high (90%), medium (36%), and low (21%) enriched uranium fuels in the assembly's fast zone. The YALINA Booster is a zero-power, subcritical assembly driven by a conventional neutron generator. It was constructed for the purpose of investigating the static and dynamic neutronics properties of accelerator driven subcritical systems, and to serve as a fast neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor-actinides. The first part of this study analyzes the assembly's performance with several fuel types. The MCNPX and MONK Monte Carlo codes were used to determine effective and source neutron multiplication factors, effective delayed neutron fraction, prompt neutron lifetime, neutron flux profiles and spectra, and neutron reaction rates produced from the use of three neutron sources: californium, deuterium-deuterium, and deuterium-tritium. In the latter two cases, the external neutron source operates in pulsed mode. The results discussed in the first part of this report show that the use of low enriched fuel in the fast zone of the assembly diminishes neutron multiplication. Therefore, the discussion in the second part of the report focuses on finding alternative fuel loading configurations that enhance neutron multiplication while using low enriched uranium fuel. It was found that arranging the interface absorber between the fast and the thermal zones in a circular rather than a square array is an effective method of operating the YALINA Booster subcritical assembly without downgrading neutron multiplication relative to the original value obtained with the use of the high enriched uranium fuels in the fast zone.
Amoush, Ahmad, E-mail: amousha@ccf.org [Department of Radiation Oncology, University of Cincinnati College of Medicine, Cincinnati, OH (United States); Luckstead, Marcus; Lamba, Michael; Elson, Howard; Kassing, William [Department of Radiation Oncology, University of Cincinnati College of Medicine, Cincinnati, OH (United States)
2013-07-01T23:59:59.000Z
This study investigated high-dose-rate Iridium-192 brachytherapy of a catheter-based applicator, including near-source dosimetry, from 0.5 mm to 1 cm along the transverse axis. Radiochromic film and Monte Carlo (MC) simulation were used to generate absolute dose for the catheter-based applicator. Results from radiochromic film and MC simulation were compared directly to the treatment planning system (TPS) based on the American Association of Physicists in Medicine Updated Task Group 43 (TG-43U1) dose calculation formalism. The difference between the dose measured using radiochromic film along the transverse plane at 0.5 mm from the surface and the dose predicted by the TPS was 24%±13%. The difference between the MC simulation along the transverse plane at 0.5 mm from the surface and the dose predicted by the TPS was 22.1%±3%. For distances from 1.5 mm to 1 cm from the surface, radiochromic film and MC simulation agreed with the TPS within an uncertainty of 3%. The TPS under-predicts the dose at the surface of the applicator, i.e., 0.5 mm from the catheter surface, compared to the measured and MC-predicted dose. MC simulation results demonstrated that 15% of this error is due to neglecting the beta particles and discrete electrons emanating from the source, which are not considered by the TPS, and 7% of the difference was due to photons alone, potentially arising from differences in MC dose modeling, photon spectrum, scoring techniques, and the effect of the presence of the catheter and the air gap. Beyond 1 mm from the surface, the TPS dose algorithm agrees with the experimental and MC data within 3%.
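The TG-43U1 formalism the TPS uses reduces, in the point-source approximation, to an inverse-square law modulated by a radial dose function and an anisotropy factor. A sketch with placeholder values (the dose-rate constant, g(r), and phi_an(r) below are illustrative stand-ins, not consensus {sup 192}Ir data):

```python
# Point-source TG-43U1 sketch: D(r) = S_K * Lambda * (r0 / r)**2 * g(r) * phi_an(r).
def dose_rate(r_cm, sk_U, dose_rate_const=1.11,
              g=lambda r: 1.0, phi_an=lambda r: 1.0, r0=1.0):
    """Dose rate (cGy/h) at r_cm from air-kerma strength sk_U (U) under the
    point-source approximation; constants here are placeholders."""
    return sk_U * dose_rate_const * (r0 / r_cm) ** 2 * g(r_cm) * phi_an(r_cm)

# With flat g and phi_an, halving the distance quadruples the dose rate.
print(dose_rate(0.5, 10.0) / dose_rate(1.0, 10.0))  # → 4.0
```

The formalism's reliance on photon transport alone is why it misses the beta and discrete-electron dose so close to the source.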
Andrea Zen; Emanuele Coccia; Ye Luo; Sandro Sorella; Leonardo Guidoni
2014-06-17T23:59:59.000Z
Diradical molecules are essential species involved in many organic and inorganic chemical reactions. The computational study of their electronic structure is often challenging, because a reliable description of the correlation, and in particular of the static one, requires multi-reference techniques. The Jastrow correlated Antisymmetrized Geminal Power (JAGP) is a compact and efficient wave function ansatz, based on the valence-bond representation, which can be used within Quantum Monte Carlo (QMC) approaches. The AGP part can be rewritten in terms of molecular orbitals, yielding a multi-determinant expansion with zero seniority number. In the present work we demonstrate the capability of the JAGP ansatz to correctly describe the electronic structure of two diradical prototypes: orthogonally twisted ethylene, C2H4, and methylene, CH2, representing a homosymmetric and a heterosymmetric system, respectively. On the other hand, we show that the simpler ansatz of a Jastrow correlated Single Determinant (JSD) wave function is unable to provide an accurate description of the electronic structure in these diradical molecules, both at the variational level and, more remarkably, in the fixed-node projection schemes, showing that a poor description of the static correlation yields an inaccurate nodal surface. The suitability of JAGP for correctly describing diradicals at a computational cost comparable with that of a JSD calculation, in combination with the favorable scalability of QMC algorithms with system size, opens new perspectives in the ab initio study of large diradical systems, such as the transition states in cycloaddition reactions and the thermal isomerization of biological chromophores.
Q. Chang; H. M. Cuppen; E. Herbst
2007-05-24T23:59:59.000Z
AIM: We have recently developed a microscopic Monte Carlo approach to study surface chemistry on interstellar grains and the morphology of ice mantles. The method is designed to eliminate the problems inherent in the rate-equation formalism to surface chemistry. Here we report the first use of this method in a chemical model of cold interstellar cloud cores that includes both gas-phase and surface chemistry. The surface chemical network consists of a small number of diffusive reactions that can produce molecular oxygen, water, carbon dioxide, formaldehyde, methanol and assorted radicals. METHOD: The simulation is started by running a gas-phase model including accretion onto grains but no surface chemistry or evaporation. The starting surface consists of either flat or rough olivine. We introduce the surface chemistry of the three species H, O and CO in an iterative manner using our stochastic technique. Under the conditions of the simulation, only atomic hydrogen can evaporate to a significant extent. Although it has little effect on other gas-phase species, the evaporation of atomic hydrogen changes its gas-phase abundance, which in turn changes the flux of atomic hydrogen onto grains. The effect on the surface chemistry is treated until convergence occurs. We neglect all non-thermal desorptive processes. RESULTS: We determine the mantle abundances of assorted molecules as a function of time through 2x10^5 yr. Our method also allows determination of the abundance of each molecule in specific monolayers. The mantle results can be compared with observations of water, carbon dioxide, carbon monoxide, and methanol ices in the sources W33A and Elias 16. Other than a slight underproduction of mantle CO, our results are in very good agreement with observations.
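The stochastic surface chemistry described above can be illustrated with a Gillespie-style continuous-time Monte Carlo loop in which H accretion, evaporation, and the H + H -> H2 reaction compete on a grain. The rate constants are invented illustrative values, not the paper's olivine parameters:

```python
import math
import random

random.seed(3)

K_ACC, K_EVAP, K_REACT = 1.0, 0.3, 0.05   # invented per-unit-time rate constants

def simulate(t_end=200.0):
    """Continuous-time (Gillespie) loop: pick the next event by its rate."""
    n_h, n_h2, t = 0, 0, 0.0
    while t < t_end:
        rates = [K_ACC,                        # H lands on the grain
                 K_EVAP * n_h,                 # H evaporates
                 K_REACT * n_h * (n_h - 1)]    # H + H -> H2 on the surface
        total = sum(rates)
        t += -math.log(1.0 - random.random()) / total   # exponential waiting time
        pick = random.random() * total
        if pick < rates[0]:
            n_h += 1
        elif pick < rates[0] + rates[1]:
            n_h -= 1
        else:
            n_h -= 2
            n_h2 += 1
    return n_h, n_h2

n_h_left, n_h2_made = simulate()
print(n_h_left, n_h2_made)
```

Because discrete populations and event-by-event timing are retained, this kind of loop avoids the small-number pathologies of the rate-equation treatment the abstract criticizes.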
Wang, L; Fourkal, E; Hayes, S; Jin, L; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)
2014-06-01T23:59:59.000Z
Purpose: To study the dosimetric differences resulting from using the pencil beam (ray-tracing, RT) algorithm instead of Monte Carlo (MC) methods for tumors adjacent to the skull. Methods: We retrospectively calculated the dosimetric differences between the RT and MC algorithms for brain tumors treated with CyberKnife located adjacent to the skull for 18 patients (total of 27 tumors). The median tumor size was 0.53 cc (range, 0.018 cc to 26.2 cc). The absolute mean distance from the tumor to the skull was 2.11 mm (range, -17.0 mm to 9.2 mm). The dosimetric variables examined include the mean, maximum, and minimum doses to the target, the target coverage (TC), and the conformality index. The MC calculation used the same MUs as the RT dose calculation without further normalization, with 1% statistical uncertainty. The differences were analyzed by tumor size and distance from the skull. Results: The TC was generally reduced with the MC calculation (24 out of 27 cases). The average difference in TC between RT and MC was 3.3% (range, 0.0% to 23.5%). When the TC was deemed unacceptable, the plans were re-normalized in order to increase the TC to 99%. This resulted in a 6.9% maximum change in the prescription isodose line. The maximum changes in the mean, maximum, and minimum doses were 5.4%, 7.7%, and 8.4%, respectively, before re-normalization. When the TC was analyzed with regard to target size, the worst coverage occurred with the smallest targets (0.018 cc). When the TC was analyzed with regard to the distance to the skull, there was no correlation between proximity to the skull and the TC difference between the RT and MC plans. Conclusions: For smaller targets (<4.0 cc), MC should be used to re-evaluate the dose coverage after RT is used for the initial dose calculation in order to ensure target coverage.
Muir, B. R., E-mail: bmuir@physics.carleton.ca; Rogers, D. W. O., E-mail: drogers@physics.carleton.ca [Physics Department, Carleton Laboratory for Radiotherapy Physics, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada)]
2013-12-15T23:59:59.000Z
Purpose: To investigate recommendations for reference dosimetry of electron beams and gradient effects for the NE2571 chamber and to provide beam quality conversion factors using Monte Carlo simulations of the PTW Roos and NE2571 ion chambers. Methods: The EGSnrc code system is used to calculate the absorbed dose-to-water and the dose to the gas in fully modeled ion chambers as a function of depth in water. Electron beams are modeled using realistic accelerator simulations as well as beams modeled as collimated point sources from realistic electron beam spectra or monoenergetic electrons. Beam quality conversion factors are calculated with ratios of the doses to water and to the air in the ion chamber in electron beams and a cobalt-60 reference field. The overall ion chamber correction factor is studied using calculations of water-to-air stopping power ratios. Results: The use of an effective point of measurement shift of 1.55 mm from the front face of the PTW Roos chamber, which places the point of measurement inside the chamber cavity, minimizes the difference between R{sub 50}, the beam quality specifier, calculated from chamber simulations and that obtained using depth-dose calculations in water. A similar shift minimizes the variation of the overall ion chamber correction factor with depth to the practical range and reduces the root-mean-square deviation of a fit to calculated beam quality conversion factors at the reference depth as a function of R{sub 50}. Similarly, an upstream shift of 0.34 r{sub cav} allows a more accurate determination of R{sub 50} from NE2571 chamber calculations and reduces the variation of the overall ion chamber correction factor with depth. The determination of the gradient correction using a shift of 0.22 r{sub cav} optimizes the root-mean-square deviation of a fit to calculated beam quality conversion factors if all beams investigated are considered.
However, if only clinical beams are considered, a good fit to results for beam quality conversion factors is obtained without explicitly correcting for gradient effects. The inadequacy of R{sub 50} to uniquely specify beam quality for the accurate selection of k{sub Q} factors is discussed. Systematic uncertainties in beam quality conversion factors are analyzed for the NE2571 chamber and amount to between 0.4% and 1.2% depending on assumptions used. Conclusions: The calculated beam quality conversion factors for the PTW Roos chamber obtained here are in good agreement with literature data. These results characterize the use of an NE2571 ion chamber for reference dosimetry of electron beams even in low-energy beams.
Sutherland, J. G. H.; Miksys, N.; Thomson, R. M., E-mail: rthomson@physics.carleton.ca [Carleton Laboratory for Radiotherapy Physics, Department of Physics, Carleton University, Ottawa, Ontario K1S 5B6 (Canada)]; Furutani, K. M. [Department of Radiation Oncology, Mayo Clinic College of Medicine, Rochester, Minnesota 55905 (United States)]
2014-01-15T23:59:59.000Z
Purpose: To investigate methods of generating accurate patient-specific computational phantoms for the Monte Carlo calculation of lung brachytherapy patient dose distributions. Methods: Four metallic artifact mitigation methods are applied to six lung brachytherapy patient computed tomography (CT) images: simple threshold replacement (STR) identifies high CT values in the vicinity of the seeds and replaces them with estimated true values; fan beam virtual sinogram replaces artifact-affected values in a virtual sinogram and performs a filtered back-projection to generate a corrected image; 3D median filter replaces voxel values that differ from the median value in a region of interest surrounding the voxel and then applies a second filter to reduce noise; and a combination of fan beam virtual sinogram and STR. Computational phantoms are generated from artifact-corrected and uncorrected images using several tissue assignment schemes: both lung-contour constrained and unconstrained global schemes are considered. Voxel mass densities are assigned based on voxel CT number or using the nominal tissue mass densities. Dose distributions are calculated using the EGSnrc user-code BrachyDose for {sup 125}I, {sup 103}Pd, and {sup 131}Cs seeds and are compared directly as well as through dose volume histograms and dose metrics for target volumes surrounding surgical sutures. Results: Metallic artifact mitigation techniques vary in ability to reduce artifacts while preserving tissue detail. Notably, images corrected with the fan beam virtual sinogram have reduced artifacts but residual artifacts near sources remain requiring additional use of STR; the 3D median filter removes artifacts but simultaneously removes detail in lung and bone. Doses vary considerably between computational phantoms with the largest differences arising from artifact-affected voxels assigned to bone in the vicinity of the seeds.
Consequently, when metallic artifact reduction and constrained tissue assignment within lung contours are employed in generated phantoms, this erroneous assignment is reduced, generally resulting in higher doses. Lung-constrained tissue assignment also results in increased doses in regions of interest due to a reduction in the erroneous assignment of adipose to voxels within lung contours. Differences in dose metrics calculated for different computational phantoms are sensitive to radionuclide photon spectra with the largest differences for {sup 103}Pd seeds and smallest but still considerable differences for {sup 131}Cs seeds. Conclusions: Despite producing differences in CT images, dose metrics calculated using the STR, fan beam + STR, and 3D median filter techniques produce similar dose metrics. Results suggest that the accuracy of dose distributions for permanent implant lung brachytherapy is improved by applying lung-constrained tissue assignment schemes to metallic artifact corrected images.
Harris, S; Dave Dunn, D
2009-03-01T23:59:59.000Z
The sensitivity of two specific types of radionuclide detectors for conducting an on-board search in the maritime environment was evaluated using Monte Carlo simulation implemented in AVERT{reg_sign}. AVERT{reg_sign}, short for the Automated Vulnerability Evaluation for Risk of Terrorism, is personal-computer-based vulnerability assessment software developed by the ARES Corporation. The detectors, a RadPack and a Personal Radiation Detector (PRD), were chosen from the class of Human Portable Radiation Detection Systems (HPRDS), which serve multiple purposes. In the maritime environment, there is a need to detect, localize, characterize, and identify radiological/nuclear (RN) material or weapons. The RadPack is a commercially available broad-area search device used for gamma and neutron detection. The PRD is chiefly used as a personal radiation protection device; it is also used to detect contraband radionuclides and to localize radionuclide sources. Neither device has the capacity to characterize or identify radionuclides. The principal aim of this study was to investigate the sensitivity of both the RadPack and the PRD under controlled conditions in a simulated maritime environment for detecting hidden RN contraband. The detection distance varies with the source strength and the shielding present. The characterization parameters of the source are not indicated in this report, so the results summarized are relative. The Monte Carlo simulation results indicate the probability of detection of the RN source at given distances from the detector, as a function of transverse speed and instrument sensitivity for the specified RN source.
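The qualitative behaviour reported, detection probability falling with standoff distance, can be sketched with a toy Monte Carlo in which expected counts follow an inverse-square law and a trial registers a detection when a Poisson count exceeds an alarm threshold. Source strength, efficiency, and threshold are invented numbers, since the report withholds the actual source parameters:

```python
import math
import random

random.seed(4)

def poisson(mu):
    """Knuth's method; adequate for the small means used here."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def detection_probability(r_m, emissions=5e4, efficiency=1e-3,
                          threshold=5, trials=2000):
    """Fraction of trials whose Poisson count reaches the alarm threshold."""
    mu = emissions * efficiency / (4.0 * math.pi * r_m ** 2)  # inverse-square
    return sum(poisson(mu) >= threshold for _ in range(trials)) / trials

p_near = detection_probability(1.0)
p_far = detection_probability(10.0)
print(p_near, p_far)   # detection probability drops sharply with distance
```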
B. M. Abramov; P. N. Alexeev; Yu. A. Borodin; S. A. Bulychjov; I. A. Dukhovskoy; A. P. Krutenkova; V. V. Kulikov; M. A. Martemianov; M. A. Matsyuk; E. N. Turdakina; A. I. Khanov; S. G. Mashnik
2015-02-05T23:59:59.000Z
Momentum spectra of hydrogen isotopes have been measured at 3.5 deg from {sup 12}C fragmentation on a Be target. The momentum spectra cover both the region of the fragmentation maximum and the cumulative region. Differential cross sections span five orders of magnitude. The data are compared to the predictions of four Monte Carlo codes: QMD, LAQGSM, BC, and INCL++. There are large differences between the data and the predictions of some models in the high momentum region. The INCL++ code gives the best, almost perfect, description of the data.
Fang, Yuan, E-mail: yuan.fang@fda.hhs.gov [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993-0002 and Department of Electrical and Computer Engineering, The University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada)]; Karim, Karim S. [Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada)]; Badano, Aldo [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993-0002 (United States)]
2014-01-15T23:59:59.000Z
Purpose: The authors describe the modifications to a previously developed Monte Carlo model of a semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package, ARTEMIS, developed by Fang et al. [“Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se,” Med. Phys. 39(1), 308–319 (2012)]. The burst model generates a cloud of electron-hole pairs based on electron velocity, energy deposition, and material parameters, distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). A simple first-hit (FH) and a more detailed but computationally expensive nearest-neighbor (NN) recombination algorithm are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with the Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for the generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/μm, while the recombination fraction decreased for the SSA compared to the SUV model at 30 V/μm. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated against the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature.
For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation of many electron-hole pairs. The SSA model is more sensitive to the effect of electric field compared to the SUV model and that the NN and FH recombination algorithms did not significantly affect simulation results.
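The two burst geometries compared above differ only in where the electron-hole pairs are placed. A sampling sketch of both, with an arbitrary illustrative thermalization radius:

```python
import math
import random

random.seed(5)

def sample_suv(radius):
    """Uniform point inside a sphere: isotropic direction, r ~ cube-root law."""
    cos_t = 1.0 - 2.0 * random.random()
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = 2.0 * math.pi * random.random()
    r = radius * random.random() ** (1.0 / 3.0)
    return (r * sin_t * math.cos(phi), r * sin_t * math.sin(phi), r * cos_t)

def sample_ssa(radius):
    """Uniform point on the sphere's surface (direction only, fixed radius)."""
    x, y, z = sample_suv(1.0)
    n = math.sqrt(x * x + y * y + z * z)
    return (radius * x / n, radius * y / n, radius * z / n)

r_suv = [math.dist(sample_suv(10.0), (0.0, 0.0, 0.0)) for _ in range(50_000)]
mean_r = sum(r_suv) / len(r_suv)
print(round(mean_r, 2))   # mean radius of a uniform sphere is 3R/4 = 7.5
```

The SSA model places every pair at the full burst radius, which is why its sensitivity to the electric field differs from the volume-filled SUV model.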
Indium-Gallium Segregation in CuIn$_{x}$Ga$_{1-x}$Se$_2$: An ab initio based Monte Carlo Study
Ludwig, Christian D R; Felser, Claudia; Schilling, Tanja; Windeln, Johannes; Kratzer, Peter
2010-01-01T23:59:59.000Z
Thin-film solar cells with CuIn$_x$Ga$_{1-x}$Se$_2$ (CIGS) absorbers are still far below their efficiency limit, although laboratory cells already reach 19.9%. One important aspect is the homogeneity of the alloy. Large-scale simulations combining Monte Carlo and density functional calculations show that two phases coexist in thermal equilibrium below room temperature. Only at higher temperatures does CIGS become an increasingly homogeneous alloy. A larger degree of inhomogeneity for Ga-rich CIGS persists over a wide temperature range, which may contribute to the low observed efficiency of Ga-rich CIGS solar cells.
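The combined Monte Carlo / density-functional approach can be caricatured with a Metropolis lattice model: like neighbours attract, so Kawasaki-style swaps phase-separate at low temperature and mix at high temperature. The 1D lattice and the coupling J are illustrative stand-ins for the ab initio energetics:

```python
import math
import random

random.seed(6)

J = 0.05                      # eV; invented like-neighbour attraction
N = 100                       # lattice sites (toy 1D ring)
KT_LOW, KT_HIGH = 0.01, 1.0   # eV

def like_fraction(lat):
    """Fraction of nearest-neighbour pairs occupied by the same species."""
    return sum(lat[k] == lat[(k + 1) % N] for k in range(N)) / N

def energy(lat):
    return -J * like_fraction(lat) * N    # like pairs lower the energy

def sweep(lat, kT):
    """One Metropolis sweep of Kawasaki-style swaps (composition conserved)."""
    for _ in range(N):
        i, j = random.randrange(N), random.randrange(N)
        trial = lat[:]
        trial[i], trial[j] = trial[j], trial[i]
        dE = energy(trial) - energy(lat)
        if dE <= 0 or random.random() < math.exp(-dE / kT):
            lat[i], lat[j] = lat[j], lat[i]

lat_cold = ['In', 'Ga'] * (N // 2)
lat_hot = ['In', 'Ga'] * (N // 2)
for _ in range(150):
    sweep(lat_cold, KT_LOW)   # low T: like domains coarsen (phase coexistence)
    sweep(lat_hot, KT_HIGH)   # high T: thermal mixing toward a homogeneous alloy
print(like_fraction(lat_cold), like_fraction(lat_hot))
```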
Simulated Annealing Using Hybrid Monte Carlo (Journal of Statistical Physics, Vol. 89, Nos. 5/6, 1997)
Toral, Raúl
[...] of the system. It is known that if a system is heated to a very high temperature T and then slowly cooled [...] global actualizations via the hybrid Monte Carlo algorithm in their generalized version for the proposal [...]
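The heat-then-cool-slowly protocol described is ordinary simulated annealing; a minimal Metropolis version on an arbitrary multi-minimum test function (not one from the paper, and without the hybrid Monte Carlo global updates) looks like:

```python
import math
import random

random.seed(7)

def f(x):
    """Arbitrary rugged objective; global minimum near x ≈ -0.5."""
    return x * x + 10.0 * math.sin(3.0 * x)

x, temp = 8.0, 5.0                            # start hot, far from the minima
while temp > 1e-3:
    x_new = x + random.gauss(0.0, 0.5)        # local proposal
    dE = f(x_new) - f(x)
    if dE <= 0 or random.random() < math.exp(-dE / temp):
        x = x_new                             # Metropolis acceptance
    temp *= 0.999                             # slow geometric cooling
print(round(x, 2), round(f(x), 2))
```

The hybrid Monte Carlo variant the paper studies replaces the local Gaussian proposal with molecular-dynamics-driven global updates.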
Hu, Z. M.; Xie, X. F.; Chen, Z. J.; Peng, X. Y.; Du, T. F.; Cui, Z. Q.; Ge, L. J.; Li, T.; Yuan, X.; Zhang, X.; Li, X. Q.; Zhang, G. H.; Chen, J. X.; Fan, T. S., E-mail: tsfan@pku.edu.cn [State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871 (China); Hu, L. Q.; Zhong, G. Q.; Lin, S. Y.; Wan, B. N. [Institute of Plasma Physics, CAS, Hefei 230031 (China); Gorini, G. [Dipartimento di Fisica, Università di Milano-Bicocca, Milano 20126 (Italy); Istituto di Fisica del Plasma “P. Caldirola,” Milano 20126 (Italy)
2014-11-15T23:59:59.000Z
To assess the neutron energy spectra and the neutron dose at different positions around the Experimental Advanced Superconducting Tokamak (EAST) device, a Bonner Sphere Spectrometer (BSS) was developed at Peking University, comprising nine polyethylene spheres in total and an SP9 {sup 3}He counter. The response functions of the BSS were calculated with the Monte Carlo codes MCNP and GEANT4 using dedicated models, and good agreement was found between the two codes. A feasibility study was carried out with a simulated neutron energy spectrum around EAST: the simulated “experimental” result for each sphere was obtained by calculating the response with MCNP, using the simulated neutron energy spectrum as the input spectrum. By deconvolution of the “experimental” measurement, the neutron energy spectrum was retrieved and compared with the preset one. Good consistency was found, which offers confidence for the application of the BSS system for dose and spectrum measurements around a fusion device.
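The deconvolution step, recovering the spectrum from the sphere counts through the response functions, can be illustrated with a toy few-group version solved by least squares. The response matrix below is invented, not the MCNP/GEANT4 responses:

```python
import numpy as np

# Toy three-group unfolding: counts = R @ flux; retrieve flux by least squares.
R = np.array([[0.80, 0.30, 0.05],    # rows: Bonner spheres, cols: energy groups
              [0.40, 0.60, 0.20],
              [0.10, 0.35, 0.70],
              [0.05, 0.20, 0.90]])
true_flux = np.array([2.0, 1.0, 0.5])        # preset spectrum (arbitrary units)

counts = R @ true_flux                       # simulated "experimental" counts
flux, *_ = np.linalg.lstsq(R, counts, rcond=None)
print(np.round(flux, 6))                     # recovers the preset spectrum
```

Real unfolding is ill-posed once measurement noise enters, so practical codes add regularization or iterative schemes rather than a bare least-squares solve.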
Qin, Jianguo; Liu, Rong; Zhu, Tonghua; Zhang, Xinwei; Ye, Bangjiao
2015-01-01T23:59:59.000Z
To overcome the problem of inefficient computing time and unreliable results in MCNP5 calculations, a two-step method is adopted to calculate the energy deposition of prompt gamma-rays in detectors for depleted uranium spherical shells under D-T neutron irradiation. In the first step, the gamma-ray spectrum for energies below 7 MeV is calculated with the MCNP5 code; in the second, the electron recoil spectrum in a BC501A liquid scintillator detector is simulated with the EGSnrc Monte Carlo code, taking the gamma-ray spectrum from the first step as input. Comparison of the calculated results with experimental ones shows that the simulations agree well with experiment in the energy region 0.4-3 MeV for the prompt gamma-ray spectrum and below 4 MeVee for the electron recoil spectrum. The reliability of the two-step method in this work is thus validated.
Raychaudhuri, Subhadip
2015-01-01T23:59:59.000Z
Death-ligand-mediated apoptotic activation is a mode of programmed cell death that is widely used in cellular and physiological situations. Interest in studying death ligand induced apoptosis has increased due to the promising role of recombinant soluble forms of death ligands (mainly recombinant TRAIL) in anti-cancer therapy. A clear elucidation of how death ligands activate the type 1 and type 2 apoptotic pathways in healthy and cancer cells may help develop better chemotherapeutic strategies. In this work, we use kinetic Monte Carlo simulations to address the problem of the type 1/type 2 choice in death ligand mediated apoptosis of cancer cells. Our study provides insights into the activation of the membrane-proximal death module that results from complex interplay between death and decoy receptors. The relative abundance of death and decoy receptors was shown to be a key parameter for activation of the initiator caspases in the membrane module. Increased concentration of death ligands frequently increased the type 1...
2015-01-01T23:59:59.000Z
We present a sophisticated likelihood reconstruction algorithm for shower-image analysis of imaging Cherenkov telescopes. The reconstruction algorithm is based on the comparison of the camera pixel amplitudes with the predictions from a Monte Carlo based model. Shower parameters are determined by maximisation of a likelihood function, performed using a numerical non-linear optimisation technique. A related reconstruction technique has already been developed by the CAT and H.E.S.S. experiments, and provides more precise direction and energy reconstruction of the photon-induced shower than second-moment analysis of the camera image. Examples are shown of the performance of the analysis on simulated gamma-ray data from the VERITAS array.
Dokania, N; Mathimalar, S; Garai, A; Nanal, V; Pillay, R G; Bhushan, K G
2015-01-01T23:59:59.000Z
The neutron flux at low energy ($E_n\\leq15$ MeV) resulting from the radioactivity of the rock in the underground cavern of the India-based Neutrino Observatory is estimated using Geant4-based Monte Carlo simulations. The neutron production rate due to the spontaneous fission of U, Th and ($\\alpha, n$) interactions in the rock is determined employing the actual rock composition. It has been demonstrated that the total flux is equivalent to a finite size cylindrical rock ($D=L=140$ cm) element. The energy integrated neutron flux thus obtained at the center of the underground tunnel is 2.76 (0.47) $\\times 10^{-6}\\rm~n ~cm^{-2}~s^{-1}$. The estimated neutron flux is of the same order ($\\sim10^{-6}\\rm~n ~cm^{-2}~s^{-1}$)~as measured in other underground laboratories.
N. Dokania; V. Singh; S. Mathimalar; A. Garai; V. Nanal; R. G. Pillay; K. G. Bhushan
2015-09-23T23:59:59.000Z
The neutron flux at low energy ($E_n\\leq15$ MeV) resulting from the radioactivity of the rock in the underground cavern of the India-based Neutrino Observatory is estimated using Geant4-based Monte Carlo simulations. The neutron production rate due to the spontaneous fission of U, Th and ($\\alpha, n$) interactions in the rock is determined employing the actual rock composition. It has been demonstrated that the total flux is equivalent to a finite size cylindrical rock ($D=L=140$ cm) element. The energy integrated neutron flux thus obtained at the center of the underground tunnel is 2.76 (0.47) $\\times 10^{-6}\\rm~n ~cm^{-2}~s^{-1}$. The estimated neutron flux is of the same order ($\\sim10^{-6}\\rm~n ~cm^{-2}~s^{-1}$)~as measured in other underground laboratories.
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F. [Departments of Biomedical Physics and Radiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States); DeMarco, John J. [Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)
2014-11-01T23:59:59.000Z
Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions that yield a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations needed to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built on the MCNPX platform, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scans, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing ranged from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms, including a rectangular homogeneous water-equivalent phantom, an elliptical phantom with three sections (each section homogeneous, but of a different material), and a heterogeneous, complex-geometry anthropomorphic phantom. Each phantom elicits varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector row CT (Sensation 64) scanner under both FTC and TCM conditions. Dose measurements were made at various surface and depth positions within each phantom. 
Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain dose estimates. This allowed direct comparisons between measured and simulated dose values under each condition of phantom, location, and scan to be made. Results: For FTC scans, the percent root mean square (RMS) difference between measurements and simulations was within 5% across all phantoms. For TCM scans, the percent RMS of the difference between measured and simulated values when using detailed TCM and z-axis-only TCM simulations was 4.5% and 13.2%, respectively. For the anthropomorphic phantom, the difference between TCM measurements and detailed TCM and z-axis-only TCM simulations was 1.2% and 8.9%, respectively. For FTC measurements and simulations, the percent RMS of the difference was 5.0%. Conclusions: This work demonstrated that the Monte Carlo model developed provided good agreement between measured and simulated values under both simple and complex geometries including an anthropomorphic phantom. This work also showed the increased dose differences for z-axis-only TCM simulations, where considerable modulation in the x–y plane was present due to the shape of the rectangular water phantom. Results from this investigation highlight details that need to be included in Monte Carlo simulations of TCM CT scans in order to yield accurate, clinically viable assessments of patient dosimetry.
A. K. Fomin; A. P. Serebrov
2010-05-17T23:59:59.000Z
We performed a detailed analysis and a Monte Carlo simulation of the neutron lifetime experiment [S. Arzumanov et al., Phys. Lett. B 483 (2000) 15] because of the strong disagreement of 5.6 standard deviations between the results of that experiment and ours [A. Serebrov et al., Phys. Lett. B 605 (2005) 72]. We found several effects which were not taken into account in the experiment of Arzumanov et al. The possible correction is -5.5 s, with an uncertainty of 2.4 s that stems from limited knowledge of the initial data. We assume that after this correction is taken into account, the neutron lifetime result of 885.4 ± 0.9(stat) ± 0.4(syst) s reported in that work could be corrected to 879.9 ± 0.9(stat) ± 2.4(syst) s.
Foyevtsova, Kateryna [ORNL] [ORNL; Krogel, Jaron T [ORNL] [ORNL; Kim, Jeongnim [ORNL] [ORNL; Kent, Paul R [ORNL] [ORNL; Dagotto, Elbio R [ORNL] [ORNL; Reboredo, Fernando A [ORNL] [ORNL
2014-01-01T23:59:59.000Z
In view of the continuing theoretical efforts aimed at an accurate microscopic description of strongly correlated transition metal oxides and related materials, we show that with continuum quantum Monte Carlo (QMC) calculations it is possible to obtain the value of the spin superexchange coupling constant of a copper oxide in quantitatively excellent agreement with experiment. The variational nature of the QMC total energy allows us to identify the best trial wave function out of the available pool of wave functions, which makes the approach essentially free from adjustable parameters and thus truly ab initio. The present results on magnetic interactions suggest that QMC is capable of accurately describing ground state properties of strongly correlated materials.
Wang Jianhua; Zhang Hualin [Shanghai Institute of Applied Physics, CAS, Shanghai 201800 (China); Department of Radiation Medicine, Ohio State University, Columbus, Ohio 43210 (United States)
2008-04-15T23:59:59.000Z
A recently developed alternative brachytherapy seed, the Cs-1 Rev2 cesium-131, has begun to be used in clinical practice. The dosimetric characteristics of this source in various media, particularly in human tissues, have not been fully evaluated. The aim of this study was to calculate the dosimetric parameters for the Cs-1 Rev2 cesium-131 seed following the recommendations of the AAPM TG-43U1 report [Rivard et al., Med. Phys. 31, 633-674 (2004)] for new brachytherapy sources. Dose rate constants, radial dose functions, and anisotropy functions of the source in water, Virtual Water, and relevant human soft tissues were calculated using MCNP5 Monte Carlo simulations following the TG-43U1 formalism. The results yielded dose rate constants of 1.048, 1.024, 1.041, and 1.044 cGy h{sup -1} U{sup -1} in water, Virtual Water, muscle, and prostate tissue, respectively. The conversion factor for this new source was 1.02 between Virtual Water and water, 1.006 between muscle and water, and 1.004 between prostate and water. Our calculation of anisotropy functions in a Virtual Water phantom agreed closely with Murphy's measurements [Murphy et al., Med. Phys. 31, 1529-1538 (2004)], and our calculations of the radial dose function in water and Virtual Water show good agreement with previous experimental and Monte Carlo studies. The TG-43U1 parameters for clinical applications in water, muscle, and prostate tissue are presented in this work.
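The TG-43U1 formalism referenced above builds the dose rate from the air-kerma strength, dose rate constant, geometry factor, radial dose function, and anisotropy function. A minimal point-source sketch follows; it assumes the water dose rate constant of 1.048 cGy h{sup -1} U{sup -1} quoted in the abstract, but the g(r) table is a hypothetical placeholder, not the published Cs-131 data.

```python
import numpy as np

LAMBDA = 1.048          # dose rate constant in water, cGy h^-1 U^-1 (from the abstract)
R0 = 1.0                # TG-43 reference distance, cm

# Tabulated radial dose function g(r) -- placeholder values for illustration only
r_tab = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
g_tab = np.array([1.04, 1.00, 0.85, 0.70, 0.45])

def dose_rate(sk, r):
    """Point-source approximation: D(r) = S_K * Lambda * (r0/r)^2 * g(r).

    sk: air-kerma strength in U; r: distance from the source in cm.
    """
    g = np.interp(r, r_tab, g_tab)          # interpolate the radial dose function
    return sk * LAMBDA * (R0 / r) ** 2 * g  # inverse-square geometry factor times g(r)
```

At r = r0 the geometry factor and g(r) are both unity, so the dose rate reduces to S_K·Λ. In the full 2D formalism the inverse-square factor is replaced by the line-source geometry function G(r,θ)/G(r₀,θ₀) and multiplied by the anisotropy function F(r,θ).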
Chen, X; Xing, L; Luxton, G; Bush, K [Stanford University, Palo Alto, CA (United States); Azcona, J [Clinica Universidad de Navarra, Pamplona (Spain)
2014-06-01T23:59:59.000Z
Purpose: Patient-specific QA for VMAT is incapable of providing full 3D dosimetric information and is labor intensive in the case of severe heterogeneities or small-aperture beams. The cloud-based Monte Carlo dose reconstruction method described here can perform the evaluation over the entire 3D space and rapidly reveal the source of discrepancies between measured and planned dose. Methods: This QA technique consists of two integral parts: measurement using a phantom containing an array of dosimeters, and a cloud-based voxel Monte Carlo algorithm (cVMC). After a VMAT plan was approved by a physician, a dose verification plan was created and delivered to the phantom using our Varian Trilogy or TrueBeam system. Actual delivery parameters (i.e., dose fraction, gantry angle, and MLC positions at control points) were extracted from Dynalog or trajectory files. Based on the delivery parameters, the 3D dose distribution in the phantom containing the detector array was recomputed using the Eclipse dose calculation algorithms (AAA and AXB) and cVMC. Comparison and gamma analysis were then conducted to evaluate the agreement between measured, recomputed, and planned dose distributions. To test the robustness of this method, we examined several representative VMAT treatments. Results: (1) The accuracy of cVMC dose calculation was validated via comparative studies. For cases that passed patient-specific QA using commercial dosimetry systems such as Delta-4, MAPCheck, and the PTW Seven29 array, agreement between cVMC-recomputed, Eclipse-planned and measured doses was obtained with >90% of the points satisfying the 3%/3 mm gamma index criteria. (2) The cVMC method incorporating Dynalog files was effective in revealing the root causes of the dosimetric discrepancies between Eclipse-planned and measured doses and provided a basis for solutions. 
Conclusion: The proposed method offers a highly robust and streamlined patient specific QA tool and provides a feasible solution for the rapidly increasing use of VMAT treatments in the clinic.
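The 3%/3 mm gamma criterion used in the QA comparison above combines a dose-difference test with a distance-to-agreement test. A minimal 1D sketch of the gamma index is given below; the profile data in the usage example are made up for illustration and are not tied to the cVMC implementation.

```python
import numpy as np

def gamma_index(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol=3.0):
    """1D gamma index of an evaluated dose profile against a reference.

    dose_tol: dose-difference criterion as a fraction of the max reference dose
    dist_tol: distance-to-agreement criterion in mm
    """
    dd = dose_tol * ref_dose.max()                    # absolute dose criterion
    gammas = np.empty(ref_dose.size)
    for i, (r, d) in enumerate(zip(positions, ref_dose)):
        # gamma = minimum over all evaluated points of the combined metric
        dist_term = ((positions - r) / dist_tol) ** 2
        dose_term = ((eval_dose - d) / dd) ** 2
        gammas[i] = np.sqrt((dist_term + dose_term).min())
    return gammas

def pass_rate(gammas):
    # Fraction of points meeting the criterion (gamma <= 1)
    return np.mean(gammas <= 1.0)
```

A ">90% of points satisfying 3%/3 mm" result, as reported above, corresponds to `pass_rate` exceeding 0.9 on the full dose grid; clinical implementations additionally interpolate the evaluated distribution between grid points and work in 3D.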
Mohammadyari, P [Nuclear Engineering Department, School of Mechanical Engineering, Shiraz Un, Ilam (Iran, Islamic Republic of); Faghihi, R [Nuclear Engineering Department, Shiraz University, Shiraz (Iran, Islamic Republic of); Shirazi, M Mosleh [Radiotherapy and Oncology Department, Namazi Hospital, Shiraz University of M, Shiraz (Iran, Islamic Republic of); Lotfi, M [Shiraz University of Medical Sciences, Medical Imaging Research Center, Shiraz (Iran, Islamic Republic of); Meigooni, A [Comprehensive cancer center of Nevada - University of Nevada Las Vegas UNL, Las Vegas, NV (United States)
2014-06-01T23:59:59.000Z
Purpose: AccuBoost is a modern breast brachytherapy boost technique in which the tissue is compressed by a mammography unit. Because treatment is delivered to compressed tissue, the resulting dose distribution in the uncompressed tissue needs to be characterized. Methods: In this study, the mechanical behavior of the breast under mammography loading, the displacement of breast tissue, and the dose distributions in compressed and uncompressed tissue are investigated. Dosimetry was performed with two methods: Monte Carlo simulation using the MCNP5 code and thermoluminescence dosimeters (TLDs). For the Monte Carlo simulations, dose values on a cubical lattice were calculated using tally F6. The displacement of the breast elements was simulated with a finite element (FE) model computed in ABAQUS, from which the 3D dose distribution in the uncompressed tissue was determined. The model geometry was constructed from MR images of 6 volunteers. Experimental dosimetry was performed by placing TLDs in a polyvinyl alcohol breast-equivalent phantom and on the proximal edge of the compression plates near the chest. Results: The results indicate that the cone applicators deliver more than 95% of the dose to depths of 5 to 17 mm, whereas the round applicator increases the skin dose. Nodal displacement under gravity and a 60 N compression force, i.e., mammography compression, showed 43% contraction in the loading direction and 37% expansion in the orthogonal orientation. Finally, the TLD measurements and the MCNP5 results are consistent with each other, with average percentage differences of 13.7±5.7 in the breast phantom and 7.7±2.3 in the skin of the chest. Conclusion: The major advantage of this kind of dosimetry is the ability to calculate 3D dose via FE modeling. 
Finally, polyvinyl alcohol is a reliable breast-tissue-equivalent material for a dosimetric phantom, enabling TLD dosimetry for validation.
Oborn, B. M.; Metcalfe, P. E.; Butson, M. J.; Rosenfeld, A. B. [Illawarra Cancer Care Centre (ICCC), Wollongong, New South Wales 2500 (Australia) and Centre for Medical Radiation Physics (CMRP), University of Wollongong, Wollongong, New South Wales 2500 (Australia); Centre for Medical Radiation Physics (CMRP), University of Wollongong, Wollongong, New South Wales 2500 (Australia); Illawarra Cancer Care Centre (ICCC), Wollongong, New South Wales 2500 (Australia); Centre for Medical Radiation Physics (CMRP), University of Wollongong, Wollongong, New South Wales 2500 (Australia)
2010-10-15T23:59:59.000Z
Purpose: The main focus of this work is to continue investigations into the Monte Carlo predicted skin doses seen in MRI-guided radiotherapy. In particular, the authors aim to characterize the 70 {mu}m skin doses over a larger range of magnetic field strength and x-ray field size than in the current literature. The effect of surface orientation on both the entry and exit sides is also studied. Finally, the use of exit bolus is also investigated for minimizing the negative effects of the electron return effect (ERE) on the exit skin dose. Methods: High resolution GEANT4 Monte Carlo simulations of a water phantom exposed to a 6 MV x-ray beam (Varian 2100C) have been performed. Transverse magnetic fields of strengths between 0 and 3 T have been applied to a 30x30x20 cm{sup 3} phantom. This phantom is also altered to have variable entry and exit surfaces with respect to the beam central axis and they range from -75 deg. to +75 deg. The exit bolus simulated is a 1 cm thick (water equivalent) slab located on the beam exit side. Results: On the entry side, significant skin doses at the beam central axis are reported for large positive surface angles and strong magnetic fields. However, over the entry surface angle range of -30 deg. to -60 deg., the entry skin dose is comparable to or less than the zero magnetic field skin dose, regardless of magnetic field strength and field size. On the exit side, moderate to high central axis skin dose increases are expected except at large positive surface angles. For exit bolus of 1 cm thickness, the central axis exit skin dose becomes an almost consistent value regardless of magnetic field strength or exit surface angle. This is due to the almost complete absorption of the ERE electrons by the bolus. Conclusions: There is an ideal entry angle range of -30 deg. to -60 deg. where entry skin dose is comparable to or less than the zero magnetic field skin dose. 
Other than this, the entry skin dose increases are significant, especially at higher magnetic fields. On the exit side there are mostly moderate to high skin dose increases for 0.2-3 T, with the only exception being large positive angles. Exit bolus of 1 cm thickness will have a significant impact on lowering the exit skin dose increases that occur as a result of the ERE.
Direct test of the AdS/CFT correspondence by Monte Carlo studies of N=4 super Yang-Mills theory
Honda, Masazumi; Kim, Sang-Woo; Nishimura, Jun; Tsuchiya, Asato
2013-01-01T23:59:59.000Z
We perform nonperturbative studies of N=4 super Yang-Mills theory by Monte Carlo simulation. In particular, we calculate the correlation functions of chiral primary operators to test the AdS/CFT correspondence. Our results agree with the predictions obtained from the AdS side that the supersymmetry non-renormalization property is obeyed by the three-point functions but not by the four-point functions investigated in this paper. Instead of the lattice regularization, we use a novel regularization of the theory based on an equivalence in the large-N limit between the N=4 SU(N) theory on RxS^3 and a one-dimensional SU(N) gauge theory known as the plane-wave (BMN) matrix model. The equivalence extends the idea of large-N reduction to a curved space and, at the same time, overcomes the obstacle related to the center symmetry breaking. The adopted regularization preserves 16 supersymmetries, which is crucial in testing the AdS/CFT correspondence with the available computer resources.
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15T23:59:59.000Z
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
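The MCMC sampling described above characterizes the posterior of the Cole-Cole parameters by accepting or rejecting random proposals, independent of the starting point. A minimal Metropolis sketch for a single parameter follows; the single-dispersion forward model, priors, and noise level are illustrative toy choices, not the authors' full SIP inversion.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(m, freqs, r0=100.0, tau=0.01, c=0.5):
    # Toy single-dispersion Cole-Cole impedance magnitude (hypothetical values)
    iwt = (1j * 2 * np.pi * freqs * tau) ** c
    return np.abs(r0 * (1 - m * (1 - 1 / (1 + iwt))))

# Synthetic data: true chargeability m = 0.3 plus Gaussian noise
freqs = np.logspace(-1, 3, 20)
true_m = 0.3
sigma = 0.5
data = forward(true_m, freqs) + rng.normal(0, sigma, freqs.size)

def log_post(m):
    if not 0 < m < 1:                       # uniform prior on chargeability
        return -np.inf
    resid = data - forward(m, freqs)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Metropolis sampling: the chain's histogram approximates the marginal posterior
m, lp, chain = 0.5, log_post(0.5), []
for _ in range(5000):
    prop = m + rng.normal(0, 0.02)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        m, lp = prop, lp_prop
    chain.append(m)

posterior_mean = np.mean(chain[1000:])      # discard burn-in
```

The retained chain directly yields the marginal statistics the abstract emphasizes (means, credible intervals), and the result does not depend on the arbitrary starting value of 0.5.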
Cox, Stephen J.; Michaelides, Angelos, E-mail: angelos.michaelides@ucl.ac.uk [Thomas Young Centre and London Centre for Nanotechnology, 17–19 Gordon Street, London WC1H 0AH (United Kingdom) [Thomas Young Centre and London Centre for Nanotechnology, 17–19 Gordon Street, London WC1H 0AH (United Kingdom); Department of Chemistry, University College London, 20 Gordon Street, London WC1H 0AJ (United Kingdom); Towler, Michael D. [Department of Earth Sciences, University College London Gower Street, London WC1E 6BT (United Kingdom) [Department of Earth Sciences, University College London Gower Street, London WC1E 6BT (United Kingdom); Theory of Condensed Matter Group, Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Alfè, Dario [Thomas Young Centre and London Centre for Nanotechnology, 17–19 Gordon Street, London WC1H 0AH (United Kingdom) [Thomas Young Centre and London Centre for Nanotechnology, 17–19 Gordon Street, London WC1H 0AH (United Kingdom); Department of Earth Sciences, University College London Gower Street, London WC1E 6BT (United Kingdom)
2014-05-07T23:59:59.000Z
High quality reference data from diffusion Monte Carlo calculations are presented for bulk sI methane hydrate, a complex crystal exhibiting both hydrogen-bond and dispersion dominated interactions. The performance of some commonly used exchange-correlation functionals and all-atom point charge force fields is evaluated. Our results show that none of the exchange-correlation functionals tested are sufficient to describe both the energetics and the structure of methane hydrate accurately, while the point charge force fields perform badly in their description of the cohesive energy but fare well for the dissociation energetics. By comparing to ice I{sub h}, we show that a good prediction of the volume and cohesive energies for the hydrate relies primarily on an accurate description of the hydrogen bonded water framework, but that to correctly predict the stability of the hydrate with respect to dissociation to ice I{sub h} and methane gas, accuracy in the water-methane interaction is also required. Our results highlight the difficulty that density functional theory faces in describing both the hydrogen bonded water framework and the dispersion bound methane.
Pastore, S. [University of South Carolina; Wiringa, Robert B. [ANL; Pieper, Steven C. [ANL; Schiavilla, Rocco [Old Dominion U., JLAB
2014-08-01T23:59:59.000Z
We report quantum Monte Carlo calculations of electromagnetic transitions in $^8$Be. The realistic Argonne $v_{18}$ two-nucleon and Illinois-7 three-nucleon potentials are used to generate the ground state and nine excited states, with energies that are in excellent agreement with experiment. A dozen $M1$ and eight $E2$ transition matrix elements between these states are then evaluated. The $E2$ matrix elements are computed only in impulse approximation, with those transitions from broad resonant states requiring special treatment. The $M1$ matrix elements include two-body meson-exchange currents derived from chiral effective field theory, which typically contribute 20-30% of the total expectation value. Many of the transitions are between isospin-mixed states; the calculations are performed for isospin-pure states and then combined with the empirical mixing coefficients to compare to experiment. In general, we find that transitions between states that have the same dominant spatial symmetry are in decent agreement with experiment, but those transitions between different spatial symmetries are often significantly underpredicted.
Shin, Wook-Geun; Shin, Jae-Ik; Jeong, Jong Hwi; Lee, Se Byeong
2015-01-01T23:59:59.000Z
For in vivo range verification in proton therapy, attempts have been made to measure the spatial distribution of the prompt gammas generated by proton-induced interactions, which is closely related to the proton dose distribution. However, the high energies of the prompt gammas and the background gammas still make measuring the distribution problematic. In this study, we suggest a new method for determining the in vivo range by utilizing the time structure of the prompt gammas formed by the rotation of a range modulation wheel (RMW) in passive scattering proton therapy. To validate the Monte Carlo code simulating the proton beam nozzle, axial percent depth doses (PDDs) were compared with the measured PDDs for beam ranges varying over 4.73-24.01 cm. The relationship between the proton dose rate and the time structure of the prompt gammas was then assessed and compared in a water phantom. The PDD results showed accurate agreement, within relative errors of 1.1% in the distal range and 2.9% in...
Kadoura, Ahmad; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; Salama, Amgad
2014-08-01T23:59:59.000Z
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the originally simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide.
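The reweighting idea underlying the regeneration technique can be illustrated on a toy system: configurations sampled at inverse temperature β₀ are reused to estimate canonical averages at a neighboring β₁ through Boltzmann factors exp[-(β₁-β₀)U]. The sketch below uses a 1D harmonic potential standing in for the Lennard-Jones fluid, so that the reweighted average can be checked against the exact result ⟨U⟩ = 1/(2β).

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample "configurations" at beta0 (harmonic toy model: P(x) ~ exp(-beta0*x^2/2))
beta0, beta1 = 1.0, 1.1
x = rng.normal(0, np.sqrt(1 / beta0), 100_000)
U = 0.5 * x ** 2                                # potential energy of each sample

# Reweight the stored samples to estimate <U> at the neighboring beta1
w = np.exp(-(beta1 - beta0) * U)                # canonical reweighting factors
U_reweighted = np.sum(w * U) / np.sum(w)

# Exact canonical average for the harmonic potential: <U> = 1/(2*beta)
```

The variance of the reweighted estimate grows as β₁ moves away from β₀, which is why the authors' method combines reweighting with reconstruction of the chains rather than relying on reweighting alone.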
Chen Zhaoquan [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian, Liaoning 116024 (China); State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Ye Qiubo [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); Communications Research Centre, 3701 Carling Ave., Ottawa K2H 8S2 (Canada); Xia Guangqing [State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian, Liaoning 116024 (China); Hong Lingli; Hu Yelin; Zheng Xiaoliang; Li Ping [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); Zhou Qiyan [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Hu Xiwei; Liu Minghai [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China)
2013-03-15T23:59:59.000Z
Although surface-wave plasma (SWP) sources have many industrial applications, the ionization process in SWP discharges is not yet well understood. The resonant excitation of surface plasmon polaritons (SPPs) has recently been proposed to produce SWP efficiently, and this work presents a numerical study of this production mechanism. Specifically, SWP resonantly excited by SPPs at low pressure (0.25 Torr) are modeled using a particle-in-cell method, two-dimensional in the working space and three-dimensional in the velocity space, with Monte Carlo collisions. Simulation results are sampled at different time steps, from which detailed information about the distribution of electrons and electromagnetic fields is obtained. Results show that the mode conversion between surface waves of SPPs and electron plasma waves (EPWs) occurs efficiently at the location where the plasma density is higher than 3.57 × 10{sup 17} m{sup -3}. Due to the locally enhanced electric field of the SPPs, the mode conversion between the surface waves of SPPs and EPWs is very strong, which plays a significant role in efficiently heating the SWP to the overdense state.
Fan, Yu; Zou, Ying; Sun, Jizhong; Wang, Dezhen [Key Laboratory of Materials Modification by Laser, Ion and Electron Beams (Ministry of Education), School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China)] [Key Laboratory of Materials Modification by Laser, Ion and Electron Beams (Ministry of Education), School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); Stirner, Thomas [Department of Electronic Engineering, University of Applied Sciences Deggendorf, Edlmairstr. 6-8, D-94469 Deggendorf (Germany)] [Department of Electronic Engineering, University of Applied Sciences Deggendorf, Edlmairstr. 6-8, D-94469 Deggendorf (Germany)
2013-10-15T23:59:59.000Z
The influence of an applied magnetic field on plasma-related devices has a wide range of applications. Its effects on a plasma have been studied for years; however, there are still many issues that are not well understood. This paper reports a detailed kinetic study, using the two-dimension-in-space and three-dimension-in-velocity particle-in-cell plus Monte Carlo collision method, of the role of the E×B drift in a capacitive argon discharge, similar to the experiment of You et al. [Thin Solid Films 519, 6981 (2011)]. The parameters chosen in the present study for the external magnetic field are in a range common to many applications. Two basic configurations of the magnetic field are analyzed in detail: the magnetic field direction parallel to the electrode, with or without a gradient. With an extensive parametric study, we detail the influence of the drift on the collective behaviors of the plasma across a two-dimensional domain, which cannot be represented by a model with 1 spatial and 3 velocity dimensions. By analyzing the results of the simulations, the collisionless heating mechanism that occurs is explained.
Reboredo, Fernando A [ORNL
2009-01-01T23:59:59.000Z
A recently developed self-healing diffusion Monte Carlo algorithm [Phys. Rev. B 79, 195117] is extended to the calculation of excited states. The formalism is based on an excited-state fixed-node approximation and the mixed estimator of the excited-state probability density. The fixed-node ground state wave functions of inequivalent nodal pockets are found simultaneously using a recursive approach. The decay of the wave function into lower energy states is prevented using two methods: (i) the projection of the improved trial wave function onto previously calculated eigenstates is removed; (ii) the reference energy for each nodal pocket is adjusted in order to create a kink in the global fixed-node wave function which, when locally smoothed out, increases the volume of the higher energy pockets at the expense of the lower energy ones until the energies of every pocket become equal. This reference energy method is designed to find nodal structures that are local minima for arbitrary fluctuations of the nodes within a given nodal topology. We demonstrate in a model system that the algorithm converges to many-body eigenstates in both bosonic-like and fermionic cases.
McGrath, Matthew; Kuo, I-F W.; Ngouana, Brice F.; Ghogomu, Julius N.; Mundy, Christopher J.; Marenich, Aleksandr; Cramer, Christopher J.; Truhlar, Donald G.; Siepmann, Joern I.
2013-08-28T23:59:59.000Z
The free energy of solvation and dissociation of hydrogen chloride in water is calculated through a combined molecular simulation and quantum chemical approach at four temperatures between T = 300 and 450 K. The free energy is first decomposed into the sum of two components: the Gibbs free energy of transfer of molecular HCl from the vapor to the aqueous liquid phase and the standard-state free energy of acid dissociation of HCl in aqueous solution. The former quantity is calculated using Gibbs ensemble Monte Carlo simulations with either Kohn-Sham density functional theory or a molecular mechanics force field to determine the system's potential energy. The latter free energy contribution is computed using a continuum solvation model utilizing either experimental reference data or micro-solvated clusters. The predicted combined solvation and dissociation free energies agree very well with available experimental data. CJM was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.
Choi, Myunghee [Retired] [Retired; Chan, Vincent S. [General Atomics] [General Atomics
2014-02-28T23:59:59.000Z
This final report describes the work performed under U.S. Department of Energy Cooperative Agreement DE-FC02-08ER54954 for the period April 1, 2011 through March 31, 2013. The goal of this project was to perform iterated finite-orbit Monte Carlo simulations with full-wall fields for modeling tokamak ICRF wave heating experiments. In year 1, the finite-orbit Monte-Carlo code ORBIT-RF and its iteration algorithms with the full-wave code AORSA were improved to enable systematical study of the factors responsible for the discrepancy in the simulated and the measured fast-ion FIDA signals in the DIII-D and NSTX ICRF fast-wave (FW) experiments. In year 2, ORBIT-RF was coupled to the TORIC full-wave code for a comparative study of ORBIT-RF/TORIC and ORBIT-RF/AORSA results in FW experiments.
Dowdell, Stephen; Paganetti, Harald [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)] [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Grassberger, Clemens [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 and Centre for Proton Therapy, Paul Scherrer Institut, 5232 Villigen-PSI (Switzerland)] [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 and Centre for Proton Therapy, Paul Scherrer Institut, 5232 Villigen-PSI (Switzerland)
2013-12-15T23:59:59.000Z
Purpose: To compare motion effects in intensity modulated proton therapy (IMPT) lung treatments with different levels of intensity modulation. Methods: Spot scanning IMPT treatment plans were generated for ten lung cancer patients for 2.5Gy(RBE) and 12Gy(RBE) fractions and two distinct energy-dependent spot sizes (σ ≈ 8–17 mm and σ ≈ 2–4 mm). IMPT plans were generated with the target homogeneity of each individual field restricted to <20% (IMPT{sub 20%}). These plans were compared to full IMPT (IMPT{sub full}), which had no restriction on the single-field homogeneity. 4D Monte Carlo simulations were performed on the patient 4DCT geometry, including deformable image registration and incorporating the detailed timing structure of the proton delivery system. Motion effects were quantified via comparison of the results of the 4D simulations (4D-IMPT{sub 20%}, 4D-IMPT{sub full}) with those of a 3D Monte Carlo simulation (3D-IMPT{sub 20%}, 3D-IMPT{sub full}) upon the planning CT using the equivalent uniform dose (EUD), V{sub 95} and D{sub 1}-D{sub 99}. The effects in normal lung were quantified using mean lung dose (MLD) and V{sub 90%}. Results: For 2.5Gy(RBE), the mean EUD for the large spot size is 99.9% ± 2.8% for 4D-IMPT{sub 20%} compared to 100.1% ± 2.9% for 4D-IMPT{sub full}. The corresponding values are 88.6% ± 8.7% (4D-IMPT{sub 20%}) and 91.0% ± 9.3% (4D-IMPT{sub full}) for the smaller spot size. The EUD value is higher in 69.7% of the considered deliveries for 4D-IMPT{sub full}. The V{sub 95} is also higher in 74.7% of the plans for 4D-IMPT{sub full}, implying that IMPT{sub full} plans experience less underdose compared to IMPT{sub 20%}. However, the target dose homogeneity is improved in the majority (67.8%) of plans for 4D-IMPT{sub 20%}. The higher EUD and V{sub 95} suggest that the degraded homogeneity in IMPT{sub full} is actually due to the introduction of hot spots in the target volume, perhaps resulting from the sharper in-target dose gradients. 
The greatest variations between the IMPT{sub 20%} and IMPT{sub full} deliveries are observed for patients with the largest motion amplitudes. These patients would likely be treated using gating or another motion mitigation technique, which was not the focus of this study.Conclusions: For the treatment parameters considered in this study, the differences between IMPT{sub full} and IMPT{sub 20%} are only likely to be clinically significant for patients with large (>20 mm) motion amplitudes.
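The EUD and V{sub 95} metrics used in this entry can be computed directly from a list of voxel doses. The sketch below uses Niemierko's generalized-mean form of the EUD; the exponent a and the example doses are illustrative assumptions, not values from this study:

```python
def eud(voxel_doses, a):
    """Generalized equivalent uniform dose (Niemierko's generalized mean):
    EUD = (mean of d_i^a)^(1/a).  Negative a emphasizes cold spots
    (typical for targets); large positive a emphasizes hot spots."""
    n = len(voxel_doses)
    return (sum(d ** a for d in voxel_doses) / n) ** (1.0 / a)

def v95(voxel_doses, prescription):
    """Fraction of voxels receiving at least 95% of the prescription dose."""
    threshold = 0.95 * prescription
    return sum(1 for d in voxel_doses if d >= threshold) / len(voxel_doses)
```

For a perfectly uniform dose the EUD equals that dose for any a, which makes a convenient sanity check; any cold voxel pulls the EUD below the prescription when a is negative.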
Pennington, A; Selvaraj, R; Kirkpatrick, S; Oliveira, S [21st Century Oncology, Deerfield Beach, FL (United States); Leventouri, T [Florida Atlantic University, Boca Raton, FL (United States)
2014-06-01T23:59:59.000Z
Purpose: The latest publications indicate that the Ray Tracing (RT) algorithm significantly overestimates the dose delivered as compared to the Monte Carlo (MC) algorithm. The purpose of this study is to quantify this overestimation and to identify significant correlations between the RT and MC calculated dose distributions. Methods: Preliminary results are based on 50 preexisting RT algorithm dose optimization and calculation treatment plans prepared on the Multiplan treatment planning system (Accuray Inc., Sunnyvale, CA). The analysis will be expanded to include 100 plans. These plans are recalculated using the MC algorithm, with high resolution and 1% uncertainty. The geometry and number of beams for a given plan, as well as the number of monitor units, are constant for the calculations with both algorithms, and normalized differences are compared. Results: MC calculated doses were significantly smaller than RT doses. The D95 of the PTV was 27% lower for the MC calculation. The GTV and PTV mean coverage were 13% and 39% less for the MC calculation. The first parameter of conformality, defined as the ratio of the Prescription Isodose Volume to the PTV Volume, was on average 1.18 for RT and 0.62 for MC. Maximum doses delivered to OARs were reduced in the MC plans. The doses to 1000 and 1500 cc of total lung minus PTV were reduced by 39% and 53%, respectively, for the MC plans. The correlation of the ratio of air in the PTV to the PTV volume with the difference in PTV coverage had a coefficient of −0.54. Conclusion: The preliminary results confirm that the RT algorithm significantly overestimates the doses delivered, confirming previous analyses. Finally, subdividing the data into different size regimes increased the correlation for the smaller PTVs, indicating that the improvement of the MC algorithm versus the RT algorithm depends on the size of the PTV.
Zourari, K.; Pantelis, E.; Moutsatsos, A.; Sakelliou, L.; Georgiou, E.; Karaiskos, P.; Papagiannis, P. [Medical Physics Laboratory, Medical School, University of Athens, 75 Mikras Asias, 115 27 Athens (Greece); Department of Physics, Nuclear and Particle Physics Section, University of Athens, Ilisia, 157 71 Athens (Greece); Medical Physics Laboratory, Medical School, University of Athens, 75 Mikras Asias, 115 27 Athens (Greece)
2013-01-15T23:59:59.000Z
Purpose: To compare TG43-based and Acuros deterministic radiation transport-based calculations of the BrachyVision treatment planning system (TPS) with corresponding Monte Carlo (MC) simulation results in heterogeneous patient geometries, in order to validate Acuros and quantify the accuracy improvement it offers relative to TG43. Methods: Dosimetric comparisons in the form of isodose lines, percentage dose difference maps, and dose volume histogram results were performed for two voxelized mathematical models resembling an esophageal and a breast brachytherapy patient, as well as an actual breast brachytherapy patient model. The mathematical models were converted to digital imaging and communications in medicine (DICOM) image series for input to the TPS. The MCNP5 v.1.40 general-purpose simulation code input files for each model were prepared using information derived from the corresponding DICOM RT exports from the TPS. Results: Comparisons of MC and TG43 results in all models showed significant differences, as reported previously in the literature and expected from the inability of the TG43-based algorithm to account for heterogeneities and model-specific scatter conditions. Close agreement was observed between MC and Acuros results in all models except for a limited number of points that lay in the penumbra of perfectly shaped structures in the esophageal model, or at distances very close to the catheters in all models. Conclusions: Acuros offers a significant dosimetry improvement relative to TG43. The assessment of the clinical significance of this accuracy improvement requires further work. Mathematical patient-equivalent models and models prepared from actual patient CT series are useful complementary tools in the methodology outlined in this series of works for the benchmarking of any advanced dose calculation algorithm beyond TG43.
Farah, J; Bonfrate, A; Donadille, L; Dubourg, N; Lacoste, V; Martinetti, F; Sayah, R; Trompier, F; Clairand, I [IRSN - Institute for Radiological Protection and Nuclear Safety, Fontenay-aux-roses (France); Caresana, M [Politecnico di Milano, Milano (Italy); Delacroix, S; Nauraye, C [Institut Curie - Centre de Protontherapie d Orsay, Orsay (France); Herault, J [Centre Antoine Lacassagne, Nice (France); Piau, S; Vabre, I [Institut de Physique Nucleaire d Orsay, Orsay (France)
2014-06-01T23:59:59.000Z
Purpose: To measure stray radiation inside a passive scattering proton therapy facility, compare values to Monte Carlo (MC) simulations, and identify the actual needs and challenges. Methods: Measurements and MC simulations were used to characterize the neutron exposure associated with 75 MeV ocular or 180 MeV intracranial passively scattered proton treatments. First, using a specifically designed high-sensitivity Bonner Sphere system, neutron spectra were measured at different positions inside the treatment rooms. Next, measurement-based mapping of the neutron ambient dose equivalent was carried out using several TEPCs and rem-meters. Finally, photon and neutron organ doses were measured using TLDs, RPLs, and PADCs set inside anthropomorphic phantoms (Rando, 1- and 5-year-old CIRS). All measurements were also simulated with MCNPX to investigate the ability of MC models to predict stray neutrons with different nuclear cross sections and models. Results: Knowledge of the neutron fluence and energy distribution inside a proton therapy room is critical for stray radiation dosimetry. However, as spectrometry unfolding is initiated from an MC guess spectrum and suffers from algorithmic limits, a 20% spectrometry uncertainty is expected. H*(10) mapping with TEPCs and rem-meters showed good agreement between the detectors. Differences within measurement uncertainty (10–15%) were observed and are inherent to the energy, fluence, and directional response of each detector. For a typical ocular and intracranial treatment, respectively, neutron doses outside the clinical target volume of 0.4 and 11 mGy were measured inside the Rando phantom. Photon doses were 2–10 times lower depending on organ position. High uncertainties (40%) are inherent to TLD and PADC measurements due to the need for neutron spectra at the detector position. 
Finally, stray neutrons prediction with MC simulations proved to be extremely dependent on proton beam energy and the used nuclear models and cross sections. Conclusion: This work highlights measurement and simulation limits for ion therapy radiation protection applications.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2005-09-01T23:59:59.000Z
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
Philippe Laurent; Lev Titarchuk
2006-11-06T23:59:59.000Z
In the paper by Titarchuk & Shrader, the general formulation and results for photon reprocessing (downscattering) that included recoil and Comptonization effects due to divergence of the flow were presented. Here we show the Monte Carlo (MC) simulated continuum and line spectra. We also provide an analytical description of the simulated continuum spectra using the diffusion approximation. We have simulated the propagation of monochromatic and continuum photons in a bulk outflow from a compact object. Electron scattering of the photons within the expanding flow leads to a decrease of their energy which is of first order in V/c (where V is the outflow velocity). The downscattering effect of first order in V/c in the diverging flow is explained by semi-analytical calculations and confirmed by MC simulations. We conclude that redshifted lines and downscattering bumps are intrinsic properties of powerful outflows for which the Thomson optical depth is greater than one. We fitted our model line profiles to the observations using four free parameters: β = V/c, the optical depth of the wind τ, the wind temperature kT_e, and the original line photon energy E_0. We show how the primary spectrum emitted close to the black hole is modified by reprocessing in the warm wind. In the framework of our wind model, the fluorescent iron Kα line is formed in the partly ionized wind as a result of illumination by central-source continuum photons. The demonstrated application of our outflow model to the XMM observations of MCG 6-30-15, and to the ASCA observations of GRO J1655-40, points to a potentially powerful spectral diagnostic for probing the outflow-central object connection in Galactic and extragalactic BH sources.
Aryal, Prakash; Molloy, Janelle A. [Department of Radiation Medicine, University of Kentucky, Lexington, Kentucky 40536 (United States)]; Rivard, Mark J., E-mail: mark.j.rivard@gmail.com [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States)]
2014-02-15T23:59:59.000Z
Purpose: To investigate potential causes for differences in TG-43 brachytherapy dosimetry parameters in the existing literature for the model IAI-125A {sup 125}I seed and to propose new standard dosimetry parameters. Methods: The MCNP5 code was used for Monte Carlo (MC) simulations. Sensitivity of dose distributions, and subsequently TG-43 dosimetry parameters, was explored to reproduce historical methods upon which American Association of Physicists in Medicine (AAPM) consensus data are based. Twelve simulation conditions varying {sup 125}I coating thickness, coating mass density, photon interaction cross-section library, and photon emission spectrum were examined. Results: Varying {sup 125}I coating thickness, coating mass density, photon cross-section library, and photon emission spectrum for the model IAI-125A seed changed the dose-rate constant by up to 0.9%, about 1%, about 3%, and 3%, respectively, in comparison to the proposed standard value of 0.922 cGy·h{sup -1}·U{sup -1}. The dose-rate constant values by Solberg et al. [“Dosimetric parameters of three new solid core {sup 125}I brachytherapy sources,” J. Appl. Clin. Med. Phys. 3, 119–134 (2002)], Meigooni et al. [“Experimental and theoretical determination of dosimetric characteristics of IsoAid ADVANTAGE™ {sup 125}I brachytherapy source,” Med. Phys. 29, 2152–2158 (2002)], and Taylor and Rogers [“An EGSnrc Monte Carlo-calculated database of TG-43 parameters,” Med. Phys. 35, 4228–4241 (2008)] for the model IAI-125A seed and Kennedy et al. [“Experimental and Monte Carlo determination of the TG-43 dosimetric parameters for the model 9011 THINSeed™ brachytherapy source,” Med. Phys. 37, 1681–1688 (2010)] for the model 6711 seed were +4.3% (0.962 cGy·h{sup -1}·U{sup -1}), +6.2% (0.98 cGy·h{sup -1}·U{sup -1}), +0.3% (0.925 cGy·h{sup -1}·U{sup -1}), and −0.2% (0.921 cGy·h{sup -1}·U{sup -1}), respectively, in comparison to the proposed standard value. 
Differences in the radial dose functions between the current study and both Solberg et al. and Meigooni et al. were <10% for r ≤ 5 cm, and increased for r > 5 cm with a maximum difference of 29% at r = 9 cm. In comparison to Taylor and Rogers, these differences were lower (maximum of 2% at r = 9 cm). For the similarly designed model 6711 {sup 125}I seed, differences did not exceed 0.5% for 0.5 cm ≤ r ≤ 10 cm. Radial dose function values varied by 1% as coating thickness and coating density were changed. Varying the cross-section library and source spectrum altered the radial dose function by 25% and 12%, respectively, but these differences occurred at r = 10 cm where the dose rates were very low. The 2D anisotropy function results were most similar to those of Solberg et al. and most different from those of Meigooni et al. The observed order of simulation condition variables from most to least important for influencing the 2D anisotropy function was spectrum, coating thickness, coating density, and cross-section library. Conclusions: Several MC radiation transport codes are available for calculation of the TG-43 dosimetry parameters for brachytherapy seeds. The physics models in these codes and their related cross-section libraries have been updated and improved since publication of the 2007 AAPM TG-43U1S1 report. Results using modern data indicated statistically significant differences in these dosimetry parameters in comparison to data recommended in the TG-43U1S1 report. Therefore, professional societies such as the AAPM should consider reevaluating the consensus data for this and other seeds and establishing a process of regular evaluations in which consensus data are based upon methods that remain state-of-the-art.
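For reference, the quantities compared in this entry (dose-rate constant Λ, radial dose function g_L(r), and 2D anisotropy function F(r,θ)) enter the standard TG-43 line-source dose-rate equation:

```latex
\dot{D}(r,\theta) = S_K\,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\; g_L(r)\, F(r,\theta),
\qquad r_0 = 1\ \mathrm{cm},\quad \theta_0 = 90^{\circ},
```

where S_K is the air-kerma strength and G_L the line-source geometry function; a difference in any one factor propagates directly into the computed dose rate.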
Monte Carlo Stratego
Mets, Jeroen; Emmerich, Michael
[Translated from Dutch; the fragment begins and ends mid-sentence:] ...placed, provided that no more than one piece is placed on any square. Figure 2 shows ... the players take turns moving one piece, with the bottom player moving first. This is repeated until only one player remains as the winner. Each piece may move one square per turn (to the left, to
SU-E-T-180: Fano Cavity Test of Proton Transport in Monte Carlo Codes Running On GPU and Xeon Phi
Sterpin, E; Sorriaux, J; Souris, K; Lee, J; Vynckier, S [Universite catholique de Louvain, Brussels, Brussels (Belgium); Schuemann, J; Paganetti, H [Massachusetts General Hospital, Boston, MA (United States); Jia, X; Jiang, S [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)
2014-06-01T23:59:59.000Z
Purpose: In proton dose calculation, clinically compatible speeds are now achieved with Monte Carlo (MC) codes that combine (1) adequate simplifications in the physics of transport and (2) hardware architectures enabling massively parallel computing (like GPUs). However, the uncertainties related to the transport algorithms used in these codes must be kept minimal. Such algorithms can be checked with the so-called “Fano cavity test”. We implemented the test in two codes that run on specific hardware: gPMC on an nVidia GPU and MCsquare on an Intel Xeon Phi (60 cores). Methods: gPMC and MCsquare are designed for transporting protons in CT geometries. Both codes use the method of fictitious interaction to sample the step length for each transport step. The considered geometry is a water cavity (2×2×0.2 cm{sup 3}, 0.001 g/cm{sup 3}) in a 10×10×50 cm{sup 3} water phantom (1 g/cm{sup 3}). CPE in the cavity is established by generating protons over the phantom volume with a uniform momentum (energy E) and a uniform intensity per unit mass I. Assuming no nuclear reactions and no generation of other secondaries, the computed cavity dose should equal IE, according to Fano's theorem. Both codes were tested for initial proton energies of 50, 100, and 200 MeV. Results: For all energies, gPMC and MCsquare are within 0.3% and 0.2% of the theoretical value IE, respectively (0.1% standard deviation). Single-precision computation (instead of double) increased the error by about 0.1% in MCsquare. Conclusion: Despite the simplifications in the physics of transport, both gPMC and MCsquare successfully pass the Fano test. This ensures optimal accuracy of the codes for clinical applications within the uncertainties of the underlying physical models. It also opens the path to other applications of these codes, like the simulation of ion chamber response.
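The "method of fictitious interaction" named above (also known as Woodcock or delta tracking) can be sketched in a few lines. The 1D geometry, cross-section values, and function name below are illustrative assumptions, not code from gPMC or MCsquare:

```python
import math
import random

def distance_to_real_collision(sigma_of_x, sigma_maj, x0, rng):
    """Woodcock (fictitious-interaction) tracking in 1D.

    sigma_of_x: position-dependent macroscopic cross-section (1/cm)
    sigma_maj:  constant majorant with sigma_maj >= sigma(x) everywhere
    Returns the position of the next *real* collision beyond x0.
    """
    x = x0
    while True:
        # flight to the next real-or-fictitious interaction, sampled
        # from the exponential law of the constant majorant
        x += -math.log(rng.random()) / sigma_maj
        # accept as a real collision with probability sigma(x)/sigma_maj;
        # otherwise the interaction was fictitious and tracking continues
        if rng.random() * sigma_maj < sigma_of_x(x):
            return x
```

In a homogeneous medium the accepted flight lengths must reproduce an exponential distribution with mean 1/σ regardless of the majorant chosen, which makes a convenient correctness check on the sampling step.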
Forbang, R Teboh [Johns Hopkins University, Baltimore, MD (United States)
2014-06-01T23:59:59.000Z
Purpose: MultiPlan, the treatment planning system for the CyberKnife robotic radiosurgery system, offers two approaches to dose computation: Ray-Tracing (RT), the default technique, and Monte Carlo (MC), an option. RT is deterministic but accounts for primary heterogeneity only. MC, on the other hand, has an uncertainty associated with the calculation results; its advantage is that it also accounts for heterogeneity effects on the scattered dose. Not all sites will benefit from MC. The goal of this work was to focus on central nervous system (CNS) tumors and compare, dosimetrically, treatment plans computed with RT versus MC. Methods: Treatment plans were computed using both RT and MC for sites covering (a) the brain, (b) the C-spine, (c) the upper T-spine, (d) the lower T-spine, (e) the L-spine, and (f) the sacrum. RT was first used to compute clinically valid treatment plans. Then the same treatment parameters (monitor units, beam weights, etc.) were used in the MC algorithm to compute the dose distribution. The plans were then compared for tumor coverage to illustrate the difference, if any. All MC calculations were performed at 1% uncertainty. Results: Using the RT technique, the tumor coverage for the brain, C-spine (C3–C7), upper T-spine (T4–T6), lower T-spine (T10), L-spine (L2), and sacrum was 96.8%, 93.1%, 97.2%, 87.3%, 91.1%, and 95.3%, respectively. The corresponding tumor coverage based on the MC approach was 98.2%, 95.3%, 87.55%, 88.2%, 92.5%, and 95.3%. It should be noted that the acceptable planning target coverage in our clinical practice is >95%; coverage may be compromised for spine tumors to spare normal tissues such as the spinal cord. Conclusion: For treatment planning involving the CNS, RT and MC appear similar for most sites, except in the T-spine area, where most of the beams traverse lung tissue. In that case, MC is highly recommended.
Lou, K [U.T M.D. Anderson Cancer Center, Houston, TX (United States); Rice University, Houston, TX (United States); Mirkovic, D; Sun, X; Zhu, X; Poenisch, F; Grosshans, D; Shao, Y [U.T M.D. Anderson Cancer Center, Houston, TX (United States); Clark, J [Rice University, Houston, TX (United States)
2014-06-01T23:59:59.000Z
Purpose: To study the feasibility of intra-fraction proton beam-range verification with PET imaging. Methods: Two homogeneous cylindrical PMMA phantoms (290 mm axial length, 38 mm and 200 mm diameter, respectively) were studied using PET imaging: the small phantom with a mouse-sized PET scanner (61 mm diameter field of view (FOV)) and the larger phantom with a human brain-sized PET scanner (300 mm FOV). Monte Carlo (MC) simulations (MCNPX and GATE) were used to simulate 179.2 MeV proton pencil beams irradiating the two phantoms and being imaged by the two PET systems. A total of 50 simulations were conducted to generate 50 positron activity distributions and, correspondingly, 50 measured activity ranges. The accuracy and precision of these activity ranges were calculated under different conditions (including count statistics and other factors, such as crystal cross-section). Separate from the MC simulations, an activity distribution measured from a simulated PET image was modeled as a noiseless positron activity distribution corrupted by Poisson counting noise. The results from these two approaches were compared to assess the impact of count statistics on the accuracy and precision of activity-range calculations. Results: MC simulations show that the accuracy and precision of an activity range are dominated by the number N of coincidence events in the reconstructed image: the error decreases in proportion to 1/sqrt(N), which can be understood from the statistical modeling. MC simulations also indicate that the coincidence events acquired within the first 60 seconds with 10{sup 9} protons (small phantom) and 10{sup 10} protons (large phantom) are sufficient to achieve both sub-millimeter accuracy and precision. 
Conclusion: Under the current MC simulation conditions, the initial study indicates that the accuracy and precision of beam-range verification are dominated by count statistics, and intra-fraction PET image-based beam-range verification is feasible. This work was supported by a research award RP120326 from Cancer Prevention and Research Institute of Texas.
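The 1/sqrt(N) behaviour reported above can be reproduced with a toy model: a hypothetical 1D activity profile with a sigmoid distal falloff, Gaussian-approximated counting noise, and a 50%-of-plateau range estimate. All shapes and numbers below are illustrative assumptions, not the paper's simulation setup:

```python
import math
import random

def simulated_range(n_counts, rng, n_bins=100, true_edge=60.0):
    """One noisy range measurement from a toy activity profile.

    The profile is a sigmoid falling off at true_edge; counting noise is
    modelled as Gaussian with variance equal to the expected counts
    (a Poisson approximation, adequate for well-populated bins).
    """
    expected = [1.0 / (1.0 + math.exp((i - true_edge) / 2.0))
                for i in range(n_bins)]
    norm = n_counts / sum(expected)
    noisy = [max(0.0, rng.gauss(c * norm, math.sqrt(c * norm)))
             for c in expected]
    # estimate the plateau level from the proximal region
    plateau = sum(noisy[:40]) / 40.0
    half = 0.5 * plateau
    # scan from the distal (low-activity) side to avoid noise-induced
    # crossings inside the plateau; interpolate the 50% crossing
    for i in range(n_bins - 1, 0, -1):
        if noisy[i - 1] >= half > noisy[i]:
            return (i - 1) + (noisy[i - 1] - half) / (noisy[i - 1] - noisy[i])
    return 0.0
```

Repeating the measurement at two count levels shows the expected scaling: quadrupling the counts roughly halves the spread of the range estimates.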
Barrera, C A; Moran, M J
2007-08-21T23:59:59.000Z
The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long-lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. 
The simulations show that all LOS configurations have a resolution of 7 microns or better. The 28 m LOS with a 7 x 7 array of 100-micron mini-penumbral apertures or 50-micron square pinholes meets the design requirements and is a very good design alternative.
Zhai, Pengwang
2009-06-02T23:59:59.000Z
A Fourth-Order Symplectic Finite-Difference Time-Domain (FDTD) Method for Light Scattering and a 3D Monte Carlo Code (Ph.D. dissertation, August 2006, Major Subject: Physics; only fragments of the front matter and acknowledgments were recovered, naming committee members Chia-Ren Hu and M. Suhail Zubairy and department head Edward S. Fry).
Ivanov, I. E., E-mail: ilshai-hulud@yandex.ru; Schukin, N. V. [National Research Nuclear University MEPhI (Russian Federation); Bychkov, S. A.; Druzhinin, V. E.; Lysov, D. A.; Shmonin, Yu. V. [All-Russia Research Institute for Nuclear Power Plant Operation (VNIIAES) (Russian Federation); Gurevich, M. I. [National Research Center Kurchatov Institute (Russian Federation)
2014-12-15T23:59:59.000Z
Statistical errors in sampling neutron fields in physically large systems like an RBMK are analyzed both qualitatively and quantitatively. Recommendations concerning the choice of parameters for calculations are given. A new procedure for Monte Carlo RBMK calculations with model corrections on the basis of data from in-core detectors is proposed. Dedicated software based on the CUDA software and hardware platform is developed for computational research. Results of testing the procedure and software in question via calculations for real RBMK reactors are discussed.
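Statistical-error analysis of the kind described above usually starts from batch means: the tally history is split into equal batches, and the standard error of the grand mean is estimated from the scatter of the batch means, which absorbs within-batch correlations. A minimal generic sketch (not the software developed in this work):

```python
def batch_statistics(samples, n_batches):
    """Estimate a Monte Carlo tally mean and its standard error from
    batch means.  samples is split into n_batches equal batches; the
    unbiased variance of the batch means gives the error of the mean."""
    m = len(samples) // n_batches
    means = [sum(samples[k * m:(k + 1) * m]) / m for k in range(n_batches)]
    grand = sum(means) / n_batches
    var = sum((x - grand) ** 2 for x in means) / (n_batches - 1)
    stderr = (var / n_batches) ** 0.5
    return grand, stderr
```

With equal batch sizes the grand mean equals the overall sample mean, so batching changes only the error estimate, not the tally itself.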
Zhai, Pengwang
2009-06-02T23:59:59.000Z
[List-of-figures and text fragments from the same dissertation: "Geometry of a scattering event"; "An example of the atmosphere model used in the 3D Monte Carlo code for the vector radiative transfer systems: inhomogeneous layers are divided into voxels"; and a survey of numerical methods for light scattering, including the T-matrix method, the finite-element method, the finite-difference time-domain (FDTD) method, and the point-matching method.]
Ali, Imad, E-mail: iali@ouhsc.edu [Department of Radiation Oncology, University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States); Ahmad, Salahuddin [Department of Radiation Oncology, University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States)
2013-10-01T23:59:59.000Z
To compare the doses calculated using the BrainLAB pencil beam (PB) and Monte Carlo (MC) algorithms for tumors located in various sites including the lung, and to evaluate the quality assurance procedures required for verification of the accuracy of dose calculation. The dose-calculation accuracy of PB and MC was also assessed quantitatively with measurements using an ionization chamber and Gafchromic films placed in solid water and heterogeneous phantoms. The dose was calculated using the PB convolution and MC algorithms in the iPlan treatment planning system from BrainLAB. The dose calculation was performed on the patients' computed tomography images with lesions in various treatment sites including 5 lung, 5 prostate, 4 brain, 2 head-and-neck, and 2 paraspinal cases. A combination of conventional, conformal, and intensity-modulated radiation therapy plans was used in dose calculation. The leaf sequences from intensity-modulated radiation therapy plans or beam shapes from conformal plans, the monitor units, and the other planning parameters calculated by PB were identical for calculating dose with MC. Heterogeneity correction was considered in both PB and MC dose calculations. Dose-volume parameters such as V95 (volume covered by 95% of the prescription dose), dose distributions, and gamma analysis were used to evaluate the dose calculated by PB and MC. The doses measured by ionization chamber and EBT Gafchromic film in solid water and heterogeneous phantoms were used to quantitatively assess the accuracy of the dose calculated by PB and MC. The dose-volume histograms and dose distributions calculated by PB and MC in the brain, prostate, paraspinal, and head and neck were in good agreement with one another (within 5%) and provided acceptable planning target volume coverage. However, dose distributions of the patients with lung cancer had large discrepancies. 
For a plan optimized with PB, the dose coverage appeared clinically acceptable, whereas in reality the MC showed a systematic lack of dose coverage. The dose calculated by PB for lung tumors was overestimated by up to 40%. An interesting feature was that despite large discrepancies in dose-volume histogram coverage of the planning target volume between PB and MC, the point doses at the isocenter (center of the lesions) calculated by both algorithms were within 7%, even for lung cases. The dose distributions measured with EBT Gafchromic films in heterogeneous phantoms showed doses nearly 15% lower than PB at interfaces between heterogeneous media, and these lower measured doses were in agreement with MC. The doses (V95) calculated by MC and PB agreed within 5% for treatment sites with small tissue heterogeneities such as the prostate, brain, head-and-neck, and paraspinal tumors. Considerable discrepancies, up to 40%, were observed in the dose-volume coverage between MC and PB in lung tumors, which may affect clinical outcomes. The discrepancies between MC and PB increased for 15 MV compared with 6 MV, indicating the importance of implementing accurate dose-calculation algorithms such as MC in clinical treatment planning. The comparison of point doses is not representative of the discrepancies in dose coverage and might be misleading in evaluating the accuracy of dose calculation between PB and MC. Thus, the clinical quality assurance procedures required to verify the accuracy of dose calculation using PB and MC need to consider measurements of 2- and 3-dimensional dose distributions, using heterogeneous phantoms instead of homogeneous water-equivalent phantoms, rather than a single-point measurement.
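The gamma analysis mentioned above compares two dose distributions with a combined dose-difference/distance-to-agreement criterion (the concept introduced by Low et al.). A 1D sketch with global dose normalization follows; the tolerances and profiles are illustrative, not the study's settings:

```python
import math

def gamma_index_1d(reference, evaluated, spacing_mm, dose_tol, dta_mm):
    """1D gamma index with global dose normalization.

    dose_tol: fractional dose criterion (e.g. 0.03 for 3%)
    dta_mm:   distance-to-agreement criterion in mm
    Returns one gamma value per reference point; gamma <= 1 passes.
    """
    d_max = max(reference)
    gammas = []
    for i, d_ref in enumerate(reference):
        best = float("inf")
        for j, d_ev in enumerate(evaluated):
            # normalized dose difference and normalized spatial offset
            dd = (d_ev - d_ref) / (dose_tol * d_max)
            dx = (j - i) * spacing_mm / dta_mm
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas
```

Identical distributions yield gamma = 0 everywhere, and a profile shifted by less than the distance criterion still passes, which is exactly why gamma is preferred over pointwise dose differences near steep gradients.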
Torres-Verdín, Carlos
... of Mud-Filtrate Invasion and Salt Mixing. Alberto Mendoza, William E. Preeg, Carlos Torres-Verdín. Examines the influence of the spatial distributions of fluid saturation and salt concentration on generic compensated ... -bearing formations. The simulations also consider the mixing of salt between mud-filtrate and connate water. ...
Kinetic Monte Carlo Simulation of Electron Transfer
Southern California, University of
Each molecule can hold up to one electron: occ(i) = 1 (occupied) or 0 (unoccupied). Reduction: if molecule i ... Algorithm sketch: occ[i] ← 0 for i = 0 to N-1; nred ← 0 (number of injected electrons); nox ← 0 (number of ejected electrons); ... rth ← r·rand()/RAND_MAX; racc ← 0; for i = 0 to N-1: if (rth < ...) perform a reduction ...
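The occupancy-array bookkeeping in the fragment above can be sketched as a minimal kinetic Monte Carlo loop. Everything here is illustrative: the site-dependent acceptance probability and the fixed ejection probability are hypothetical placeholders (a physical model would derive rates, e.g. from Marcus theory), and only the occ/nred/nox bookkeeping mirrors the fragment:

```python
import random

random.seed(1)
N = 100                  # number of redox-active molecules
occ = [0] * N            # occupancy: 1 = holds an electron, 0 = empty
nred = 0                 # number of injected electrons (reduction events)
nox = 0                  # number of ejected electrons (oxidation events)

def reduction_prob(i):
    # Hypothetical site-dependent acceptance probability, purely illustrative.
    return 1.0 / (1 + i % 5)

for step in range(1000):
    i = random.randrange(N)          # pick a molecule at random
    if occ[i] == 0:                  # empty site: attempt an injection
        if random.random() < reduction_prob(i):
            occ[i] = 1
            nred += 1
    elif random.random() < 0.1:      # occupied site: attempt an ejection
        occ[i] = 0
        nox += 1

print(nred, nox, sum(occ))           # invariant: nred - nox == sum(occ)
```

The counters satisfy nred - nox = number of currently occupied sites, which is a useful consistency check on any implementation of this bookkeeping.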
Quantum Monte Carlo for Vibrating Molecules
Brown, W.R.
2010-01-01T23:59:59.000Z
Optimized local mode basis functions (4.7). Acceptance function parameters for the C3 L M basis set; acceptance function parameters for H2O (in 1/bohr, local) ...
Krylov-projected quantum Monte Carlo Method
Blunt, N. S.; Alavi, Ali; Booth, George H.
2015-07-31T23:59:59.000Z
problem is projected into a stochastically sampled Krylov subspace, thus allowing finite-temperature and dynamical quantities to be calculated. Since the method exploits sparsity in the sampled wavefunctions, the stochastic dynamic avoids storing...
Monte Carlo Methods for PDEs (MCM for PDEs)
Mascagni, Michael
... Research supported by ARO, DOE/ASCI, NATO, and NSF. Outline: Introduction; Early History of MCMs for PDEs; Probabilistic Representations of PDEs; Probabilistic Representation of Elliptic PDEs via Feynman-Kac; Probabilistic Representation of Parabolic PDEs via Feynman-Kac; Probabilistic Approaches to Reaction ...
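The Feynman-Kac representation of elliptic PDEs mentioned in the outline can be made concrete with the classic walk-on-spheres estimator: for the Laplace equation with Dirichlet data g, the solution at a point equals the expected value of g at the exit point of a Brownian motion started there. A minimal sketch on the unit disk (the boundary data and evaluation point are chosen for illustration; with g = cos θ the exact harmonic extension is u(x, y) = x):

```python
import math
import random

random.seed(0)

def g(x, y):
    """Dirichlet boundary data on the unit circle; its harmonic extension is u = x."""
    return math.cos(math.atan2(y, x))

def walk_on_spheres(x, y, eps=1e-3):
    """Sample one Brownian exit value from the unit disk by walk-on-spheres."""
    while True:
        d = 1.0 - math.hypot(x, y)       # distance to the boundary
        if d < eps:
            r = math.hypot(x, y)
            return g(x / r, y / r)       # project onto the boundary
        th = 2 * math.pi * random.random()
        x += d * math.cos(th)            # jump uniformly on the largest
        y += d * math.sin(th)            # circle inscribed at (x, y)

def u(x, y, n=20000):
    """Feynman-Kac estimate: u(x, y) = E[g at the Brownian exit point]."""
    return sum(walk_on_spheres(x, y) for _ in range(n)) / n

print(u(0.5, 0.0))   # exact value of the harmonic extension here is 0.5
```

Each sample is an independent path, so the estimator is trivially parallel and pointwise: it evaluates the solution at a single location without ever discretizing the domain, which is the distinctive appeal of probabilistic PDE methods.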
Monte Carlo simulation in systems biology
Schellenberger, Jan
2010-01-01T23:59:59.000Z
Sampled flux ranges for metabolic reactions, e.g. COMBO10 (... + gdp + 3 h + 2 pi + ppi + so3 + trdox; range: 0 to ...) and SULabc (h2s + 3 nadp -> 5 h + 3 nadph + so3; range: 0 to 100).
Statistical assessment of Monte Carlo distributional tallies
Kiedrowski, Brian C [Los Alamos National Laboratory; Solomon, Clell J [Los Alamos National Laboratory
2010-12-09T23:59:59.000Z
Four tests are developed to assess the statistical reliability of distributional or mesh tallies. To this end, the relative variance density function is developed and its moments are studied using simplified, non-transport models. The statistical tests are performed upon the results of MCNP calculations of three different transport test problems and appear to show that the tests are appropriate indicators of global statistical quality.
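The basic quantity behind such assessments, the per-bin relative error of a mesh tally, can be sketched directly. The scoring model below is a hypothetical stand-in (exponential scores thinned by a falling hit probability, mimicking poorly sampled mesh regions), not the authors' non-transport models or MCNP output; only the R = s_mean / mean definition follows standard Monte Carlo practice:

```python
import numpy as np

rng = np.random.default_rng(42)
n_hist, n_bins = 10_000, 50
# Hypothetical per-history scores: a history contributes an exponential score
# to a bin with probability p, else zero; p falls across bins so the last
# bins are poorly sampled. Illustrative only.
p = np.linspace(1.0, 0.001, n_bins)
hits = rng.random((n_hist, n_bins)) < p
scores = hits * rng.exponential(1.0, size=(n_hist, n_bins))

mean = scores.mean(axis=0)
# Relative error of the mean in each bin (R = sample std of the mean / mean).
rel_err = scores.std(axis=0, ddof=1) / (mean * np.sqrt(n_hist))

frac_pass = float((rel_err < 0.10).mean())
print(f"bins passing R < 0.10: {frac_pass:.2f}")
```

A single global figure of merit hides exactly the poorly converged bins this construction produces, which is why distributional tallies need dedicated statistical tests rather than a one-number summary.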