Brown, F.B.; Sutton, T.M.
1996-02-01T23:59:59.000Z
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
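The random-sampling fundamentals listed above can be illustrated with a short sketch (an editor's illustration, not part of the report): sampling the distance to the next collision by inverting the exponential free-path CDF. The cross-section value is an arbitrary assumption.

```python
import math
import random

def sample_free_path(sigma_t: float, rng: random.Random) -> float:
    """Sample a collision distance from p(s) = sigma_t * exp(-sigma_t * s)
    by inverting the CDF: s = -ln(1 - xi) / sigma_t, xi uniform on [0, 1)."""
    xi = rng.random()
    # 1 - xi lies in (0, 1], so the logarithm is always defined.
    return -math.log(1.0 - xi) / sigma_t

rng = random.Random(42)
sigma_t = 2.0  # macroscopic total cross section (1/cm), illustrative value
samples = [sample_free_path(sigma_t, rng) for _ in range(200_000)]
mean_path = sum(samples) / len(samples)
print(round(mean_path, 2))  # sample mean approaches the mean free path 1/sigma_t
```

The same inversion pattern underlies most continuous sampling in transport codes; only the invertibility of the CDF changes from distribution to distribution.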
Frixione, Stefano [INFN, Sezione di Genova, Via Dodecaneso 33, 16146 Genova (Italy)]
2005-10-06T23:59:59.000Z
I review recent progress in the physics of parton shower Monte Carlos, emphasizing the ideas which allow the inclusion of higher-order matrix elements into the framework of event generators.
Monte Carlo photon benchmark problems
Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.
1991-01-01T23:59:59.000Z
Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. The results are compared with those from the COG Monte Carlo computer code and with either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems.
Density Functional Theory (DFT) Simulated Annealing (SA)
Topics include Density Functional Theory (DFT), Simulated Annealing (SA), the Lattice-Boltzmann method (LBM), and Grand Canonical Monte Carlo (GCMC).
Path Integral Monte Carlo and Density Functional Molecular Dynamics Simulations of Hot, Dense Helium
Militzer, Burkhard
Two first-principles simulation methods, path integral Monte Carlo (PIMC) and density functional molecular dynamics (DFT-MD), are applied to study hot, dense helium and the excitation mechanisms that determine its behavior at high temperature. The helium atom has two ionization…
Is Monte Carlo embarrassingly parallel?
Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)
2012-07-01T23:59:59.000Z
Monte Carlo is often stated to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
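The rendezvous bottleneck described above can be caricatured with a toy per-cycle cost model (an editor's sketch, not from the paper; the assumption that rendezvous cost grows linearly with processor count resembles a naive gather):

```python
def cycle_speedup(n_proc: int, t_histories: float, t_rendezvous: float) -> float:
    """Idealised speedup of one fission cycle.

    History tracking parallelises perfectly, but the end-of-cycle
    rendezvous (gathering the fission bank and estimating k_eff) does
    not; its cost is assumed here to grow linearly with n_proc.
    """
    t_serial = t_histories + t_rendezvous
    t_parallel = t_histories / n_proc + t_rendezvous * n_proc
    return t_serial / t_parallel

# Illustrative costs: 1 s of tracking per cycle on one processor,
# 1 ms of rendezvous cost per participating processor.
speedups = {n: cycle_speedup(n, 1.0, 0.001) for n in (1, 8, 32, 256, 1024)}
best = max(speedups, key=speedups.get)
print(best, round(speedups[best], 1))
```

Even this crude model reproduces the qualitative behaviour in the abstract: speedup peaks at a moderate processor count and the calculation eventually runs slower than the serial one.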
Monte Carlo calculations of nuclei
Pieper, S.C. [Argonne National Lab., IL (United States). Physics Div.
1997-10-01T23:59:59.000Z
Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.
Shell model Monte Carlo methods
Koonin, S.E. [California Inst. of Tech., Pasadena, CA (United States). W.K. Kellogg Radiation Lab.; Dean, D.J. [Oak Ridge National Lab., TN (United States)
1996-10-01T23:59:59.000Z
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.
Zimmerman, G.B.
1997-06-24T23:59:59.000Z
Monte Carlo methods appropriate for simulating the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angularly biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Parallel Monte Carlo reactor neutronics
Blomquist, R.N.; Brown, F.B.
1994-03-01T23:59:59.000Z
The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved.
The MC21 Monte Carlo Transport Code
Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H
2007-01-09T23:59:59.000Z
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.
THE BEGINNING of the MONTE CARLO METHOD
For a whole host of reasons, he had become seriously interested in the thermonuclear problem and in a preliminary computational model of a thermonuclear reaction for the ENIAC. He felt he could convince…
Monte Carlo simulation in systems biology
Schellenberger, Jan
2010-01-01T23:59:59.000Z
Covers the history of Monte Carlo sampling in systems biology and simulation tools such as the Systems Biology Workbench and BioSPICE.
Exponential convergence with adaptive Monte Carlo
Booth, T.E.
1997-11-01T23:59:59.000Z
For over a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. Now, exponential convergence has been empirically demonstrated on a spatially continuous problem.
The role of Monte Carlo within a diagonalization/Monte Carlo scheme
Dean Lee
2000-10-31T23:59:59.000Z
We review the method of stochastic error correction which eliminates the truncation error associated with any subspace diagonalization. Monte Carlo sampling is used to compute the contribution of the remaining basis vectors not included in the initial diagonalization. The method is part of a new approach to computational quantum physics which combines both diagonalization and Monte Carlo techniques.
Grossman, Jeffrey C.
We analyze the density-functional theory (DFT) description of weak interactions by employing diffusion and reptation quantum Monte Carlo (QMC) calculations, for a set of benzene-molecule complexes. While the binding energies ...
anatomy monte carlo: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Summary: Optical Monte Carlo modeling of a true port wine stain anatomy (Biology and Medicine websites).
Monte Carlo Tools for Jet Quenching
Korinna Zapp
2011-09-07T23:59:59.000Z
A thorough understanding of jet quenching on the basis of multi-particle final states and jet observables requires new theoretical tools. This talk summarises the status and prospects of the theoretical description of jet quenching in terms of Monte Carlo generators.
MONTE CARLO CALCULATIONS OF LR115 DETECTOR RESPONSE TO 222
Yu, K.N.
(4):414–419; 2000. Key words: Monte Carlo; radon progeny; detector, alpha-track; thoron.
Module 2: Monte Carlo Methods Prof. Mike Giles
Giles, Mike
Lecture slides comparing Monte Carlo and finite-difference methods for computational finance, with example applications such as Bermudan options and optimal trading given transaction costs.
John von Neumann Institute for Computing Monte Carlo Protein Folding
Hsu, Hsiao-Ping
Monte Carlo Protein Folding: Simulations of Met-Enkephalin with Solvent-Accessible Area (http://www.fz-juelich.de/nic-series/volume20). Discusses difficulties in applying Monte Carlo methods to protein folding; the solvent-accessible area method, a popular…
Multiple Overlapping Tiles for Contextual Monte Carlo Tree Search
Related techniques have been used for linear transforms [4] and active learning [8]. Monte Carlo simulations are used to evaluate a situation, with modifications depending on the context. The modification is based on a reward function learned on a tiling of the space of Monte Carlo simulations; the tiling is done by regrouping the Monte Carlo simulations where two…
Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method
2002-01-01T23:59:59.000Z
This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross section libraries used in continuous energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of the fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.
STORM in Monte Carlo reactor physics calculations (Kaur Tuttelberg)
Haviland, David
Master of Science Thesis. Addresses more efficient Monte Carlo reactor physics criticality calculations, achieved by optimising the number of neutron histories, giving results with errors that can…
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E., E-mail: a.turrell09@imperial.ac.uk; Sherlock, M.; Rose, S.J.
2013-09-15T23:59:59.000Z
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
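The Fermi-Dirac initialisation step can be sketched with simple rejection sampling (an editor's illustration in assumed dimensionless units, not the paper's algorithm):

```python
import math
import random

def sample_fermi_dirac_energy(mu: float, temp: float, rng: random.Random) -> float:
    """Rejection-sample an energy from f(E) ∝ sqrt(E) / (exp((E - mu)/T) + 1).

    Proposal: uniform on [0, e_max]. Envelope: sqrt(e_max), which bounds
    sqrt(E) times the Fermi factor (which is at most 1). e_max truncates
    the exponentially suppressed tail.
    """
    e_max = mu + 20.0 * temp
    bound = math.sqrt(e_max)
    while True:
        e = rng.uniform(0.0, e_max)
        target = math.sqrt(e) / (math.exp((e - mu) / temp) + 1.0)
        if rng.uniform(0.0, bound) < target:
            return e

rng = random.Random(7)
mu, temp = 1.0, 0.05  # strongly degenerate case: T << mu (illustrative units)
energies = [sample_fermi_dirac_energy(mu, temp, rng) for _ in range(50_000)]
mean_e = sum(energies) / len(energies)
print(round(mean_e, 2))  # near (3/5)*mu, the cold-Fermi-gas mean energy
```

A production code would sample momenta and use a sharper envelope, but the accept/reject structure is the same.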
Monte Carlo simulations on Graphics Processing Units
Vadim Demchik; Alexei Strelchenko
2009-03-30T23:59:59.000Z
Implementation of basic local Monte-Carlo algorithms on ATI Graphics Processing Units (GPU) is investigated. The Ising model and pure SU(2) gluodynamics simulations are realized with the Compute Abstraction Layer (CAL) of ATI Stream environment using the Metropolis and the heat-bath algorithms, respectively. We present an analysis of both CAL programming model and the efficiency of the corresponding simulation algorithms on GPU. In particular, the significant performance speed-up of these algorithms in comparison with serial execution is observed.
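The Metropolis algorithm mentioned above, shown here for the 2D Ising model in plain serial Python rather than CAL/GPU code, as a minimal reference implementation:

```python
import math
import random

def sweep(spins, size, beta, rng):
    """One Metropolis sweep over an size x size Ising lattice, periodic boundaries, J = 1."""
    for _ in range(size * size):
        i, j = rng.randrange(size), rng.randrange(size)
        nn = (spins[(i + 1) % size][j] + spins[(i - 1) % size][j]
              + spins[i][(j + 1) % size] + spins[i][(j - 1) % size])
        delta_e = 2.0 * spins[i][j] * nn
        # Metropolis rule: always accept downhill moves, otherwise
        # accept with probability exp(-beta * delta_e).
        if delta_e <= 0.0 or rng.random() < math.exp(-beta * delta_e):
            spins[i][j] *= -1

def magnetisation(spins, size):
    return sum(sum(row) for row in spins) / (size * size)

size, rng = 16, random.Random(1)
cold = [[1] * size for _ in range(size)]
for _ in range(50):
    sweep(cold, size, beta=1.0 / 0.1, rng=rng)   # far below T_c: order survives
hot = [[1] * size for _ in range(size)]
for _ in range(50):
    sweep(hot, size, beta=1.0 / 100.0, rng=rng)  # far above T_c: order melts
print(round(magnetisation(cold, size), 3), round(magnetisation(hot, size), 3))
```

The GPU versions in the paper parallelise exactly this update, typically via checkerboard decomposition so that simultaneously updated spins never neighbour each other.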
Quantum Monte Carlo calculations for light nuclei
Wiringa, R.B.
1998-08-01T23:59:59.000Z
Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A ≤ 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 30 different (Jπ, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.
Energy Monte Carlo (EMCEE) | Open Energy Information
Monte Carlo Simulations of the Corrosion of Aluminoborosilicate...
Monte Carlo Simulations of the Corrosion of Aluminoborosilicate Glasses. Abstract: Aluminum is one of the most common components included...
Quantum Monte Carlo methods for nuclear physics
J. Carlson; S. Gandolfi; F. Pederiva; Steven C. Pieper; R. Schiavilla; K. E. Schmidt; R. B. Wiringa
2014-12-09T23:59:59.000Z
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states and transition moments in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory
2007-01-10T23:59:59.000Z
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and its excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS
A. Kersch, W. Morokoff, A. Schuster (Siemens). Discusses the application of quasi-Monte Carlo to radiative heat transfer in manufacturing reactors; one of the problems which can be solved by such a simulation is high-accuracy modeling of the radiative heat transfer…
Adjoint electron-photon transport Monte Carlo calculations with ITS
Lorence, L.J.; Kensek, R.P.; Halbleib, J.A. [Sandia National Labs., Albuquerque, NM (United States); Morel, J.E. [Los Alamos National Lab., NM (United States)
1995-02-01T23:59:59.000Z
A general adjoint coupled electron-photon Monte Carlo code for solving the Boltzmann-Fokker-Planck equation has recently been created. It is a modified version of ITS 3.0, a coupled electron-photon Monte Carlo code that has world-wide distribution. The applicability of the new code to radiation-interaction problems of the type found in space environments is demonstrated.
Special Topics Monte Carlo Methods in Science, Engineering and Business
Shepp, Larry
Syllabus for Special Topics: Monte Carlo Methods in Science, Engineering and Business, Fall 2007. Topics include probability and statistics, simple simulation methods, sequential Monte Carlo methods, and Markov chain Monte Carlo. Prerequisite: a first graduate-level mathematical statistics course.
CERN-TH.6275/91 Monte Carlo Event Generation
Sjöstrand, Torbjörn
Monte Carlo Event Generation for LHC (T. Sjöstrand, CERN, Geneva). Abstract: The necessity of event generators for LHC physics studies is illustrated, and the Monte Carlo approach is outlined. A survey is presented of existing event generators, followed by a more detailed study…
Monte Carlo stratified source-sampling
Blomquist, R.N.; Gelbard, E.M.
1997-09-01T23:59:59.000Z
In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "eigenvalue of the world" problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. The original test-problem was treated by a special code designed specifically for that purpose. Recently ANL started work on a method for dealing with more realistic "eigenvalue of the world" configurations, and has been incorporating this method into VIM. The original method has been modified to take into account real-world statistical noise sources not included in the model problem. This paper constitutes a status report on work still in progress.
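The variance-reduction effect behind stratified sampling, shown here on a one-dimensional toy integral rather than the fission-source problem itself (an editor's illustrative sketch):

```python
import random

def plain_estimate(f, n, rng):
    """Crude Monte Carlo: n i.i.d. uniform samples on [0, 1)."""
    return sum(f(rng.random()) for _ in range(n)) / n

def stratified_estimate(f, n, rng):
    """Stratified: one sample per stratum [k/n, (k+1)/n) at the same cost."""
    return sum(f((k + rng.random()) / n) for k in range(n)) / n

f = lambda x: x * x  # true integral over [0, 1] is 1/3
rng = random.Random(3)
plain = [plain_estimate(f, 100, rng) for _ in range(300)]
strat = [stratified_estimate(f, 100, rng) for _ in range(300)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

print(variance(plain) > 25 * variance(strat))
```

Forcing each stratum to contribute exactly one sample removes the between-strata fluctuation; in the criticality setting, the strata are source regions and the goal is to suppress cycle-to-cycle source fluctuations.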
Quantum Ice : a quantum Monte Carlo study
Nic Shannon; Olga Sikora; Frank Pollmann; Karlo Penc; Peter Fulde
2011-12-13T23:59:59.000Z
Ice states, in which frustrated interactions lead to a macroscopic ground-state degeneracy, occur in water ice, in problems of frustrated charge order on the pyrochlore lattice, and in the family of rare-earth magnets collectively known as spin ice. Of particular interest at the moment are "quantum spin ice" materials, where large quantum fluctuations may permit tunnelling between a macroscopic number of different classical ground states. Here we use zero-temperature quantum Monte Carlo simulations to show how such tunnelling can lift the degeneracy of a spin or charge ice, stabilising a unique "quantum ice" ground state --- a quantum liquid with excitations described by the Maxwell action of 3+1-dimensional quantum electrodynamics. We further identify a competing ordered "squiggle" state, and show how both squiggle and quantum ice states might be distinguished in neutron scattering experiments on a spin ice material.
A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation
Li, Z.; Wang, K. [Dept. of Engineering Physics, Tsinghua Univ., Beijing, 100084 (China)
2012-07-01T23:59:59.000Z
Full core calculations are very useful and important in reactor physics analysis, especially in computing the full core power distributions, optimizing the refueling strategies and analyzing the depletion of fuels. To reduce the computing time and accelerate the convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward an RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate more complex geometries than repeated structures alone. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and variance in the calculations. Finally, future research directions are discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)
E-Print Network 3.0 - accelerating monte carlo Sample Search...
Summary: Comparison of the Computational Cost of a Monte Carlo and Deterministic Algorithm for...
E-Print Network 3.0 - accelerated monte carlo Sample Search Results
Summary: Comparison of the Computational Cost of a Monte Carlo and Deterministic Algorithm for...
E-Print Network 3.0 - alloys monte carlo Sample Search Results
Summary: ...to adjacent threefold sites according to the Metropolis Monte Carlo algorithm, with the energy landscape from... by the displacement after 1000 Monte Carlo time steps.
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU)
Yang, Owen; Choi, Bernard
2013-01-01T23:59:59.000Z
Reports on online, GPU-accelerated MC simulations.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01T23:59:59.000Z
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
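The role of the dominance ratio in slowing fission-source convergence, which the acceleration methods above target, can be seen in a toy power iteration on a synthetic 2x2 matrix (an editor's sketch, not the thesis code):

```python
def power_iteration(matrix, tol=1e-6, max_iter=10_000):
    """Power iteration for the dominant eigenpair of a small matrix.

    The iteration count grows as the dominance ratio |lambda_2 / lambda_1|
    approaches 1 -- exactly the regime in which fission-source
    convergence is slow and acceleration pays off.
    """
    n = len(matrix)
    source = [1.0] + [0.0] * (n - 1)
    for iteration in range(1, max_iter + 1):
        new = [sum(matrix[i][j] * source[j] for j in range(n)) for i in range(n)]
        k_eff = sum(new) / sum(source)          # eigenvalue estimate
        new_source = [x / k_eff for x in new]   # renormalised source
        if max(abs(a - b) for a, b in zip(new_source, source)) < tol:
            return k_eff, new_source, iteration
        source = new_source
    return k_eff, source, max_iter

# Synthetic "fission matrix" with eigenvalues 1.01 and 0.99:
# dominance ratio 0.99/1.01 ~ 0.98, so the source converges slowly.
fission_matrix = [[1.0, 0.01], [0.01, 1.0]]
k_eff, source, iters = power_iteration(fission_matrix)
print(round(k_eff, 4), iters)
```

Hundreds of iterations are needed here even for a 2x2 system; a fission-matrix method amortises the expensive transport sweeps by solving the small tallied matrix cheaply to convergence instead.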
Sequential Monte Carlo Methods for Protein Folding
Peter Grassberger
2004-08-26T23:59:59.000Z
We describe a class of growth algorithms for finding low energy states of heteropolymers. These polymers form toy models for proteins, and the hope is that similar methods will ultimately be useful for finding native states of real proteins from heuristic or a priori determined force fields. These algorithms share with standard Markov chain Monte Carlo methods the property that they generate Gibbs-Boltzmann distributions, but they are not based on the strategy that this distribution is obtained as the stationary state of a suitably constructed Markov chain. Rather, they are based on growing the polymer by successively adding individual particles, guiding the growth towards configurations with lower energies, and using "population control" to eliminate bad configurations and increase the number of "good" ones. This is not done via a breadth-first implementation as in genetic algorithms, but depth-first via recursive backtracking. As seen from various benchmark tests, the resulting algorithms are extremely efficient for lattice models, and are still competitive with other methods for simple off-lattice models.
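The growth-with-weights idea can be sketched with classic Rosenbluth sampling of self-avoiding walks on the square lattice (an illustrative toy; the algorithms above add population control and recursive backtracking on top of this):

```python
import random

def rosenbluth_walk(length, rng):
    """Grow one self-avoiding walk on the square lattice, Rosenbluth style.

    At each step the next site is chosen uniformly among unoccupied
    neighbours, and the running weight is multiplied by the number of
    choices; trapped walks get weight 0. Averaging the weights over many
    walks gives an unbiased estimate of the number of SAWs of that length.
    """
    pos, occupied, weight = (0, 0), {(0, 0)}, 1.0
    for _ in range(length):
        x, y = pos
        free = [p for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                if p not in occupied]
        if not free:
            return 0.0  # dead end: the walk is trapped
        weight *= len(free)
        pos = rng.choice(free)
        occupied.add(pos)
    return weight

rng = random.Random(11)
runs = 20_000
est3 = sum(rosenbluth_walk(3, rng) for _ in range(runs)) / runs
est4 = sum(rosenbluth_walk(4, rng) for _ in range(runs)) / runs
print(est3, round(est4))  # exact SAW counts are c_3 = 36 and c_4 = 100
```

For an energy landscape, each step would additionally carry a Boltzmann factor in the weight, and low-weight walks would be pruned while high-weight walks are cloned ("population control").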
Enhancements in Continuous-Energy Monte Carlo Capabilities in SCALE
Bekar, Kursat B [ORNL] [ORNL; Celik, Cihangir [ORNL] [ORNL; Wiarda, Dorothea [ORNL] [ORNL; Peplow, Douglas E. [ORNL] [ORNL; Rearden, Bradley T [ORNL] [ORNL; Dunn, Michael E [ORNL] [ORNL
2013-01-01T23:59:59.000Z
Monte Carlo tools in SCALE are commonly used in criticality safety calculations as well as sensitivity and uncertainty analysis, depletion, and criticality alarm system analyses. Recent improvements in the continuous-energy data generated by the AMPX code system and significant advancements in the continuous-energy treatment in the KENO Monte Carlo eigenvalue codes facilitate the use of SCALE Monte Carlo codes to model geometrically complex systems with enhanced solution fidelity. The addition of continuous-energy treatment to the SCALE Monaco code, which can be used with automatic variance reduction in the hybrid MAVRIC sequence, provides significant enhancements, especially for criticality alarm system modeling. This paper describes some of the advancements in continuous-energy Monte Carlo codes within the SCALE code system.
Variance Reduction Techniques for Implicit Monte Carlo Simulations
Landman, Jacob Taylor
2013-09-19T23:59:59.000Z
The Implicit Monte Carlo (IMC) method is widely used for simulating thermal radiative transfer and solving the radiation transport equation. During an IMC run a grid network is constructed and particles are sourced into the problem to simulate...
An Analysis Tool for Flight Dynamics Monte Carlo Simulations
Restrepo, Carolina 1982-
2011-05-20T23:59:59.000Z
…and analysis work to understand vehicle operating limits and identify circumstances that lead to mission failure. A Monte Carlo simulation approach that varies a wide range of physical parameters is typically used to generate thousands of test cases...
Monte Carlos of the new generation: status and progress
Frixione, Stefano [INFN, Sezione di Genova, Via Dodecaneso 33, 16146 Genova (Italy)
2005-03-22T23:59:59.000Z
Standard parton shower Monte Carlos are designed to give reliable descriptions of low-pT physics. In the very high-energy regime of modern colliders, this may lead to largely incorrect predictions of the basic reaction processes. This motivated the recent theoretical efforts aimed at improving Monte Carlos through the inclusion of matrix elements computed beyond the leading order in QCD. I briefly review the progress made, and discuss bottom production at the Tevatron.
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
Pevey, Ronald E.
2005-09-15T23:59:59.000Z
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
Establishing Quantum Monte Carlo and Hybrid Density
Militzer, Burkhard
Program in Physics, The Ohio State University, 2011. Dissertation Committee: John W. Wilkins, Advisor. Abstract fragment: ...GGAs and hybrid functionals, such as the screened hybrid HSE, have been developed to try to improve on the flawed properties available for silica. The HSE DFT functional is shown to reproduce QMC results for both silicon...
A Multivariate Time Series Method for Monte Carlo Reactor Analysis
Taro Ueki
2008-08-14T23:59:59.000Z
A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed the Coarse Mesh Projection Method (CMPM) and can be implemented using coarse statistical bins for the acquisition of nuclear fission source data. A novel aspect of CMPM is its combination of the general technical principle of projection pursuit from the signal-processing discipline with the neutron multiplication eigenvalue problem of the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate the major eigenvalue separations of nuclear reactors with continuous-energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional fission matrix method is demonstrated for a three-dimensional model of the initial core of a pressurized water reactor.
Quantum Monte Carlo Calculations of Light Nuclei Using Chiral Potentials
J. E. Lynn; J. Carlson; E. Epelbaum; S. Gandolfi; A. Gezerlis; A. Schwenk
2014-11-09T23:59:59.000Z
We present the first Green's function Monte Carlo calculations of light nuclei with nuclear interactions derived from chiral effective field theory up to next-to-next-to-leading order. Up to this order, the interactions can be constructed in a local form and are therefore amenable to quantum Monte Carlo calculations. We demonstrate a systematic improvement with each order for the binding energies of $A=3$ and $A=4$ systems. We also carry out the first few-body tests to study perturbative expansions of chiral potentials at different orders, finding that higher-order corrections are more perturbative for softer interactions. Our results confirm the necessity of a three-body force for correct reproduction of experimental binding energies and radii, and pave the way for studying few- and many-nucleon systems using quantum Monte Carlo methods with chiral interactions.
Fast neutron fluxes in pressure vessels using Monte Carlo methods
Edlund, M.C.; Thomas, J.R.
1986-01-01T23:59:59.000Z
The objective of this project is to determine the feasibility of calculating the fast neutron flux in the pressure vessel of a pressurized water reactor by Monte Carlo methods. Neutron reactions reduce the ductility of the steel and thus limit the useful life of this important reactor component. This work was performed for Virginia Power (VEPCO). VIM is a continuous-energy Monte Carlo code that provides a versatile geometric capability and a neutron physics data base closely representing the ENDF/B-IV data from which it was derived.
Monte Carlo Methods in Finance (Metodi Monte Carlo in Finanza), Lucia Caramellino
Caramellino, Lucia
Monte Carlo is a stochastic method for the numerical computation of deterministic quantities. The idea is... Table-of-contents fragment: 3.1 Numerical computation of the price; 3.1.1 Call and put; 3.1.3 Options on two underlying assets; 3.2 Numerical computation of the...
Romano, Paul K. (Paul Kollath)
2013-01-01T23:59:59.000Z
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there ...
Monte Carlo Filtering on Lie Groups Alessandro Chiuso 1 and Stefano Soatto 2
Soatto, Stefano
Abstract fragment: We propose ... to be consistent with the updated conditional distribution. The algorithm proposed, like other Monte Carlo methods...
Guan, Fada 1982-
2012-04-27T23:59:59.000Z
The Monte Carlo method has been successfully applied to simulating particle transport problems. Most Monte Carlo simulation tools are static, and they can only be used to perform static simulations for problems with fixed physics...
Pasciak, Alexander Samuel
2007-04-25T23:59:59.000Z
Advancements in parallel and cluster computing have made many complex Monte Carlo simulations possible in the past several years. Unfortunately, cluster computers are large, expensive, and still not fast enough to make the Monte Carlo technique...
Selection Criteria Based on Monte Carlo Simulation and Cross Validation
Shang, Junfeng
Junfeng Shang, Bowling Green State University, USA (Department of Mathematics and Statistics, 450 Math Science Building, Bowling Green, OH 43403; e-mail: jshang@bgnet.bgsu.edu). Abstract fragment: In the mixed modeling framework, Monte Carlo... 1 Introduction: The Akaike (1973, 1974) information...
Difficulties in vector-parallel processing of Monte Carlo codes
Higuchi, Kenji; Asai, Kiyoshi [Japan Atomic Energy Research Inst., Tokyo (Japan). Center for Promotion of Computational Science and Engineering; Hasegawa, Yukihiro [Research Organization for Information Science and Technology, Tokai, Ibaraki (Japan)
1997-09-01T23:59:59.000Z
Experiences with vectorization of production-level Monte Carlo codes such as KENO-IV, MCNP, VIM, and MORSE have shown that it is difficult to attain high speedup ratios on vector processors because of indirect addressing, nests of conditional branches, short vector length, cache misses, and operations for realization of robustness and generality. A previous work has already shown that the first, second, and third difficulties can be resolved by using special computer hardware for vector processing of Monte Carlo codes. Here, the fourth and fifth difficulties are discussed in detail using the results for a vectorized version of the MORSE code. As for the fourth difficulty, it is shown that the cache miss-hit ratio affects execution times of the vectorized Monte Carlo codes and the ratio strongly depends on the number of the particles simultaneously tracked. As for the fifth difficulty, it is shown that remarkable speedup ratios are obtained by removing operations that are not essential to the specific problem being solved. These experiences have shown that if a production-level Monte Carlo code system had a capability to selectively construct source coding that complements the input data, then the resulting code could achieve much higher performance.
Quantum Monte Carlo calculations of symmetric nuclear matter
Stefano Gandolfi; Francesco Pederiva; Stefano Fantoni; Kevin E. Schmidt
2007-04-13T23:59:59.000Z
We present an accurate numerical study of the equation of state of nuclear matter based on realistic nucleon--nucleon interactions by means of Auxiliary Field Diffusion Monte Carlo (AFDMC) calculations. The AFDMC method samples the spin and isospin degrees of freedom allowing for quantum simulations of large nucleonic systems and can provide quantitative understanding of problems in nuclear structure and astrophysics.
Evolutionary Monte Carlo for protein folding simulations Faming Lianga)
Liang, Faming
Faming Liang, Department of Statistics. Abstract fragment: ...to simulations of protein folding on simple lattice models, and to finding the ground state of a protein. ...structures in protein folding. The numerical results show that it is drastically superior to other methods.
ENVIRONMENTAL MODELING: 1 APPLICATIONS: MONTE CARLO SENSITIVITY SIMULATIONS
Dimov, Ivan
Chapter 1. Applications: Monte Carlo sensitivity simulations for the problem of air pollution transport (1.1 The Danish Eulerian Model). Abstract fragment: ...of pollutants in a real-life scenario of air-pollution transport over Europe. First, the developed technique...
Monte Carlo Simulations of Thermal Conductivity in Nanoporous Si Membranes
Stefanie Wolf. Abstract fragment: ...candidates for thermoelectric materials, as they can provide extremely low thermal conductivity... effect of boundary scattering on the thermal conductivity. We show that the material porosity strongly affects...
Multiple Overlapping Tiles for Contextual Monte Carlo Tree Search
Paris-Sud XI, Université de
Abstract fragments: ...generation of libraries for linear transforms [4] or active learning [8]. The use of Monte Carlo simulations... is to group simulations where two particular actions have been selected by the same player. Then, we learn... simulations in the MCTS algorithm has been proposed. We first present reinforcement learning, the principle...
A MONTE CARLO SIMULATION OF WATER FLOW IN VARIABLY ...
1910-10-30T23:59:59.000Z
A Monte Carlo simulation method is employed to study groundwater flow in variably saturated fractal porous ... Richards' equation which is solved using a hybridized mixed finite element procedure. ... INTRODUCTION ... This conclusion has led to the development of stochastic models for the basic un- ... different soils.
SciTech Connect: Fast Monte Carlo for radiation therapy: the...
Office of Scientific and Technical Information (OSTI)
MEDICINE, BASIC STUDIES; RADIOTHERAPY; PLANNING; COMPUTER CALCULATIONS; RADIATION DOSE DISTRIBUTIONS; MONTE CARLO METHOD; THREE-DIMENSIONAL CALCULATIONS; COMPUTERIZED TOMOGRAPHY...
Beyond the Born-Oppenheimer approximation with quantum Monte Carlo
Tubman, Norm M; Hammes-Schiffer, Sharon; Ceperley, David M
2014-01-01T23:59:59.000Z
In this work we develop tools that enable the study of non-adiabatic effects with variational and diffusion Monte Carlo methods. We introduce a highly accurate wave function ansatz for electron-ion systems that can involve a combination of both fixed and quantum ions. We explicitly calculate the ground state energies of H$_{2}$, LiH, H$_{2}$O and FHF$^{-}$ using fixed-node quantum Monte Carlo with wave function nodes that explicitly depend on the ion positions. The obtained energies implicitly include the effects arising from quantum nuclei and electron-nucleus coupling. We compare our results to the best theoretical and experimental results available and find excellent agreement.
Fixed-Node Diffusion Monte Carlo of Lithium Systems
Rasch, Kevin
2015-01-01T23:59:59.000Z
We study lithium systems over a range of numbers of atoms, e.g., the atomic anion, the dimer, a metallic cluster, and the body-centered cubic crystal, by the diffusion Monte Carlo method. The calculations include both core and valence electrons in order to avoid any possible impact of pseudopotentials. The focus of the study is fixed-node errors, and for that purpose we test several orbital sets in order to provide the most accurate nodal hypersurfaces. We compare our results to other high-accuracy calculations wherever available, and to experimental results, so as to quantify the fixed-node errors. The results for these Li systems show that fixed-node quantum Monte Carlo achieves remarkably accurate total energies and recovers 97-99% of the correlation energy.
Efficient, automated Monte Carlo methods for radiation transport
Kong Rong; Ambrose, Martin [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Spanier, Jerome [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Beckman Laser Institute and Medical Clinic, University of California, 1002 Health Science Road E., Irvine, CA 92612 (United States)], E-mail: jspanier@uci.edu
2008-11-20T23:59:59.000Z
Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.
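The staged idea described above, using information from stage k to alter the sampling in stage k+1, can be illustrated on a one-dimensional integral. The sketch below is not the authors' transport algorithm: it adaptively tunes a power-law importance-sampling proposal, and the integrand, proposal family, and moment-matching update rule are all illustrative choices.

```python
import random, math
random.seed(1)

def f(x):        # integrand: its integral over [0, 1] is exactly 1
    return 3.0 * x * x

def stage(a, n):
    # Draw from the proposal p(x; a) = (a+1) x^a via inverse-CDF sampling,
    # accumulate importance weights w = f(x)/p(x) and the weighted mean of ln x.
    ws, wlog = [], 0.0
    for _ in range(n):
        x = random.random() ** (1.0 / (a + 1.0))
        w = f(x) / ((a + 1.0) * x ** a)
        ws.append(w)
        wlog += w * math.log(x)
    return sum(ws) / n, wlog / sum(ws)

a = 0.0                        # stage 0: uniform sampling
for k in range(4):             # each stage re-tunes the proposal
    est, mean_log = stage(a, 2000)
    # Moment match: under p(x; a), E[ln x] = -1/(a+1), so solve for a.
    a = -1.0 / mean_log - 1.0

print(est, a)   # est near 1.0, a near 2.0 (the zero-variance proposal p = 3x^2)
```

As the proposal approaches the shape of the integrand, the weights approach a constant and the stage-to-stage error shrinks, which is the same mechanism behind the geometric learning described in the abstract.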
The hybrid Monte Carlo Algorithm and the chiral transition
Gupta, R.
1987-01-01T23:59:59.000Z
In this talk the author describes tests of the Hybrid Monte Carlo algorithm for QCD, done in collaboration with Greg Kilcup and Stephen Sharpe. We find that the acceptance in the global Metropolis step for staggered fermions can be tuned and kept large without having to make the step size prohibitively small. We present results for the finite-temperature transition on 4^4 and 4 x 6^3 lattices using this algorithm.
Regional Monte Carlo solution of elliptic partial differential equations
Booth, T.E.
1981-01-01T23:59:59.000Z
A continuous random walk procedure for solving some elliptic partial differential equations at a single point is generalized to estimate the solution everywhere. The Monte Carlo method described here is exact (except at the boundary) in the sense that the only error is the statistical sampling error that tends to zero as the sample size increases. A method to estimate the error introduced at the boundary is provided so that the boundary error can always be made less than the statistical error.
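The single-point procedure that the abstract generalizes is, in its classic form, the walk-on-spheres estimator for the Laplace equation. A minimal sketch (an assumption for illustration, not the paper's generalized method) on the unit disk with boundary data g(x, y) = x, so the exact harmonic solution is u = x:

```python
import random, math
random.seed(7)

def walk_on_spheres(x, y, eps=1e-4):
    # Jump to a uniform point on the largest circle centred at the current
    # point that fits inside the unit disk, until within eps of the boundary.
    while True:
        r = 1.0 - math.hypot(x, y)   # distance to the boundary of the unit disk
        if r < eps:
            break
        t = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(t)
        y += r * math.sin(t)
    return x  # boundary data g(x, y) = x, so u(x, y) = x is the exact solution

n = 20000
u = sum(walk_on_spheres(0.3, 0.2) for _ in range(n)) / n
print(u)   # near the exact value u(0.3, 0.2) = 0.3
```

As the abstract notes for the general method, the only error away from the boundary is statistical; the eps cutoff is what introduces the small boundary error the paper proposes to estimate.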
Monte Carlo approach to nuclei and nuclear matter
Fantoni, Stefano [S.I.S.S.A., International School of Advanced Studies, INFN, Sezione di Trieste and INFM, CNR-DEMOCRITOS National Supercomputing Center (Italy); Gandolfi, Stefano; Illarionov, Alexey Yu. [S.I.S.S.A., International School of Advanced Studies, INFN, Sezione di Trieste (Italy); Schmidt, Kevin E. [Department of Physics, Arizona State University (United States); Pederiva, Francesco [Dipartimento di Fisica, University of Trento (Italy); INFM, CNR-DEMOCRITOS National Supercomputing Center (Greece)
2008-10-13T23:59:59.000Z
We report on the most recent applications of the Auxiliary Field Diffusion Monte Carlo (AFDMC) method. The equation of state (EOS) for pure neutron matter in both normal and BCS phase and the superfluid gap in the low-density regime are computed, using a realistic Hamiltonian containing the Argonne AV8' plus Urbana IX three-nucleon interaction. Preliminary results for the EOS of isospin-asymmetric nuclear matter are also presented.
Monte Carlo approach to nuclei and nuclear matter
Stefano Fantoni; Stefano Gandolfi; Alexey Yu. Illarionov; Kevin E. Schmidt; Francesco Pederiva
2008-07-31T23:59:59.000Z
We report on the most recent applications of the Auxiliary Field Diffusion Monte Carlo (AFDMC) method. The equation of state (EOS) for pure neutron matter in both normal and BCS phase and the superfluid gap in the low-density regime are computed, using a realistic Hamiltonian containing the Argonne AV8' plus Urbana IX three-nucleon interaction. Preliminary results for the EOS of isospin-asymmetric nuclear matter are also presented.
Quantum Monte Carlo Calculations of Symmetric Nuclear Matter
Gandolfi, Stefano [Dipartimento di Fisica and INFN, University of Trento, via Sommarive 14, I-38050 Povo, Trento (Italy); Pederiva, Francesco [Dipartimento di Fisica and INFN, University of Trento, via Sommarive 14, I-38050 Povo, Trento (Italy); CNR-DEMOCRITOS National Supercomputing Center, Trieste (Italy); Fantoni, Stefano [Scuola Internazionale Superiore di Studi Avanzati and INFN via Beirut 2/4, 34014 Trieste (Italy); CNR-DEMOCRITOS National Supercomputing Center, Trieste (Italy); Schmidt, Kevin E. [Department of Physics, Arizona State University, Tempe, Arizona (United States)
2007-03-09T23:59:59.000Z
We present an accurate numerical study of the equation of state of nuclear matter based on realistic nucleon-nucleon interactions by means of auxiliary field diffusion Monte Carlo (AFDMC) calculations. The AFDMC method samples the spin and isospin degrees of freedom allowing for quantum simulations of large nucleonic systems and represents an important step forward towards a quantitative understanding of problems in nuclear structure and astrophysics.
Monte-Carlo Simulation for an Aerogel Cherenkov Counter
Ryuji Suda et al
1997-07-31T23:59:59.000Z
We have developed a Monte-Carlo simulation code for an aerogel Čerenkov counter operated in a strong magnetic field such as 1.5 T. The code consists of two parts: photon transport inside the aerogel tiles, and one-dimensional amplification in a fine-mesh photomultiplier tube. It simulates the output photoelectron yields to an accuracy of 5% with only a single free parameter. The code has been applied to simulations for a B-Factory particle-identification system.
MC++: Parallel, portable, Monte Carlo neutron transport in C++
Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A& M Univ., College Station, TX (United States). Dept. of Nuclear Engineering
1997-02-01T23:59:59.000Z
We have developed an implicit Monte Carlo neutron transport code in C++ using the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and parallelism. Current capabilities of MC++ are discussed, along with future plans and physics and performance results on many different platforms.
OBJECT KINETIC MONTE CARLO SIMULATIONS OF MICROSTRUCTURE EVOLUTION
Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.
2013-09-30T23:59:59.000Z
The objective is to report the development of the flexible object kinetic Monte Carlo (OKMC) simulation code KSOME (kinetic simulation of microstructure evolution) which can be used to simulate microstructure evolution of complex systems under irradiation. In this report we briefly describe the capabilities of KSOME and present preliminary results for short term annealing of single cascades in tungsten at various primary-knock-on atom (PKA) energies and temperatures.
A Wigner Monte Carlo approach to density functional theory
Sellier, J.M., E-mail: jeanmichel.sellier@gmail.com; Dimov, I.
2014-08-01T23:59:59.000Z
In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system from which one obtains the total electron density (Kohn–Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive comprehension of chemical systems in terms of the Wigner formalism, based on the concept of phase space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines, paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems: a lithium atom, a boron atom, and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (even though no restriction is imposed on the choice of the initial conditions). We also show good agreement with standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn–Sham systems provides a reliable computational tool which could eventually be applied to more sophisticated problems.
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Lecture 1: Introduction and Monte Carlo basics: some model applications; random number generation. Slide fragments: ...force being outside some specified range. Note: if we turn this into a full finite element analysis... on the boundary. Mike Giles (Oxford), Monte Carlo methods, October 25, 2013.
Improved quantum Monte Carlo calculation of the ground-state energy of the hydrogen molecule
Anderson, James B.
Abstract fragments: ...Monte Carlo calculation of the nonrelativistic ground-state energy of the hydrogen molecule, without the use of... variational energies. The accuracy of the new Monte Carlo energy is approximately equal to that of recent...
Properties of Reactive Oxygen Species by Quantum Monte Carlo
Andrea Zen; Bernhardt L. Trout; Leonardo Guidoni
2014-03-11T23:59:59.000Z
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distributions, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as $N^3-N^4$, where $N$ is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species by first principles.
Properties of reactive oxygen species by quantum Monte Carlo
Zen, Andrea [Dipartimento di Fisica, La Sapienza - Università di Roma, Piazzale Aldo Moro 2, 00185 Rome (Italy); Trout, Bernhardt L. [Department of Chemical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, Massachusetts 02139 (United States); Guidoni, Leonardo, E-mail: leonardo.guidoni@univaq.it [Dipartimento di Scienze Fisiche e Chimiche, Università degli studi de L'Aquila, Via Vetoio, 67100 Coppito, L'Aquila (Italy)
2014-07-07T23:59:59.000Z
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distributions, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N^3-N^4, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species by first principles.
Computational radiology and imaging with the MCNP Monte Carlo code
Estes, G.P.; Taylor, W.M.
1995-05-01T23:59:59.000Z
MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
Monte Carlo Simulations of the two-dimensional dipolar fluid
Caillol, Jean-Michel
2015-01-01T23:59:59.000Z
We study a two-dimensional fluid of dipolar hard disks by Monte Carlo simulations in a square with periodic boundary conditions and on the surface of a sphere. The theory of the dielectric constant and the asymptotic behaviour of the equilibrium pair correlation function in the fluid phase is derived for both geometries. After having established the equivalence of the two methods we study the stability of the liquid phase in the canonical ensemble. We give evidence of a phase made of living polymers at low temperatures and provide a tentative phase diagram.
GPU accelerated Monte Carlo simulations of lattice spin models
Martin Weigel; Taras Yavors'kii
2011-07-27T23:59:59.000Z
We consider Monte Carlo simulations of classical spin models of statistical mechanics using the massively parallel architecture provided by graphics processing units (GPUs). We discuss simulations of models with discrete and continuous variables, and using an array of algorithms ranging from single-spin flip Metropolis updates over cluster algorithms to multicanonical and Wang-Landau techniques to judge the scope and limitations of GPU accelerated computation in this field. For most simulations discussed, we find significant speed-ups by two to three orders of magnitude as compared to single-threaded CPU implementations.
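The single-spin-flip Metropolis update mentioned above is easy to show in a single-threaded sketch; the 2D Ising model, lattice size, inverse temperature, and sweep count below are arbitrary illustrative choices, and the GPU parallelization discussed in the abstract is deliberately omitted.

```python
import random, math
random.seed(3)

L, beta = 16, 1.0                    # 16x16 lattice, beta above beta_c ~ 0.44 (ordered phase)
spin = [[1] * L for _ in range(L)]   # start from the all-up configuration

def sweep():
    # One Metropolis sweep: propose L*L single-spin flips at random sites.
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j] +
              spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
        dE = 2.0 * spin[i][j] * nb   # energy change of flipping spin (i, j), J = 1
        if dE <= 0.0 or random.random() < math.exp(-beta * dE):
            spin[i][j] = -spin[i][j]

for _ in range(200):
    sweep()
m = abs(sum(sum(row) for row in spin)) / (L * L)
print(m)   # magnetization per spin stays close to 1 in the ordered phase
```

The data dependence visible here (each flip reads its four neighbors) is exactly what forces the checkerboard decompositions and other schemes the abstract evaluates on GPUs.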
Temperature and density extrapolations in canonical ensemble Monte Carlo simulations
A. L. Ferreira; M. A. Barroso
1999-06-14T23:59:59.000Z
We show how to use the multiple histogram method to combine canonical ensemble Monte Carlo simulations made at different temperatures and densities. The method can be applied to study systems of particles with arbitrary interaction potential and to compute the thermodynamic properties over a range of temperatures and densities. The calculation of the Helmholtz free energy relative to some thermodynamic reference state enables us to study phase coexistence properties. We test the method on the Lennard-Jones fluids for which many results are available.
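The reweighting idea underlying the multiple histogram method can be shown in its single-histogram form: samples drawn at one temperature are reweighted by exp(-(beta' - beta)E) to estimate averages at another. The toy system below (a single harmonic degree of freedom, chosen because <E> = 1/(2*beta) is known exactly) is an assumption for illustration, not the Lennard-Jones application of the paper.

```python
import random, math
random.seed(5)

beta0, beta = 1.0, 2.0
# Canonical samples at beta0: p(x) ~ exp(-beta0 * x^2), i.e. Gaussian
# with variance 1/(2*beta0); the "energy" of a sample is E = x^2.
xs = [random.gauss(0.0, math.sqrt(1.0 / (2.0 * beta0))) for _ in range(100000)]
es = [x * x for x in xs]

# Reweight each sample from beta0 to the target inverse temperature beta.
ws = [math.exp(-(beta - beta0) * e) for e in es]
e_rw = sum(w * e for w, e in zip(ws, es)) / sum(ws)

print(e_rw, 1.0 / (2.0 * beta))   # reweighted <E> vs the exact value 0.25
```

Combining histograms from several (beta, density) runs, as in the paper, generalizes this single-run reweighting and extends the reliable extrapolation range.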
Monte Carlo Tools for charged Higgs boson production
K. Kovarik
2014-12-18T23:59:59.000Z
In this short review we discuss two implementations of the charged Higgs boson production process in association with a top quark in Monte Carlo event generators at next-to-leading order in QCD. We introduce the MC@NLO and the POWHEG methods of matching next-to-leading order matrix elements with parton showers and compare both methods by analyzing the charged Higgs boson production process in association with a top quark. We briefly discuss the case of a light charged Higgs boson, where the associated charged Higgs production interferes with charged Higgs production via t-tbar production and the subsequent decay of the top quark.
Adaptively Learning an Importance Function Using Transport Constrained Monte Carlo
Booth, T.E.
1998-06-22T23:59:59.000Z
It is well known that a Monte Carlo estimate can be obtained with zero-variance if an exact importance function for the estimate is known. There are many ways that one might iteratively seek to obtain an ever more exact importance function. This paper describes a method that has obtained ever more exact importance functions that empirically produce an error that is dropping exponentially with computer time. The method described herein constrains the importance function to satisfy the (adjoint) Boltzmann transport equation. This constraint is provided by using the known form of the solution, usually referred to as the Case eigenfunction solution.
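The zero-variance property invoked above is easy to demonstrate on a toy integral: as the sampling density approaches the shape of the integrand, the variance of the importance weights vanishes. The power-law proposal family below is an illustrative choice, not the paper's transport-constrained construction.

```python
import random
random.seed(9)

def var_of_weights(a, n=20000):
    # Importance-sample I = integral of 2x over [0, 1] with proposal
    # p(x; a) = (a+1) x^a; at a = 1 the proposal is exactly proportional
    # to the integrand, so every weight equals 1.
    ws = []
    for _ in range(n):
        x = random.random() ** (1.0 / (a + 1.0))   # inverse-CDF sampling
        ws.append(2.0 * x / ((a + 1.0) * x ** a))  # weight w = f(x)/p(x)
    m = sum(ws) / n
    return sum((w - m) ** 2 for w in ws) / n

vs = [var_of_weights(a) for a in (0.0, 0.5, 1.0)]
for a, v in zip((0.0, 0.5, 1.0), vs):
    print(a, v)   # variance drops to 0 as p approaches the integrand's shape
```

The paper's contribution is to steer this kind of iteration toward the exact importance function by constraining it to satisfy the adjoint transport equation, rather than picking the family by hand as done here.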
Bounded limit for the Monte Carlo point-flux-estimator
Grimesey, R.A.
1981-01-01T23:59:59.000Z
In a Monte Carlo random walk the kernel K(R,E) is used as an expected-value estimator at every collision for the collided flux phi_c(r,E) at the detector point. A limiting value for the kernel is derived from a diffusion approximation for the probability current at a radius R_1 from the detector point. The variance of the collided flux at the detector point is thus bounded using this asymptotic form for K(R,E). The bounded point-flux estimator is derived. (WHK)
Burnup calculation methodology in the serpent 2 Monte Carlo code
Leppaenen, J. [VTT Technical Research Centre of Finland, P.O.Box 1000, FI-02044 VTT (Finland); Isotalo, A. [Aalto Univ., Dept. of Applied Physics, P.O.Box 14100, FI-00076 AALTO (Finland)
2012-07-01T23:59:59.000Z
This paper presents two topics related to the burnup calculation capabilities in the Serpent 2 Monte Carlo code: advanced time-integration methods and improved memory management, accomplished by the use of different optimization modes. The development of the introduced methods is an important part of re-writing the Serpent source code, carried out for the purpose of extending the burnup calculation capabilities from 2D assembly-level calculations to large 3D reactor-scale problems. The progress is demonstrated by repeating a PWR test case, originally carried out in 2009 for the validation of the newly-implemented burnup calculation routines in Serpent 1. (authors)
Global neutrino parameter estimation using Markov Chain Monte Carlo
Steen Hannestad
2007-10-10T23:59:59.000Z
We present a Markov Chain Monte Carlo global analysis of neutrino parameters using both cosmological and experimental data. Results are presented for the combination of all presently available data from oscillation experiments, cosmology, and neutrinoless double beta decay. In addition we explicitly study the interplay between cosmological, tritium decay and neutrinoless double beta decay data in determining the neutrino mass parameters. We furthermore discuss how the inference of non-neutrino cosmological parameters can benefit from future neutrino mass experiments such as the KATRIN tritium decay experiment or neutrinoless double beta decay experiments.
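A minimal sketch of the Markov Chain Monte Carlo machinery underlying such a global fit, reduced to one parameter with a standard-normal toy "posterior"; the target density, proposal width, and chain length are assumptions for illustration (a real analysis evaluates oscillation and cosmological likelihoods at this step).

```python
import random, math
random.seed(11)

def log_post(m):
    # Toy one-parameter log-posterior: standard normal in the parameter m.
    return -0.5 * m * m

chain, m = [], 0.0
for step in range(50000):
    prop = m + random.gauss(0.0, 1.0)          # symmetric random-walk proposal
    dlp = log_post(prop) - log_post(m)
    if dlp >= 0.0 or random.random() < math.exp(dlp):
        m = prop                               # Metropolis accept
    chain.append(m)

burn = chain[5000:]                            # discard burn-in
mean = sum(burn) / len(burn)
var = sum((x - mean) ** 2 for x in burn) / len(burn)
print(mean, var)   # posterior mean near 0, variance near 1
```

Marginalizing the chain over subsets of parameters is what produces the interplay between neutrino-mass and non-neutrino cosmological parameters discussed in the abstract.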
Continuous-Estimator Representation for Monte Carlo Criticality Diagnostics
Kiedrowski, Brian C. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory
2012-06-18T23:59:59.000Z
An alternate means of computing diagnostics for Monte Carlo criticality calculations is proposed. Overlapping spherical estimator regions are placed to cover the fissile material, with a minimum center-to-center separation of the 'fission distance', which is defined herein, and a radius that is some multiple thereof. Fission neutron production is recorded based upon a weighted average of proximities to centers for all the spherical estimators. These scores are used to compute the Shannon entropy, which is shown to reproduce, to within an additive constant, the value determined from a well-placed user-defined mesh. The spherical estimators are also used to assess statistical coverage.
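The Shannon entropy that the spherical estimators reproduce can be computed from binned fission production as follows (the 4-bin mesh and tally values are illustrative):

```python
import math

def shannon_entropy(counts):
    """H = -sum p_i * log2(p_i) over mesh bins with nonzero fission production."""
    total = float(sum(counts))
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Illustrative fission-source tallies on a 4-bin mesh (hypothetical numbers)
uniform = [25, 25, 25, 25]   # fully spread source
peaked = [97, 1, 1, 1]       # source collapsed into one region

H_uniform = shannon_entropy(uniform)   # maximum: log2(4) = 2.0 bits
H_peaked = shannon_entropy(peaked)
```

A stationary entropy trace is the usual signal that the fission source has converged; a collapsed source shows up as a markedly lower value.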
Liao, Jun-min
2006-01-01T23:59:59.000Z
We used the combination of molecular dynamics and the Monte Carlo method to investigate protein folding problems. The environments of proteins are very big, and often…
Protein folding and phylogenetic tree reconstruction using stochastic approximation Monte Carlo.
Cheon, Sooyoung
2007-01-01T23:59:59.000Z
Recently, the stochastic approximation Monte Carlo algorithm has been proposed by Liang et al. (2005) as a general-purpose stochastic optimization and simulation algorithm. An annealing…
Monte Carlo source convergence and the Whitesides problem
Blomquist, R. N.
2000-02-25T23:59:59.000Z
The issue of fission source convergence in Monte Carlo eigenvalue calculations is of interest because of the potential consequences of erroneous criticality safety calculations. In this work, the authors compare two different techniques to improve the source convergence behavior of standard Monte Carlo calculations applied to challenging source convergence problems. The first method, superhistory powering, attempts to avoid discarding important fission sites between generations by delaying stochastic sampling of the fission site bank until after several generations of multiplication. The second method, stratified sampling of the fission site bank, explicitly keeps the important sites even if conventional sampling would have eliminated them. The test problems are variants of Whitesides' Criticality of the World problem in which the fission site phase space was intentionally undersampled in order to induce marginally intolerable variability in local fission site populations. Three variants of the problem were studied, each with a different degree of coupling between fissionable pieces. Both the superhistory powering method and the stratified sampling method were shown to improve convergence behavior, although stratified sampling is more robust for the extreme case of no coupling. Neither algorithm completely eliminates the loss of the most important fissionable piece, and if coupling is absent, the lost piece cannot be recovered unless its sites from earlier generations have been retained. Finally, criteria for measuring source convergence reliability are proposed and applied to the test problems.
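The stratified-sampling idea can be sketched as follows (the two-piece fission bank is illustrative, not the authors' code): every fissionable piece retains at least one site, so a weakly coupled piece cannot be lost by chance.

```python
import random

random.seed(3)
# Illustrative fission bank: piece 'B' holds a single, easily lost site
bank = [("A", i) for i in range(99)] + [("B", 0)]

def stratified_sample(bank, n):
    """Keep at least one site from every fissionable piece, then fill the
    remainder by conventional random sampling of the whole bank."""
    pieces = {}
    for site in bank:
        pieces.setdefault(site[0], []).append(site)
    kept = [random.choice(sites) for sites in pieces.values()]
    kept += [random.choice(bank) for _ in range(n - len(kept))]
    return kept

new_bank = stratified_sample(bank, 100)
piece_ids = {piece for piece, _ in new_bank}
```

Plain multinomial resampling of the same bank would drop piece 'B' with probability (0.99)^100, roughly 37 percent per generation; the stratified variant never does.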
Stratified source-sampling techniques for Monte Carlo eigenvalue analysis.
Mohamed, A.
1998-07-10T23:59:59.000Z
In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo ''Eigenvalue of the World'' problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result relative to conventional source-sampling methods. However, this gain in reliability is substantially less than that observed in the model-problem results.
Improved criticality convergence via a modified Monte Carlo iteration method
Booth, Thomas E [Los Alamos National Laboratory; Gubernatis, James E [Los Alamos National Laboratory
2009-01-01T23:59:59.000Z
Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction of |k{sub 3}|/k{sub 1} instead of |k{sub 2}|/k{sub 1}. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must somehow sum to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this can sometimes be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has stability problems. We also show that a simple method deals with this in an effective, but ad hoc, manner.
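The effect of deflating the second eigenfunction can be illustrated on a small deterministic analogue (the diagonal operator is hypothetical, and the second eigenfunction is taken as known here, whereas the paper's method estimates it stochastically with signed particle weights):

```python
import math

# Illustrative diagonal 'operator' whose entries play the roles of
# k1 > k2 > k3; its eigenfunctions are simply the coordinate axes.
k = [4.0, 2.5, 1.5]

def power_iterate(deflate, n=10):
    """Plain power iteration converges to the dominant mode at rate k2/k1;
    subtracting the second eigenfunction each cycle improves this to k3/k1."""
    x = [1.0, 1.0, 1.0]
    for _ in range(n):
        x = [k[i] * x[i] for i in range(3)]    # apply the operator
        if deflate:
            x[1] = 0.0                          # subtract second eigenfunction
        norm = math.sqrt(sum(c * c for c in x))
        x = [c / norm for c in x]
    return x

align_plain = power_iterate(False)[0]     # component along the dominant mode
align_deflated = power_iterate(True)[0]
```

After the same number of cycles the deflated iterate lies measurably closer to the dominant eigenfunction, which is the acceleration the modified power method buys.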
Monte Carlo simulation of quantum Zeno effect in the brain
Danko Georgiev
2014-12-11T23:59:59.000Z
Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
Optimization of quantum Monte Carlo wave functions using analytical energy derivatives
Lin, Xi
An algorithm is proposed to optimize quantum Monte Carlo (QMC) wave functions based on Newton's method, using analytical derivatives of the local energy. If the wave function were the exact ground eigenstate, the local energy would be constant. (November 1999)
Status of the VIM Monte Carlo neutron/photon transport code.
Blomquist, R.N.
2002-01-22T23:59:59.000Z
Recent work on the VIM Monte Carlo code has aimed at advanced data libraries, ease of use, availability to users outside of Argonne, and fission source convergence algorithms in eigenvalue calculations. VIM is one of three US Monte Carlo codes in the USDOE Nuclear Criticality Safety Program, and is available through RSICC and the NEA Data Bank.
Kemner, Ken
Tuning Green's Function Monte Carlo for Mira
Steven C. Pieper, Physics Division, Argonne National Laboratory, with Ralph Butler (Middle Tennessee State), Joseph Carlson (Los Alamos), and Stefano…
Quantum Monte Carlo, used for comparisons of models to data, has made much progress for A ≤ 12; nuclei go up to A = 238.
A new quasi-Monte Carlo technique based on nonnegative least squares and approximate Fekete points
De Marchi, Stefano
Claudia Bittante, Stefano De Marchi, and Alvise Sommariva (University of Padova). …the quasi-Monte Carlo method. The method, simple in its formulation, becomes computationally inefficient…
Melting of Iron under Earth's Core Conditions from Diffusion Monte Carlo Free Energy Calculations
Alfè, Dario
Ester Sola and Dario Alfè (Thomas Young Centre@UCL, and Department of Earth Sciences, UCL). Here we used quantum Monte Carlo techniques to compute the free energies of solid and liquid iron…
On Filtering the Noise from the Random Parameters in Monte Carlo Rendering
Sen, Pradeep
Pradeep Sen and Soheil Darabi, UNM Advanced Graphics Lab. Monte Carlo (MC) rendering systems can produce spectacular images from a small number of input samples. To do this, we treat the rendering system as a black box…
Monte Carlo simulations of free chains in end-linked polymer networks Nisha Gilra
…be significantly altered [1-3]. This occurs because the microscopic structure, including network defects… Networks prepared in the presence of an inert linear-chain solvent were investigated with Monte Carlo simulations.
Self-assembly of surfactants in a supercritical solvent from lattice Monte Carlo simulations
Lisal, Martin
We study the self-assembly of surfactants in a supercritical solvent by large-scale lattice Monte Carlo simulations. Surfactant molecules used in supercritical CO2 (scCO2) have two mutually incompatible components: a CO2-philic tail…
Quantum Monte Carlo study of a disordered 2D Josephson junction array
Stroud, David
W.A. Al-Saidi and D. Stroud. PACS: 74.25.Dw; 05.30.Jp; 85.25.Cp. Keywords: Josephson junctions; quantum Monte Carlo; disorder. A Josephson junction array (JJA) consists of a collection of superconducting islands connected…
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Slides dated October 25, 2013, covering an introduction and Monte Carlo basics, some model applications, random number generation, and Monte Carlo estimation. Application: modelling groundwater flow in nuclear waste repositories. Note: if we turn this into a full finite element analysis, then the computational cost…
Hydrogen molecule ion: Path-integral Monte Carlo approach
Kylänpää, I.; Leino, M.; Rantala, T. T. [Institute of Physics, Tampere University of Technology, P.O. Box 692, FI-33101 Tampere (Finland)
2007-11-15T23:59:59.000Z
The path-integral Monte Carlo approach is used to study the coupled quantum dynamics of the electron and nuclei in the hydrogen molecule ion. The coupling effects are demonstrated by comparing differences between adiabatic Born-Oppenheimer and nonadiabatic simulations, and by inspecting projections of the full three-body dynamics onto the adiabatic Born-Oppenheimer approximation. Coupling of the electron and nuclear quantum dynamics is clearly seen. The nuclear pair correlation function is found to broaden by 0.040a{sub 0}, and the average bond length is larger by 0.056a{sub 0}. Also, a nonadiabatic correction to the binding energy is found. The electronic distribution is affected less than the nuclear one upon inclusion of nonadiabatic effects.
Normality of Monte Carlo criticality eigenfunction decomposition coefficients
Toth, B. E.; Martin, W. R. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States); Griesheimer, D. P. [Bechtel Bettis, Inc., P.O. Box 79, West Mifflin, PA 15122 (United States)
2013-07-01T23:59:59.000Z
A proof is presented, which shows that after a single Monte Carlo (MC) neutron transport power method iteration without normalization, the coefficients of an eigenfunction decomposition of the fission source density are normally distributed when using analog or implicit capture MC. Using a Pearson correlation coefficient test, the proof is corroborated by results from a uniform slab reactor problem, and those results also suggest that the coefficients are normally distributed with normalization. The proof and numerical test results support the application of earlier work on the convergence of eigenfunctions under stochastic operators. Knowledge of the Gaussian shape of decomposition coefficients allows researchers to determine an appropriate level of confidence in the distribution of fission sites taken from a MC simulation. This knowledge of the shape of the probability distributions of decomposition coefficients encourages the creation of new predictive convergence diagnostics. (authors)
Monte Carlo solution of a semi-discrete transport equation
Urbatsch, T.J.; Morel, J.E.; Gulick, J.C.
1999-09-01T23:59:59.000Z
The authors present the S{sub {infinity}} method, a hybrid neutron transport method in which Monte Carlo particles traverse discrete space. The goal of any deterministic/stochastic hybrid method is to couple selected characteristics from each of the methods in hopes of producing a better method. The S{sub {infinity}} method has the features of the lumped, linear-discontinuous (LLD) spatial discretization, yet it has no ray effects because of the continuous angular variable. They derive the S{sub {infinity}} method for the steady-state, mono-energetic transport equation in one-dimensional slab geometry with isotropic scattering and an isotropic internal source. They demonstrate the viability of the S{sub {infinity}} method by comparing their results favorably to analytic and deterministic results.
Velocity renormalization in graphene from lattice Monte Carlo
Joaquín E. Drut; Timo A. Lähde
2014-03-26T23:59:59.000Z
We compute the Fermi velocity of the Dirac quasiparticles in clean graphene at the charge neutrality point for strong Coulomb coupling alpha_g. We perform a Lattice Monte Carlo calculation within the low-energy Dirac theory, which includes an instantaneous, long-range Coulomb interaction. We find a renormalized Fermi velocity v_FR > v_F, where v_F = c/300. Our results are consistent with a momentum-independent v_FR which increases approximately linearly with alpha_g, although a logarithmic running with momentum cannot be excluded at present. At the predicted critical coupling alpha_gc for the semimetal-insulator transition due to excitonic pair formation, we find v_FR/v_F = 3.3, which we discuss in light of experimental findings for v_FR/v_F at the charge neutrality point in ultra-clean suspended graphene.
Quantum Monte Carlo study of inhomogeneous neutron matter
Stefano Gandolfi
2012-08-31T23:59:59.000Z
We present an ab-initio study of neutron drops. We use Quantum Monte Carlo techniques to calculate the energy of up to 54 neutrons in different external potentials, and we compare the results with Skyrme forces. We also calculate the rms radii and radial densities, and we find that a re-adjustment of the gradient term in Skyrme is needed in order to reproduce the properties of these systems given by the ab-initio calculation. By using the ab-initio results for neutron drops for closed- and open-shell configurations, we suggest how to improve Skyrme forces when dealing with systems with large isospin asymmetries like neutron-rich nuclei.
The neutron instrument Monte Carlo library MCLIB: Recent developments
Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.; Thelliez, T.G.
1998-12-31T23:59:59.000Z
A brief review is given of the developments since the ICANS-XIII meeting in the neutron instrument design codes using the Monte Carlo library MCLIB. Much of the effort has been to ensure that the library and the executing code MC_RUN connect efficiently with the World Wide Web application MC-WEB as part of the Los Alamos Neutron Instrument Simulation Package (NISP). Since one of the most important features of MCLIB is its open structure and capability to incorporate any possible neutron transport or scattering algorithm, this document describes the current procedure that would be used by an outside user to add a feature to MCLIB. Details of the calling sequence of the core subroutine OPERATE are discussed, and questions of style are considered and additional guidelines given. Suggestions for standardization are solicited, as well as code for new algorithms.
Monte Carlo Simulation Tool Installation and Operation Guide
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02T23:59:59.000Z
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software, and the CRY cosmic-ray library. These tools have been chosen for their applicability to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation, and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
Synchronous Parallel Kinetic Monte Carlo: Diffusion in Heterogeneous Systems
Martinez Saez, Enrique [Los Alamos National Laboratory; Hetherly, Jeffery [Los Alamos National Laboratory; Caro, Jose A [Los Alamos National Laboratory
2010-12-06T23:59:59.000Z
A new hybrid Molecular Dynamics-kinetic Monte Carlo algorithm has been developed to study the basic mechanisms of diffusion in concentrated alloys under the action of chemical and stress fields. The k-MC part is parallelized with a recently developed synchronous algorithm [J. Comput. Phys. 227 (2008) 3804-3823] that introduces a set of null events to synchronize the time for the different subdomains. Combined with the parallel efficiency of MD, this provides the computing power required to evaluate jump rates 'on the fly', incorporating the actual driving force emerging from chemical-potential gradients and the actual environment-dependent jump rates. The time gain has been analyzed and the parallel performance reported. The algorithm is tested on simple diffusion problems to verify its accuracy.
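The null-event synchronization can be sketched as follows (the rates are illustrative, not the authors' code): every subdomain shares a common maximum rate R_max, fires a real event with probability rate/R_max, and otherwise executes a null event that only advances its clock, so all subdomain clocks advance with the same time-step distribution.

```python
import math
import random

random.seed(4)
rates = [2.0, 0.5, 1.25]   # illustrative total event rates per subdomain
R_max = max(rates)          # common rate that synchronizes all subdomains

def kmc_step(domain_rate):
    """One synchronous step: a real event fires with probability
    rate/R_max; otherwise a null event only advances the clock. Every
    subdomain draws its time step from the same exponential(R_max)."""
    real = random.random() < domain_rate / R_max
    dt = -math.log(random.random()) / R_max
    return real, dt

n, real_count, t = 50000, 0, 0.0
for _ in range(n):
    real, dt = kmc_step(0.5)    # slowest subdomain: mostly null events
    real_count += real
    t += dt
frac_real = real_count / n      # expected 0.5 / 2.0 = 0.25
```

Null events waste some work in slow subdomains, but they let all subdomains take lock-step time steps without a global event list.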
Atomistic Kinetic Monte Carlo Simulations of Polycrystalline Copper Electrodeposition
Treeratanaphitak, Tanyakarn; Abukhdeir, Nasser Mohieddin
2014-01-01T23:59:59.000Z
A high-fidelity kinetic Monte Carlo (KMC) simulation method (T. Treeratanaphitak, M. Pritzker, N. M. Abukhdeir, Electrochim. Acta 121 (2014) 407--414) using the semi-empirical multi-body embedded-atom method (EAM) potential has been extended to model polycrystalline metal electrodeposition. The presented KMC-EAM method enables true three-dimensional atomistic simulations of electrodeposition over experimentally relevant timescales. Simulations using KMC-EAM are performed over a range of overpotentials to predict the effect on deposit texture evolution. Results show strong agreement with past experimental results both with respect to deposition rates on various copper surfaces and roughness-time power law behaviour. It is found that roughness scales with time $\\propto t^\\beta$ where $\\beta=0.62 \\pm 0.12$, which is in good agreement with past experimental results. Furthermore, the simulations provide insights into sub-surface deposit morphologies which are not directly accessible from experimental measurements.
RMC - A Monte Carlo code for reactor physics analysis
Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, A.; Yu, J.; Sun, J.; Fan, X.; Yu, G. [Department of Engineering Physics, Tsinghua University, Liuqing Building, Beijing, 100084 (China)
2013-07-01T23:59:59.000Z
A new Monte Carlo neutron transport code, RMC, is under development by the Department of Engineering Physics, Tsinghua University, Beijing, as a tool for reactor physics analysis on high-performance computing platforms. To meet the requirements of reactor analysis, RMC now provides criticality calculation, fixed-source calculation, burnup calculation, and kinetics simulation. Techniques for geometry treatment, a new burnup algorithm, source convergence acceleration, massive tallies, parallel calculation, and temperature-dependent cross-section processing have been researched and implemented in RMC to improve efficiency. Validation results for criticality calculation, burnup calculation, source convergence acceleration, tally performance, and parallel performance presented in this paper demonstrate the capability of RMC to handle reactor analysis problems with good performance. (authors)
Monte Carlo reactor calculation with substantially reduced number of cycles
Lee, M. J.; Joo, H. G. [Seoul National Univ., 599 Gwanak-ro, Gwanak-gu, Seoul, 151-744 (Korea, Republic of); Lee, D. [Ulsan National Inst. of Science and Technology, UNIST-gil 50, Eonyang-eup, Ulju-gun, Ulsan, 689-798 (Korea, Republic of); Smith, K. [Massachusetts Inst. of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139-4307 (United States)
2012-07-01T23:59:59.000Z
A new Monte Carlo (MC) eigenvalue calculation scheme that substantially reduces the number of cycles is introduced with the aid of coarse mesh finite difference (CMFD) formulation. First, it is confirmed in terms of pin power errors that using extremely many particles resulting in short active cycles is beneficial even in the conventional MC scheme, although wasted operations in inactive cycles cannot be reduced with more particles. A CMFD-assisted MC scheme is introduced as an effort to reduce the number of inactive cycles, and the fast convergence behavior and reduced inter-cycle effect of the CMFD-assisted MC calculation are investigated in detail. As a practical means of providing a good initial fission source distribution, an assembly-based few-group condensation and homogenization scheme is introduced, and it is shown that efficient MC eigenvalue calculations with fewer than 20 total cycles (including inactive cycles) are possible for large power reactor problems. (authors)
Monte Carlo modeling of spallation targets containing uranium and americium
Malyshkin, Yury; Mishustin, Igor; Greiner, Walter
2013-01-01T23:59:59.000Z
Neutron production and transport in spallation targets made of uranium and americium are studied with a Geant4-based code MCADS (Monte Carlo model for Accelerator Driven Systems). A good agreement of MCADS results with experimental data on neutron- and proton-induced reactions on $^{241}$Am and $^{243}$Am nuclei allows this model to be used for simulations with extended Am targets. Several geometry options and material compositions (U, U+Am, Am, Am$_2$O$_3$) are considered for spallation targets to be used in Accelerator Driven Systems. It was demonstrated that the MCADS model can be reliably used for calculating critical masses of fissile materials. All considered options operate as deeply subcritical targets with a neutron multiplication factor of $k \\sim 0.5$. It is found that more than 4 kg of Am can be burned in one spallation target during the first year of operation.
Monte Carlo modeling of spallation targets containing uranium and americium
Yury Malyshkin; Igor Pshenichnov; Igor Mishustin; Walter Greiner
2014-05-02T23:59:59.000Z
Neutron production and transport in spallation targets made of uranium and americium are studied with a Geant4-based code MCADS (Monte Carlo model for Accelerator Driven Systems). A good agreement of MCADS results with experimental data on neutron- and proton-induced reactions on $^{241}$Am and $^{243}$Am nuclei allows this model to be used for simulations with extended Am targets. It was demonstrated that the MCADS model can be used for calculating the values of critical mass for $^{233,235}$U, $^{237}$Np, $^{239}$Pu and $^{241}$Am. Several geometry options and material compositions (U, U+Am, Am, Am$_2$O$_3$) are considered for spallation targets to be used in Accelerator Driven Systems. All considered options operate as deeply subcritical targets with a neutron multiplication factor of $k \\sim 0.5$. It is found that more than 4 kg of Am can be burned in one spallation target during the first year of operation.
Lifting -- A Nonreversible Markov Chain Monte Carlo Algorithm
Vucelja, Marija
2015-01-01T23:59:59.000Z
Markov Chain Monte Carlo algorithms are invaluable numerical tools for exploring the stationary properties of physical systems, in particular when direct sampling is not feasible. They are widely used in many areas of physics and other sciences. Most common implementations are done with reversible Markov chains, which obey detailed balance. Reversible Markov chains are sufficient for the physical system to relax to equilibrium, but they are not necessary. Here we review several works that use "lifted" or nonreversible Markov chains, which violate detailed balance yet still converge to the correct stationary distribution (they obey the global balance condition). In certain cases the acceleration is at most a square-root improvement over conventional reversible Markov chains. We introduce the problem in a way that makes it accessible to non-specialists. We illustrate the method on several representative examples (sampling on a ring, sampling on a torus, an Ising model on a complete graph...
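The ring example mentioned above admits a very short sketch (parameters illustrative): the walker carries a direction variable and flips it only rarely, violating detailed balance while the uniform stationary distribution is preserved through global balance.

```python
import random

random.seed(5)
N = 10                     # ring sites; target is the uniform distribution
steps = 100000
x, direction = 0, 1
counts = [0] * N
for _ in range(steps):
    if random.random() < 1.0 / N:
        direction = -direction   # rare direction flips: global balance holds,
                                 # detailed balance does not
    x = (x + direction) % N      # otherwise keep moving the same way
    counts[x] += 1
freqs = [c / steps for c in counts]
```

The ballistic segments of length ~N let the lifted walker traverse the ring in O(N) steps, whereas a reversible random walk needs O(N^2); the empirical site frequencies still come out uniform.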
Monte Carlo simulation study of scanning Auger electron images
Li, Y. G.; Ding, Z. J. [Department of Physics and Hefei National Laboratory for Physical Sciences at Microscale, University of Science and Technology of China, Hefei, Anhui 230026 (China); Zhang, Z. M. [Department of Astronomy and Applied Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)
2009-07-15T23:59:59.000Z
Simulation of contrast formation in Auger electron imaging of surfaces is helpful for analyzing scanning Auger microscopy/microanalysis (SAM) images. In this work, we have extended our previous Monte Carlo model and the simulation method for calculation of scanning electron microscopy (SEM) images to SAM images of complex structures. The essentials of the simulation method are as follows. (1) We use constructive solid geometry modeling for a sample geometry that is complex in elemental distribution as well as in topographical configuration, and a ray-tracing technique in the calculation of electron flight steps that cross different element zones. The combination of basic objects filled with elements, alloys, or compounds enables the simulation of a wide variety of sample geometries. (2) Sampled Auger signal electrons with a characteristic energy are generated in the simulation following an inner-shell ionization event, described by Casnati's inner-shell ionization cross section. This paper discusses in detail the features of simulated SAM images and line scans for structured samples, i.e., objects embedded in a matrix, under various experimental conditions (object size, location depth, beam energy, and incident angle). Several effects are predicted and explained, such as the contrast reversal for nanoparticles 10-60 nm in size, the contrast enhancement for particles made of different elements and wholly embedded in a matrix, and the artifact contrast due to nearby objects containing different elements. The simulated SAM images are also compared with simulated SEM images of secondary electrons and of backscattered electrons. The results indicate that Monte Carlo simulation can play an important role in quantitative SAM mapping.
MONTE-CARLO BURNUP CALCULATION UNCERTAINTY QUANTIFICATION AND PROPAGATION DETERMINATION
Nichols, T.; Sternat, M.; Charlton, W.
2011-05-08T23:59:59.000Z
MONTEBURNS is a Monte-Carlo depletion routine utilizing MCNP and ORIGEN 2.2. Uncertainties exist in the MCNP transport calculation, but this information is not passed to the depletion calculation in ORIGEN or saved. To quantify this transport uncertainty and determine how it propagates between burnup steps, a statistical analysis of multiple repeated depletion runs is performed. The reactor model chosen is the Oak Ridge Research Reactor (ORR) in a single-assembly, infinite-lattice configuration. This model was burned for a 25.5-day cycle broken down into three steps. The output isotopics as well as the effective multiplication factor (k-effective) were tabulated, and histograms were created at each burnup step using the Scott Method to determine the bin width. It was expected that the gram-quantity and k-effective histograms would produce normally distributed results since they were produced from a Monte-Carlo routine, but some of the results do not. The standard deviation at each burnup step was consistent between fission product isotopes as expected, while the uranium isotopes produced some unique results. The variation in the quantity of uranium was small enough that round-off error occurred in the reaction-rate MCNP tally, producing a set of repeated results with slight variation. Statistical analyses were performed using the {chi}{sup 2} test against a normal distribution for several isotopes and the k-effective results. While the isotopes failed to reject the null hypothesis of being normally distributed, the {chi}{sup 2} statistic grew through the steps in the k-effective test, and the null hypothesis was rejected in the later steps. These results suggest that, for a high-accuracy solution, MCNP cell material quantities of less than 100 grams and larger kcode parameters are needed to minimize uncertainty propagation and minimize round-off effects.
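The Scott Method bin width used for the histograms is simple to state and compute (the synthetic k-effective sample below is a hypothetical stand-in, not MCNP output):

```python
import math
import random

def scott_bin_width(data):
    """Scott's rule for histogram bin width: h = 3.49 * s * n**(-1/3),
    where s is the sample standard deviation and n the sample size."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return 3.49 * s * n ** (-1.0 / 3.0)

random.seed(6)
# Hypothetical stand-in for repeated k-effective results
keff = [random.gauss(1.0, 0.001) for _ in range(100)]
h = scott_bin_width(keff)
nbins = max(1, round((max(keff) - min(keff)) / h))
```

Scott's rule is optimal (in mean integrated squared error) for normally distributed data, which is why departures from a clean bell-shaped histogram under this binning are already informative.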
Koh, Wonshill
2013-02-22T23:59:59.000Z
The light propagation in highly scattering turbid media composed of particles with different size distributions is studied using a Monte Carlo simulation model implemented in Standard C. The Monte Carlo method has been widely utilized to study...
Quantum Monte Carlo Calculations Applied to Magnetic Molecules
Larry Engelhardt
2006-08-09T23:59:59.000Z
We have calculated the equilibrium thermodynamic properties of Heisenberg spin systems using a quantum Monte Carlo (QMC) method. We have used some of these systems as models to describe recently synthesized magnetic molecules and, upon comparing the results of these calculations with experimental data, have obtained accurate estimates for the basic parameters of these models. We have also performed calculations for other systems that are of more general interest, being relevant both for existing experimental data and for future experiments. Utilizing the concept of importance sampling, these calculations can be carried out in an arbitrarily large quantum Hilbert space while still avoiding any approximations that would introduce systematic errors. The only errors are statistical in nature, and as such, their magnitudes are accurately estimated during the course of a simulation. Frustrated spin systems present a major challenge to the QMC method; nevertheless, in many instances progress can be made. In this chapter, the field of magnetic molecules is introduced, paying particular attention to the characteristics that distinguish magnetic molecules from other systems studied in condensed matter physics. We briefly outline the typical path by which we learn about magnetic molecules, which requires a close relationship between experiments and theoretical calculations. The typical experiments are introduced here, while the theoretical methods are discussed in the next chapter. Each of these theoretical methods has a considerable limitation, also described in Chapter 2, which together serve to motivate the present work. As is shown throughout the later chapters, the present QMC method is often able to provide useful information where other methods fail. In Chapter 3, the use of Monte Carlo methods in statistical physics is reviewed, building up the fundamental ideas that are necessary in order to understand the method that has been used in this work.
With these ideas in hand, we then provide a detailed explanation of the current QMC method in Chapter 4. The remainder of the thesis is devoted to presenting specific results: Chapters 5 and 6 contain articles in which this method has been used to answer general questions that are relevant to broad classes of systems. Then, in Chapter 7, we provide an analysis of four different species of magnetic molecules that have recently been synthesized and studied. In all cases, comparisons between QMC calculations and experimental data allow us to distinguish a viable microscopic model and make predictions for future experiments. In Chapter 8, the infamous ''negative sign problem'' is described in detail, and we clearly indicate the limitations on QMC that are imposed by this obstacle. Finally, Chapter 9 contains a summary of the present work and the expected directions for future research.
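The claim above, that importance sampling yields estimates whose only errors are statistical, can be illustrated with a toy integral. The integrand and sampling density below are our own choices for illustration, not taken from the thesis:

```python
import math, random

random.seed(2)

# Toy illustration of importance sampling: estimate I = integral_0^1 e^{-10x} dx.
# Drawing samples from a density concentrated where the integrand is large
# reduces the statistical error while introducing no systematic bias.
exact = (1.0 - math.exp(-10.0)) / 10.0
n = 20_000

# Plain Monte Carlo: uniform samples on [0, 1].
plain = sum(math.exp(-10.0 * random.random()) for _ in range(n)) / n

# Importance sampling with p(x) = 8 e^{-8x} / (1 - e^{-8}) on [0, 1]:
# draw x by inverting the CDF of p, then weight the integrand by 1/p(x).
norm = 1.0 - math.exp(-8.0)
total = 0.0
for _ in range(n):
    x = -math.log(1.0 - random.random() * norm) / 8.0
    total += math.exp(-10.0 * x) * norm / (8.0 * math.exp(-8.0 * x))
weighted = total / n

print(exact, plain, weighted)
```

Both estimators are unbiased; with the sampling density concentrated where the integrand is large, the weighted estimator's statistical error is roughly an order of magnitude smaller here.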
Complete Monte Carlo Simulation of Neutron Scattering Experiments
Drosg, M. [Faculty of Physics, University of Vienna, Boltzmanngasse 5, A-1090 Wien (Austria)
2011-12-13T23:59:59.000Z
In the far past, it was not possible to accurately correct for the finite geometry and the finite sample size of a neutron scattering set-up. The limited calculation power of the ancient computers, the lack of powerful Monte Carlo codes, and the limitations of the database available then prevented a complete simulation of the actual experiment. Using, e.g., the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can be simulated almost completely and with a high degree of precision on a modern PC, which has a computing power ten thousand times that of a supercomputer of the early 1970s. Thus, (better) corrections can also be obtained easily for previously published data, provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g., atomic masses, relativistic corrections, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments from liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two thirds of all differential scattering cross section data of {sup 3}He(n,n){sup 3}He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interaction in the air, the interaction with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are smaller by at least a factor of five, making this work relevant.
Even more important are the corrections to the Karlsruhe data due to the inclusion of the missing outgoing self-attenuation that amounts to up to 15%.
Monte Carlo simulations for generic granite repository studies
Chu, Shaoping [Los Alamos National Laboratory; Lee, Joon H [SNL; Wang, Yifeng [SNL
2010-12-08T23:59:59.000Z
In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near-field and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a subset of radionuclides that are potentially important to repository performance was identified and evaluated for a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for long-term disposal of high-level radioactive waste in a granite repository.
An efficient approach to ab initio Monte Carlo simulation
Leiding, Jeff; Coe, Joshua D., E-mail: jcoe@lanl.gov [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)
2014-01-21T23:59:59.000Z
We present a nested Markov chain Monte Carlo (NMC) scheme for building equilibrium averages based on accurate potentials such as density functional theory. Metropolis sampling of a reference system, defined by an inexpensive but approximate potential, was used to substantially decorrelate configurations at which the potential of interest was evaluated, thereby dramatically reducing the number needed to build ensemble averages at a given level of precision. The efficiency of this procedure was maximized on-the-fly through variation of the reference system thermodynamic state (characterized here by its inverse temperature β{sup 0}), which was otherwise unconstrained. Local density approximation results are presented for shocked states of argon at pressures from 4 to 60 GPa, where, depending on the quality of the reference system potential, acceptance probabilities were enhanced by factors of 1.2-28 relative to unoptimized NMC. The optimization procedure compensated strongly for reference potential shortcomings, as evidenced by significantly higher speedups when using a reference potential of lower quality. The efficiency of optimized NMC is shown to be competitive with that of standard ab initio molecular dynamics in the canonical ensemble.
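The nested scheme described above can be sketched in a few lines. The toy "expensive" and reference potentials, temperatures, and step sizes below are illustrative stand-ins of our own choosing, not the paper's DFT setup; the outer accept/reject step is what removes the reference-potential bias:

```python
import math, random

random.seed(1)

def e_target(x):   # "expensive" potential (stand-in for a DFT energy)
    return x**4 - 2.0 * x**2

def e_ref(x):      # cheap reference potential (harmonic approximation)
    return 0.5 * x**2

def nested_mc(beta, beta_ref, n_outer=2000, n_inner=25, step=0.5):
    x = 0.0
    samples = []
    for _ in range(n_outer):
        # Inner loop: ordinary Metropolis on the cheap reference potential,
        # run at its own inverse temperature beta_ref to decorrelate moves.
        y = x
        for _ in range(n_inner):
            trial = y + random.uniform(-step, step)
            if random.random() < math.exp(min(0.0, -beta_ref * (e_ref(trial) - e_ref(y)))):
                y = trial
        # Outer accept/reject corrects for the reference bias, so the chain
        # samples exp(-beta * e_target) exactly (no systematic error).
        d = -beta * (e_target(y) - e_target(x)) + beta_ref * (e_ref(y) - e_ref(x))
        if random.random() < math.exp(min(0.0, d)):
            x = y
        samples.append(x)
    return samples

s = nested_mc(beta=1.0, beta_ref=0.5)
print(sum(s) / len(s))  # should be near 0 for this symmetric double well
```

Only one "expensive" evaluation is needed per outer step, and varying beta_ref (as the paper's optimization does) tunes the outer acceptance rate.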
Hyperon Puzzle: Hints from Quantum Monte Carlo Calculations
Diego Lonardoni; Alessandro Lovato; Stefano Gandolfi; Francesco Pederiva
2015-02-27T23:59:59.000Z
The onset of hyperons in the core of neutron stars and the consequent softening of the equation of state have been questioned for a long time. Controversial theoretical predictions and recent astrophysical observations of neutron stars are the grounds for the so-called hyperon puzzle. We calculate the equation of state and the neutron star mass-radius relation of infinite systems of neutrons and $\Lambda$ particles by using the auxiliary field diffusion Monte Carlo algorithm. We find that the three-body hyperon-nucleon interaction plays a fundamental role in the softening of the equation of state and in the consequent reduction of the predicted maximum mass. We have considered two different models of three-body force that successfully describe the binding energy of medium-mass hypernuclei. Our results indicate that they give dramatically different predictions for the maximum mass of neutron stars, not necessarily incompatible with the recent observation of very massive neutron stars. We conclude that stronger constraints on the hyperon-neutron force are necessary in order to properly assess the role of hyperons in neutron stars.
Global variance reduction for Monte Carlo reactor physics calculations
Zhang, Q.; Abdel-Khalik, H. S. [Department of Nuclear Engineering, North Carolina State University, P.O. Box 7909, Raleigh, NC 27695-7909 (United States)
2013-07-01T23:59:59.000Z
Over the past few decades, hybrid Monte Carlo-deterministic (MC-DT) techniques have been developed primarily with shielding applications in mind, i.e., problems featuring a limited number of responses. This paper focuses on the application of a new hybrid MC-DT technique, the SUBSPACE method, to reactor analysis calculations. The SUBSPACE method is designed to overcome the lack of efficiency that hampers the application of MC methods in routine analysis calculations at the assembly level, where one typically needs to execute the flux solver on the order of 10{sup 3}-10{sup 5} times. It places a high premium on attaining high computational efficiency for reactor analysis applications by identifying and capitalizing on the existing correlations between responses of interest. This paper places particular emphasis on using the SUBSPACE method for preparing homogenized few-group cross section sets at the assembly level for subsequent use in full-core diffusion calculations. A BWR assembly model is employed to calculate homogenized few-group cross sections for different burn-up steps. It is found that using the SUBSPACE method a significant speedup can be achieved over the state-of-the-art FW-CADIS method. While the presented speedup alone is not sufficient to render the MC method competitive with the DT method, we believe this work is a major step toward leveraging the accuracy of MC calculations for assembly calculations. (authors)
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL
2008-01-01T23:59:59.000Z
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
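As a sketch of the MCMC side of this comparison, the following estimates a single BMA weight by random-walk Metropolis on synthetic data. Plain Metropolis stands in for DREAM here, and the two-member ensemble, its biases, and the true weight of 0.7 are invented for illustration:

```python
import math, random

random.seed(7)

# Synthetic "ensemble": two forecasts of the same quantity, with the truth
# generated near the first forecast 70% of the time (true weight w = 0.7).
N = 400
f1 = [random.gauss(0.0, 1.0) for _ in range(N)]
f2 = [random.gauss(3.0, 1.0) for _ in range(N)]
sigma = 0.5
obs = [random.gauss(f1[i] if random.random() < 0.7 else f2[i], sigma) for i in range(N)]

def norm_pdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def log_lik(w):
    # BMA predictive density: a weighted mixture of the member densities.
    return sum(math.log(w * norm_pdf(obs[i], f1[i], sigma)
                        + (1 - w) * norm_pdf(obs[i], f2[i], sigma))
               for i in range(N))

# Random-walk Metropolis on the single weight w in (0, 1).
w, lw = 0.5, log_lik(0.5)
chain = []
for _ in range(3000):
    cand = w + random.uniform(-0.1, 0.1)
    if 0.0 < cand < 1.0:
        lc = log_lik(cand)
        if math.log(random.random()) < lc - lw:
            w, lw = cand, lc
    chain.append(w)

est = sum(chain[500:]) / len(chain[500:])  # discard burn-in
print(round(est, 2))  # posterior mean weight; should lie near the true 0.7
```

Unlike an EM point estimate, the retained chain also gives the posterior spread of the weight, which is the uncertainty information the abstract highlights.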
Binding and Diffusion of Lithium in Graphite: Quantum Monte Carlo...
choice for models of Li-ion battery electrodes due to its balance between accuracy and cost. While DFT is in principle an exact approach, in practice it involves approximating...
Monte Carlo Simulations of Microchannel Plate Based, Fast-Gated X-Ray Imagers
Wu, M.; Kruschwitz, C.
2011-02-01T23:59:59.000Z
This is a chapter in the book Applications of Monte Carlo Method in Science and Engineering, edited by Shaul Mordechai (InTech, February 2011; ISBN 978-953-307-691-1; hardcover, 950 pages).
Serdar Elhatisari; Dean Lee
2014-12-01T23:59:59.000Z
We present lattice Monte Carlo calculations of fermion-dimer scattering in the limit of zero-range interactions using the adiabatic projection method. The adiabatic projection method uses a set of initial cluster states and Euclidean time projection to give a systematically improvable description of the low-lying scattering cluster states in a finite volume. We use Lüscher's finite-volume relations to determine the s-wave, p-wave, and d-wave phase shifts. For comparison, we also compute exact lattice results using Lanczos iteration and continuum results using the Skorniakov-Ter-Martirosian equation. For our Monte Carlo calculations we use a new lattice algorithm called impurity lattice Monte Carlo. This algorithm can be viewed as a hybrid technique which incorporates elements of both worldline and auxiliary-field Monte Carlo simulations.
University of Washington, Seattle - Department of Physics, Electroweak Interaction Research Group
Nuclear Structure and Reactions (Quantum Monte Carlo, Lanczos Methods, Density Functional Methods): "... systems: nuclei and the unitary Fermi gas", Thursday, June 9, 10:00 am, Stefano Gandolfi, "Ab ...
Protein folding and phylogenetic tree reconstruction using stochastic approximation Monte Carlo
Cheon, Sooyoung
2007-09-17T23:59:59.000Z
folding problems. The numerical results indicate that it outperforms simulated annealing and conventional Monte Carlo algorithms as a stochastic optimization algorithm. We also propose one method for the use of secondary structures in protein folding...
ATLAS Monte Carlo production Run-1 experience and readiness for Run-2 challenges
Chapman, John Derek; The ATLAS collaboration; Garcia Navarro, Jose Enrique; Gwenlan, Claire; Mehlhase, Sascha; Tsulaia, Vakhtang; Vaniachine, Alexandre; Zhong, Jiahang; Pacheco Pages, Andres
2015-01-01T23:59:59.000Z
In this presentation we will review the ATLAS Monte Carlo production setup, including the different production steps involved in full and fast detector simulation. A report on the Monte Carlo production campaigns during Run-1 and Long Shutdown 1 (LS1), and on the status of the production for Run-2, will be presented. The presentation will include details of various performance aspects. Important improvements in the workflow and software will be highlighted. Besides standard Monte Carlo production for data analyses at 7 and 8 TeV, the production accommodates various specialised activities. These range from extended Monte Carlo validation and Geant4 validation to pileup simulation using zero-bias data and production for various upgrade studies. The challenges of these activities will be discussed.
Efficient scene simulation for robust monte carlo localization using an RGB-D camera
Fallon, Maurice Francis
2013-05-14T23:59:59.000Z
This paper presents Kinect Monte Carlo Localization (KMCL), a new method for localization in three dimensional indoor environments using RGB-D cameras, such as the Microsoft Kinect. The approach makes use of a low fidelity ...
Annealing contour Monte Carlo algorithm for structure optimization in an off-lattice protein model
Liang, Faming
For example, the HP model [1] treats each amino acid as a point particle and restricts the model to fold ... of the energy landscape, so it is an excellent tool for Monte Carlo optimization. The ACMC algorithm is an accel...
Wang, Li-Fang, Ph. D. Massachusetts Institute of Technology
2007-01-01T23:59:59.000Z
In this thesis research, a coherent scattering model for microwave remote sensing of vegetation canopy is developed on the basis of Monte Carlo simulations. An accurate model of vegetation structure is essential for the ...
Comparison of value-added models for school ranking and classification: a Monte Carlo study
Wang, Zhongmiao
2009-05-15T23:59:59.000Z
Comparison of Value-Added Models for School Ranking and Classification: A Monte Carlo Study. A dissertation by Zhongmiao Wang, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
Ibrahim, Ahmad M [ORNL]; Peplow, Douglas E. [ORNL]; Peterson, Joshua L [ORNL]; Grove, Robert E [ORNL]
2013-01-01T23:59:59.000Z
The rigorous 2-step (R2S) method uses three-dimensional Monte Carlo transport simulations to calculate the shutdown dose rate (SDDR) in fusion reactors. Accurate full-scale R2S calculations are impractical in fusion reactors because they require calculating space- and energy-dependent neutron fluxes everywhere inside the reactor. The use of global Monte Carlo variance reduction techniques was suggested for accelerating the neutron transport calculation of the R2S method. The prohibitive computational costs of these approaches, which increase with the problem size and amount of shielding materials, inhibit their use in the accurate full-scale neutronics analyses of fusion reactors. This paper describes a novel hybrid Monte Carlo/deterministic technique that uses the Consistent Adjoint Driven Importance Sampling (CADIS) methodology but focuses on multi-step shielding calculations. The Multi-Step CADIS (MS-CADIS) method speeds up the Monte Carlo neutron calculation of the R2S method using an importance function that represents the importance of the neutrons to the final SDDR. Using a simplified example, preliminary results showed that the use of MS-CADIS enhanced the efficiency of the neutron Monte Carlo simulation of an SDDR calculation by a factor of 550 compared to standard global variance reduction techniques, and that the increase over analog Monte Carlo is higher than 10,000.
Density functional and Monte Carlo studies of sulfur. II. Equilibrium polymerization of the liquid
Received 7 July 2003; accepted 28 July 2003
The equilibrium polymerization of sulfur is investigated by Monte Carlo ..., within which polymerization occurs readily, with entropy from the bond distribution overcompensating ...
Monte Carlo simulations of dose near a nonradioactive gold seed
Chow, James C. L.; Grigorov, Grigor N. [Department of Radiation Oncology, University of Toronto and Radiation Medicine Program, Princess Margaret Hospital, 610 University Avenue, Toronto, ON N2G 1G3 (Canada) and Department of Physics, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1 (Canada); Medical Physics Department, Grand River Regional Cancer Center, Grand River Hospital, 835 King Street West, Kitchener, ON N2G 1G3 (Canada)
2006-12-15T23:59:59.000Z
The relative doses and hot/cold spot positions around a non-radioactive gold seed, irradiated by a 6 or 18 MV photon beam in water, were calculated using Monte Carlo simulation. Phase space files of 6 and 18 MV photon beams with a field size of 1x1 cm{sup 2} were generated by a Varian 21 EX linear accelerator using the EGSnrc and BEAMnrc codes. The seed (1.2x1.2x3.2 mm{sup 3}) was positioned at the isocenter in a water phantom (20x20x20 cm{sup 3}) with source-to-axis distance = 100 cm. For the single-beam geometry, the relative doses (normalized to the dose at 5 mm distance above the isocenter) at the upstream seed surface were calculated to be 1.64 and 1.56 for the 6 and 18 MV beams, respectively, when the central beam axis (CAX) is parallel to the width of the seed. These doses were slightly higher than those (1.58 and 1.52 for the 6 and 18 MV beams, respectively) calculated when the CAX is perpendicular to the width of the seed. Compared to the relative dose profiles with the same beam geometry without the seed in the water phantom, the presence of the seed affects the dose distribution to about 3 mm distance beyond both the upstream and downstream seed surfaces. For a pair of opposing beams with equal and unequal beam weights, the hot and cold spots of both opposing beams were mixed. For a 360 degree photon arc around the longitudinal axis of the seed, the relative dose profile along the width of the seed was similar to that of the opposing beam pair, except that the former geometry has a larger dose gradient near the seed surface. In this study, selected results from our simulation were compared to previous measurements using film dosimetry.
Monte Carlo Simulations of the Corrosion of Aluminoborosilicate Glasses
Kerisit, Sebastien N.; Ryan, Joseph V.; Pierce, Eric M.
2013-10-15T23:59:59.000Z
Aluminum is one of the most common components included in nuclear waste glasses. Therefore, Monte Carlo (MC) simulations were carried out to investigate the influence of aluminum on the rate and mechanism of dissolution of sodium borosilicate glasses under static conditions. The glasses studied were in the compositional range (70-2x)% SiO2, x% Al2O3, 15% B2O3, (15+x)% Na2O, where 0 ≤ x ≤ 15%. The simulation results show that increasing amounts of aluminum in the pristine glasses slow down the initial rate of dissolution as determined from the rate of boron release. However, the extent of corrosion, as measured by the total amount of boron released, initially increases with addition of Al2O3, up to 5 mol% Al2O3, but subsequently decreases with further Al2O3 addition. The MC simulations reveal that this behavior is due to the interplay between two opposing mechanisms: (1) aluminum slows down the kinetics of the hydrolysis/condensation reactions that drive the reorganization of the glass surface and the eventual formation of a blocking layer; and (2) aluminum strengthens the glass, thereby increasing the lifetime of the upper part of its surface and allowing for more rapid formation of a blocking layer. Additional MC simulations were performed whereby a process representing the formation of a secondary aluminosilicate phase was included. Secondary phase formation draws dissolved glass components out of the aqueous solution, thereby diminishing the rate of condensation and delaying the formation of a blocking layer. As a result, the extent of corrosion is found to increase continuously with increasing Al2O3 content, as observed experimentally.
For Al2O3 < 10 mol%, the MC simulations also indicate that, because the secondary phase solubility eventually controls the aluminum content in the part of the altered layer in contact with the bulk aqueous solution, the dissolved aluminum and silicon concentrations at steady state are not dependent on the Al2O3 content of the pristine aluminoborosilicate glass.
Del Moral, Pierre
Méthodes de Monte Carlo et processus stochastiques (Monte Carlo Methods and Stochastic Processes), Pierre Del Moral - Stefano De Marco. Monte Carlo methods and stochastic processes: from the linear to the nonlinear (E. Gobet). We consider a system ...
Del Moral, Pierre
Méthodes de Monte Carlo et processus stochastiques (Monte Carlo Methods and Stochastic Processes), Pierre Del Moral - Stefano De Marco. The exercise is to redo one of the oldest Monte Carlo simulation experiments, proposed in 1733 ... the needle touches the edge of a slat. 1. Monte Carlo method: verify numerically that the probability ...
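The 1733 experiment referred to here is Buffon's needle. A minimal simulation, our own sketch, checks numerically that the crossing probability P = 2l/(πd) (needle length l ≤ slat spacing d) yields an estimate of π:

```python
import math, random

random.seed(0)

def buffon(n, needle=1.0, spacing=1.0):
    """Drop n needles on a floor of parallel slats and count line crossings."""
    hits = 0
    for _ in range(n):
        center = random.uniform(0.0, spacing / 2.0)  # distance to nearest line
        theta = random.uniform(0.0, math.pi / 2.0)   # needle angle
        if center <= (needle / 2.0) * math.sin(theta):
            hits += 1
    return hits / n

p = buffon(200_000)
# Buffon (1733): P(cross) = 2*l / (pi*d) for l <= d, so pi is estimated by 2*l/(p*d).
print(2.0 / p)
```

With l = d = 1 the estimate converges to π at the usual 1/sqrt(n) Monte Carlo rate.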
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
Farlotti, M. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Ecole Polytechnique, Palaiseau, F 91128 (France); Larsen, E. W. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States)
2013-07-01T23:59:59.000Z
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
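The core of the fission-matrix idea can be sketched as follows: Monte Carlo transport tallies a matrix F whose entry F[i][j] is the expected number of fission neutrons produced in assembly i per fission neutron born in assembly j; the dominant eigenpair of F then gives the eigenvalue and the critical fission source. The 8-assembly coupling numbers below are hypothetical, chosen only to mimic weak coupling through strong absorbers:

```python
# Toy "storage pool": 8 assemblies in a row, weakly coupled through absorbers.
n = 8
F = [[0.0] * n for _ in range(n)]
for j in range(n):
    F[j][j] = 0.9                  # in-assembly fission production
    if j > 0:
        F[j - 1][j] = 0.02         # weak leakage-driven coupling to neighbors
    if j < n - 1:
        F[j + 1][j] = 0.02

def power_iteration(F, iters=2000):
    """Dominant eigenpair of F: the eigenvalue and the fission source shape."""
    m = len(F)
    s = [1.0 / m] * m              # flat starting fission source
    k = 1.0
    for _ in range(iters):
        t = [sum(F[i][j] * s[j] for j in range(m)) for i in range(m)]
        k = sum(t)                 # eigenvalue estimate (source is normalized)
        s = [x / k for x in t]
    return k, s

k, s = power_iteration(F)
print(round(k, 4))                 # dominant eigenvalue of this toy matrix
```

In a real calculation F would be tallied during the Monte Carlo transport cycles rather than written down, and statistical uncertainties on the tallied F propagate into error estimates on the eigenpair, as the abstract notes.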
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, William P. (Tracy, CA); Hartmann-Siantar, Christine L. (San Ramon, CA); Rathkopf, James A. (Livermore, CA)
1999-01-01T23:59:59.000Z
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.
1999-02-09T23:59:59.000Z
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.
Crossing the mesoscale no-man's-land via parallel kinetic Monte Carlo.
Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
2009-10-01T23:59:59.000Z
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
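The essence of kinetic Monte Carlo is a rejection-free loop: select an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time drawn from the total rate. A minimal single-particle sketch, with hypothetical hop rates of our own choosing rather than any SPPARKS model:

```python
import math, random

random.seed(3)

# Toy kinetic Monte Carlo: one adatom hopping on a 1-D lattice.
# Two event types with hypothetical (arbitrary-unit) rates.
rate_left, rate_right = 1.0, 3.0

pos, t = 0, 0.0
steps = 50_000
for _ in range(steps):
    total = rate_left + rate_right
    r = random.random() * total
    # Select an event with probability proportional to its rate...
    pos += 1 if r < rate_right else -1
    # ...then advance the clock by an exponential waiting time.
    t += -math.log(random.random()) / total

drift = pos / t
print(round(drift, 2))  # mean velocity; should be near rate_right - rate_left = 2
```

Real materials applications replace the two fixed rates with a large, configuration-dependent event catalog, which is where the parallelization challenge addressed by SPPARKS arises.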
General purpose dynamic Monte Carlo with continuous energy for transient analysis
Sjenitzer, B. L.; Hoogenboom, J. E. [Delft Univ. of Technology, Dept. of Radiation, Radionuclide and Reactors, Mekelweg 15, 2629JB Delft (Netherlands)
2012-07-01T23:59:59.000Z
For safety assessments, transient analysis is an important tool. It can predict maximum temperatures during regular reactor operation or during an accident scenario. Despite the fact that this kind of analysis is very important, the state of the art still uses rather crude methods, like diffusion theory and point kinetics. For reference calculations it is preferable to use the Monte Carlo method. In this paper the dynamic Monte Carlo method is implemented in the general purpose Monte Carlo code Tripoli4, and the method is extended for use with continuous energy. The first results of Dynamic Tripoli demonstrate that this kind of calculation is indeed accurate, and the results are achieved in a reasonable amount of time. With the method implemented in Tripoli it is now possible to perform an exact transient calculation in arbitrary geometry. (authors)
Revised methods for few-group cross sections generation in the Serpent Monte Carlo code
Fridman, E. [Reactor Safety Div., Helmholtz-Zentrum Dresden-Rossendorf, POB 51 01 19, Dresden, 01314 (Germany); Leppaenen, J. [VTT Technical Research Centre of Finland, POB 1000, FI-02044 VTT (Finland)
2012-07-01T23:59:59.000Z
This paper presents new calculation methods, recently implemented in the Serpent Monte Carlo code, and related to the production of homogenized few-group constants for deterministic 3D core analysis. The new methods fall under three topics: 1) Improved treatment of neutron-multiplying scattering reactions, 2) Group constant generation in reflectors and other non-fissile regions and 3) Homogenization in leakage-corrected criticality spectrum. The methodology is demonstrated by a numerical example, comparing a deterministic nodal diffusion calculation using Serpent-generated cross sections to a reference full-core Monte Carlo simulation. It is concluded that the new methodology improves the results of the deterministic calculation, and paves the way for Monte Carlo based group constant generation. (authors)
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
Frambati, S.; Frignani, M. [Ansaldo Nucleare S.p.A., Corso F.M. Perrone 25, 1616 Genova (Italy)
2012-07-01T23:59:59.000Z
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)
A Proposal for a Standard Interface Between Monte Carlo Tools And One-Loop Programs
Binoth, T.; /Edinburgh U.; Boudjema, F.; /Annecy, LAPP; Dissertori, G.; Lazopoulos, A.; /Zurich, ETH; Denner, A.; /PSI, Villigen; Dittmaier, S.; /Freiburg U.; Frederix, R.; Greiner, N.; Hoeche, Stefan; /Zurich U.; Giele, W.; Skands, P.; Winter, J.; /Fermilab; Gleisberg, T.; /SLAC; Archibald, J.; Heinrich, G.; Krauss, F.; Maitre, D.; /Durham U., IPPP; Huber, M.; /Munich, Max Planck Inst.; Huston, J.; /Michigan State U.; Kauer, N.; /Royal Holloway, U. of London; Maltoni, F.; /Louvain U., CP3 /Milan Bicocca U. /INFN, Turin /Turin U. /Granada U., Theor. Phys. Astrophys. /CERN /NIKHEF, Amsterdam /Heidelberg U. /Oxford U., Theor. Phys.
2011-11-11T23:59:59.000Z
Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently been undergoing enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarises the discussion of the next-to-leading order multi-leg (NLM) working group on this issue which has been taking place during the workshop on Physics at TeV Colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs.
Monte Carlo calculation of helical tomotherapy dose delivery
Zhao Yingli; Mackenzie, M.; Kirkby, C.; Fallone, B. G. [Department of Medical Physics, Cross Cancer Institute, University of Alberta, 11560 University Avenue, Edmonton, Alberta T6G 1Z2 (Canada) and Department of Physics, University of Alberta, 11560 University Avenue, Edmonton, Alberta T6G 1Z2 (Canada); Department of Medical Physics, Cross Cancer Institute, 11560 University Avenue, Edmonton, Alberta T6G 1Z2 (Canada); Department of Medical Physics, Cross Cancer Institute, University of Alberta, 11560 University Avenue, Edmonton, Alberta T6G 1Z2, Canada and Department of Physics, University of Alberta, 11560 University Avenue, Edmonton, Alberta T6G 1Z2 (Canada)
2008-08-15T23:59:59.000Z
Helical tomotherapy delivers intensity modulated radiation therapy using a binary multileaf collimator (MLC) to modulate a fan beam of radiation. This delivery occurs while the linac gantry and treatment couch are both in constant motion, so the beam describes, from a patient/phantom perspective, a spiral or helix of dose. The planning system models this continuous delivery as a large number (51) of discrete gantry positions per rotation, and given the small jaw/fan width setting typically used (1 or 2.5 cm) and the number of overlapping rotations used to cover the target (pitch often <0.5), the treatment planning system (TPS) potentially employs a very large number of static beam directions and leaf opening configurations to model the modulated fields. All dose calculations performed by the system employ a convolution/superposition model. In this work the authors perform a full Monte Carlo (MC) dose calculation of tomotherapy deliveries to phantom computed tomography (CT) data sets to verify the TPS calculations. All MC calculations are performed with the EGSnrc-based MC simulation codes, BEAMnrc and DOSXYZnrc. Simulations are performed by taking the sinogram (leaf opening versus time) of the treatment plan and decomposing it into 51 different projections per rotation, as does the TPS, each of which is segmented further into multiple MLC opening configurations, each with different weights that correspond to leaf opening times. Then the projection is simulated by the summing of all of the opening configurations, and the overall rotational treatment is simulated by the summing of all of the projection simulations. Commissioning of the source model was verified by comparing measured and simulated values for the percent depth dose and beam profiles shapes for various jaw settings. The accuracy of the MLC leaf width and tongue and groove spacing were verified by comparing measured and simulated values for the MLC leakage and a picket fence pattern. 
The validated source and MLC configuration were then used to simulate a complex modulated delivery from a fixed gantry angle. Further, a preliminary rotational treatment plan to a delivery quality assurance phantom (the 'cheese' phantom) CT data set was simulated. Simulations were compared with measured results taken with an A1SL ionization chamber in a water tank or with EDR2 film in a solid water phantom, respectively. The source and MLC MC simulations agree with the film measurements, with an acceptable number of pixels passing the 2%/1 mm gamma criterion. Of the voxels in the MC calculation of the preliminary plan, 99.8% in the planning target volume (PTV) passed the 2%/2 mm gamma value test, while 87.0% and 66.2% of the voxels in two organs at risk (OARs) passed the 2%/2 mm test. For a 3%/3 mm criterion, the PTV and OARs show 100%, 93.2%, and 86.6% agreement, respectively. All voxels passed the gamma value test with a criterion of 5%/3 mm. The TomoTherapy TPS showed comparable results.
Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24T23:59:59.000Z
This project will accumulate high-precision fluxes throughout the reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous-energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve the level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous-energy Monte Carlo methods makes it difficult to calculate the adjoint neutron fluxes required to properly determine important reactivity parameters.
Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well-suited to coupling with the unstructured meshes that are used in other physics simulations.
A Monte Carlo synthetic-acceleration method for solving the thermal radiation diffusion equation
Evans, Thomas M., E-mail: evanstm@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831 (United States); Mosher, Scott W., E-mail: moshersw@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831 (United States); Slattery, Stuart R., E-mail: sslattery@wisc.edu [University of Wisconsin–Madison, 1500 Engineering Dr., Madison, WI 53716 (United States); Hamilton, Steven P., E-mail: hamiltonsp@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831 (United States)
2014-02-01T23:59:59.000Z
We present a novel synthetic-acceleration-based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three spatial dimensions. The algorithm performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple material problem. Our results show that our Monte Carlo method is an effective solver for sparse matrix systems. For solutions converged to the same tolerance, it performs competitively with deterministic methods including preconditioned conjugate gradient and GMRES. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.
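The abstract above treats Monte Carlo random walks as a solver for sparse linear systems. As an illustration of the underlying idea only (not the ORNL implementation, and without the synthetic-acceleration step the paper adds), a plain Neumann-series random-walk solver for A x = b under a Jacobi split can be sketched as follows; the function name, the choice of transition kernel, and the test system are ours:

```python
import random

# Illustrative sketch: solve A x = b via the Jacobi split x = c + H x,
# with c = D^{-1} b and H = I - D^{-1} A, by sampling the Neumann series
# x = sum_k H^k c with random walks. Requires row sums of |H| < 1
# (e.g. strictly diagonally dominant A).
def mc_jacobi_solve(A, b, n_walks=20000, rng=random.Random(1)):
    n = len(A)
    c = [b[i] / A[i][i] for i in range(n)]
    H = [[(1.0 if i == j else 0.0) - A[i][j] / A[i][i] for j in range(n)]
         for i in range(n)]
    x = []
    for i in range(n):
        total = 0.0
        for _ in range(n_walks):
            state, w, tally = i, 1.0, 0.0
            while True:
                tally += w * c[state]          # score the source at each step
                # transition kernel P_ij = |H_ij|; absorb with leftover mass
                r, acc, nxt = rng.random(), 0.0, None
                for j in range(n):
                    acc += abs(H[state][j])
                    if r < acc:
                        nxt = j
                        break
                if nxt is None:                # absorbed: walk terminates
                    break
                w *= 1.0 if H[state][nxt] > 0 else -1.0   # weight H_ij / P_ij
                state = nxt
            total += tally
        x.append(total / n_walks)
    return x

# Diagonally dominant test system with exact solution (1/11, 7/11)
x = mc_jacobi_solve([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

Synthetic acceleration, as in the paper, would wrap sweeps like this in a deterministic low-order correction to cut the variance; the sketch above is the unaccelerated base iteration.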
Pseudo-random number generators for Monte Carlo simulations on Graphics Processing Units
Demchik, Vadim
2010-01-01T23:59:59.000Z
Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPUs). The performance of the implemented generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU is discussed. Speed-up factors of several hundred relative to the CPU are obtained. The RANLUX generator is found to be the most appropriate for use on GPUs in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is also presented.
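Of the generators listed, the multiplicative linear congruential GGL (the Park-Miller "minimal standard") is the simplest; a minimal CPU sketch, ours rather than the paper's GPU kernels:

```python
# GGL / MINSTD multiplicative linear congruential generator:
#   x_{n+1} = 16807 * x_n mod (2^31 - 1), mapped to uniforms in (0, 1).
M = 2**31 - 1   # Mersenne prime modulus
A = 16807       # Park-Miller multiplier

def ggl_stream(seed, n):
    """Yield n uniform variates in (0, 1) from the GGL recurrence."""
    x = seed % M or 1   # state must be nonzero
    for _ in range(n):
        x = (A * x) % M
        yield x / M

# Crude use in a Monte Carlo estimate of pi (quarter-circle hit ratio):
u = list(ggl_stream(12345, 20000))
inside = sum(1 for x, y in zip(u[::2], u[1::2]) if x * x + y * y < 1.0)
pi_est = 4.0 * inside / (len(u) // 2)
```

RANLUX, the generator the abstract recommends for GPU Monte Carlo, trades this simplicity for rigorous decorrelation via its luxury-level discarding scheme.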
T.J. Urbatsch; T.M. Evans
2006-02-15T23:59:59.000Z
We have released Version 2 of Milagro, an object-oriented C++ code that performs radiative transfer using Fleck and Cummings' Implicit Monte Carlo method. Milagro, a part of the Jayenne program, is a stand-alone driver code used as a methods research vehicle and to verify its underlying classes. These underlying classes are used to construct Implicit Monte Carlo packages for external customers. Milagro-2 represents a design overhaul that allows better parallelism and extensibility. New features in Milagro-2 include verified momentum deposition, restart capability, graphics capability, exact energy conservation, and improved load balancing and parallel efficiency. A users' guide also describes how to configure, make, and run Milagro-2.
Calculating kinetics parameters and reactivity changes with continuous-energy Monte Carlo
Kiedrowski, Brian C [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory; Wilson, Paul [UNIV. WISCONSIN
2009-01-01T23:59:59.000Z
The iterated fission probability interpretation of the adjoint flux forms the basis for a method to perform adjoint weighting of tally scores in continuous-energy Monte Carlo k-eigenvalue calculations. Applying this approach, adjoint-weighted tallies are developed for two applications: calculating point reactor kinetics parameters and estimating changes in reactivity from perturbations. Calculations are performed in the widely-used production code, MCNP, and the results of both applications are compared with discrete ordinates calculations, experimental measurements, and other Monte Carlo calculations.
An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis
William R. Martin; John C. Lee
2009-12-30T23:59:59.000Z
Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.
Monte Carlo simulations of the HP model (the "Ising model" of protein folding)
Li, Ying Wai; Landau, David P; 10.1016/j.cpc.2010.12.049
2011-01-01T23:59:59.000Z
Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.
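Wang-Landau sampling, as used above for the HP model, iteratively builds the density of states g(E) by penalizing energies already visited. A toy sketch on a system with a known answer (E = number of set bits in a 10-bit string, so exactly g(E) = C(10, E)); the move set and schedule here are generic choices of ours, not the pull and bond-rebridging moves of the paper:

```python
import math, random

# Toy Wang-Landau sampler: estimate ln g(E) for a 10-bit string with
# E = popcount, so g(E) = C(10, E). Trial move: flip one random bit.
def wang_landau(n_bits=10, ln_f_final=1e-4, flat=0.8, rng=random.Random(2)):
    state = [0] * n_bits
    E = 0
    lng = [0.0] * (n_bits + 1)      # running estimate of ln g(E)
    hist = [0] * (n_bits + 1)       # visit histogram for the flatness check
    ln_f = 1.0                      # ln of the modification factor
    while ln_f > ln_f_final:
        for _ in range(10000):
            i = rng.randrange(n_bits)
            E_new = E + (1 if state[i] == 0 else -1)
            # accept with probability min(1, g(E) / g(E_new))
            if math.log(rng.random() + 1e-300) < lng[E] - lng[E_new]:
                state[i] ^= 1
                E = E_new
            lng[E] += ln_f          # penalize the current energy either way
            hist[E] += 1
        # when the histogram is roughly flat, refine the modification factor
        if min(hist) > flat * sum(hist) / len(hist):
            hist = [0] * (n_bits + 1)
            ln_f /= 2.0
    return lng

lng = wang_landau()
# differences lng[E] - lng[0] should approach ln C(10, E), e.g. ln 252 for E = 5
```

Each time the energy histogram becomes approximately flat, the modification factor is reduced, so the estimate of ln g converges toward the true relative density of states.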
A study of the contrast of a submerged disc using Monte Carlo techniques
Hagan, Donald Frank
2012-06-07T23:59:59.000Z
A Study of the Contrast of a Submerged Disc Using Monte Carlo Techniques. A thesis by Donald Frank Hagan, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of Master of Science, December 1980. Major subject: Physics.
Schulze, Tim
Inverted List Kinetic Monte Carlo with Rejection Applied to Directed Self-Assembly of Epitaxial Growth
Growth of subsequently deposited material is simulated using a kinetic Monte Carlo algorithm that combines the use of inverted lists with rejection. A key finding is that the relative performance of the inverted list algorithm improves with increasing system size...
Quasi-Monte Carlo simulation of the light environment of plants
Cieslak, Mikolaj; Prusinkiewicz, Przemyslaw
The simulation is compared with the CARIBU software (Chelle et al. 2004), and we show that these two programs produce consistent results. We also assessed the performance of the RQMC path tracing algorithm by comparing it with Monte Carlo path tracing...
Chung, Kiwhan
1996-01-01T23:59:59.000Z
While the use of the Monte Carlo method has been prevalent in nuclear engineering, it has yet to fully blossom in the study of solute transport in porous media. By using an etched-glass micromodel, an attempt is made to apply the Monte Carlo method...
Del Moral , Pierre
Monte Carlo Methods and Stochastic Processes. Pierre Del Moral and Stefano De Marco. Topics include the multilevel Monte Carlo method and the Black-Scholes stochastic differential equation...
Monte Carlo Posterior Integration in GARCH Models, Peter Müller and Andy Pole
West, Mike
Peter Müller and Andy Pole extend Monte Carlo methods along both lines to apply to the analysis of GARCH (generalized autoregressive conditional heteroskedasticity) models, introduced in Bollerslev (1986). There are now over 300 papers in the mainstream statistics literature...
Supertrack Monte Carlo variance reduction experience for non-Boltzmann tallies
Estes, G.P.; Booth, T.E.
1995-02-01T23:59:59.000Z
This paper applies a recently developed variance reduction technique to the first principles calculations of photon detector responses. This technique makes possible the direct comparison of pulse height calculations with measurements without the need for unfolding techniques. Comparisons are made between several experiments and the calculations to demonstrate the utility of the supertrack Monte Carlo technique for reproducing and interpreting experimental count rate spectra.
Use of the GATE Monte Carlo package for dosimetry applications
Visvikis, D.; Université de Paris-Sud XI
One of the roles for MC simulation studies is in the area of dosimetry. A number of different codes dedicated to dosimetry applications are available and widely used today, such as MCNP...
Monte Carlo simulation methodology of the ghost interface theory for the planar surface tension
Attard, Phil
October 2003. A novel "ghost interface" expression for the surface tension of a planar interface between coexisting liquid phases is derived. Results generated from the ghost interface theory for the surface tension are presented...
Monte Carlo Simulation of Radiation in Gases with a Narrow-Band Model and a Net Exchange Formulation
Dufresne, Jean-Louis
A Monte Carlo method with a narrow-band model and a net-exchange formulation is used for simulation of radiative heat transfer in non-gray gases. The proposed procedure is based...
Green's function Monte Carlo calculation for the ground state of helium trimers
Cabral, F.; Kalos, M.H.
1981-02-01T23:59:59.000Z
The ground-state energy of weakly bound boson trimers interacting via Lennard-Jones (12,6) pair potentials is calculated using a Monte Carlo Green's function method. Threshold coupling constants for self-binding are obtained by extrapolation to zero binding.
Quasi-Monte Carlo Simulation of the Light Environment of Plants
Cieslak, Mikolaj; Lemieux, Christiane
In this paper, we outline the RQMC path tracing algorithm that we use in our light environment program...
Reliability Estimation by Advanced Monte Carlo Simulation (E. Zio and N. Pedroni)
Université de Paris-Sud XI
Monte Carlo simulation is a valuable tool for evaluating the reliability of a system, due to the modeling flexibility that it offers. Advanced sampling schemes drive the simulation towards the failure domain of interest, addressing the high-dimensional reliability problem...
Combining Monte Carlo Simulations and Options to Manage the Risk of Real Estate Portfolios
Boyer, Edmond
The accuracy of real estate portfolio valuations can be improved through the simultaneous use of Monte Carlo simulations and options theory. Our method considers the options embedded in Continental European leases, and the resulting valuations are more reliable than those usually computed by the traditional method of discounted cash flow. Moreover...
Optical Monte Carlo modeling of a true port wine stain anatomy
Barton, Jennifer K.
A Monte Carlo model capable of accommodating an arbitrarily complex geometry was used to determine the energy deposition in a true port wine stain anatomy. At both wavelengths, the greatest energy deposition occurred in the superficial blood vessels...
VIM Monte Carlo versus CASMO comparisons for BWR advanced fuel designs
Pallotta, A.S. [Commonwealth Edison Co., Chicago, IL (United States); Blomquist, R.N. [Argonne National Lab., IL (United States)
1994-03-01T23:59:59.000Z
Eigenvalues and two-dimensional fission rate distributions computed with the CASMO-3G lattice physics code and the VIM Monte Carlo Code are compared. The cases assessed are two advanced commercial BWR pin bundle designs. Generally, the two codes show good agreement in K{sub inf}, fission rate distributions, and control rod worths.
Quantum Monte Carlo calculations of electronic excitation energies: the case of the singlet n→π*(CO) transition in acrolein
Université de Paris-Sud XI
Toulouse, Julien; Caffarel, Michel; Reinhardt, Peter; Hoggan, Philip E.; Umrigar, C. J. State-of-the-art quantum Monte Carlo calculations of the singlet n→π*(CO) vertical excitation energy in the acrolein molecule are reported, without reoptimization of the determinantal part of the wave function...
Comparison of the Monte Carlo adjoint-weighted and differential operator perturbation methods
Kiedrowski, Brian C [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory
2010-01-01T23:59:59.000Z
Two perturbation theory methodologies are implemented for k-eigenvalue calculations in the continuous-energy Monte Carlo code, MCNP6. A comparison of the accuracy of these techniques, the differential operator and adjoint-weighted methods, is performed numerically and analytically. Typically, the adjoint-weighted method shows better performance over a larger range; however, there are exceptions.
Simulations of polycrystalline CVD diamond film growth using a simplified Monte Carlo model
Bristol, University of
Available online 6 November 2009. Keywords: CVD diamond growth; modelling; nucleation; nanodiamond. A simple Monte Carlo model of the growth of a diamond (100) surface is presented. The model considers adsorption, etching/desorption, and lattice incorporation...
Sources of Traffic Demand Variability and Use of Monte Carlo for Network Capacity Planning
Cortes, Corinna
Capacity planners have to deal with rightfully angry business and finance teams: physical resources start depreciating the moment they are acquired. We examine the sources of traffic demand variability and dive into Monte Carlo methodology as an efficient way to plan network capacity. Keywords: throughput; traffic; concurrency; availability; node-and-link model; fast-time simulation; agent...
Performance Characteristics of Cathode Materials for Lithium-Ion Batteries: A Monte Carlo Strategy
Subramanian, Venkat
A Monte Carlo strategy is presented to study the performance of cathode materials in lithium-ion batteries. The methodology takes into account... Published September 26, 2008. Lithium-ion batteries are state-of-the-art power sources for portable...
Monte Carlo calculations of pair production in high-intensity laser-plasma interactions
Roland Duclous; John Kirk; Anthony Bell
2010-10-21T23:59:59.000Z
Gamma-ray and electron-positron pair production will figure prominently in laser-plasma experiments with next-generation lasers. Using a Monte Carlo approach, we show that straggling effects, arising from the finite recoil an electron experiences when it emits a high-energy photon, increase the number of pairs produced on further interaction with the laser fields.
Monte Carlo Characterization of a Pulsed Laser-Wakefield Driven Monochromatic X-Ray Source
Umstadter, Donald
Electrons from the laser wakefield scatter light from the same laser system, producing monochromatic X-rays with well-defined energy and spectral width. Determination of the incident X-ray energy is made by using unfolding techniques...
A Positive-Weight Next-to-Leading-Order Monte Carlo for Heavy Flavour Hadroproduction
Stefano Frixione; Paolo Nason; Giovanni Ridolfi
2007-09-22T23:59:59.000Z
We present a next-to-leading order calculation of heavy flavour production in hadronic collisions that can be interfaced to shower Monte Carlo programs. The calculation is performed in the context of the POWHEG method. It is suitable for the computation of charm, bottom and top hadroproduction. In the case of top production, spin correlations in the decay products are taken into account.
Direct Simulation Monte Carlo of Inductively Coupled Plasma and Comparison with Experiments
Economou, Demetre J.
Department of Chemical Engineering, University of Houston, Houston, Texas 77204-4792, USA. Direct simulation Monte Carlo is applied to a high-density inductively coupled reactor with chlorine (electronegative) chemistry. Electron density and temperature were compared with experiments...
Optimisation of masked ion irradiation damage profiles in YBCO thin films by Monte Carlo simulation
Webb, Roger P.
Monte Carlo simulation is used to optimise damage production with a given mask structure. The results suggest that minimum ion-scattering broadening tails are obtained with beam energy up to a few hundred keV, though the throughput is intrinsically low [1]. A combination...
Use of single scatter electron monte carlo transport for medical radiation sciences
Svatos, Michelle M. (Oakland, CA)
2001-01-01T23:59:59.000Z
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
Equilibrium polymerization in sulphur: Monte Carlo simulations with a density-functional-based model (Messina, Italy)
The equilibrium polymerization of sulphur is investigated by a combination of methods; results for the cohesive energy and the structural and vibrational properties are presented...
University of Washington at Seattle, Department of Physics, Electroweak Interaction Research Group
Monte Carlo Calculations of the Intrinsic Detector Backgrounds for the Karlsruhe Tritium Neutrino Experiment
Michelle L. Leber. Chair of the Supervisory Committee: Professor John F. Wilkerson, Physics.
The Karlsruhe Tritium Neutrino Experiment (KATRIN...
Collective enhancement of nuclear state densities by the shell model Monte Carlo approach
Özen, C; Nakada, H
2015-01-01T23:59:59.000Z
The shell model Monte Carlo (SMMC) approach allows for the microscopic calculation of statistical and collective properties of heavy nuclei using the framework of the configuration-interaction shell model in very large model spaces. We present recent applications of the SMMC method to the calculation of state densities and their collective enhancement factors in rare-earth nuclei.
Comparison of Monte-Carlo and Einstein methods in the light-gas interactions
Jacques Moret-Bailly
2010-01-18T23:59:59.000Z
To study the propagation of light in nebulae, many astrophysicists use a Monte Carlo computation which does not take interference into account. Replacing this flawed method with Einstein-coefficient theory gives, in an example, a theoretical spectrum much closer to the observed one.
SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON
Goluoglu, Sedat [ORNL] [ORNL; Bekar, Kursat B [ORNL] [ORNL; Wiarda, Dorothea [ORNL] [ORNL
2012-01-01T23:59:59.000Z
The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.
Path-Integral Monte Carlo And The Squeezed Trapped Bose-Einstein Gas
Mullin, William J.
Fernández, Juan Pablo. When squeezed, the trapped gas becomes effectively two-dimensional (2D). We confirm the plausibility of this result by performing different estimates for the condensate fraction. For the ideal gas, we find that the PIMC column density...
The S/sub N//Monte Carlo response matrix hybrid method
Filippone, W.L.; Alcouffe, R.E.
1987-01-01T23:59:59.000Z
A hybrid method has been developed to iteratively couple S/sub N/ and Monte Carlo regions of the same problem. This technique avoids many of the restrictions and limitations of previous attempts to do the coupling and results in a general and relatively efficient method. We demonstrate the method with some simple examples.
Path Integral Monte Carlo Simulation of the Low-Density Hydrogen Plasma B. Militzer y
Militzer, Burkhard
Path Integral Monte Carlo Simulation of the Low-Density Hydrogen Plasma. B. Militzer, Lawrence ... to calculate the equilibrium properties of hydrogen in the density and temperature range of 9.83 × 10⁴ ... surface. We calculate the equation of state and compare with other models for hydrogen valid ...
Takahiro Mizusaki; Noritaka Shimizu
2012-01-27T23:59:59.000Z
We propose a new variational Monte Carlo (VMC) method with energy variance extrapolation for large-scale shell-model calculations. This VMC approach is a stochastic optimization method with a projected correlated condensed-pair state as a trial wave function, and is formulated with the M-scheme representation of projection operators, the Pfaffian, and Markov chain Monte Carlo (MCMC). Using this method, we can stochastically calculate approximate yrast energies and electromagnetic transition strengths. Furthermore, by combining this VMC method with energy variance extrapolation, we can estimate exact shell-model energies.
Beer, M.
1980-12-01T23:59:59.000Z
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
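The maximum-likelihood combination of correlated estimates described above reduces, for two estimates, to a closed-form minimum-variance average. The sketch below is illustrative only; the eigenvalues, variances, and covariance are made-up numbers, not values from the SAM-CE/VIM benchmarks.

```python
# Hedged sketch: minimum-variance (maximum-likelihood) combination of two
# correlated Monte Carlo eigenvalue estimates. All numbers are hypothetical.
def ml_combine2(x1, v1, x2, v2, c):
    """Combine estimates x1, x2 with variances v1, v2 and covariance c."""
    denom = v1 + v2 - 2.0 * c
    w1 = (v2 - c) / denom                 # weight on the first estimate
    est = w1 * x1 + (1.0 - w1) * x2       # weights sum to one
    var = (v1 * v2 - c * c) / denom       # variance of the combined estimate
    return est, var

# two correlated estimates of the same eigenvalue
est, var = ml_combine2(1.002, 4e-6, 0.998, 3e-6, 1e-6)
```

Because the covariance is accounted for, the combined variance (2.2e-6 here) is smaller than that of either individual estimate, which is the variance-reduction effect the abstract reports.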
Monte-Carlo Simulations for the optimisation of a TOF-MIEZE Instrument
Weber, T.; Georgii, R.; Häußler, W.; Weichselbaumer, S.; Böni, P.; doi:10.1016/j.nima.2013.03.010
2013-01-01T23:59:59.000Z
The MIEZE (Modulation of Intensity with Zero Effort) technique is a variant of neutron resonance spin echo (NRSE), which has proven to be a unique neutron scattering technique for measuring with high energy resolution in magnetic fields. Its limitations in terms of flight path differences have already been investigated analytically for neutron beams with vanishing divergence. In the present work Monte-Carlo simulations for quasi-elastic MIEZE experiments taking into account beam divergence as well as the sample dimensions are presented. One application of the MIEZE technique could be a dedicated NRSE-MIEZE instrument at the European Spallation Source (ESS) in Sweden. The optimisation of a particular design based on Montel mirror optics with the help of Monte Carlo simulations will be discussed here in detail.
A user-friendly, graphical interface for the Monte Carlo neutron optics code MCLIB
Thelliez, T.; Daemen, L.; Hjelm, R.P. [Los Alamos National Lab., NM (United States); Seeger, P.A. [Seeger (Phil A.), Los Alamos, NM (United States)
1995-12-01T23:59:59.000Z
The authors describe a prototype of a new user interface for the Monte Carlo neutron optics simulation program MCLIB. At this point in its development the interface allows the user to define an instrument as a set of predefined instrument elements. The user can specify the intrinsic parameters of each element, its position and orientation. The interface then writes output to the MCLIB package and starts the simulation. The present prototype is an early development stage of a comprehensive Monte Carlo simulations package that will serve as a tool for the design, optimization and assessment of performance of new neutron scattering instruments. It will be an important tool for understanding the efficacy of new source designs in meeting the needs of these instruments.
Monte Carlo study of the performance of a time-of-flight multichopper spectrometer
Daemen, L.L.; Eckert, J.; Pynn, R. [and others
1995-12-01T23:59:59.000Z
The Monte Carlo method is a powerful technique for neutron transport studies. While it has been applied for many years to the study of nuclear systems, there are few codes available for neutron transport in the optical regime. The recent surge of interest in so-called next generation spallation neutron sources and the desire to design new and optimized instruments for these facilities has led us to develop a Monte Carlo code geared toward the simulation of neutron scattering instruments. The time-of-flight multichopper spectrometer, of which IN5 at the ILL is the prototypical example, is the first spectrometer studied with the code. Some of the results of a comparison between the IN5 performance at a reactor and at a Long Pulse Spallation Source (LPSS) are summarized here.
Yasuda, Shugo
2015-01-01T23:59:59.000Z
A Monte Carlo simulation for chemotactic bacteria is developed on the basis of kinetic modeling, i.e., the Boltzmann transport equation, and applied to the one-dimensional traveling population wave in a microchannel. In this method, the Monte Carlo method, which calculates the run-and-tumble motions of the bacteria, is coupled with a finite-volume method that solves the macroscopic transport of the chemical cues in the field. The simulation method successfully reproduces the traveling population wave of bacteria observed experimentally. The microscopic dynamics of the bacteria, e.g., the velocity autocorrelation function and velocity distribution function, are also investigated. It is found that the bacteria forming the traveling population wave exhibit quasi-periodic motions as well as a migratory movement along with the wave. Simulations are also performed varying the sensitivity and modulation parameters in the response function of the bacteria. It is found th...
Calculating alpha Eigenvalues in a Continuous-Energy Infinite Medium with Monte Carlo
Betzler, Benjamin R. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory
2012-09-04T23:59:59.000Z
The {alpha} eigenvalue has implications for time-dependent problems where the system is sub- or supercritical. We present methods and results from calculating the {alpha}-eigenvalue spectrum for a continuous-energy infinite medium with a simplified Monte Carlo transport code. We formulate the {alpha}-eigenvalue problem, detail the Monte Carlo code physics, and provide verification and results. We have a method for calculating the {alpha}-eigenvalue spectrum in a continuous-energy infinite-medium. The continuous-time Markov process described by the transition rate matrix provides a way of obtaining the {alpha}-eigenvalue spectrum and kinetic modes. These are useful for the approximation of the time dependence of the system.
Study of nuclear pairing with Configuration-Space Monte-Carlo approach
Lingle, Mark
2015-01-01T23:59:59.000Z
Pairing correlations in nuclei play a decisive role in determining nuclear drip-lines, binding energies, and many collective properties. In this work a new Configuration-Space Monte-Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in Krylov subspace, resulting in the Monte-Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are discussed; the absence of the fermionic sign problem, probabilistic interpretation of quantum-mechanical amplitudes, and ability to handle truly large-scale problems with defined precision and error control, are noteworthy merits of CSMC. The features of our CSMC approach are shown using models and realistic examples. Special attention is given to difficult limits: situations with non-constant pairing strengths, cases with nearly degenerate excited states, limits when pairing correlations in finite systems are weak, and pr...
Rao-Blackwellised Interacting Markov Chain Monte Carlo for Electromagnetic Scattering Inversion
Giraud, François
2012-01-01T23:59:59.000Z
The following electromagnetic (EM) inverse problem is addressed: estimating the local radioelectric properties of the materials covering an object from global EM scattering measurements at various incidence angles and wave frequencies. This large-scale, ill-posed inverse problem is explored through intensive use of an efficient 2D Maxwell solver distributed on High Performance Computing (HPC) machines. Applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel on which Bayesian inference can be performed. Treating the radioelectric properties as a stochastic process evolving as a function of frequency, it is shown how advanced Markov Chain Monte Carlo methods, called Sequential Monte Carlo (SMC) or interacting particles, can provide estimates of the EM properties of each material together with their associated uncertainties.
M. A. Novotny; Shannon M. Wheeler
2002-11-02T23:59:59.000Z
We present the Monte Carlo with Absorbing Markov Chains (MCAMC) method for extremely long kinetic Monte Carlo simulations. The MCAMC algorithm does not modify the system dynamics. It is extremely useful for models with discrete state spaces when low-temperature simulations are desired. To illustrate the strengths and limitations of this algorithm we introduce a simple model involving random walkers on an energy landscape. This simple model has some of the characteristics of protein folding and could also be experimentally realizable in domain motion in nanoscale magnets. We find that even the simplest MCAMC algorithm can speed up calculations by many orders of magnitude. More complicated MCAMC simulations can gain further increases in speed by orders of magnitude.
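The key gain of absorbing-Markov-chain methods at low temperature is that the long runs of rejected moves need not be simulated one step at a time. A minimal sketch of the simplest (single-state) case, under an assumed escape probability and not the authors' landscape model:

```python
import math
import random

# Sketch of the s=1 MCAMC idea (illustrative, not the authors' code): when the
# total probability p of leaving the current state per MC step is tiny, the
# dwell time m before the first accepted move is geometric,
#   P(m) = (1 - p)^m * p,  with mean (1 - p)/p,
# and can be sampled directly instead of simulating each rejected move.
def geometric_dwell(p, rng):
    """Number of rejected MC steps before the first accepted move."""
    u = 1.0 - rng.random()            # u in (0, 1], avoids log(0)
    return int(math.log(u) / math.log(1.0 - p))

p = 1e-4                              # hypothetical low-T escape probability
rng = random.Random(7)
samples = [geometric_dwell(p, rng) for _ in range(20000)]
mean_dwell = sum(samples) / len(samples)   # expect about (1 - p)/p ~ 1e4
```

One draw thus replaces roughly ten thousand ordinary Metropolis steps, which is the "orders of magnitude" speedup the abstract refers to; the dynamics are unchanged because the dwell-time distribution is sampled exactly.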
Using high performance computing and Monte Carlo simulation for pricing american options
Cvetanoska, Verche
2012-01-01T23:59:59.000Z
High performance computing (HPC) is a very attractive and relatively new area of research which gives promising results in many applications. In this paper, HPC is used for the pricing of American options. Although American options are very significant in computational finance, their valuation is very challenging, especially when Monte Carlo simulation techniques are used. To obtain the most accurate price for these options we use quasi-Monte Carlo simulation, which gives the best convergence. Furthermore, the algorithm is implemented on both GPU and CPU. The CUDA architecture is used to harness the power of the GPU for executing the algorithm in parallel, which is then compared with the serial implementation on the CPU. In conclusion, this paper gives the reasons for, and the advantages of, applying HPC in computational finance.
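The quasi-Monte Carlo convergence advantage can be seen already in the simplest setting. The sketch below prices a European call (a deliberate simplification: American options additionally require an early-exercise rule, e.g. Longstaff-Schwartz) with a hand-rolled base-2 van der Corput sequence and stdlib tools; all market parameters are illustrative.

```python
import math
from statistics import NormalDist

def van_der_corput(i):
    """Base-2 radical-inverse point u_i in (0, 1) of a low-discrepancy sequence."""
    f, r = 0.5, 0.0
    while i:
        if i & 1:
            r += f
        i >>= 1
        f *= 0.5
    return r

def qmc_european_call(S0, K, r, sigma, T, n):
    """Quasi-Monte Carlo price of a European call under geometric Brownian motion."""
    nd = NormalDist()
    total = 0.0
    for i in range(1, n + 1):              # start at 1: u = 0 would map to -inf
        z = nd.inv_cdf(van_der_corput(i))  # low-discrepancy standard normal
        ST = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n

def black_scholes_call(S0, K, r, sigma, T):
    """Closed-form reference price."""
    nd = NormalDist()
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * nd.cdf(d1) - K * math.exp(-r * T) * nd.cdf(d2)

qmc = qmc_european_call(100.0, 100.0, 0.05, 0.2, 1.0, 4096)
bs = black_scholes_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

With 4096 low-discrepancy points the estimate already sits very close to the closed-form Black-Scholes value, whereas a pseudo-random estimator of the same size would typically show a visibly larger error.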
Rubery, M. S.; Horsfield, C. J. [Plasma Physics Department, AWE plc, Reading RG7 4PR (United Kingdom)]; Herrmann, H.; Kim, Y.; Mack, J. M.; Young, C.; Evans, S.; Sedillo, T.; McEvoy, A.; Caldwell, S. E. [Plasma Physics Department, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)]; Grafil, E.; Stoeffl, W. [Physics, Lawrence Livermore National Laboratory, Livermore, California 94551 (United States)]; Milnes, J. S. [Photek Limited UK, 26 Castleham Road, St. Leonards-on-sea TN38 9NS (United Kingdom)]
2013-07-15T23:59:59.000Z
The gas Cherenkov detectors at NIF and Omega measure several ICF burn characteristics by detecting multi-MeV nuclear γ emissions from the implosion. Of primary interest are γ bang-time (GBT) and burn width, defined respectively as the time between the initial laser-plasma interaction and the peak of the fusion reaction history, and the FWHM of the reaction history. To accurately calculate such parameters the collaboration relies on Monte Carlo codes, such as GEANT4 and ACCEPT, for diagnostic properties that cannot be measured directly. This paper describes a series of experiments performed at the High Intensity γ Source (HIγS) facility at Duke University to validate the geometries and material data used in the Monte Carlo simulations. Results published here show that model-driven parameters such as intensity and temporal response can be used with less than 50% uncertainty for all diagnostics and facilities.
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
Hall, Clifford [Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States); School of Physics, Astronomy, and Computational Sciences, George Mason University, Fairfax, VA 22030 (United States)]; Ji, Weixiao [Computational Materials Science Center, George Mason University, Fairfax, VA 22030 (United States)]; Blaisten-Barojas, Estela, E-mail: blaisten@gmu.edu [Computational Materials Science Center and School of Physics, Astronomy, and Computational Sciences, George Mason University, Fairfax, VA 22030 (United States)]
2014-02-01T23:59:59.000Z
We present a CPU-GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as a container for simulation data stored on the graphics card and as a floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables, for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU-GPU algorithms exploiting memory concepts combined with threading to avoid bus bandwidth and communication limits. The testbed molecular system used here is a condensed-phase system of oligopyrrole chains. A benchmark shows a size-scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect several CPU-GPU duets in parallel. Highlights: We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU-GPU duet. The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU-GPU implementation. Our benchmark shows a size-scaling speedup of 62 for systems with 225,000 particles. The testbed involves a polymeric system of oligopyrroles in the condensed phase. The CPU-GPU parallelization includes dipole-dipole and Mie-Jones classical potentials.
Radiative transfer in the earth's atmosphere-ocean system using Monte Carlo techniques
Bradley, Paul Andrew
2012-06-07T23:59:59.000Z
are described in the next chapter. The books by Morgan and by Hammersley and Handscomb describe the theory and some methods of variance reduction for general applications. One item that is required of any Monte Carlo simulation is a supply of random numbers... be checked through modification of the model, since the same sequence of random numbers may be generated repeatedly. Discussions of the properties of random numbers and their generation may be found in the books by Morgan and by Hammersley and Handscomb...
Using Markov chain Monte Carlo methods for estimating parameters with gravitational radiation data
Nelson Christensen; Renate Meyer
2001-02-05T23:59:59.000Z
We present a Bayesian approach to the problem of determining parameters of coalescing binary systems observed with laser interferometric detectors. By applying a Markov chain Monte Carlo (MCMC) algorithm, specifically the Gibbs sampler, we demonstrate the potential that MCMC techniques hold for computing posterior distributions of the parameters of the binary system that created the gravitational radiation signal. We describe the use of the Gibbs sampler method and present examples in which signals are detected and analyzed from within noisy data.
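The Gibbs sampler named above alternates exact draws from each parameter's full conditional distribution. A toy, stdlib-only sketch on a deliberately simple model (a Gaussian with unknown mean and precision, not the binary-inspiral model of the paper):

```python
import math
import random

random.seed(1)

# Toy Gibbs sampler: data y_i ~ N(mu, 1/tau), flat prior on mu, Gamma(1, 1)
# prior on the precision tau. Both full conditionals are standard
# distributions, so Gibbs sampling alternates exact draws from them.
true_mu = 2.0
y = [random.gauss(true_mu, 1.0) for _ in range(200)]
n, ybar = len(y), sum(y) / len(y)

mu, tau = 0.0, 1.0
mus = []
for it in range(5000):
    # draw mu | tau, y ~ N(ybar, 1/(n*tau))
    mu = random.gauss(ybar, 1.0 / math.sqrt(n * tau))
    # draw tau | mu, y ~ Gamma(shape = 1 + n/2, rate = 1 + 0.5*sum (y_i - mu)^2)
    rate = 1.0 + 0.5 * sum((yi - mu) ** 2 for yi in y)
    tau = random.gammavariate(1.0 + 0.5 * n, 1.0 / rate)  # beta arg is the scale
    if it >= 1000:                                        # discard burn-in
        mus.append(mu)

post_mean = sum(mus) / len(mus)   # posterior mean of mu; close to ybar here
```

The retained draws approximate the joint posterior; averaging them gives point estimates, and their spread gives the parameter uncertainties, exactly the quantities sought for the gravitational-wave source parameters.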
Sima, Octavian [Physics Department, University of Bucharest, Bucharest-Magurele, POBoxMG-11 RO-077125 (Romania)
2008-08-14T23:59:59.000Z
A comprehensive calibration of gamma-ray spectrometers cannot be obtained on a purely experimental basis. Problems like self-attenuation effects, coincidence-summing effects, and non-uniform source distributions (resulting, e.g., from neutron self-shielding in NAA) can be efficiently solved by Monte Carlo simulation. The application of the GESPECOR code to these problems is presented and the associated uncertainty is discussed.
The role of diagonalization within a diagonalization/Monte Carlo scheme
Dean Lee
2000-10-31T23:59:59.000Z
We discuss a method called quasi-sparse eigenvector diagonalization which finds the most important basis vectors of the low energy eigenstates of a quantum Hamiltonian. It can operate using any basis, either orthogonal or non-orthogonal, and any sparse Hamiltonian, either Hermitian, non-Hermitian, finite-dimensional, or infinite-dimensional. The method is part of a new computational approach which combines both diagonalization and Monte Carlo techniques.
Biggs, P.J. (Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston (United States))
1991-10-01T23:59:59.000Z
Shielding calculations for door thicknesses for megavoltage radiotherapy facilities with mazes are generally straightforward. To simplify the calculations, the standard formalism adopts several approximations relating to the average beam path, scattering coefficients, and the mean energy of the spectrum of scattered radiation. To test the accuracy of these calculations, the Monte Carlo program ITS was applied to this problem by determining the dose and energy spectrum of the radiation at the door for 4- and 10-MV bremsstrahlung beams incident on a phantom at isocenter. This was performed for two mazes, one termed 'standard' and the other a shorter maze where the primary beam is incident on the wall adjacent to the door. The peak of the photon-energy spectrum at the door was found to be the same for both types of maze, independent of primary beam energy and also, in the case of the conventional maze, of the primary beam orientation. The spectrum was harder for the short maze and for 10 MV vs. 4 MV. The required thickness of the lead door for the short-maze configuration was 1.5 cm for 10 MV and 1.2 cm for 4 MV, vs. less than approximately 1 mm for a conventional maze. For the conventional maze, the Monte Carlo calculation predicts the dose at the door to be lower than given by NCRP 49 and NCRP 51 by about a factor of 2 at 4 MV but to be the same at 10 MV. For the short maze, the Monte Carlo calculation predicts the dose to be a factor of 3 lower for 4 MV and about a factor of 1.5 lower for 10 MV. Experimental results support the Monte Carlo findings for the short maze.
Clutter, David John
1992-01-01T23:59:59.000Z
of the thesis is written with the intent of reviewing some of the significant pieces of literature relating to Monte Carlo simulated REDT and exploratory data analysis box plots. In 1964 David Hertz published an article in the Harvard Business Review... entitled "Risk Analysis in Capital Investment" (Hertz 1964). While this article does not directly discuss range estimating, it is the foundation for the current REDT theory. In his article, Hertz discussed the problems associated with estimating...
Hybrid Monte Carlo with Wilson Dirac operator on the Fermi GPU
Abhijit Chakrabarty; Pushan Majumdar
2012-07-10T23:59:59.000Z
In this article we present our implementation of a Hybrid Monte Carlo algorithm for Lattice Gauge Theory using two degenerate flavours of Wilson-Dirac fermions on a Fermi GPU. We find that using registers instead of global memory speeds up the code by almost an order of magnitude. To map the array variables to scalars, so that the compiler puts them in the registers, we use code generators. Our final program is more than 10 times faster than a generic single CPU.
A unified Monte Carlo approach to fast neutron cross section data evaluation.
Smith, D.; Nuclear Engineering Division
2008-03-03T23:59:59.000Z
A unified Monte Carlo (UMC) approach to fast neutron cross section data evaluation that incorporates both model-calculated and experimental information is described. The method is based on applications of Bayes Theorem and the Principle of Maximum Entropy as well as on fundamental definitions from probability theory. This report describes the formalism, discusses various practical considerations, and examines a few numerical examples in some detail.
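The Monte Carlo reading of Bayes' theorem at the heart of a UMC-style evaluation can be sketched in a few lines: sample a parameter from the model (prior) distribution and weight each sample by the likelihood of the experimental data. The numbers below are toy values, not actual cross-section data.

```python
import math
import random

random.seed(3)

# Hedged sketch of importance-weighted Bayesian updating (UMC-like, toy data):
# prior = model-calculated value and its uncertainty; likelihood = a single
# Gaussian "experimental" measurement.
prior_mean, prior_sd = 1.00, 0.10   # model-calculated value and uncertainty
exp_val, exp_sd = 1.08, 0.05        # measured value and uncertainty

num = den = 0.0
for _ in range(100000):
    s = random.gauss(prior_mean, prior_sd)              # sample from the prior
    w = math.exp(-0.5 * ((s - exp_val) / exp_sd) ** 2)  # likelihood weight
    num += w * s
    den += w

post_mean = num / den
# Gaussian-Gaussian conjugacy gives the analytic posterior mean to check against:
analytic = (prior_mean / prior_sd ** 2 + exp_val / exp_sd ** 2) / \
           (1 / prior_sd ** 2 + 1 / exp_sd ** 2)
```

The weighted average lands between the model value and the measurement, pulled toward whichever carries the smaller uncertainty, which is precisely the behavior a combined model-plus-experiment evaluation should show.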
Hybrid Monte Carlo with Wilson Dirac operator on the Fermi GPU
Chakrabarty, Abhijit
2012-01-01T23:59:59.000Z
In this article we present our implementation of a Hybrid Monte Carlo algorithm for Lattice Gauge Theory using two degenerate flavours of Wilson-Dirac fermions on a Fermi GPU. We find that using registers instead of global memory speeds up the code by almost an order of magnitude. To map the array variables to scalars, so that the compiler puts them in the registers, we use code generators. Our final program is more than 10 times faster than a generic single CPU.
Xu, Zao
We present a numerical study of the near-surface underwater solar light statistics using the state-of-the-art Monte Carlo radiative transfer (RT) simulations in the coupled atmosphere-ocean system. Advanced variance-reduction ...
Spatial homogenization of thermal feedback regions in Monte Carlo reactor calculations
Hanna, B. R.; Gill, D. F.; Griesheimer, D. P. [Bettis Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P.O. Box 79, West Mifflin, PA 15122 (United States)]
2012-07-01T23:59:59.000Z
An integrated thermal-hydraulic feedback module has previously been developed for the Monte Carlo transport solver MC21. The module incorporates a flexible input format that allows the user to describe heat transfer and coolant flow paths within the geometric model at any level of spatial detail desired. The effect that varying levels of spatial homogenization of thermal regions have on the accuracy of the Monte Carlo simulations is examined in this study. Six thermal feedback mappings are constructed from the same geometric model of the Calvert Cliffs core. The spatial homogenization of the thermal regions is varied, giving each scheme a different level of detail, and the adequacy of the spatial homogenization is judged by the eigenvalue produced by each Monte Carlo calculation. The purpose of these numerical experiments is to determine the level of detail necessary to accurately capture the thermal feedback effect on reactivity. Several different core models are considered: axial-flow only, axial and lateral flow, asymmetry due to control rod insertion, and fuel heating (temperature-dependent cross sections). The thermal results generated by the MC21 thermal feedback module are consistent with expectations. Based upon the numerical experiments conducted, it is concluded that the amount of spatial detail necessary to accurately capture the feedback effect on reactivity is relatively small. Homogenization at the assembly level for the Calvert Cliffs PWR model results in a power defect similar to that calculated with individual pin cells modeled as explicit thermal regions. (authors)
Nonequilibrium candidate Monte Carlo: A new tool for efficient equilibrium simulation
Nilmeier, Jerome P.; Crooks, Gavin E.; Minh, David D. L.; Chodera, John D.
2011-11-08T23:59:59.000Z
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
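For reference, the baseline that these nonequilibrium candidate moves generalize is the standard Metropolis acceptance rule. A minimal stdlib-only sketch on a one-dimensional harmonic well (a toy system chosen for its known answer, not the dense solvated systems of the paper):

```python
import math
import random

random.seed(123)

# Minimal standard Metropolis sketch: sample x ~ exp(-E(x)/kT) with
# E(x) = x^2/2 and kT = 1, for which the exact equilibrium variance of x is 1.
def metropolis(nsteps, step=1.0, kT=1.0):
    x, xs = 0.0, []
    for _ in range(nsteps):
        xp = x + random.uniform(-step, step)           # symmetric candidate move
        dE = 0.5 * xp * xp - 0.5 * x * x               # instantaneous energy change
        if dE <= 0.0 or random.random() < math.exp(-dE / kT):
            x = xp                                     # Metropolis acceptance
        xs.append(x)                                   # rejected moves repeat x
    return xs

xs = metropolis(200000)
var = sum(x * x for x in xs) / len(xs)   # should approach kT = 1
```

The nonequilibrium scheme described above keeps this acceptance structure but replaces the instantaneous energy difference dE with the work accumulated along a finite-time driven trajectory.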
MCNPX Monte Carlo burnup simulations of the isotope correlation experiments in the NPP Obrigheim.
Cao, Y.; Gohar, Y.; Broeders, C. (Nuclear Engineering Division); (Inst. for Neutron Physics and Reactor Technology)
2010-10-01T23:59:59.000Z
This paper describes the simulation of the Isotope Correlation Experiment (ICE) using the MCNPX Monte Carlo computer code package. The Monte Carlo simulation results are compared with the ICE experimental measurements for burnups up to 30 GWD/t. The comparison shows the good capabilities of the MCNPX computer code package for predicting the depletion of the uranium fuel and the buildup of the plutonium isotopes in a PWR thermal reactor. The Monte Carlo simulation results also show good agreement with the experimental data for several long-lived and stable fission products. However, for the americium and curium actinides it is difficult to judge the prediction capabilities due to the large uncertainties in the ICE experimental data. In the MCNPX numerical simulations, a pin-cell model is utilized to simulate the fuel lattice of the nuclear power reactor. Temperature-dependent libraries based on JEFF3.1 nuclear data files are utilized for the calculations. In addition, temperature-dependent libraries based on ENDF/B-VII nuclear data files are utilized, and the obtained results are very close to the JEFF3.1 results, except for {approx}10% differences in the prediction of the minor actinide buildup.
Monte Carlo depletion calculations using VESTA 2.1 new features and perspectives
Haeck, W.; Cochet, B.; Aguiar, L. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay-aux-Roses Cedex (France)
2012-07-01T23:59:59.000Z
VESTA is a Monte Carlo depletion interface code that is currently under development at IRSN. With VESTA, the emphasis lies on both accuracy and performance, so that the code will be capable of providing accurate and complete answers in an acceptable amount of time compared to other Monte Carlo depletion codes. From its inception, VESTA has been intended to be a generic interface code, ultimately capable of using any Monte Carlo code or depletion module and of being tailored to the user's needs. A new version of the code (version 2.1.x) will be released in 2012. The most important additions are a burnup-dependent isomeric branching-ratio treatment, to improve the prediction of metastable nuclides such as {sup 242m}Am, and the integration of the PHOENIX point depletion module (also developed at IRSN) to overcome some of the limitations of the ORIGEN 2.2 module. Extracting and visualising the basic results, as well as calculating physical quantities or other data derived from the basic output provided by VESTA, will be the task of the AURORA depletion analysis tool, which will be released at the same time as VESTA 2.1.x. The experimental validation database has also been extended for this new version and now contains a total of 35 samples with chemical assay data and 34 assembly decay heat measurements. (authors)
Monte Carlo Study of Patchy Nanostructures Self-Assembled from a Single Multiblock Chain
Jakub Krajniak; Michal Banaszak
2014-10-15T23:59:59.000Z
We present a lattice Monte Carlo simulation for a multiblock copolymer chain of length N=240 and microarchitecture $(10-10)_{12}$. The simulation was performed using the Monte Carlo method with the Metropolis algorithm. We measured the average energy, heat capacity, mean squared radius of gyration, and the histogram of the cluster count distribution. These quantities were investigated as functions of temperature and of the incompatibility between segments, quantified by the parameter ω. We determined the temperature of the coil-globule transition and constructed a phase diagram exhibiting a variety of patchy nanostructures. The presented results are in qualitative agreement with those of the off-lattice Monte Carlo method reported earlier, with a significant exception for small incompatibilities ω and low temperatures, where 3-cluster patchy nanostructures are observed in contrast to the 2-cluster structures observed for the off-lattice $(10-10)_{12}$ chain. We attribute this difference to the considerable stiffness of lattice chains in comparison to that of off-lattice chains.
Papadimitroulas, Panagiotis; Loudos, George; Nikiforidis, George C.; Kagadis, George C. [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 265 04 (Greece); Department of Medical Instruments Technology, Technological Educational Institute of Athens, Ag. Spyridonos Street, Egaleo GR 122 10, Athens (Greece)]
2012-08-15T23:59:59.000Z
Purpose: GATE is a Monte Carlo simulation toolkit based on the Geant4 package, widely used for many medical physics applications, including SPECT and PET image simulation and more recently CT image simulation and patient dosimetry. The purpose of the current study was to calculate dose point kernels (DPKs) using GATE, compare them against reference data, and finally produce a complete dataset of the total DPKs for the most commonly used radionuclides in nuclear medicine. Methods: Patient-specific absorbed dose calculations can be carried out using Monte Carlo simulations. The latest version of GATE extends its applications to Radiotherapy and Dosimetry. Comparison of the proposed method for the generation of DPKs was performed for (a) monoenergetic electron sources, with energies ranging from 10 keV to 10 MeV, (b) beta emitting isotopes, e.g., {sup 177}Lu, {sup 90}Y, and {sup 32}P, and (c) gamma emitting isotopes, e.g., {sup 111}In, {sup 131}I, {sup 125}I, and {sup 99m}Tc. Point isotropic sources were simulated at the center of a sphere phantom, and the absorbed dose was stored in concentric spherical shells around the source. Evaluation was performed with already published studies for different Monte Carlo codes namely MCNP, EGS, FLUKA, ETRAN, GEPTS, and PENELOPE. A complete dataset of total DPKs was generated for water (equivalent to soft tissue), bone, and lung. This dataset takes into account all the major components of radiation interactions for the selected isotopes, including the absorbed dose from emitted electrons, photons, and all secondary particles generated from the electromagnetic interactions. Results: GATE comparison provided reliable results in all cases (monoenergetic electrons, beta emitting isotopes, and photon emitting isotopes). The observed differences between GATE and other codes are less than 10% and comparable to the discrepancies observed among other packages. 
The produced DPKs are in very good agreement with the already published data, which allowed us to produce a unique DPK dataset using GATE. The dataset contains the total DPKs for {sup 67}Ga, {sup 68}Ga, {sup 90}Y, {sup 99m}Tc, {sup 111}In, {sup 123}I, {sup 124}I, {sup 125}I, {sup 131}I, {sup 153}Sm, {sup 177}Lu, {sup 186}Re, and {sup 188}Re generated in water, bone, and lung. Conclusions: In this study, the authors have checked GATE's reliability for absorbed dose calculation when transporting different kinds of particles, which indicates its robustness for dosimetry applications. A novel dataset of DPKs is provided, which can be applied in patient-specific dosimetry using analytical point kernel convolution algorithms.
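The shell-tally scheme described in this abstract (a point isotropic source with absorbed dose scored in concentric spherical shells) can be sketched with a deliberately crude toy model. The source energy, mean free path, and single-interaction deposition below are illustrative assumptions, not GATE physics; real DPKs require full transport of electrons, photons, and secondaries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dose-point-kernel tally: a point isotropic source at the origin;
# each history deposits all of its energy at a radius sampled from an
# exponential free-path distribution (a stand-in for real transport).
n_hist = 100_000
e0 = 0.1                # MeV per source particle (illustrative)
mfp = 1.0               # cm, assumed mean free path in water
r = rng.exponential(mfp, n_hist)          # deposition radii

edges = np.linspace(0.0, 5.0, 51)         # concentric shell boundaries, cm
edep, _ = np.histogram(r, bins=edges, weights=np.full(n_hist, e0))

# Absorbed dose per shell = deposited energy / shell mass (water, 1 g/cm^3)
rho = 1.0
shell_mass = rho * (4.0 / 3.0) * np.pi * (edges[1:]**3 - edges[:-1]**3)
dpk = edep / n_hist / shell_mass          # MeV per gram per source particle
```

Normalizing by shell mass rather than shell count is the essential step: it converts a radial energy-deposition histogram into an absorbed-dose kernel.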
Measured and Monte Carlo calculated k{sub Q} factors: Accuracy and comparison
Muir, B. R.; McEwen, M. R.; Rogers, D. W. O. [Ottawa Medical Physics Institute (OMPI), Ottawa Carleton Institute for Physics, Carleton University Campus, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada); Institute for National Measurement Standards, National Research Council of Canada, Ottawa, Ontario K1A 0R6 (Canada); Ottawa Medical Physics Institute (OMPI), Ottawa Carleton Institute for Physics, Carleton University Campus, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada)
2011-08-15T23:59:59.000Z
Purpose: The journal Medical Physics recently published two papers that determine beam quality conversion factors, k{sub Q}, for large sets of ion chambers. In the first paper [McEwen Med. Phys. 37, 2179-2193 (2010)], k{sub Q} was determined experimentally, while the second paper [Muir and Rogers Med. Phys. 37, 5939-5950 (2010)] provides k{sub Q} factors calculated using Monte Carlo simulations. This work investigates a variety of additional consistency checks to verify the accuracy of the k{sub Q} factors determined in each publication, together with a comparison of the two data sets. The uncertainty introduced in calculated k{sub Q} factors by possible variation of W/e with beam energy is investigated further. Methods: The validity of the experimental set of k{sub Q} factors relies on the accuracy of the NE2571 reference chamber measurements to which k{sub Q} factors for all other ion chambers are correlated. The stability of NE2571 absorbed dose to water calibration coefficients is determined, and a comparison to other experimental k{sub Q} factors is analyzed. The reliability of Monte Carlo calculated k{sub Q} factors is assessed through comparison to other publications that provide Monte Carlo calculations of k{sub Q}, as well as through an analysis of the sleeve effect, the effect of cavity length, and self-consistency checks between graphite-walled Farmer chambers. Comparison between the two data sets is given in terms of the percent difference between the k{sub Q} factors presented in both publications. Results: Monitoring of the absorbed dose calibration coefficients for the NE2571 chambers over a period of more than 15 years exhibits consistency at a level better than 0.1%. Agreement of the NE2571 k{sub Q} factors with a quadratic fit to all other experimental data from standards labs for the same chamber is observed within 0.3%. Monte Carlo calculated k{sub Q} factors are in good agreement with most other Monte Carlo calculated k{sub Q} factors.
Expected results are observed for the sleeve effect and the effect of cavity length on k{sub Q}. The mean percent differences between experimental and Monte Carlo calculated k{sub Q} factors are -0.08, -0.07, and -0.23% for the Elekta 6, 10, and 25 MV nominal beam energies, respectively. An upper limit on the variation of W/e in photon beams from cobalt-60 to 25 MV is determined as 0.4% with 95% confidence. The combined uncertainty on Monte Carlo calculated k{sub Q} factors is reassessed and amounts to between 0.40 and 0.49% depending on the wall material of the chamber. Conclusions: Excellent agreement (mean percent difference of only 0.13% for the entire data set) between experimental and calculated k{sub Q} factors is observed. For some chambers, k{sub Q} is measured for only one chamber of each type--the level of agreement observed in this study would suggest that for those chambers the measured k{sub Q} values are generally representative of the chamber type.
Charged-Particle Thermonuclear Reaction Rates: I. Monte Carlo Method and Statistical Distributions
Richard Longland; Christian Iliadis; Art Champagne; Joe Newton; Claudio Ugalde; Alain Coc; Ryan Fitzgerald
2010-04-23T23:59:59.000Z
A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering into the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended "classical" rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless "minimum" (or "lower limit") and "maximum" (or "upper limit") reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters mu and sigma. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this series (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this series (Paper III). In the fourth paper of this series (Paper IV) we compare our new reaction rates to previous results.
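The sampling procedure described in this abstract can be sketched as follows. The two "resonance contributions" and their factor uncertainties are made-up illustrative numbers, not evaluated nuclear data; the sketch shows only the mechanics of sampling lognormal inputs, extracting the 0.16/0.50/0.84 quantiles, and fitting the output lognormal parameters mu and sigma.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each input quantity carries a lognormal PDF; the total rate is the sum
# of sampled contributions. (median, factor uncertainty) pairs below are
# illustrative only.
inputs = [(1.0e-3, 1.3), (4.0e-2, 1.2)]

n_samples = 20_000
samples = np.zeros(n_samples)
for median, fu in inputs:
    # lognormal with the given median and factor uncertainty
    samples += rng.lognormal(np.log(median), np.log(fu), n_samples)

# Low, recommended (median), and high rates from the cumulative distribution
low, recommended, high = np.quantile(samples, [0.16, 0.50, 0.84])

# Lognormal approximation of the output rate PDF at this temperature
mu = np.log(samples).mean()
sigma = np.log(samples).std()
```

Because the low/high rates are quantiles of an actual output distribution, they carry the rigorous statistical meaning that ad hoc "minimum"/"maximum" rates lack.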
A VAX version of the coupled Monte Carlo transport codes HETC and MORSE-CGA
Sanna, R.S.
1990-12-01T23:59:59.000Z
The three-dimensional Monte Carlo transport codes, HETC and MORSE-CGA, are distributed by the Radiation Shielding Information Center at Oak Ridge National Laboratory. These codes, written for IBM-3033 computers, have been installed on the Environmental Measurements Laboratory's VAX/11-750 computer for operation in a coupled mode to study the transport of neutrons over the energy range from thermal to several GeV. This report is a guide to their use on the VAX/11-750 computer. 26 refs., 6 figs., 14 tabs.
Probability of initiation and extinction in the Mercury Monte Carlo code
McKinley, M. S.; Brantley, P. S. [Lawrence Livermore National Laboratory, 7000 East Ave., Livermore, CA 94551 (United States)
2013-07-01T23:59:59.000Z
A Monte Carlo method for computing the probability of initiation has previously been implemented in Mercury. Recently, a new method based on the probability of extinction has been implemented as well. The methods have similarities, from counting progeny to cycling in time, but they also have differences, such as population control and statistical uncertainty reporting. The two methods agree very well for several test problems. Since each method has advantages and disadvantages, we currently recommend that both methods be used to compute the probability of criticality. (authors)
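The probability-of-extinction idea can be illustrated with a textbook branching-process calculation (this is not Mercury's algorithm, which transports neutrons in full phase space): if p_k is the probability that one neutron yields k next-generation neutrons, the extinction probability q satisfies the fixed-point equation q = sum_k p_k q^k, and the probability of initiation is 1 - q.

```python
# Fixed-point iteration for the extinction probability q of a simple
# one-speed branching process. Starting from q = 0, the iteration
# converges to the smallest non-negative root of q = sum_k p_k * q**k.
def extinction_probability(p, tol=1e-12, max_iter=10_000):
    q = 0.0
    for _ in range(max_iter):
        q_new = sum(pk * q**k for k, pk in enumerate(p))
        if abs(q_new - q) < tol:
            return q_new
        q = q_new
    return q

# Illustrative multiplicity distribution (not evaluated data);
# its mean is 1.4 > 1, so the system is supercritical.
p = [0.30, 0.20, 0.30, 0.20]
q = extinction_probability(p)
poi = 1.0 - q          # probability of initiation
```

For this particular distribution the fixed-point equation factors exactly, giving q = 0.5 and hence a probability of initiation of 0.5.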
Nauchi, Y.; Kameyama, T. [Central Research Inst., Electric Power Industry, 2-11-1 Iwado-Kita, Komae-shi, Tokyo 201-8511 (Japan)
2006-07-01T23:59:59.000Z
A new method is proposed to estimate the effective fraction of delayed neutrons radiated from precursors categorized into six decay-constant groups. Instead of the adjoint flux {Phi}*, the expected number of fission neutrons in subsequent generations, M, is applied as a weight function [1]. The introduction of M enables calculation of the fraction with the continuous-energy Monte Carlo method. For the calculation of the fraction, an algorithm is established and implemented in the MCNP-5 code. The method is verified using reactor period data obtained in reactivity measurements. (authors)
Monte Carlo Generators for Studies of the 3D Structure of the Nucleon
Avagyan, Harut A. [JLAB
2015-01-01T23:59:59.000Z
Extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires the development of a self-consistent analysis framework, accounting for evolution effects and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations, will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.
Finite-temperature quantum Monte Carlo study of the one-dimensional polarized Fermi gas
Wolak, M. J. [Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); Rousseau, V. G. [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Miniatura, C. [Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); INLN, Universite de Nice-Sophia Antipolis, CNRS, 1361 route des Lucioles, F-06560 Valbonne (France); Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); Gremaud, B. [Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); Laboratoire Kastler Brossel, UPMC-Paris 6, ENS, CNRS, 4 Place Jussieu, F-75005 Paris (France); Scalettar, R. T. [Physics Department, University of California, Davis, California 95616 (United States); Batrouni, G. G. [Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); INLN, Universite de Nice-Sophia Antipolis, CNRS, 1361 route des Lucioles, F-06560 Valbonne (France)
2010-07-15T23:59:59.000Z
Quantum Monte Carlo (QMC) techniques are used to provide an approximation-free investigation of the phases of the one-dimensional attractive Hubbard Hamiltonian in the presence of population imbalance. The temperature at which the "Fulde-Ferrell-Larkin-Ovchinnikov" (FFLO) phase is destroyed by thermal fluctuations is determined as a function of the polarization. It is shown that the presence of a confining potential does not dramatically alter the FFLO regime and that recent experiments on trapped atomic gases likely lie just within the stable temperature range.
Yoon, Do-Kun; Jung, Joo-Young; Suk Suh, Tae, E-mail: suhsanta@catholic.ac.kr [Department of Biomedical Engineering and Research Institute of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul 505 (Korea, Republic of); Jo Hong, Key [Molecular Imaging Program at Stanford (MIPS), Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, California 94305 (United States)
2014-02-24T23:59:59.000Z
The purpose of this paper is to confirm the feasibility of acquiring a three-dimensional single photon emission computed tomography image from boron neutron capture therapy using Monte Carlo simulation. The prompt gamma ray (478 keV) was used to reconstruct the image with the ordered subsets expectation maximization method. From analysis of the receiver operating characteristic curve, the area under curve values of the three boron regions were 0.738, 0.623, and 0.817. The differences between the lengths between the centers of two boron regions and the distances of the maximum count points were 0.3 cm, 1.6 cm, and 1.4 cm.
Thermonuclear reaction rate of $^{18}$Ne($\alpha$,$p$)$^{21}$Na from Monte-Carlo calculations
P. Mohr; R. Longland; C. Iliadis
2014-12-14T23:59:59.000Z
The $^{18}$Ne($\alpha$,$p$)$^{21}$Na reaction impacts the break-out from the hot CNO-cycles to the $rp$-process in type I X-ray bursts. We present a revised thermonuclear reaction rate, which is based on the latest experimental data. The new rate is derived from Monte-Carlo calculations, taking into account the uncertainties of all nuclear physics input quantities. In addition, we present the reaction rate uncertainty and probability density versus temperature. Our results are also consistent with estimates obtained using different indirect approaches.
Monte-Carlo study of the phase transition in the AA-stacked bilayer graphene
Nikolaev, A A
2014-01-01T23:59:59.000Z
The tight-binding model of AA-stacked bilayer graphene with screened electron-electron interactions has been studied using Hybrid Monte Carlo simulations on the original double-layer hexagonal lattice. The instantaneous screened Coulomb potential is taken into account using a Hubbard-Stratonovich transformation. G-type antiferromagnetic ordering has been studied, and a phase transition with spontaneous generation of the mass gap has been observed. The dependence of the antiferromagnetic condensate on the on-site electron-electron interaction is examined.
SIM-RIBRAS: A Monte-Carlo simulation package for RIBRAS system
Leistenschneider, E.; Lepine-Szily, A.; Lichtenthaeler, R. [Departamento de Fisica Nuclear, Instituto de Fisica, Universidade de Sao Paulo (Brazil)
2013-05-06T23:59:59.000Z
SIM-RIBRAS is a Root-based Monte-Carlo simulation tool designed to help RIBRAS users with experiment planning and with enhancing and characterizing the experimental setup. It is divided into two main programs: CineRIBRAS, addressing beam kinematics, and SolFocus, addressing beam optics. SIM-RIBRAS replaces other methods and programs used in the past, providing more complete and accurate results while requiring much less manual labour. Moreover, users can easily modify the codes to meet the specific requirements of an experiment.
Monte Carlo wave packet approach to dissociative multiple ionization in diatomic molecules
Leth, Henriette Astrup; Madsen, Lars Bojer; Moelmer, Klaus [Lundbeck Foundation Theoretical Center for Quantum System Research, Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C (Denmark)
2010-05-15T23:59:59.000Z
A detailed description of the Monte Carlo wave packet technique applied to dissociative multiple ionization of diatomic molecules in short intense laser pulses is presented. The Monte Carlo wave packet technique relies on the Born-Oppenheimer separation of electronic and nuclear dynamics and provides a consistent theoretical framework for treating simultaneously both ionization and dissociation. By simulating the detection of continuum electrons and collapsing the system onto either the neutral, singly ionized or doubly ionized states in every time step the nuclear dynamics can be solved separately for each molecular charge state. Our model circumvents the solution of a multiparticle Schroedinger equation and makes it possible to extract the kinetic energy release spectrum via the Coulomb explosion channel as well as the physical origin of the different structures in the spectrum. The computational effort is restricted and the model is applicable to any molecular system where electronic Born-Oppenheimer curves, dipole moment functions, and ionization rates as a function of nuclear coordinates can be determined.
Monte Carlo approach for hadron azimuthal correlations in high energy proton and nuclear collisions
Alejandro Ayala; Isabel Dominguez; Jamal Jalilian-Marian; J. Magnin; Maria Elena Tejeda-Yeomans
2012-07-31T23:59:59.000Z
We use a Monte Carlo approach to study hadron azimuthal angular correlations in high energy proton-proton and central nucleus-nucleus collisions at the BNL Relativistic Heavy Ion Collider (RHIC) energies at mid-rapidity. We build a hadron event generator that incorporates the production of $2\to 2$ and $2\to 3$ parton processes and their evolution into hadron states. For nucleus-nucleus collisions we include the effect of parton energy loss in the Quark-Gluon Plasma using a modified fragmentation function approach. In the presence of the medium, for the case when three partons are produced in the hard scattering, we analyze the Monte Carlo sample in parton and hadron momentum bins to reconstruct the angular correlations. We characterize this sample by the number of partons that are able to hadronize by fragmentation within the selected bins. In the nuclear environment the model allows hadronization by fragmentation only for partons with momentum above a threshold $p_T^{\tiny{thresh}}=2.4$ GeV. We argue that one should properly treat the effect of those partons with momentum below the threshold, since their interaction with the medium may lead to showers of low momentum hadrons along the direction of motion of the original partons as the medium becomes diluted.
Energy density matrix formalism for interacting quantum systems: a quantum Monte Carlo study
Krogel, Jaron T [ORNL]; Kim, Jeongnim [ORNL]; Reboredo, Fernando A [ORNL]
2014-01-01T23:59:59.000Z
We develop an energy density matrix that parallels the one-body reduced density matrix (1RDM) for many-body quantum systems. Just as the density matrix gives access to the number density and occupation numbers, the energy density matrix yields the energy density and orbital occupation energies. The eigenvectors of the matrix provide a natural orbital partitioning of the energy density while the eigenvalues comprise a single particle energy spectrum obeying a total energy sum rule. For mean-field systems the energy density matrix recovers the exact spectrum. When correlation becomes important, the occupation energies resemble quasiparticle energies in some respects. We explore the occupation energy spectrum for the finite 3D homogeneous electron gas in the metallic regime and an isolated oxygen atom with ground state quantum Monte Carlo techniques implemented in the QMCPACK simulation code. The occupation energy spectrum for the homogeneous electron gas can be described by an effective mass below the Fermi level. Above the Fermi level evanescent behavior in the occupation energies is observed in similar fashion to the occupation numbers of the 1RDM. A direct comparison with total energy differences demonstrates a quantitative connection between the occupation energies and electron addition and removal energies for the electron gas. For the oxygen atom, the association between the ground state occupation energies and particle addition and removal energies becomes only qualitative. The energy density matrix provides a new avenue for describing energetics with quantum Monte Carlo methods which have traditionally been limited to total energies.
Massively parallel Monte Carlo for many-particle simulations on GPUs
Anderson, Joshua A.; Jankowski, Eric [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Grubb, Thomas L. [Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Engel, Michael [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]; Glotzer, Sharon C., E-mail: sglotzer@umich.edu [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)
2013-12-01T23:59:59.000Z
Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
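The acceptance rule that the massively parallel method must reproduce can be shown with a minimal serial hard-disk Monte Carlo sketch (this is the serial reference, not the GPU checkerboard scheme): a trial displacement is accepted if and only if it creates no overlap, which trivially satisfies detailed balance for hard particles. Box size, disk diameter, and move range below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

L = 10.0      # periodic box edge (illustrative)
sigma = 1.0   # hard-disk diameter
n = 16
# start from a non-overlapping square lattice
pos = np.array([[1.0 + 2.5 * i, 1.0 + 2.5 * j]
                for i in range(4) for j in range(4)])

def overlaps(k, trial):
    # minimum-image distances from the trial position to all other disks
    d = pos - trial
    d -= L * np.round(d / L)
    r2 = (d**2).sum(axis=1)
    r2[k] = np.inf                 # ignore the moved disk itself
    return (r2 < sigma**2).any()

accepted = 0
for step in range(2000):
    k = rng.integers(n)                               # pick a disk
    trial = (pos[k] + rng.uniform(-0.3, 0.3, 2)) % L  # propose a move
    if not overlaps(k, trial):                        # hard-core rejection
        pos[k] = trial
        accepted += 1
```

The hard-core rejection step is the sensitive part: as the paper notes, the hard-disk equation of state near melting exposes even small violations of the balance conditions, which is why it is a good validation case for the parallel algorithm.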
Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII
McKinney, Gregg W [Los Alamos National Laboratory
2012-07-17T23:59:59.000Z
Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.
Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas
Bobylev, A.V., E-mail: alexander.bobylev@kau.se [Department of Mathematics, Karlstad University, SE-65188 Karlstad (Sweden); Potapenko, I.F., E-mail: firena@yandex.ru [Keldysh Institute for Applied Mathematics, RAS, 125047 Moscow (Russian Federation)
2013-08-01T23:59:59.000Z
Highlights: •A general approach to Monte Carlo methods for multicomponent plasmas is proposed. •We show numerical tests for the two-component (electrons and ions) case. •An optimal choice of parameters for speeding up the computations is discussed. •A rigorous estimate of the error of approximation is proved. -- Abstract: A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of Landau–Fokker–Planck equations by Boltzmann equations of quasi-Maxwellian kind. It means that the total collision frequency for the corresponding Boltzmann equation does not depend on the velocities. This allows one to make the simulation process very simple, since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes the well-known methods of Takizuka and Abe (1977) [12] and Nanbu (1997) as particular cases, and generalizes the approach of Bobylev and Nanbu (2000). The numerical scheme of this paper is simpler than the schemes by Takizuka and Abe [12] and by Nanbu. We derive it for the general case of multicomponent plasmas and show some numerical tests for the two-component (electrons and ions) case. An optimal choice of parameters for speeding up the computations is also discussed. It is also proved that the order of approximation is not worse than O(√ε), where ε is a parameter of approximation equivalent to the time step Δt in earlier methods. A similar estimate is obtained for the methods of Takizuka and Abe and Nanbu.
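The key structural consequence of a velocity-independent collision frequency can be sketched directly: collision pairs are drawn uniformly at random, and each collision rotates the pair's relative velocity, which conserves momentum and energy exactly. The isotropic post-collision direction below is a simplification for illustration; the actual schemes (Takizuka-Abe, Nanbu, and the method of this paper) use Coulomb-specific scattering-angle distributions.

```python
import numpy as np

rng = np.random.default_rng(7)

def collide_pairs(v, rng):
    # Quasi-Maxwellian pairing: collision frequency does not depend on
    # velocity, so pairs are chosen by a uniform random permutation.
    n = len(v)
    idx = rng.permutation(n)
    for a, b in idx.reshape(-1, 2):
        g = v[a] - v[b]                      # relative velocity
        gmag = np.linalg.norm(g)
        # new relative-velocity direction, isotropic on the sphere
        # (a simplification; real schemes sample Coulomb angles)
        mu = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * np.pi)
        s = np.sqrt(1.0 - mu * mu)
        g_new = gmag * np.array([s * np.cos(phi), s * np.sin(phi), mu])
        vc = 0.5 * (v[a] + v[b])             # center-of-mass velocity (equal masses)
        v[a] = vc + 0.5 * g_new
        v[b] = vc - 0.5 * g_new
    return v

v = rng.normal(size=(1000, 3))               # equal-mass test particles
p0, e0 = v.sum(axis=0), (v**2).sum()
v = collide_pairs(v, rng)
```

Because only the direction of the relative velocity changes, total momentum and kinetic energy are invariants of every collision, regardless of how the pairs were selected.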
An Evaluation of Monte Carlo Simulations of Neutron Multiplicity Measurements of Plutonium Metal
Mattingly, John [North Carolina State University; Miller, Eric [University of Michigan; Solomon, Clell J. Jr. [Los Alamos National Laboratory; Dennis, Ben [University of Michigan; Meldrum, Amy [University of Michigan; Clarke, Shaun [University of Michigan; Pozzi, Sara [University of Michigan
2012-06-21T23:59:59.000Z
In January 2009, Sandia National Laboratories conducted neutron multiplicity measurements of a polyethylene-reflected plutonium metal sphere. Over the past 3 years, those experiments have been collaboratively analyzed using Monte Carlo simulations conducted by University of Michigan (UM), Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and North Carolina State University (NCSU). Monte Carlo simulations of the experiments consistently overpredict the mean and variance of the measured neutron multiplicity distribution. This paper presents a sensitivity study conducted to evaluate the potential sources of the observed errors. MCNPX-PoliMi simulations of plutonium neutron multiplicity measurements exhibited systematic over-prediction of the neutron multiplicity distribution. The over-prediction tended to increase with increasing multiplication. MCNPX-PoliMi had previously been validated against only very low multiplication benchmarks. We conducted sensitivity studies to try to identify the cause(s) of the simulation errors; we eliminated the potential causes we identified, except for Pu-239 {bar {nu}}. A very small change (-1.1%) in the Pu-239 {bar {nu}} dramatically improved the accuracy of the MCNPX-PoliMi simulation for all 6 measurements. This observation is consistent with the trend observed in the bias exhibited by the MCNPX-PoliMi simulations: a very small error in {bar {nu}} is 'magnified' by increasing multiplication. We applied a scalar adjustment to Pu-239 {bar {nu}} (independent of neutron energy); an adjustment that depends on energy is probably more appropriate.
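The "magnification" of a small nu-bar error by multiplication can be made quantitative with a simple relation (the numbers below are illustrative, not the benchmark's values): with k proportional to nu-bar and total multiplication M = 1/(1 - k), a relative error in nu-bar produces a relative error in M of roughly (M - 1) times larger.

```python
def mult(k):
    # total neutron multiplication for effective multiplication factor k < 1
    return 1.0 / (1.0 - k)

k = 0.8                      # illustrative subcritical system, M = 5
M = mult(k)

rel_err_nu = -0.011          # the -1.1% nu-bar adjustment cited above
# k scales linearly with nu-bar, so perturb k by the same relative amount
M2 = mult(k * (1.0 + rel_err_nu))
rel_err_M = M2 / M - 1.0

# first-order analytic estimate: dM/M = (M - 1) * d(nubar)/nubar
est = (M - 1.0) * rel_err_nu
```

For M = 5, a 1.1% nu-bar error becomes roughly a 4% error in multiplication, consistent with the observation that the simulation bias grows with increasing multiplication.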
MONTE CARLO SIMULATION MODEL OF ENERGETIC PROTON TRANSPORT THROUGH SELF-GENERATED ALFVEN WAVES
Afanasiev, A.; Vainio, R., E-mail: alexandr.afanasiev@helsinki.fi [Department of Physics, University of Helsinki (Finland)
2013-08-15T23:59:59.000Z
A new Monte Carlo simulation model for the transport of energetic protons through self-generated Alfven waves is presented. The key point of the model is that, unlike the previous ones, it employs the full form (i.e., includes the dependence on the pitch-angle cosine) of the resonance condition governing the scattering of particles off Alfven waves, the process that approximates the wave-particle interactions in the framework of quasilinear theory. This allows us to model the wave-particle interactions in weak turbulence more adequately, in particular, to implement anisotropic particle scattering instead of the isotropic scattering on which the previous Monte Carlo models were based. The developed model is applied to study the transport of flare-accelerated protons in an open magnetic flux tube. Simulation results for the transport of monoenergetic protons through the spectrum of Alfven waves reveal that anisotropic scattering leads to spatially more distributed wave growth than isotropic scattering. This result can have important implications for diffusive shock acceleration, e.g., it can affect the scattering mean free path of the accelerated particles in, and the size of, the foreshock region.
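The distinction between the two scattering treatments can be sketched as follows: under isotropic scattering every collision redraws the pitch-angle cosine mu uniformly, while under anisotropic scattering mu performs a random walk with a mu-dependent local diffusion coefficient. The quasilinear-like coefficient D(mu) below is an illustrative assumption, not the paper's resonance-condition-derived form.

```python
import numpy as np

rng = np.random.default_rng(3)

def diff_coeff(mu, nu0=10.0, q=1.5, eps=0.05):
    # Illustrative mu-dependent pitch-angle diffusion coefficient,
    # regularized at mu = 0 and vanishing at mu = +/-1.
    return nu0 * (np.abs(mu)**(q - 1.0) + eps) * np.maximum(1.0 - mu**2, 0.0)

dt = 1e-3
mu = rng.uniform(-1.0, 1.0, 5000)     # ensemble of particles
for _ in range(200):
    # Ito step d(mu) = D'(mu) dt + sqrt(2 D(mu)) dW, reflected at mu = +/-1
    drift = (diff_coeff(mu + 1e-5) - diff_coeff(mu - 1e-5)) / 2e-5
    mu = mu + drift * dt + np.sqrt(2.0 * diff_coeff(mu) * dt) * rng.normal(size=mu.size)
    mu = np.where(mu > 1.0, 2.0 - mu, mu)      # reflecting boundaries
    mu = np.where(mu < -1.0, -2.0 - mu, mu)
    mu = np.clip(mu, -1.0, 1.0)                # guard against rare double crossings
```

The Ito drift term D'(mu) is what keeps the stationary pitch-angle distribution uniform; omitting it would spuriously pile particles up where D(mu) is small, which is the kind of error a mu-resolved treatment is meant to avoid.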
A Deterministic-Monte Carlo Hybrid Method for Time-Dependent Neutron Transport Problems
Justin Pounders; Farzad Rahnema
2001-10-01T23:59:59.000Z
A new deterministic-Monte Carlo hybrid solution technique is derived for the time-dependent transport equation. This new approach is based on dividing the time domain into a number of coarse intervals and expanding the transport solution in a series of polynomials within each interval. The solutions within each interval can be represented in terms of arbitrary source terms by using precomputed response functions. In the current work, the time-dependent response function computations are performed using the Monte Carlo method, while the global time-step march is performed deterministically. This work extends previous work by coupling the time-dependent expansions to space- and angle-dependent expansions to fully characterize the 1D transport response/solution. More generally, this approach represents an incremental extension of the steady-state coarse-mesh transport method that is based on global-local decompositions of large neutron transport problems. A homogeneous slab is discussed as an example of the new developments.
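The coarse-interval response-function march can be sketched with a toy linear model: within each coarse step the state is advanced by precomputed response operators, x_{n+1} = R x_n + S q_n. Here R and S are built exactly from a small system x' = A x + q (A, dt, and the source are arbitrary illustrative values); in the hybrid method the responses would instead be computed by Monte Carlo transport, while the global march itself remains deterministic.

```python
import numpy as np

# Small stable linear model x' = A x + q (illustrative values)
A = np.array([[-1.0, 0.2],
              [0.1, -0.5]])
dt = 0.5

# "Precomputed" responses for one coarse interval with a constant source:
# R = e^{A dt},  S = A^{-1} (e^{A dt} - I), built via eigendecomposition
w, V = np.linalg.eig(A)                  # A has real, distinct eigenvalues
Vinv = np.linalg.inv(V)
R = (V * np.exp(w * dt)) @ Vinv
S = (V * ((np.exp(w * dt) - 1.0) / w)) @ Vinv

x = np.array([1.0, 0.0])
q = np.array([0.0, 0.3])
for n in range(10):                      # deterministic global time-step march
    x = R @ x + S @ q
```

The march inherits the accuracy of the responses: because R and S are exact for this model, the steady state x* = -A^{-1} q is an exact fixed point of the update, and the iteration relaxes toward it from any initial condition.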
Bianchini, G.; Burgio, N.; Carta, M. [ENEA C.R. CASACCIA, via Anguillarese, 301, 00123 S. Maria di Galeria Roma (Italy); Peluso, V. [ENEA C.R. BOLOGNA, Via Martiri di Monte Sole, 4, 40129 Bologna (Italy); Fabrizio, V.; Ricci, L. [Univ. of Rome La Sapienza, C/o ENEA C.R. CASACCIA, via Anguillarese, 301, 00123 S. Maria di Galeria Roma (Italy)
2012-07-01T23:59:59.000Z
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment, a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombarding a tritium target with deuterons. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report focuses on the simulation, by deterministic (the French code ERANOS) and Monte Carlo (the US code MCNPX) calculations, of three reactivity measurement techniques, Slope ({alpha}-fitting), Area-ratio, and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The reactivity inferred by the Area-ratio method, in dollar units, shows an overall agreement between the deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
A Monte Carlo Study of Multiplicity Fluctuations in Pb-Pb Collisions at LHC Energies
Gupta, Ramni
2015-01-01T23:59:59.000Z
With the large volumes of data available from the LHC, it has become possible to study multiplicity distributions for the various possible behaviours of multiparticle production in relativistic heavy-ion collisions, in which a system of dense and hot partons is created. In this context it is both important and interesting to check how well Monte Carlo generators can describe the behaviour of multiparticle production processes. One such possible behaviour is self-similarity in particle production, which can be studied through intermittency analyses and further through chaoticity/erraticity in heavy-ion collisions. We analyse the behaviour of the erraticity index in central Pb-Pb collisions at a centre-of-mass energy of 2.76 TeV per nucleon using the AMPT Monte Carlo event generator, following the recent proposal by R.C. Hwa and C.B. Yang concerning the study of local multiplicity fluctuations as a signature of critical hadronization in heavy-ion collisions. We report ...
Graaf, E. R. van der, E-mail: vandergraaf@kvi.nl; Dendooven, P.; Brandenburg, S. [KVI-Center for Advanced Radiation Technology (KVI-CART), University of Groningen, Zernikelaan 25, 9747 AA Groningen (Netherlands)
2014-06-15T23:59:59.000Z
A detector model optimization procedure, based on matching Monte Carlo simulations to measurements for two experimentally calibrated sample geometries frequently used in radioactivity measurement laboratories, results in relative agreement within 5% between simulated and measured efficiencies for a high-purity germanium detector. The optimization procedure indicated that an increase in dead-layer thickness is largely responsible for the decrease of the detector efficiency over time. The optimized detector model allows Monte Carlo efficiency calibration for all other samples whose geometry and bulk composition are known. The presented method is a competitive and economical alternative to more elaborate detector scanning methods and achieves comparable accuracy.
Olivier Wantz
2010-01-14T23:59:59.000Z
This is the second in a series of papers that investigates the topological susceptibility in the interacting instanton liquid model (IILM) at finite temperature, and deals with the technical issues relating to the Monte Carlo simulations. The IILM reduces field theory to a molecular dynamics description, and for `physical' quark masses the system behaves like a strongly associating fluid. We will argue that this is a generic feature for very light Dirac quarks in a non-trivial background, described in the semi-classical approach. To avoid unnecessary complications, we present the ideas of biased Monte Carlo and implement the transition probabilities for a toy model.
Nakano, Y., E-mail: nakano.yuuji@c.mbox.nagoya-u.ac.jp; Yamazaki, A.; Watanabe, K.; Uritani, A. [Graduate School of Engineering, Nagoya University, Nagoya 464-8603 (Japan); Ogawa, K.; Isobe, M. [National Institute for Fusion Science, Toki-city, GIFU 509-5292 (Japan)
2014-11-15T23:59:59.000Z
Neutron monitoring is important for managing the safety of fusion experiment facilities because neutrons are generated in fusion reactions. Monte Carlo simulations play an important role in evaluating the influence of neutron scattering from various structures and in correcting differences between deuterium plasma experiments and in situ calibration experiments. We evaluated these influences based on differences between the two experiments at the Large Helical Device using the Monte Carlo simulation code MCNP5. The difference between the two experiments in the absolute detection efficiency of the fission chamber between O-ports is estimated to be the largest among all the monitors. We additionally evaluated correction coefficients for some neutron monitors.
Levin, Yan
Surface tension of an electrolyte-air interface: a Monte Carlo study. J. Phys.: Condens. Matter 24 (2012) 284115 (5pp), doi:10.1088/0953-8984/24/28/284115. A method for calculating the surface tension of an electrolyte-air interface using Monte Carlo (MC) simulations is presented.
Davis JE, Eddy MJ, Sutton TM, Altomari TJ
2007-03-01T23:59:59.000Z
Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state of the art in solid modeling representation uses a boundary representation format in which geometry and topology are used to form three-dimensional boundaries of the solid. The geometry representation used in these systems is based on cubic B-spline curves and surfaces--a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation.
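The half-space representation targeted by the conversion can be illustrated with a minimal point-membership test. This is a hypothetical sketch, not the paper's algorithm; the `half_space`/`cell` names and the unit-cube example are invented for illustration:

```python
# A half-space is the set of points p with a*x + b*y + c*z + d <= 0; a cell
# in a half-space Monte Carlo geometry is the intersection of several of them.

def half_space(a, b, c, d):
    """Return a predicate for the half-space a*x + b*y + c*z + d <= 0."""
    return lambda p: a * p[0] + b * p[1] + c * p[2] + d <= 0.0

def cell(*half_spaces):
    """A cell is the intersection (logical AND) of its half-spaces."""
    return lambda p: all(h(p) for h in half_spaces)

# Unit cube built from six planar half-spaces.
unit_cube = cell(
    half_space(-1, 0, 0, 0),   # x >= 0
    half_space( 1, 0, 0, -1),  # x <= 1
    half_space( 0, -1, 0, 0),  # y >= 0
    half_space( 0, 1, 0, -1),  # y <= 1
    half_space( 0, 0, -1, 0),  # z >= 0
    half_space( 0, 0, 1, -1),  # z <= 1
)

print(unit_cube((0.5, 0.5, 0.5)))  # True: inside the cube
print(unit_cube((1.5, 0.5, 0.5)))  # False: outside the x <= 1 half-space
```

Unions of such cells (logical OR) complete the intersection/union geometry units mentioned in the abstract.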
Resonating Valence Bond Quantum Monte Carlo: Application to the ozone molecule
Azadi, Sam; Kühne, Thomas D
2015-01-01T23:59:59.000Z
We study the potential energy surface of the ozone molecule by means of Quantum Monte Carlo simulations based on the resonating valence bond concept. The trial wave function consists of an antisymmetrized geminal power arranged in a single determinant, which is multiplied by a Jastrow correlation factor. Whereas the determinantal part incorporates static correlation effects, the augmented real-space correlation factor accounts for the dynamic electron correlation. The accuracy of this approach is demonstrated by computing the potential energy surface of the ozone molecule along three vibrational modes: symmetric, asymmetric, and scissoring. We find that the employed wave function provides a detailed description of rather strongly correlated multi-reference systems, in quantitative agreement with experiment.
MaGe - a Geant4-based Monte Carlo framework for low-background experiments
Yuen-Dat Chan; Jason A. Detwiler; Reyco Henning; Victor M. Gehman; Rob A. Johnson; David V. Jordan; Kareem Kazkaz; Markus Knapp; Kevin Kroninger; Daniel Lenz; Jing Liu; Xiang Liu; Michael G. Marino; Akbar Mokhtarani; Luciano Pandola; Alexis G. Schubert; Claudia Tomei
2008-02-06T23:59:59.000Z
A Monte Carlo framework, MaGe, has been developed based on the Geant4 simulation toolkit. Its purpose is to simulate physics processes in low-energy and low-background radiation detectors, specifically for the Majorana and Gerda $^{76}$Ge neutrinoless double-beta decay experiments. This jointly-developed tool is also used to verify the simulation of physics processes relevant to other low-background experiments in Geant4. The MaGe framework contains simulations of prototype experiments and test stands, and is easily extended to incorporate new geometries and configurations while still using the same verified physics processes, tunings, and code framework. This reduces duplication of efforts and improves the robustness of and confidence in the simulation output.
Validation of GEANT4 Monte Carlo Models with a Highly Granular Scintillator-Steel Hadron Calorimeter
C. Adloff; J. Blaha; J. -J. Blaising; C. Drancourt; A. Espargilière; R. Gaglione; N. Geffroy; Y. Karyotakis; J. Prast; G. Vouters; K. Francis; J. Repond; J. Schlereth; J. Smith; L. Xia; E. Baldolemar; J. Li; S. T. Park; M. Sosebee; A. P. White; J. Yu; T. Buanes; G. Eigen; Y. Mikami; N. K. Watson; G. Mavromanolakis; M. A. Thomson; D. R. Ward; W. Yan; D. Benchekroun; A. Hoummada; Y. Khoulaki; J. Apostolakis; A. Dotti; G. Folger; V. Ivantchenko; V. Uzhinskiy; M. Benyamna; C. Cârloganu; F. Fehr; P. Gay; S. Manen; L. Royer; G. C. Blazey; A. Dyshkant; J. G. R. Lima; V. Zutshi; J. -Y. Hostachy; L. Morin; U. Cornett; D. David; G. Falley; K. Gadow; P. Göttlicher; C. Günter; B. Hermberg; S. Karstensen; F. Krivan; A. -I. Lucaci-Timoce; S. Lu; B. Lutz; S. Morozov; V. Morgunov; M. Reinecke; F. Sefkow; P. Smirnov; M. Terwort; A. Vargas-Trevino; N. Feege; E. Garutti; I. Marchesinik; M. Ramilli; P. Eckert; T. Harion; A. Kaplan; H. -Ch. Schultz-Coulon; W. Shen; R. Stamen; B. Bilki; E. Norbeck; Y. Onel; G. W. Wilson; K. Kawagoe; P. D. Dauncey; A. -M. Magnan; V. Bartsch; M. Wing; F. Salvatore; E. Calvo Alamillo; M. -C. Fouz; J. Puerta-Pelayo; B. Bobchenko; M. Chadeeva; M. Danilov; A. Epifantsev; O. Markin; R. Mizuk; E. Novikov; V. Popov; V. Rusinov; E. Tarkovsky; N. Kirikova; V. Kozlov; P. Smirnov; Y. Soloviev; P. Buzhan; A. Ilyin; V. Kantserov; V. Kaplin; A. Karakash; E. Popova; V. Tikhomirov; C. Kiesling; K. Seidel; F. Simon; C. Soldner; M. Szalay; M. Tesar; L. Weuste; M. S. Amjad; J. Bonis; S. Callier; S. Conforti di Lorenzo; P. Cornebise; Ph. Doublet; F. Dulucq; J. Fleury; T. Frisson; N. van der Kolk; H. Li; G. Martin-Chassard; F. Richard; Ch. de la Taille; R. Pöschl; L. Raux; J. Rouëné; N. Seguin-Moreau; M. Anduze; V. Boudry; J-C. Brient; D. Jeans; P. Mora de Freitas; G. Musat; M. Reinhard; M. Ruan; H. Videau; B. Bulanek; J. Zacek; J. Cvach; P. Gallus; M. Havranek; M. Janata; J. Kvasnicka; D. Lednicky; M. Marcisovsky; I. Polak; J. Popule; L. Tomasek; M. Tomasek; P. Ruzicka; P. Sicho; J. Smolik; V. Vrba; J. Zalesak; B. Belhorma; H. Ghazlane; T. Takeshita; S. Uozumi; M. Götze; O. Hartbrich; J. Sauer; S. Weber; C. Zeitnitz
2014-06-15T23:59:59.000Z
Calorimeters with high granularity are a fundamental requirement of the Particle Flow paradigm. This paper focuses on the prototype of a hadron calorimeter with analog readout, consisting of thirty-eight scintillator layers alternating with steel absorber planes. The scintillator plates are finely segmented into tiles individually read out via Silicon Photomultipliers. The presented results are based on data collected with pion beams in the energy range from 8 GeV to 100 GeV. The fine segmentation of the sensitive layers and the high sampling frequency allow for an excellent reconstruction of the spatial development of hadronic showers. A comparison between data and Monte Carlo simulations is presented, concerning both the longitudinal and lateral development of hadronic showers and the global response of the calorimeter. The performance of several GEANT4 physics lists with respect to these observables is evaluated.
Uncertainties associated with the use of the KENO Monte Carlo criticality codes
Landers, N.F.; Petrie, L.M. (Oak Ridge National Lab., TN (USA))
1989-01-01T23:59:59.000Z
The KENO multi-group Monte Carlo criticality codes have earned the reputation of being efficient, user-friendly tools especially suited for the analysis of situations commonly encountered in the storage and transportation of fissile materials. Throughout their twenty years of service, a continuing effort has been made to maintain and improve these codes to meet the needs of the nuclear criticality safety community. Foremost among these needs is the knowledge of how to utilize the results safely and effectively. Therefore, it is important that code users be aware of uncertainties that may affect their results. These uncertainties originate from approximations in the problem data, the methods used to process cross sections, and assumptions, limitations, and approximations within the criticality computer code itself. 6 refs., 8 figs., 1 tab.
Monte-Carlo study of quasiparticle dispersion relation in monolayer graphene
P. V. Buividovich
2013-01-07T23:59:59.000Z
The density of electronic one-particle states in monolayer graphene is studied by performing Hybrid Monte-Carlo simulations of the tight-binding model for electrons on the pi orbitals of the carbon atoms that make up the graphene lattice. The density of states is approximated as the derivative of the particle number with respect to the chemical potential at sufficiently small temperature. Simulations are performed in the partially quenched approximation, in which virtual particles and holes have zero chemical potential. It is found that the Van Hove singularity becomes much sharper than in the free tight-binding model. Simulation results also suggest that the Fermi velocity increases with interaction strength up to the transition to the phase with spontaneously broken chiral symmetry.
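The "derivative of the particle number over the chemical potential" estimator can be sketched in a few lines. This is an illustrative toy, not the paper's HMC code: a free 1D tight-binding chain stands in for graphene's honeycomb lattice, and the hopping t, temperature T, and size L are arbitrary choices:

```python
import numpy as np

# Free 1D tight-binding band: E(k) = -2t cos(k) on an L-site ring.
t, T, L = 1.0, 0.05, 2000
k = 2.0 * np.pi * np.arange(L) / L
energies = -2.0 * t * np.cos(k)

def particle_number(mu):
    """Average occupation per site at chemical potential mu (Fermi-Dirac)."""
    return float(np.sum(1.0 / (np.exp((energies - mu) / T) + 1.0))) / L

# Density of states approximated by a central finite difference dN/dmu.
dmu = 1e-3
mu_grid = np.linspace(-2.5, 2.5, 101)
dos = [(particle_number(m + dmu) - particle_number(m - dmu)) / (2.0 * dmu)
       for m in mu_grid]
# The 1D band-edge van Hove singularities appear as peaks near mu = +/-2t,
# smeared over a width set by the temperature T.
```

Integrating `dos` over the grid recovers the total change in filling, which is the consistency check behind this definition of the density of states.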
From hypernuclei to the Inner Core of Neutron Stars: A Quantum Monte Carlo Study
Diego Lonardoni; Francesco Pederiva; Stefano Gandolfi
2014-08-19T23:59:59.000Z
Auxiliary Field Diffusion Monte Carlo (AFDMC) calculations have been employed to revise the interaction between $\\Lambda$-hyperons and nucleons in hypernuclei. The scheme used to describe the interaction, inspired by the phenomenological Argonne-Urbana forces, is the $\\Lambda N+\\Lambda NN$ potential first introduced by Bodmer, Usmani et al. Within this framework, we performed calculations on light and medium mass hypernuclei in order to assess the extent of the repulsive contribution of the three-body part. By tuning this contribution to reproduce the $\\Lambda$ separation energy in $^5_\\Lambda$He and $^{17}_{~\\Lambda}$O, experimental findings are reproduced over a wide range of masses. Calculations have then been extended to $\\Lambda$-neutron matter in order to derive an analog of the symmetry energy to be used in determining the equation of state of matter in the typical conditions found in the inner core of neutron stars.
The Auxiliary Field Diffusion Monte Carlo Method for Nuclear Physics and Nuclear Astrophysics
Stefano Gandolfi
2007-12-09T23:59:59.000Z
In this thesis, I discuss the use of the Auxiliary Field Diffusion Monte Carlo method to compute the ground state of nuclear Hamiltonians, and I show several applications to interesting problems both in nuclear physics and in nuclear astrophysics. In particular, the AFDMC algorithm is applied to the study of several nuclear systems, finite, and infinite matter. Results about the ground state of nuclei ($^4$He, $^8$He, $^{16}$O and $^{40}$Ca), neutron drops (with 8 and 20 neutrons) and neutron rich-nuclei (isotopes of oxygen and calcium) are discussed, and the equation of state of nuclear and neutron matter are calculated and compared with other many-body calculations. The $^1S_0$ superfluid phase of neutron matter in the low-density regime was also studied.
Auxiliary-field quantum Monte Carlo method for strongly paired fermions
Carlson, J.; Gandolfi, Stefano [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Schmidt, Kevin E. [Department of Physics, Arizona State University, Tempe, Arizona 85287 (United States); Zhang, Shiwei [Department of Physics, College of William and Mary, Williamsburg, Virginia 23187 (United States)
2011-12-15T23:59:59.000Z
We solve the zero-temperature unitary Fermi gas problem by incorporating a BCS importance function into the auxiliary-field quantum Monte Carlo method. We demonstrate that this method does not suffer from a sign problem and that it increases the efficiency of standard techniques by many orders of magnitude for strongly paired fermions. We calculate the ground-state energies exactly for unpolarized systems with up to 66 particles on lattices of up to 27{sup 3} sites, obtaining an accurate result for the universal parameter {xi}. We also obtain results for interactions with different effective ranges and find that the energy is consistent with a universal linear dependence on the product of the Fermi momentum and the effective range. This method will have many applications in superfluid cold atom systems and in both electronic and nuclear structures where pairing is important.
Ab-initio molecular dynamics simulation of liquid water by Quantum Monte Carlo
Andrea Zen; Ye Luo; Guglielmo Mazzola; Leonardo Guidoni; Sandro Sorella
2014-12-09T23:59:59.000Z
Although liquid water is ubiquitous in the chemical reactions at the roots of life and of the Earth's climate, the prediction of its properties by high-level ab initio molecular dynamics simulations still represents a formidable task for quantum chemistry. In this article we present a room-temperature simulation of liquid water based on the potential energy surface obtained from a many-body wave function through quantum Monte Carlo (QMC) methods. The simulated properties are in excellent agreement with recent neutron scattering and X-ray experiments, particularly concerning the position of the oxygen-oxygen peak in the radial distribution function, at variance with previous Density Functional Theory attempts. Given the excellent performance of QMC on large-scale supercomputers, this work opens new perspectives for predictive and reliable ab-initio simulations of complex chemical systems.
A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A and M Univ., College Station, TX (United States)]|[Los Alamos National Lab., NM (United States)
1997-05-01T23:59:59.000Z
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and {alpha}-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute {alpha}-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
A Monte-Carlo method for ex-core neutron response
Gamino, R.G.; Ward, J.T.; Hughes, J.C. [Lockheed Martin Corp., Schenectady, NY (United States)
1997-10-01T23:59:59.000Z
A Monte Carlo neutron transport kernel capability, primarily for ex-core neutron response, is described. The capability consists of the generation of a set of response kernels, which represent the neutron transport from the core to a specific ex-core volume. This is accomplished by tagging individual neutron histories from their initial source sites and tracking them throughout the problem geometry, tallying those that interact in the geometric regions of interest. These transport kernels can subsequently be combined with any number of core power distributions to determine detector response for a variety of reactor conditions. Thus, the transport kernels are analogous to an integrated adjoint response. Examples of pressure vessel response and ex-core neutron detector response are provided to illustrate the method.
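Once tallied, the kernels are applied by simple weighting: the detector response for any core power distribution is a kernel-weighted sum over core regions, with no further transport needed. A schematic sketch with invented numbers:

```python
import numpy as np

# Per-region transport kernels K_i: detector response per unit power in
# region i, as would be tallied once by the Monte Carlo run. All values here
# are made up for illustration.
kernels = np.array([2.1e-6, 1.4e-6, 0.6e-6, 0.2e-6])

# Two different core power distributions (arbitrary units) reuse the same
# precomputed kernels -- this is the point of the kernel approach.
power_a = np.array([100.0, 200.0, 300.0, 400.0])
power_b = np.array([400.0, 300.0, 200.0, 100.0])

response_a = float(kernels @ power_a)  # detector response for distribution a
response_b = float(kernels @ power_b)  # detector response for distribution b
```

Shifting power toward regions with larger kernels (closer to the detector, in this toy) raises the response, which is the behavior an integrated adjoint weighting captures.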
MC++: A parallel, portable, Monte Carlo neutron transport code in C++
Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A & M Univ., College Station, TX (United States)
1997-03-01T23:59:59.000Z
MC++ is an implicit multi-group Monte Carlo neutron transport code written in C++ and based on the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, SMPs, and clusters of UNIX workstations. MC++ is being developed to provide transport capabilities to the Accelerated Strategic Computing Initiative (ASCI). It is also intended to form the basis of the first transport physics framework (TPF), which is a C++ class library containing appropriate abstractions, objects, and methods for the particle transport problem. The transport problem is briefly described, as well as the current status and algorithms in MC++ for solving the transport equation. The alpha version of the POOMA class library is also discussed, along with the implementation of the transport solution algorithms using POOMA. Finally, a simple test problem is defined and performance and physics results from this problem are discussed on a variety of platforms.
Kinetic Monte Carlo Simulation of Electrodeposition using the Embedded-Atom Method
Treeratanaphitak, Tanyakarn; Abukhdeir, Nasser Mohieddin
2013-01-01T23:59:59.000Z
A kinetic Monte Carlo (KMC) method is presented to simulate the electrodeposition of a metal on a single crystal surface of the same metal under galvanostatic conditions. This method utilizes the multi-body embedded-atom method (EAM) potential to characterize the interactions of metal atoms and adatoms. The KMC method accounts for deposition and surface diffusion processes including hopping, atom exchange and step-edge atom exchange. Steady-state deposition configurations obtained using the KMC method are validated by comparison with the structures obtained through the use of molecular dynamics (MD) simulations to relax KMC constraints. The results of this work support the use of the proposed KMC method to simulate electrodeposition processes at length (microns) and time (seconds) scales that are not feasible using other methods.
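The event-selection core of such a KMC scheme can be sketched as follows. The event classes mirror those named above (deposition, hopping, exchange), but the rates are placeholders, not EAM-derived barriers:

```python
import math
import random

random.seed(1)

def kmc_step(rates, t):
    """One rejection-free KMC step: pick an event with probability
    proportional to its rate, then advance the clock by an exponentially
    distributed waiting time (Poisson process with the total rate)."""
    total = sum(rates.values())
    r = random.random() * total
    acc = 0.0
    for event, rate in rates.items():
        acc += rate
        if r < acc:
            chosen = event
            break
    t += -math.log(random.random()) / total
    return chosen, t

# Placeholder event classes and rates (events per unit time).
rates = {"deposit": 5.0, "hop": 50.0, "exchange": 1.0}
t = 0.0
counts = {e: 0 for e in rates}
for _ in range(10000):
    event, t = kmc_step(rates, t)
    counts[event] += 1
# Relative event frequencies track the rates, so "hop" dominates here.
```

In a full simulation each chosen event would also update the lattice configuration and, through the EAM energetics, the rate table itself.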
Bose, Tushar Kanti
2015-01-01T23:59:59.000Z
The realization of a spontaneous macroscopic ferroelectric order in fluids of anisotropic mesogens is a topic of both fundamental and technological interest. Recently, we demonstrated that a system of dipolar achiral disklike ellipsoids can exhibit long-searched ferroelectric liquid crystalline phases of dipolar origin. In the present work, extensive off-lattice Monte Carlo simulations are used to investigate the phase behavior of the system under the influences of the electrostatic boundary conditions that restrict any global polarization. We find that the system develops strongly ferroelectric slablike domains periodically arranged in an antiferroelectric fashion. Exploring the phase behavior at different dipole strengths, we find existence of the ferroelectric nematic and ferroelectric columnar order inside the domains. For higher dipole strengths, a biaxial phase is also obtained with a similar periodic array of ferroelectric slabs of antiparallel polarizations. We have studied the depolarizing effects by...
Monte Carlo procedure for protein folding in lattice model. Conformational rigidity
Olivier Collet
1999-07-19T23:59:59.000Z
A rigorous Monte Carlo method for protein folding simulation on a lattice model is introduced. We show that a parameter which can be seen as the rigidity of the conformations has to be introduced in order to satisfy the detailed balance condition. Its properties are discussed and its role during the folding process is elucidated. The method is applied to small chains on a two-dimensional lattice. A Bortz-Kalos-Lebowitz type algorithm, which allows the kinetics of the chains to be studied at very low temperature, is implemented within the presented method. We show that the coefficients of the Arrhenius law are in good agreement with the value of the main potential barrier of the system.
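The detailed-balance issue the paper addresses can be illustrated with a toy chain in which the number of available moves depends on the state, much as a chain conformation's "rigidity" limits its accessible moves. This sketch is generic, not the paper's lattice-protein move set:

```python
import math
import random
from collections import Counter

random.seed(2)
N, T = 4, 1.0
energy = lambda s: 0.0          # flat energy landscape: target is uniform

def neighbors(s):
    """States 0 and N have one neighbor; interior states have two, so the
    proposal distribution is asymmetric."""
    return [x for x in (s - 1, s + 1) if 0 <= x <= N]

def mh_step(s):
    s_new = random.choice(neighbors(s))
    # Detailed balance requires the move-count ratio n(s)/n(s_new) in the
    # acceptance, exactly as a conformation-dependent rigidity factor would.
    a = (len(neighbors(s)) / len(neighbors(s_new))) * math.exp(
        -(energy(s_new) - energy(s)) / T)
    return s_new if random.random() < a else s

s, visits = 0, Counter()
for _ in range(200000):
    s = mh_step(s)
    visits[s] += 1
# With the correction factor all five states are visited about equally;
# dropping it would over-weight the interior (more-connected) states.
```

The analogy: without the rigidity-like factor, conformations from which many moves are possible would be visited too often, biasing both equilibrium averages and folding kinetics.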
Validation of the Monte Carlo Criticality Program KENO V.a for highly enriched uranium systems
Knight, J.R.
1984-11-01T23:59:59.000Z
A series of calculations based on critical experiments have been performed using the KENO V.a Monte Carlo Criticality Program for the purpose of validating KENO V.a for use in evaluating Y-12 Plant criticality problems. The experiments were reflected and unreflected systems of single units and arrays containing highly enriched uranium metal or uranium compounds. Various geometrical shapes were used in the experiments. The SCALE control module CSAS25 with the 27-group ENDF/B-4 cross-section library was used to perform the calculations. Some of the experiments were also calculated using the 16-group Hansen-Roach Library. Results are presented in a series of tables and discussed. Results show that the criteria established for the safe application of the KENO IV program may also be used for KENO V.a results.
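The statistical side of such a validation can be sketched as follows; the cycle values below are invented for illustration, not taken from the report:

```python
import statistics

# Hypothetical active-cycle k_eff estimates from a Monte Carlo criticality
# run, combined into a mean and a one-sigma statistical uncertainty the way
# codes such as KENO report their results.
cycle_k = [0.9987, 1.0012, 0.9995, 1.0003, 0.9991, 1.0008, 0.9999, 1.0005]

k_mean = statistics.fmean(cycle_k)
k_sigma = statistics.stdev(cycle_k) / len(cycle_k) ** 0.5  # std. error of the mean
# Validation studies compare k_mean +/- k_sigma against the known critical
# state (k = 1) of each benchmark experiment.
```

Real calculations also discard early "inactive" cycles so that the fission source has converged before tallying begins.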
Reversible jump Markov chain Monte Carlo computation and Bayesian model determination
Peter J. Green
1995-01-01T23:59:59.000Z
Markov chain Monte Carlo methods for Bayesian computation have until recently been restricted to problems where the joint distribution of all variables has a density with respect to some fixed standard underlying measure. They have therefore not been available for application to Bayesian model determination, where the dimensionality of the parameter vector is typically not fixed. This article proposes a new framework for the construction of reversible Markov chain samplers that jump between parameter subspaces of differing dimensionality, which is flexible and entirely constructive. It should therefore have wide applicability in model determination problems. The methodology is illustrated with applications to multiple change-point analysis in one and two dimensions, and to a Bayesian comparison of binomial experiments.
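As background, the fixed-dimension accept/reject step that reversible jump generalizes can be sketched with a plain random-walk Metropolis-Hastings sampler (this is the standard method, not Green's algorithm; reversible jump adds dimension-changing moves with a Jacobian factor in the acceptance ratio). Step size, target, and chain length here are arbitrary:

```python
import math
import random

random.seed(3)
log_target = lambda x: -0.5 * x * x     # log N(0,1) density, up to a constant

x, samples = 0.0, []
for _ in range(100000):
    x_new = x + random.gauss(0.0, 1.0)  # symmetric random-walk proposal
    # Metropolis acceptance: min(1, pi(x_new)/pi(x)) for a symmetric proposal
    if random.random() < math.exp(min(0.0, log_target(x_new) - log_target(x))):
        x = x_new
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# The chain's mean and variance approach 0 and 1, the moments of the target.
```

In the reversible jump extension, a move from a k-dimensional to a k'-dimensional subspace pairs the proposal with auxiliary random variables so that the combined transformation is a bijection, and the acceptance ratio picks up its Jacobian determinant.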
Finite-Temperature Pairing Gap of a Unitary Fermi Gas by Quantum Monte Carlo Calculations
Magierski, Piotr; Wlazlowski, Gabriel [Faculty of Physics, Warsaw University of Technology, ulica Koszykowa 75, 00-662 Warsaw (Poland); Bulgac, Aurel; Drut, Joaquin E. [Department of Physics, University of Washington, Seattle, Washington 98195-1560 (United States)
2009-11-20T23:59:59.000Z
We calculate the one-body temperature Green's (Matsubara) function of the unitary Fermi gas via quantum Monte Carlo, and extract the spectral weight function A(p,omega) using the methods of maximum entropy and singular value decomposition. From A(p,omega) we determine the quasiparticle spectrum, which can be accurately parametrized by three functions of temperature: an effective mass m*, a mean-field potential U, and a gap DELTA. Below the critical temperature T{sub c}=0.15epsilon{sub F} the results for m*, U, and DELTA can be accurately reproduced using an independent quasiparticle model. We find evidence of a pseudogap in the fermionic excitation spectrum for temperatures up to T*{approx_equal}0.20{epsilon}{sub F}>T{sub c}.
Quantum Monte Carlo study of dilute neutron matter at finite temperatures
Wlazlowski, Gabriel; Magierski, Piotr [Faculty of Physics, Warsaw University of Technology, Ulica Koszykowa 75, PL-00-662 Warsaw (Poland)
2011-01-15T23:59:59.000Z
We report results of fully nonperturbative, path integral Monte Carlo calculations for dilute neutron matter. The neutron-neutron interaction in the s channel is parameterized by the scattering length and the effective range. We calculate the energy and the chemical potential as a function of temperature at density {rho}=0.003 fm{sup -3}. The critical temperature T{sub c} for the superfluid-normal phase transition is estimated from the finite size scaling of the condensate fraction. At low temperatures we extract the spectral weight function A(p,{omega}) from the imaginary time propagator using the methods of maximum entropy and singular value decomposition. We determine the quasiparticle spectrum, which can be accurately parameterized by three parameters: an effective mass m{sup *}, a mean-field potential U, and a gap {Delta}. Large values of {Delta}/T{sub c} indicate that the system is not a BCS-type superfluid at low temperatures.
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
Arampatzis, Georgios, E-mail: garab@math.uoc.gr [Department of Applied Mathematics, University of Crete (Greece) [Department of Applied Mathematics, University of Crete (Greece); Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States); Katsoulakis, Markos A., E-mail: markos@math.umass.edu [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)
2014-03-28T23:59:59.000Z
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated (“coupled”) stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach.
We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
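The variance problem that motivates coupling can be shown on a one-parameter toy (this sketch illustrates the baseline Common Random Number coupling the paper compares against, not the proposed goal-oriented method; the model and all parameter values are invented):

```python
import math
import random
import statistics

random.seed(4)
theta, h, n = 1.0, 0.01, 5000   # parameter, finite-difference step, samples

# Estimate d/dtheta E[X] for X ~ Exp(mean theta), whose true sensitivity is 1.

# Independent samples: the perturbed and unperturbed draws are uncorrelated,
# so their difference divided by 2h is extremely noisy.
indep = [(random.expovariate(1.0 / (theta + h))
          - random.expovariate(1.0 / (theta - h))) / (2 * h)
         for _ in range(n)]

# Common Random Numbers: one uniform drives both runs through the inverse
# CDF, making the pair strongly correlated.
def coupled_sample():
    u = random.random()
    inv = lambda t: -t * math.log(1.0 - u)  # inverse CDF of Exp(mean t)
    return (inv(theta + h) - inv(theta - h)) / (2 * h)

crn = [coupled_sample() for _ in range(n)]
# Both estimators are unbiased for the true sensitivity of 1, but the CRN
# variance is smaller by orders of magnitude.
```

The goal-oriented construction of the paper goes further by choosing the coupling to minimize the variance of the specific observable of interest, rather than coupling the raw randomness generically.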
Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations
Douglass, Michael; Bezak, Eva; Penfold, Scott [School of Chemistry and Physics, University of Adelaide, North Terrace, Adelaide 5005, South Australia (Australia) and Department of Medical Physics, Royal Adelaide Hospital, North Terrace, Adelaide 5000, South Australia (Australia)
2012-06-15T23:59:59.000Z
Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes and composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB code was able to grow semi-realistic cell distributions (approximately 2 x 10{sup 8} cells in 1 cm{sup 3}) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow for a more complete picture of cell damage to be developed.
Implementation of the probability table method in a continuous-energy Monte Carlo code system
Sutton, T.M.; Brown, F.B. [Lockheed Martin Corp., Schenectady, NY (United States)
1998-10-01T23:59:59.000Z
RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.
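As a schematic of the table lookup itself (not RACER's actual data structures; the band probabilities and cross-section values below are invented for illustration), sampling a cross section from a probability table at a given energy point is discrete CDF inversion:

```python
import bisect
import random

# Hypothetical probability table at one energy point: band probabilities
# and the corresponding total cross-section values (barns). Illustrative only.
band_probs = [0.30, 0.40, 0.20, 0.10]
band_xs    = [10.0, 14.0, 25.0, 60.0]

# Precompute the cumulative distribution once per energy point.
cdf = []
acc = 0.0
for p in band_probs:
    acc += p
    cdf.append(acc)

def sample_xs(rng):
    """Sample a total cross section from the probability table by
    inverting the discrete CDF with a binary search."""
    u = rng.random() * cdf[-1]
    return band_xs[bisect.bisect_left(cdf, u)]
```

In a real code the table would be interpolated between the tabulated energy points (the second step described above, where RACER-specific energy grids are built); this sketch shows only the per-point sampling.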
Zhang, Pengfei; Wang, Qiang, E-mail: q.wang@colostate.edu [Department of Chemical and Biological Engineering, Colorado State University, Fort Collins, Colorado 80523-1370 (United States)]
2014-01-28T23:59:59.000Z
Using fast lattice Monte Carlo (FLMC) simulations [Q. Wang, Soft Matter 5, 4564 (2009)] and the corresponding lattice self-consistent field (LSCF) calculations, we studied a model system of grafted homopolymers, in both the brush and mushroom regimes, in an explicit solvent compressed by an impenetrable surface. Direct comparisons between FLMC and LSCF results, both of which are based on the same Hamiltonian (thus without any parameter-fitting between them), unambiguously and quantitatively reveal the fluctuations/correlations neglected by the latter. We studied both the structure (including the canonical-ensemble averages of the height and the mean-square end-to-end distances of grafted polymers) and thermodynamics (including the ensemble-averaged reduced energy density and the related internal energy per chain, the differences in the Helmholtz free energy and entropy per chain from the uncompressed state, and the pressure due to compression) of the system. In particular, we generalized the method for calculating pressure in lattice Monte Carlo simulations proposed by Dickman [J. Chem. Phys. 87, 2246 (1987)], and combined it with the Wang-Landau–Optimized Ensemble sampling [S. Trebst, D. A. Huse, and M. Troyer, Phys. Rev. E 70, 046701 (2004)] to efficiently and accurately calculate the free energy difference and the pressure due to compression. While we mainly examined the effects of the degree of compression, the distance between the nearest-neighbor grafting points, the reduced number of chains grafted at each grafting point, and the system fluctuations/correlations in an athermal solvent, the θ-solvent is also considered in some cases.
Monte Carlo Simulations of the Dissolution of Borosilicate Glasses in Near-Equilibrium Conditions
Kerisit, Sebastien [Pacific Northwest National Laboratory (PNNL)]; Pierce, Eric M [ORNL]
2012-01-01T23:59:59.000Z
Monte Carlo simulations were performed to investigate the mechanisms of glass dissolution as equilibrium conditions are approached in both static and flow-through conditions. The glasses studied are borosilicate glasses in the compositional range (80 - x)% SiO2, (10 + x/2)% B2O3, (10 + x/2)% Na2O, where 5% < x < 30%. In static conditions, dissolution/condensation reactions lead to the formation, for all compositions studied, of a blocking layer composed of polymerized Si sites with principally 4 connections to nearest Si sites. This layer forms atop the altered glass layer and shows similar composition and density for all glass compositions considered. In flow-through conditions, three main dissolution regimes are observed: at high flow rates, the dissolving glass exhibits a thin alteration layer and congruent dissolution; at low flow rates, a blocking layer is formed as in static conditions but the simulations show that water can occasionally break through the blocking layer causing the corrosion process to resume; and, at intermediate flow rates, the glasses dissolve incongruently with an increasingly deepening altered layer. The simulation results suggest that, in geological disposal environments, small perturbations or slow flows could be enough to prevent the formation of a permanent blocking layer. Finally, a comparison between predictions of the linear rate law and the Monte Carlo simulation results indicates that, in flow-through conditions, the linear rate law is applicable at high flow rates and deviations from the linear rate law occur under low flow rates (e.g., at near-saturated conditions with respect to amorphous silica). This effect is associated with the complex dynamics of Si dissolution/condensation processes at the glass-water interface.
Sunny, E. E.; Martin, W. R. [University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor MI 48109 (United States)
2013-07-01T23:59:59.000Z
Current Monte Carlo codes use one of three models to model neutron scattering in the epithermal energy range: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S({alpha},{beta}) model, depending on the neutron energy and the specific Monte Carlo code. The free gas scattering model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that using the free gas scattering model in the vicinity of the resonances in the lower epithermal range can under-predict resonance absorption due to the up-scattering phenomenon. Existing methods all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame. In this paper, we will present a new sampling methodology that (1) accounts for the energy-dependent scattering cross sections in the collision analysis and (2) acts in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on-the-fly during the random walk of the neutron. Results for criticality studies on fuel pin and fuel assembly calculations using these methods showed very close comparison to results using the reference Doppler-broadened rejection correction (DBRC) scheme. (authors)
Calculation of Nonlinear Thermoelectric Coefficients of InAs1-xSbx Using Monte Carlo Method
InAs1-xSbx is a favorable thermoelectric material with local nonequilibrium charge distribution, and can increase the cooling power density when a lightly doped thermoelectric material is under a large electrical...
Monte Carlo Simulation-based Sensitivity Analysis of the Model of a Thermal-Hydraulic Passive System
Université Paris-Sud XI
Passive systems are expected to improve the safety of nuclear power plants; however, uncertainties are present... Published in Reliability Engineering and System Safety 107 (2012) 90-106. DOI: 10.1016/j.ress.2011.08.006
Boas, David
September 1, 2001 / Vol. 26, No. 17 / OPTICS LETTERS 1335. Perturbation Monte Carlo methods to solve... derivatives with respect to perturbations in background tissue optical properties are computed. We then feed this derivative information to a nonlinear optimization algorithm to determine the optical properties of the tissue heterogeneity under...
ATLAS experiment. Figure 48: Monte Carlo simulation of a t-tbar event in a layout for the Inner Detector of the ATLAS experiment with four layers of silicon pixel detectors and five layers of silicon strip detectors. Group leader: M
Hiatt, Matthew Torgerson
2009-06-02T23:59:59.000Z
The code links three external codes together to create these libraries. It creates an MCNP (Monte Carlo N-Particle) model of the reactor and calculates the zone-averaged scalar flux in various tally regions and a core-averaged scalar flux tallied by energy...
Einstein, Theodore L.
PHYSICAL REVIEW B 83, 245414 (2011). Monte Carlo study of the honeycomb structure of anthraquinone. Using a model, we demonstrate a mechanism for the spontaneous formation of the honeycomb structure of anthraquinone. Pawin et al. observed the spontaneous formation of honeycomb structures of anthraquinone (AQ...
Monte Carlo Simulations of Small Sulfuric Acid-Water Clusters S. M. Kathmann,* and B. N. Hale,*
Hale, Barbara N.
Applications range from vapor-to-liquid nucleation to acid rain formation and ozone depletion mechanisms; Doyle's early work predicted... In final form: August 7, 2001. Effective atom-atom potentials are developed for binary sulfuric acid-water clusters.
Quantum Monte Carlo Study of the Optical and Diffusive Properties of the Vacancy Defect in Diamond
Kent, Paul
The vacancy defect in diamond is associated with radiation damage. It is also very interesting scientifically, with a wide variety of physical phenomena. The best-known optical transition, GR1 at 1.673 eV [5], long associated with the neutral vacancy, cannot...
Bendele, Travis Henry
2013-02-22T23:59:59.000Z
A honeycomb probe was designed to measure the optical properties of biological tissues using the single Monte Carlo method. The ongoing project is intended to be a multi-wavelength, real-time, in-vivo technique to detect breast cancer. Preliminary...
Helton, J.C.; Shiver, A.W.
1994-10-01T23:59:59.000Z
A Monte Carlo procedure for the construction of complementary cumulative distribution functions (CCDFs) for comparison with the US Environmental Protection Agency (EPA) release limits for radioactive waste disposal (40 CFR 191, Subpart B) is described and illustrated with results from a recent performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP). The Monte Carlo procedure produces CCDF estimates similar to those obtained with stratified sampling in several recent PAs for the WIPP. The advantages of the Monte Carlo procedure over stratified sampling include increased resolution in the calculation of probabilities for complex scenarios involving drilling intrusions and better use of the necessarily limited number of mechanistic calculations that underlie CCDF construction.
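At its core, a CCDF of this kind is an exceedance-probability estimate over the sampled futures. A minimal sketch (the release values are illustrative, not WIPP data):

```python
def empirical_ccdf(samples, level):
    """Estimated probability that a normalized release exceeds `level`,
    from equally weighted Monte Carlo samples."""
    return sum(1 for x in samples if x > level) / len(samples)

# Illustrative normalized releases from four hypothetical futures.
releases = [0.1, 0.5, 2.0, 3.0]

# The CCDF curve is this estimator evaluated over a grid of release levels.
curve = [(r, empirical_ccdf(releases, r)) for r in (0.0, 1.0, 2.5)]
```

Stratified sampling replaces the equal weights with stratum probabilities; the Monte Carlo procedure described above keeps the simple estimator but resolves low-probability, high-consequence scenarios by drawing many more futures.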
Lipid droplets fusion in adipocyte differentiated 3T3-L1 cells: A Monte Carlo simulation
Boschi, Federico, E-mail: federico.boschi@univr.it [Department of Neurological and Movement Sciences, University of Verona, Strada Le Grazie 8, 37134 Verona (Italy); Department of Computer Science, University of Verona, Strada Le Grazie 15, 37134 Verona (Italy); Rizzatti, Vanni; Zamboni, Mauro [Department of Medicine, Geriatric Section, University of Verona, Piazzale Stefani 1, 37126 Verona (Italy); Sbarbati, Andrea [Department of Neurological and Movement Sciences, University of Verona, Strada Le Grazie 8, 37134 Verona (Italy)
2014-02-15T23:59:59.000Z
Several human worldwide diseases like obesity, type 2 diabetes, hepatic steatosis, atherosclerosis and other metabolic pathologies are related to the excessive accumulation of lipids in cells. Lipids accumulate in spherical cellular inclusions called lipid droplets (LDs) whose sizes range from fractions of a micrometer to one hundred micrometers in adipocytes. It has been suggested that LDs can grow in size due to a fusion process by which a larger LD is obtained with spherical shape and volume equal to the sum of the progenitors' ones. In this study, the size distribution of two populations of LDs was analyzed in immature and mature (5-days differentiated) 3T3-L1 adipocytes (first and second populations, respectively) after Oil Red O staining. A Monte Carlo simulation of interaction between LDs has been developed in order to quantify the size distribution and the number of fusion events needed to obtain the distribution of the second population size starting from the first one. Four models are presented here based on different kinds of interaction: a surface weighted interaction (R2 Model), a volume weighted interaction (R3 Model), a random interaction (Random Model) and an interaction related to the place where the LDs are born (Nearest Model). The last two models mimic quite well the behavior found in the experimental data. This work represents a first step in developing numerical simulations of the LDs growth process. Due to the complex phenomena involving LDs (absorption, growth through additional neutral lipid deposition in existing droplets, de novo formation and catabolism) the study focuses on the fusion process. The results suggest that, to obtain the observed size distribution, a number of fusion events comparable with the number of LDs themselves is needed. Moreover, the MC approach proves to be a powerful tool for investigating the LDs growth process.
Highlights:
• We evaluated the role of the fusion process in the synthesis of the lipid droplets.
• We compared the size distribution of the lipid droplets in immature and mature cells.
• We used the Monte Carlo simulation approach, simulating ten thousand fusion events.
• Four different interaction models between the lipid droplets were tested.
• The best model, which mimics the experimental measures, was selected.
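A volume-weighted (R3-style) fusion step can be sketched as follows. Only volume conservation on fusion is taken from the text; the selection rule details and radii are illustrative assumptions, not the authors' implementation:

```python
import random

def fuse_step(radii, rng):
    """One fusion event under a volume-weighted (R^3) selection rule:
    two droplets are picked with probability proportional to r^3 and
    replaced by a single droplet of the combined volume."""
    i = rng.choices(range(len(radii)), weights=[r ** 3 for r in radii])[0]
    rest = [k for k in range(len(radii)) if k != i]
    j = rng.choices(rest, weights=[radii[k] ** 3 for k in rest])[0]
    # New radius from volume conservation: r^3 = r_i^3 + r_j^3.
    merged = (radii[i] ** 3 + radii[j] ** 3) ** (1.0 / 3.0)
    return [r for k, r in enumerate(radii) if k not in (i, j)] + [merged]
```

Iterating this step from the measured first-population radii, and counting the steps needed to match the second-population distribution, is the kind of experiment the four models above perform with different selection weights.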
Hardy, J. Jr.; Shore, J.M.
1981-11-01T23:59:59.000Z
The Savannah River Laboratory LTRIIA slightly-enriched uranium-D/sub 2/O critical experiment was analyzed with ENDF/B-IV data and the RCP01 Monte Carlo program, which modeled the entire assembly in explicit detail. The integral parameters delta/sup 25/ and delta/sup 28/ showed good agreement with experiment. However, calculated K/sub eff/ was 2 to 3% low, due primarily to an overprediction of U238 capture. This is consistent with results obtained in similar analyses of the H/sub 2/O-moderated TRX critical experiments. In comparisons with the VIM and MCNP2 Monte Carlo programs, good agreement was observed for calculated reaction rates in the B/sup 2/=0 cell.
Beer, M.; Rose, P.
1981-04-01T23:59:59.000Z
The National Nuclear Data Center is continuing its program to improve the nuclear data base used as input for commercial reactor analysis and design. In the most recent phase of this project the Monte Carlo program SAM-CE, developed by the Mathematical Applications Group, Inc. (MAGI), was made operational at BNL. This program was implemented on the BNL-CDC-7600 Computer, and also on the PDP-10 in-house computer. The NNDC made operational and developed techniques for processing ENDF/B-V cross sections for SAM-CE. A limited ENDF/B-V based library was produced. Use of the SAM-CE program in thermal reactor problems was validated using detailed comparisons of results with other Monte Carlo codes such as RECAP, RCP01 and VIM as well as with experimental data.
Shulenburger, Luke; Desjarlais, M P
2015-01-01T23:59:59.000Z
Motivated by the disagreement between recent diffusion Monte Carlo calculations and experiments on the phase transition pressure between the ambient and beta-Sn phases of silicon, we present a study of the HCP to BCC phase transition in beryllium. This lighter element provides an opportunity for directly testing many of the approximations required for calculations on silicon and may suggest a path towards increasing the practical accuracy of diffusion Monte Carlo calculations of solids in general. We demonstrate that the single largest approximation in these calculations is the pseudopotential approximation. After removing this we find excellent agreement with experiment for the ambient HCP phase and results similar to careful calculations using density functional theory for the phase transition pressure.
C. E. Berger; E. R. Anderson; J. E. Drut
2014-10-29T23:59:59.000Z
We determine the ground-state energy and Tan's contact of attractively interacting few-fermion systems in a one-dimensional harmonic trap, for a range of couplings and particle numbers. To this end, we implement a new lattice Monte Carlo approach based on a non-uniform discretization of space, defined via Gauss-Hermite quadrature points and weights. This particular coordinate basis is natural for systems in harmonic traps, and it yields a position-dependent coupling and a corresponding non-uniform Hubbard-Stratonovich transformation. The resulting path integral is performed with hybrid Monte Carlo as a proof of principle for calculations at finite temperature and in higher dimensions.
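The non-uniform spatial grid described here is built from Gauss-Hermite nodes. A minimal sketch of generating the points and weights (NumPy is an implementation choice of this sketch, not necessarily of the authors):

```python
import math
import numpy as np

# Gauss-Hermite nodes and weights: the nodes form a non-uniform "lattice"
# clustered where the Gaussian weight exp(-x^2) of the harmonic trap lives.
x, w = np.polynomial.hermite.hermgauss(8)

# Sanity check on the node/weight pair: the rule integrates polynomials
# against exp(-x^2) exactly, e.g. integral of x^2 exp(-x^2) = sqrt(pi)/2.
integral = float(np.sum(w * x ** 2))
```

In the approach above, fields and Hubbard-Stratonovich variables live on these nodes, and the quadrature weights enter the action, which is what makes the coupling position-dependent.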
Marcus Mueller; Andreas Werner
1997-09-11T23:59:59.000Z
We investigate interfacial properties between two highly incompatible polymers of different stiffness. The extensive Monte Carlo simulations of the binary polymer melt yield detailed interfacial profiles and the interfacial tension via an analysis of capillary fluctuations. We extract an effective Flory-Huggins parameter from the simulations, which is used in self-consistent field calculations. These take due account of the chain architecture via a partial enumeration of the single chain partition function, using chain conformations obtained by Monte Carlo simulations of the pure phases. The agreement between the simulations and self-consistent field calculations is almost quantitative; however, we find deviations from the predictions of the Gaussian chain model for high incompatibilities or large stiffness. The interfacial width at very high incompatibilities is smaller than the prediction of the Gaussian chain model, and decreases upon increasing the statistical segment length of the semi-flexible component.
Monte Carlo and Analytical Calculation of Lateral Deflection of Proton Beams in Homogeneous Targets
Pazianotto, Mauricio T.; Inocente, Guilherme F.; Silva, Danilo Anacleto A. d; Hormaza, Joel M. [Departamento de Fisica e Biofisica-Instituto de Biociencias, Universidade Estadual Paulista 'Julio de Mesquita Filho'-Botucatu-SP, Brasil and Distrito de Rubiao Junior s/no 18608-000 Botucatu, SP (Brazil)
2010-05-21T23:59:59.000Z
Proton radiation therapy is a precise form of radiation therapy, but the avoidance of damage to critical normal tissues and the prevention of geographical tumor misses require accurate knowledge of the dose delivered to the patient, and verification of the patient's position demands a precise imaging technique. In proton therapy facilities, X-ray Computed Tomography (xCT) is the preferred technique for the treatment planning of patients. This situation has been changing nowadays with the development of proton accelerators for health care and the increase in the number of treated patients. In fact, protons could be more efficient than xCT for this task. One essential difficulty in pCT image reconstruction systems comes from the scattering of the protons inside the target due to the numerous small-angle deflections by nuclear Coulomb fields. The purpose of this study is the comparison of an analytical formulation for the determination of beam lateral deflection, based on Moliere's theory and Rutherford scattering, with Monte Carlo calculations by the SRIM 2008 and MCNPX codes.
Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods
Godfrey, Andrew T [ORNL]; Gehin, Jess C [ORNL]; Bekar, Kursat B [ORNL]; Celik, Cihangir [ORNL]
2014-01-01T23:59:59.000Z
The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.
Thermodynamics and quark susceptibilities: a Monte-Carlo approach to the PNJL model
M. Cristoforetti; T. Hell; B. Klein; W. Weise
2010-02-11T23:59:59.000Z
The Monte-Carlo method is applied to the Polyakov-loop extended Nambu--Jona-Lasinio (PNJL) model. This leads beyond the saddle-point approximation in a mean-field calculation and introduces fluctuations around the mean fields. We study the impact of fluctuations on the thermodynamics of the model, both in the case of pure gauge theory and including two quark flavors. In the two-flavor case, we calculate the second-order Taylor expansion coefficients of the thermodynamic grand canonical partition function with respect to the quark chemical potential and present a comparison with extrapolations from lattice QCD. We show that the introduction of fluctuations produces only small changes in the behavior of the order parameters for chiral symmetry restoration and the deconfinement transition. On the other hand, we find that fluctuations are necessary in order to reproduce lattice data for the flavor non-diagonal quark susceptibilities. Of particular importance are pion fields, the contribution of which is strictly zero in the saddle point approximation.
Feasibility Study of Neutron Dose for Real Time Image Guided Proton Therapy: A Monte Carlo Study
Kim, Jin Sung; Kim, Daehyun; Shin, EunHyuk; Chung, Kwangzoo; Cho, Sungkoo; Ahn, Sung Hwan; Ju, Sanggyu; Chung, Yoonsun; Jung, Sang Hoon; Han, Youngyih
2015-01-01T23:59:59.000Z
Two full rotating gantries with different nozzles (Multipurpose nozzle with MLC, Scanning Dedicated nozzle) and a conventional cyclotron system are installed and under commissioning for various proton treatment options at Samsung Medical Center in Korea. The purpose of this study is to investigate the neutron dose equivalent per therapeutic dose, H/D, to x-ray imaging equipment under various treatment conditions with Monte Carlo simulation. First, we investigated H/D with various modifications of the beam line devices (Scattering, Scanning, Multi-leaf collimator, Aperture, Compensator) at isocenter and at 20, 40, and 60 cm distance from isocenter, and compared with other research groups. Next, we investigated the neutron dose at the x-ray equipment used for real-time imaging under various treatment conditions. Our investigation showed 0.07-0.19 mSv/Gy at the x-ray imaging equipment according to various treatment options, and interestingly a 50% neutron dose reduction effect of the flat panel detector was observed due to multi-lea...
RESPONSE FUNCTION OF THE BGO AND NaI(Tl) DETECTORS USING MONTE CARLO SIMULATIONS.
Orion, I.; Wielopolski, L.
2001-01-31T23:59:59.000Z
The high efficiency of the BGO detectors makes them very attractive candidates to replace NaI(Tl) detectors, which are widely used in studies of body composition. In this work, the response functions of the BGO and NaI(Tl) detectors were determined at 0.662, 4.4, and 10.0 MeV using three different Monte Carlo codes: EGS4, MCNP, and PHOTON. These codes differ in their input files and transport calculations, and were used to verify the internal consistency of the setup and of the input data. The energy range of 0.662 to 10 MeV was chosen to cover energies of interest in body composition studies. The superior efficiency of the BGO detectors has to be weighed against their inferior resolution and their higher price relative to NaI detectors. Because the price of the BGO detectors strongly depends on the size of the crystal, its optimization is an important component in the design of the entire system.
Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons
Mei, S., E-mail: smei4@wisc.edu; Knezevic, I., E-mail: knezevic@engr.wisc.edu [Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Maurer, L. N. [Department of Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Aksamija, Z. [Department of Electrical and Computer Engineering, University of Massachusetts-Amherst, Amherst, Massachusetts 01003 (United States)
2014-10-28T23:59:59.000Z
We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
Calculated criticality for {sup 235}U/graphite systems using the VIM Monte Carlo code
Collins, P.J.; Grasseschi, G.L.; Olsen, D.N. (Argonne National Lab.-West, Idaho Falls (United States)); Finck, P.J. (Argonne National Lab., IL (United States))
1992-01-01T23:59:59.000Z
Calculations for highly enriched uranium and graphite systems gained renewed interest recently for the new production modular high-temperature gas-cooled reactor (MHTGR). Experiments to validate the physics calculations for these systems are being prepared for the Transient Reactor Test Facility (TREAT) reactor at Argonne National Laboratory (ANL-West) and in the Compact Nuclear Power Source facility at Los Alamos National Laboratory. The continuous-energy Monte Carlo code VIM, or equivalently the MCNP code, can utilize fully detailed models of the MHTGR and serve as benchmarks for the approximate multigroup methods necessary in full reactor calculations. Validation of these codes and their associated nuclear data did not exist for highly enriched {sup 235}U/graphite systems. Experimental data, used in development of more approximate methods, dates back to the 1960s. The authors have selected two independent sets of experiments for calculation with the VIM code. The carbon-to-uranium (C/U) ratios encompass the range of 2,000, representative of the new production MHTGR, to the ratio of 10,000 in the fuel of TREAT. Calculations used the ENDF/B-V data.
Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations
Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory
2010-12-15T23:59:59.000Z
It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle.' Previous attempts at prescribing a maximum value of the time-step size {Delta}{sub t} that is sufficient to eliminate these violations have recommended a {Delta}{sub t} that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size {Delta}{sub x}. This explicitly demonstrates that the effect of coarsening {Delta}{sub x} is to reduce the limitation on {Delta}{sub t}, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent timestep restriction can impact IMC solution algorithms.
ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01T23:59:59.000Z
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend their capabilities to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes and (2) conversion to Fortran 90. The general user-friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
A BAYESIAN MONTE CARLO ANALYSIS OF THE M-{sigma} RELATION
Morabito, Leah K.; Dai Xinyu, E-mail: morabito@nhn.ou.edu, E-mail: dai@nhn.ou.edu [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, Norman, OK 73019 (United States)
2012-10-01T23:59:59.000Z
We present an analysis of selection biases in the M{sub bh}-{sigma} relation using Monte Carlo simulations, including the sphere-of-influence resolution selection bias and a selection bias in the velocity dispersion distribution. We find that the sphere-of-influence selection bias has a significant effect on the measured slope of the M{sub bh}-{sigma} relation, modeled as {beta}{sub intrinsic} = -4.69 + 2.22{beta}{sub measured}, where the measured slope is shallower than the model slope in the parameter range of {beta} > 4, with larger corrections for steeper model slopes. Therefore, when the sphere of influence is used as a criterion to exclude unreliable measurements, it also introduces a selection bias that needs to be modeled to restore the intrinsic slope of the relation. We find that the selection effect due to the velocity dispersion distribution of the sample, which might not follow the overall distribution of the population, is not important for slopes of {beta} {approx} 4-6 of a logarithmically linear M{sub bh}-{sigma} relation, but it could impact studies that measure shallower (e.g., {beta} < 4) slopes. Combining the selection biases in velocity dispersions and the sphere-of-influence cut, we find that the uncertainty of the slope is larger than that obtained without modeling these effects, and we estimate an intrinsic slope of {beta} = 5.28{sup +0.84}{sub -0.55}.
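The flattening effect of a resolution cut of this kind can be reproduced with a toy Monte Carlo: generate a logarithmically linear M-sigma relation with intrinsic scatter, discard galaxies whose angular sphere of influence (proportional to M/{sigma}{sup 2} divided by distance) falls below a resolution threshold, and refit the slope. A minimal sketch in which all distributions, thresholds, and numbers are purely illustrative, not the paper's sample or fitted coefficients:

```python
import numpy as np

def toy_msigma_bias(beta_true=5.0, n=20000, seed=0):
    """Toy sphere-of-influence selection bias: generate an M-sigma relation
    with intrinsic slope beta_true and log-normal scatter, keep only objects
    whose angular sphere of influence (~ M / sigma^2 / distance, in logs)
    exceeds a resolution limit, and refit the slope on the survivors.
    Every number here is illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    log_sigma = rng.uniform(np.log10(60.0), np.log10(300.0), n)    # km/s
    log_m = 8.2 + beta_true * (log_sigma - np.log10(200.0)) + rng.normal(0.0, 0.4, n)
    log_d = rng.uniform(0.0, 2.0, n)                                # distance, 1-100 Mpc
    # Resolution cut: log of (M / sigma^2 / d) must exceed a fixed threshold.
    keep = (log_m - 2.0 * log_sigma - log_d) > 1.6
    slope, _ = np.polyfit(log_sigma[keep], log_m[keep], 1)
    return slope
```

Because the selection threshold rises with {sigma} more slowly than the intrinsic relation, the cut removes mostly low-mass objects at low {sigma}, so the refit slope should come out shallower than the intrinsic one, in the same qualitative direction as the bias described above.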
Vrugt, Jasper A [Los Alamos National Laboratory; Hyman, James M [Los Alamos National Laboratory; Robinson, Bruce A [Los Alamos National Laboratory; Higdon, Dave [Los Alamos National Laboratory; Ter Braak, Cajo J F [NETHERLANDS; Diks, Cees G H [UNIV OF AMSTERDAM
2008-01-01T23:59:59.000Z
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multimodal search problems.
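The core idea, using differences between randomly chosen chains to shape the proposal, can be illustrated with a bare-bones differential-evolution Metropolis step (ter Braak's DE-MC, the precursor of DREAM; DREAM's randomized subspace sampling and adaptive crossover are omitted). A minimal sketch, not the authors' implementation:

```python
import numpy as np

def de_mc_step(chains, log_post, gamma=None, eps=1e-6, rng=None):
    """One differential-evolution Metropolis update of every chain.

    Proposal for chain i: x_i + gamma * (x_a - x_b) + small noise, where
    a and b are two other randomly chosen chains. The population of chains
    thus supplies the scale and orientation of the proposal automatically."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = chains.shape
    gamma = 2.38 / np.sqrt(2.0 * d) if gamma is None else gamma  # standard DE-MC choice
    new = chains.copy()
    for i in range(n):
        a, b = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        prop = chains[i] + gamma * (chains[a] - chains[b]) + eps * rng.standard_normal(d)
        if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
            new[i] = prop
    return new

# Sample a 2D standard normal posterior with 8 parallel chains.
rng = np.random.default_rng(1)
logp = lambda x: -0.5 * np.sum(x**2)
chains = rng.standard_normal((8, 2))
for _ in range(2000):
    chains = de_mc_step(chains, logp, rng=rng)
```

This updates each chain against a frozen snapshot of the population; DE-MC and DREAM layer ergodicity safeguards and adaptation on top of this basic move.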
Monte Carlo analysis of a monolithic interconnected module with a back surface reflector
Ballinger, C.T.; Charache, G.W. [Lockheed Martin Corp., Schenectady, NY (United States); Murray, C.S. [Bettis Atomic Power Lab., West Mifflin, PA (United States)
1998-10-01T23:59:59.000Z
Recently, the photon Monte Carlo code RACER-X was modified to include wavelength-dependent absorption coefficients and indices of refraction. This work was done in an effort to make the code applicable to a wider range of problems. These new features make RACER-X useful for analyzing devices like monolithic interconnected modules (MIMs), which have etched surface features and incorporate a back surface reflector (BSR) for spectral control. A series of calculations was performed on various MIM structures to determine the impact that surface features and component reflectivities have on spectral utilization. The traditional concern of cavity photonics is replaced with intra-cell photonics in the MIM design. Like the cavity photonics problems previously discussed, small changes in optical properties and/or geometry can lead to large changes in spectral utilization. The calculations show that seemingly innocuous surface features (e.g., trenches and grid lines) can significantly reduce the spectral utilization due to the non-normal incident photon flux. Photons that enter the device through a trench edge are refracted onto a trajectory from which they will not escape. This reduces the number of reflected below-bandgap photons that return to the radiator and reduces the spectral utilization. In addition, trenches expose a lateral conduction layer in this particular series of calculations, which increases the absorption of above-bandgap photons in inactive material.
von Wittenau, A; Aufderheide, M B; Henderson, G L
2010-05-07T23:59:59.000Z
Given the cost and lead-times involved in high-energy proton radiography, it is prudent to model proposed radiographic experiments to see if the images predicted would return useful information. We recently modified our raytracing transmission radiography modeling code HADES to perform simplified Monte Carlo simulations of the transport of protons in a proton radiography beamline. Beamline objects include the initial diffuser, vacuum magnetic fields, windows, angle-selecting collimators, and objects described as distorted 2D (planar or cylindrical) meshes or as distorted 3D hexahedral meshes. We present an overview of the algorithms used for the modeling and code timings for simulations through typical 2D and 3D meshes. We next calculate expected changes in image blur as scattering materials are placed upstream and downstream of a resolution test object (a 3 mm thick sheet of tantalum, into which 0.4 mm wide slits have been cut), and as the current supplied to the focusing magnets is varied. We compare and contrast the resulting simulations with the results of measurements obtained at the 800 MeV Los Alamos LANSCE Line-C proton radiography facility.
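The dominant physics in such a simplified proton-transport model is small-angle multiple Coulomb scattering. A minimal sketch of how a scatterer upstream of a drift blurs the image, using the standard PDG Highland parameterization for the RMS scattering angle and ignoring the magnetic imaging optics entirely; the beam parameters are back-of-the-envelope values for 800 MeV protons, not HADES inputs:

```python
import math
import random

def highland_theta0(beta, pc_mev, x_over_x0, z=1):
    """RMS projected multiple-scattering angle (radians) from the PDG
    Highland parameterization, for a scatterer of thickness x/X0
    radiation lengths traversed by a particle of charge z."""
    return (13.6 / (beta * pc_mev)) * z * math.sqrt(x_over_x0) \
           * (1.0 + 0.038 * math.log(x_over_x0))

def blur_at_image(n=20000, drift_cm=100.0, x_over_x0=0.05, seed=0):
    """Monte Carlo sketch: sample Gaussian scattering angles in a thin
    scatterer and drift to the image plane; returns the RMS transverse
    blur in cm. For 800 MeV kinetic-energy protons, pc ~ 1463 MeV and
    beta ~ 0.84 (from the proton mass); no lens system is modeled."""
    rng = random.Random(seed)
    theta0 = highland_theta0(0.84, 1463.0, x_over_x0)
    return (sum((rng.gauss(0.0, theta0) * drift_cm) ** 2 for _ in range(n)) / n) ** 0.5
```

In a real magnetic-lens beamline the angle-selecting collimators and focusing magnets reshape this blur, which is why the full simulation described above is needed.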
Byun, H. S.; Pirbadian, S.; Nakano, Aiichiro; Shi, Liang; El-Naggar, Mohamed Y.
2014-09-05T23:59:59.000Z
Microorganisms overcome the considerable hurdle of respiring extracellular solid substrates by deploying large multiheme cytochrome complexes that form 20 nanometer conduits to traffic electrons through the periplasm and across the cellular outer membrane. Here we report the first kinetic Monte Carlo simulations and single-molecule scanning tunneling microscopy (STM) measurements of the Shewanella oneidensis MR-1 outer membrane decaheme cytochrome MtrF, which can perform the final electron transfer step from cells to minerals and microbial fuel cell anodes. We find that the calculated electron transport rate through MtrF is consistent with previously reported in vitro measurements of the Shewanella Mtr complex, as well as in vivo respiration rates on electrode surfaces assuming a reasonable (experimentally verified) coverage of cytochromes on the cell surface. The simulations also reveal a rich phase diagram in the overall electron occupation density of the hemes as a function of electron injection and ejection rates. Single molecule tunneling spectroscopy confirms MtrF's ability to mediate electron transport between an STM tip and an underlying Au(111) surface, but at rates higher than expected from previously calculated heme-heme electron transfer rates for solvated molecules.
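The kind of kinetic Monte Carlo calculation described, electrons hopping between discrete redox sites with injection and ejection at the termini, can be sketched as a Gillespie-style simulation of a single linear chain. The rates and chain length below are placeholders, not the computed MtrF heme-to-heme rates:

```python
import numpy as np

def kmc_chain_flux(n_sites=10, k_in=1.0, k_out=1.0, k_hop=10.0, steps=20000, seed=0):
    """Gillespie kinetic Monte Carlo of electron hopping along a linear
    chain of redox sites with site exclusion: electrons enter at site 0
    (rate k_in, if empty), hop between adjacent sites (rate k_hop, if the
    target is empty), and exit from the last site (rate k_out).
    Returns the mean electron flux (ejections per unit time).
    All rates are illustrative, in arbitrary units."""
    rng = np.random.default_rng(seed)
    occ = np.zeros(n_sites, dtype=bool)
    t, ejected = 0.0, 0
    for _ in range(steps):
        events, rates = [], []
        if not occ[0]:
            events.append(("in", 0)); rates.append(k_in)
        if occ[-1]:
            events.append(("out", n_sites - 1)); rates.append(k_out)
        for i in range(n_sites - 1):
            if occ[i] and not occ[i + 1]:
                events.append(("fwd", i)); rates.append(k_hop)
            if occ[i + 1] and not occ[i]:
                events.append(("bwd", i)); rates.append(k_hop)
        total = float(sum(rates))
        t += rng.exponential(1.0 / total)              # Gillespie waiting time
        kind, i = events[rng.choice(len(events), p=np.asarray(rates) / total)]
        if kind == "in":
            occ[0] = True
        elif kind == "out":
            occ[-1] = False
            ejected += 1
        elif kind == "fwd":
            occ[i], occ[i + 1] = False, True
        else:
            occ[i], occ[i + 1] = True, False
    return ejected / t
```

Sweeping k_in and k_out in such a model is what maps out the heme occupation-density behavior as a function of injection and ejection rates mentioned above.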
Noncovalent Interactions by Quantum Monte Carlo: A Speedup by a Smart Basis Set Reduction
Dubecký, Matúš
2015-01-01T23:59:59.000Z
The fixed-node diffusion Monte Carlo (FN-DMC) method provides a promising alternative to the commonly used coupled-cluster (CC) methods in the domain of benchmark noncovalent interaction energy calculations, mainly because of the low-order polynomial CPU-cost scaling of FN-DMC and a favorable FN error cancellation that leads to benchmark interaction energies accurate to 0.1 kcal/mol. While it is empirically accepted that FN-DMC results depend weakly on the one-particle basis sets used to expand the guiding functions, the limits of this assumption remain elusive. Our recent work indicates that augmented triple-zeta basis sets are sufficient to achieve a benchmark level of 0.1 kcal/mol. Here we report on the possibility of significantly truncating the one-particle basis sets without any visible bias in the overall accuracy of the final FN-DMC energy differences. The approach is tested on a set of seven small noncovalent closed-shell complexes including a water dimer. The reported findings enable cheaper high-quali...
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
Harrisson, G.; Marleau, G. [Inst. of Nuclear Engineering, Ecole Polytechnique de Montreal (Canada)
2012-07-01T23:59:59.000Z
The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06, together with different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file, have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use for this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give results most consistent with those of SERPENT. (authors)
Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem
Du, X.; Liu, T.; Ji, W.; Xu, X. G. [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Brown, F. B. [Monte Carlo Codes Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2013-07-01T23:59:59.000Z
Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', meaning that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely degrades performance. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - was used for this study. ARCHER facilitates the development and testing of an MC code based on the vectorized MC algorithm implemented on GPUs using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER{sub GPU} code was designed to solve a neutron eigenvalue problem and was tested on an NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances the warp execution efficiency, the overall simulation is ten times slower than with the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large number of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed. (authors)
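The structural difference between the two algorithms is that a history-based code loops over particles and branches per particle, whereas an event-based code applies one event at a time to an array of all live particles. A toy 1D illustration of the event-based pattern, using NumPy arrays in place of GPU warps (the actual ARCHER kernels are CUDA, and the physics here is a trivial slab-leakage problem, not the neutron eigenvalue problem of the paper):

```python
import numpy as np

def event_based_transport(n=100000, sigma_t=1.0, absorb_prob=0.3, half_len=5.0, seed=0):
    """Event-based toy transport in a 1D rod of half-length half_len
    mean free paths: all live particles undergo the same event (free
    flight, then collision) together, avoiding per-particle branching.
    Returns the leakage fraction."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)                      # positions of all particles
    alive = np.ones(n, dtype=bool)
    leaked = 0
    while alive.any():
        idx = np.flatnonzero(alive)
        # Event 1: free flight for every live particle at once.
        x[idx] += rng.exponential(1.0 / sigma_t, idx.size) * rng.choice([-1.0, 1.0], idx.size)
        escaped = idx[np.abs(x[idx]) > half_len]
        leaked += escaped.size
        alive[escaped] = False
        # Event 2: collision for those still inside; absorbed particles die,
        # scattered ones simply take part in the next flight event.
        idx = np.flatnonzero(alive)
        absorbed = idx[rng.random(idx.size) < absorb_prob]
        alive[absorbed] = False
    return leaked / n
```

The per-event memory traffic visible even in this sketch (gather/scatter through the index arrays) is the same pattern that drives the global-memory-transaction cost identified above.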
Chatterjee, Abhijit [Los Alamos National Laboratory; Voter, Arthur [Los Alamos National Laboratory
2009-01-01T23:59:59.000Z
We develop a variation of the temperature accelerated dynamics (TAD) method, called the p-TAD method, that efficiently generates an on-the-fly kinetic Monte Carlo (KMC) process catalog with control over the accuracy of the catalog. It is assumed that transition state theory is valid. The p-TAD method guarantees that processes relevant at the timescales of interest to the simulation are present in the catalog with a chosen confidence. A confidence measure associated with the process catalog is derived. The dynamics is then studied using the process catalog with the KMC method. Effective accuracy of a p-TAD calculation is derived when a KMC catalog is reused for conditions different from those the catalog was originally generated for. Different KMC catalog generation strategies that exploit the features of the p-TAD method and ensure higher accuracy and/or computational efficiency are presented. The accuracy and the computational requirements of the p-TAD method are assessed. Comparisons to the original TAD method are made. As an example, we study dynamics in sub-monolayer Ag/Cu(110) at the time scale of seconds using the p-TAD method. It is demonstrated that the p-TAD method overcomes several challenges plaguing the conventional KMC method.
Fractal space-times under the microscope: A Renormalization Group view on Monte Carlo data
Martin Reuter; Frank Saueressig
2011-10-24T23:59:59.000Z
The emergence of fractal features in the microscopic structure of space-time is a common theme in many approaches to quantum gravity. In this work we carry out a detailed renormalization group study of the spectral dimension $d_s$ and walk dimension $d_w$ associated with the effective space-times of asymptotically safe Quantum Einstein Gravity (QEG). We discover three scaling regimes where these generalized dimensions are approximately constant for an extended range of length scales: a classical regime where $d_s = d, d_w = 2$, a semi-classical regime where $d_s = 2d/(2+d), d_w = 2+d$, and the UV-fixed point regime where $d_s = d/2, d_w = 4$. On the length scales covered by three-dimensional Monte Carlo simulations, the resulting spectral dimension is shown to be in very good agreement with the data. This comparison also provides a natural explanation for the apparent puzzle posed by the differing short-distance behavior of the spectral dimension reported from Causal Dynamical Triangulations (CDT), Euclidean Dynamical Triangulations (EDT), and Asymptotic Safety.
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y. [Korea Advanced Institute of Science and Technology - KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 305-701 (Korea, Republic of); Yun, S. [Korea Atomic Energy Research Institute - KAERI, 989-111 Daedeok-daero, Yuseong-gu, Daejeon, 305-353 (Korea, Republic of)
2013-07-01T23:59:59.000Z
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross section data generation module was added to the MCNP5 code. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied to the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model. The cross section data were generated using MCNP5. The k{sub eff} and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k{sub eff}, the 9-group MCNP5/DIF3D result has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Abdel-Khalik, Hany S.; Gardner, Robin; Mattingly, John; Sood, Avneet
2014-05-20T23:59:59.000Z
The development of hybrid Monte Carlo-deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10-10 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams
Vandervoort, Eric J., E-mail: evandervoort@toh.on.ca; Cygler, Joanna E. [Department of Medical Physics, The Ottawa Hospital Cancer Centre, The University of Ottawa, Ottawa, Ontario K1H 8L6 (Canada); The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5 (Canada); Department of Physics, Carleton University, Ottawa, Ontario K1S 5B6 (Canada)]; Tchistiakova, Ekaterina [Department of Medical Physics, The Ottawa Hospital Cancer Centre, The University of Ottawa, Ottawa, Ontario K1H 8L6 (Canada); Department of Medical Biophysics, University of Toronto, Ontario M5G 2M9 (Canada); Heart and Stroke Foundation Centre for Stroke Recovery, Sunnybrook Research Institute, University of Toronto, Ontario M4N 3M5 (Canada)]; La Russa, Daniel J. [Department of Medical Physics, The Ottawa Hospital Cancer Centre, The University of Ottawa, Ottawa, Ontario K1H 8L6 (Canada); The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5 (Canada)]
2014-02-15T23:59:59.000Z
Purpose: In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Methods: Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measurements and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Results: Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators, and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 × 5 cm{sup 2}. Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (the minimum percentage of measurements that pass a 2%/2 mm 2D gamma index criterion for any applicator or energy was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a 3-dimensional 3%/2 mm gamma criterion) provided that the steep dose gradient in the depth direction is considered. Conclusions: Clinically acceptable agreement (at the 2%/2 mm level) between the measurements and calculated data for measurements in water is obtained for this dose calculation algorithm. Radiochromic film is a useful tool to evaluate the accuracy of electron MC treatment planning systems in heterogeneous media.
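The gamma index used for these comparisons combines a dose-difference criterion and a distance-to-agreement criterion into a single pass/fail number per point. A minimal 1D sketch (the clinical evaluations above are 2D/3D, and implementations typically interpolate the evaluated distribution; neither refinement is included here):

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta=0.2):
    """1D gamma index with a global dose-difference criterion dd (fraction
    of the reference maximum, 2% here) and distance-to-agreement dta
    (0.2 cm = 2 mm here; positions in cm). For each reference point,
    gamma is the minimum over evaluated points of the combined
    dose/distance metric; gamma <= 1 means the point passes."""
    d_max = d_ref.max()
    gammas = np.empty(len(d_ref))
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        term = ((x_eval - xr) / dta) ** 2 + ((d_eval - dr) / (dd * d_max)) ** 2
        gammas[i] = np.sqrt(term.min())
    return gammas
```

Two identical profiles give gamma = 0 everywhere; a profile shifted by much less than the distance-to-agreement still passes, which is exactly the tolerance the 2%/2 mm criterion is meant to encode.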
Lattice Monte Carlo calculations for unitary fermions in a harmonic trap
Michael G. Endres; David B. Kaplan; Jong-Wan Lee; Amy N. Nicholson
2011-11-03T23:59:59.000Z
We present a new lattice Monte Carlo approach developed for studying large numbers of strongly interacting nonrelativistic fermions, and apply it to a dilute gas of unitary fermions confined to a harmonic trap. Our lattice action is highly improved, with sources of discretization and finite volume errors systematically removed; we are able to demonstrate the expected volume scaling of energy levels of two and three untrapped fermions, and to reproduce the high precision calculations published previously for the ground state energies for N = 3 unitary fermions in a box (to within our 0.3% uncertainty), and for N = 3, . . ., 6 unitary fermions in a harmonic trap (to within our ~ 1% uncertainty). We use this action to determine the ground state energies of up to 70 unpolarized fermions trapped in a harmonic potential on a lattice as large as 64^3 x 72; our approach avoids the use of importance sampling or calculation of a fermion determinant and employs a novel statistical method for estimating observables, allowing us to generate ensembles as large as 10^8 while requiring only relatively modest computational resources.
Tushar Kanti Bose; Jayashree Saha
2015-03-06T23:59:59.000Z
The realization of a spontaneous macroscopic ferroelectric order in fluids of anisotropic mesogens is a topic of both fundamental and technological interest. Recently, we demonstrated that a system of dipolar achiral disklike ellipsoids can exhibit long-sought ferroelectric liquid crystalline phases of dipolar origin. In the present work, extensive off-lattice Monte Carlo simulations are used to investigate the phase behavior of the system under the influence of electrostatic boundary conditions that restrict any global polarization. We find that the system develops strongly ferroelectric slablike domains periodically arranged in an antiferroelectric fashion. Exploring the phase behavior at different dipole strengths, we find the existence of ferroelectric nematic and ferroelectric columnar order inside the domains. For higher dipole strengths, a biaxial phase is also obtained with a similar periodic array of ferroelectric slabs of antiparallel polarization. We have studied the depolarizing effects by using both the Ewald summation and spherical cut-off techniques, and we present and compare the results of these two approaches to treating the depolarizing effects in this anisotropic system. It is explicitly shown that the domain size increases with the system size as a result of considering a longer range of dipolar interactions. The system exhibits pronounced system-size effects for stronger dipolar interactions. The results provide strong evidence for the novel understanding that dipolar interactions are indeed sufficient to produce long-range ferroelectric order in anisotropic fluids.
Monte Carlo Simulations of the Dissolution of Borosilicate Glasses in Near-Equilibrium Conditions
Kerisit, Sebastien N.; Pierce, Eric M.
2012-05-15T23:59:59.000Z
Monte Carlo simulations were performed to investigate the mechanisms of glass dissolution as equilibrium conditions are approached in both static and flow-through conditions. The glasses studied are borosilicate glasses in the compositional range (80-x)% SiO2 (10+x/2)% B2O3 (10+x/2)% Na2O, where 5 < x < 30%. In static conditions, dissolution/condensation reactions lead to the formation, for all compositions studied, of a blocking layer composed of polymerized Si sites with principally 4 connections to nearest Si sites. This layer forms atop the altered glass layer and shows similar composition and density for all glass compositions considered. In flow-through conditions, three main dissolution regimes are observed: at high flow rates, the dissolving glass exhibits a thin alteration layer and congruent dissolution; at low flow rates, a blocking layer is formed as in static conditions but the simulations show that water can occasionally break through the blocking layer causing the corrosion process to resume; and, at intermediate flow rates, the glasses dissolve incongruently with an increasingly deepening altered layer. The simulation results suggest that, in geological disposal environments, small perturbations or slow flows could be enough to prevent the formation of a permanent blocking layer.
Intra-Globular Structures in Multiblock Copolymer Chains from a Monte Carlo Simulation
Krzysztof Lewandowski; Michal Banaszak
2014-10-16T23:59:59.000Z
Multiblock copolymer chains in implicit nonselective solvents are studied by a Monte Carlo method which employs a parallel tempering algorithm. Chains consisting of 120 $A$ and 120 $B$ monomers, arranged in three distinct microarchitectures: $(10-10)_{12}$, $(6-6)_{20}$, and $(3-3)_{40}$, collapse to globular states upon cooling, as expected. By varying both the reduced temperature $T^*$ and the compatibility between monomers $\omega$, numerous intra-globular structures are obtained: diclusters (handshake, spiral, torus with a core, etc.), triclusters, and $n$-clusters with $n>3$ (lamellar and other), which are reminiscent of the block copolymer nanophases for spherically confined geometries. Phase diagrams for various chains in the $(T^*, \omega)$-space are mapped. The structure factor $S(k)$, for a selected microarchitecture and $\omega$, is calculated. Since $S(k)$ can be measured in scattering experiments, it can be used to relate the simulation results to experiment. Self-assembly in these systems is interpreted in terms of competition between minimization of the interfacial area separating different types of monomers and minimization of contacts between chain and solvent. Finally, the relevance of this model to protein folding is addressed.
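Parallel tempering, the sampler used here, runs replicas at several temperatures and swaps neighboring replicas with the standard Metropolis swap probability min(1, exp((beta_i - beta_j)(E_i - E_j))), so low-temperature replicas inherit configurations that have crossed barriers at high temperature. A self-contained sketch on a continuous double-well toy energy rather than the lattice polymer model of the study:

```python
import math
import random

def parallel_tempering(energy, betas, n_sweeps=5000, step=0.5, seed=0):
    """Minimal parallel tempering: one Metropolis walker per inverse
    temperature, plus neighbor swaps accepted with probability
    min(1, exp((b_i - b_j) * (E_i - E_j))), which preserves the joint
    Boltzmann distribution. The move set is a toy, not the polymer moves."""
    rng = random.Random(seed)
    x = [rng.uniform(-2.0, 2.0) for _ in betas]
    for _ in range(n_sweeps):
        for i, b in enumerate(betas):          # local Metropolis updates
            prop = x[i] + rng.uniform(-step, step)
            if rng.random() < math.exp(min(0.0, -b * (energy(prop) - energy(x[i])))):
                x[i] = prop
        i = rng.randrange(len(betas) - 1)      # one neighbor-swap attempt
        de = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
        if rng.random() < math.exp(min(0.0, de)):
            x[i], x[i + 1] = x[i + 1], x[i]
    return x

# Asymmetric double well: minima near x = -1 and x = +1.
E = lambda x: (x * x - 1.0) ** 2 - 0.3 * x
replicas = parallel_tempering(E, betas=[0.2, 1.0, 5.0, 20.0])
```

The coldest replica ends near one of the wells even though a plain low-temperature walker would struggle to cross the barrier; the same mechanism lets the polymer globules reach competing intra-globular structures upon cooling.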
Hsiao-Ping Hsu; Bernd A. Berg; Peter Grassberger
2004-08-26T23:59:59.000Z
Treating the ambient water realistically is one of the main difficulties in applying Monte Carlo methods to protein folding. The solvent-accessible area method, a popular method for treating water implicitly, is investigated by means of Metropolis simulations of the brain peptide Met-Enkephalin. For the phenomenological energy function ECEPP/2, nine atomic solvation parameter (ASP) sets proposed by previous authors are studied. The simulations are compared with each other, with simulations using a distance-dependent electrostatic permittivity $\epsilon(r)$, and with vacuum simulations ($\epsilon = 2$). Parallel tempering and a recently proposed biased Metropolis technique are employed and their performances are evaluated. The measured observables include energy and dihedral probability densities (pds), integrated autocorrelation times, and acceptance rates. Two of the ASP sets turn out to be unsuitable for these simulations. For all other sets, selected configurations are minimized in search of the global energy minima. Unique minima are found for the vacuum and the $\epsilon(r)$ systems, but for none of the ASP models. Other observables show a remarkable dependence on the ASPs. In particular, autocorrelation times vary dramatically with the ASP parameters. Three ASP sets have much smaller autocorrelations at 300 K than the vacuum simulations, opening the possibility that simulations can be sped up vastly by judiciously choosing details of the force
Monte Carlo simulation of the data acquisition chain of scintillation detectors
Binda, F.; Ericsson, G.; Hellesen, C.; Hjalmarsson, A.; Eriksson, J.; Skiba, M.; Conroy, S.; Weiszflog, M. [Uppsala University, Department of Physics and Astronomy, Division of Applied Nuclear Physics, 75120 Uppsala (Sweden)
2014-08-21T23:59:59.000Z
The performance of a detector can be strongly affected by the instrumentation used to acquire the data. The ability to anticipate how the acquisition chain will affect the signal can help in finding the best solution among different set-ups. In this work we developed a Monte Carlo code that simulates the effect of the various components of a digital data acquisition (DAQ) system applied to scintillation detectors. The components included in the model are: the scintillator, the photomultiplier tube (PMT), the signal cable, and the digitizer. We benchmarked the code against real data acquired with an NE213 scintillator, comparing simulated and real signal pulses induced by gamma-ray interactions. We then studied the dependence of the energy resolution of a pulse height spectrum (PHS) on the sampling frequency and the bit resolution of the digitizer. We found that increasing the sampling frequency and the bit resolution beyond certain values improves the performance of the system only marginally. The method can be applied to the study of various detector systems relevant for nuclear techniques, such as fusion diagnostics.
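The digitizer part of such a model reduces to two operations: sampling the analog pulse at the ADC clock and rounding each sample to the nearest quantization level. A minimal sketch of how both limit the accuracy of an integrated pulse (the exponential pulse shape and 1 V ADC range are illustrative assumptions, not the NE213/PMT model of the paper):

```python
import numpy as np

def _trapezoid(y, t):
    """Trapezoidal integral of samples y at times t."""
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(t)))

def pulse_area_error(fs_ghz=0.5, bits=10, tau_ns=30.0, amp=0.8):
    """Sample an ideal exponential scintillator pulse at fs (GHz), quantize
    with an n-bit ADC over a 1 V full scale, and return the relative error
    in the integrated pulse area versus a densely sampled 'analog' truth."""
    pulse = lambda t: amp * np.exp(-t / tau_ns)
    t_fine = np.arange(0.0, 300.0, 0.01)               # dense reference grid (ns)
    true_area = _trapezoid(pulse(t_fine), t_fine)
    t_adc = np.arange(0.0, 300.0, 1.0 / fs_ghz)        # ADC sampling instants (ns)
    lsb = 1.0 / 2 ** bits                              # quantization step, 1 V range
    samples = np.round(pulse(t_adc) / lsb) * lsb       # ADC rounding
    return abs(_trapezoid(samples, t_adc) - true_area) / true_area
```

Raising fs_ghz or bits shrinks the error, but with diminishing returns, mirroring the saturation behavior reported above.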
MONTE CARLO SIMULATIONS OF NONLINEAR PARTICLE ACCELERATION IN PARALLEL TRANS-RELATIVISTIC SHOCKS
Ellison, Donald C.; Warren, Donald C. [Physics Department, North Carolina State University, Box 8202, Raleigh, NC 27695 (United States); Bykov, Andrei M., E-mail: don_ellison@ncsu.edu, E-mail: ambykov@yahoo.com [Ioffe Institute for Physics and Technology, 194021 St. Petersburg (Russian Federation)
2013-10-10T23:59:59.000Z
We present results from a Monte Carlo simulation of a parallel collisionless shock undergoing particle acceleration. Our simulation, which contains parameterized scattering and a particular thermal leakage injection model, calculates the feedback between accelerated particles ahead of the shock, which influence the shock precursor and 'smooth' the shock, and thermal particle injection. We show that there is a transition between nonrelativistic shocks, where the acceleration efficiency can be extremely high and the nonlinear compression ratio can be substantially greater than the Rankine-Hugoniot value, and fully relativistic shocks, where diffusive shock acceleration is less efficient and the compression ratio remains at the Rankine-Hugoniot value. This transition occurs in the trans-relativistic regime and, for the particular parameters we use, occurs around a shock Lorentz factor {gamma}{sub 0} = 1.5. We also find that nonlinear shock smoothing dramatically reduces the acceleration efficiency presumed to occur with large-angle scattering in ultra-relativistic shocks. Our ability to seamlessly treat the transition from ultra-relativistic to trans-relativistic to nonrelativistic shocks may be important for evolving relativistic systems, such as gamma-ray bursts and Type Ibc supernovae. We expect a substantial evolution of shock-accelerated spectra during this transition, from soft early on to much harder when the blast-wave shock becomes nonrelativistic.
MONTE CARLO SIMULATIONS OF THE PHOTOSPHERIC EMISSION IN GAMMA-RAY BURSTS
Begue, D.; Siutsou, I. A.; Vereshchagin, G. V. [University of Roma ''Sapienza'', I-00185, p.le A. Moro 5, Rome (Italy)
2013-04-20T23:59:59.000Z
We studied the decoupling of photons from ultra-relativistic spherically symmetric outflows expanding with constant velocity by means of Monte Carlo simulations. For outflows of finite width we confirm the existence of two regimes, photon-thick and photon-thin, introduced recently by Ruffini et al. (RSV). The probability density function of the last scattering of photons is shown to be very different in these two cases. We also obtained spectra as well as light curves. In the photon-thick case, the time-integrated spectrum is much broader than the Planck function and its shape is well described by the fuzzy photosphere approximation introduced by RSV. In the photon-thin case, we confirm the crucial role of photon diffusion: the probability density of decoupling has a maximum near the diffusion radius, well below the photosphere. The time-integrated spectrum of the photon-thin case has a Band shape that is produced while the outflow is optically thick, and its peak is formed at the diffusion radius.
Tutt, Teresa Elizabeth
2009-05-15T23:59:59.000Z
[Thesis front-matter fragment: table of contents listing appendices on a repeatable geometry for target irradiation and on the variation of Monte Carlo parameters for a 5.5 mm phantom, plus figures on coarse-element errors (simple structure, cilantro leaf, voxel density averaging) and the electron step-size artifact for a 20 mm cylinder.]
Erickson, Lori
1995-01-01T23:59:59.000Z
...application of Monte Carlo simulation methods to the spread of geographic phenomena, more specifically the spread of innovations or ideas from person to person (Pitts 1965; Chorley and Haggett 1967; Marble and Bowlby 1968; Gould 1969; Cliff et al. 1981)... are responsible for the safety of these park users, and are concerned about several important factors. These include unusually high temperatures, lack of potable water, and other desert hazards such as steep ridges and cliffs, spiny plants, and poisonous animals...
Hin, Celine Nathalie
Kinetic Monte Carlo simulations, based on parameters obtained with density-functional theory in the local-density approximation and experimental data, are used to study bulk precipitation of Y[subscript 2]O[subscript 3] ...
Landon, Colin Donald
2014-01-01T23:59:59.000Z
We present a deviational Monte Carlo method for solving the Boltzmann equation for phonon transport subject to the linearized ab initio 3-phonon scattering operator. Phonon dispersion relations and transition rates are ...
Sadeghi, Mahdi; Raisali, Gholamreza; Hosseini, S. Hamed; Shavar, Arzhang [Nuclear Medicine Research Group, Agricultural, Medical and Industrial Research School, P.O. Box 31485-498, Karaj (Iran, Islamic Republic of) and Engineering Faculty, Science and Research Campus, Islamic Azad University, P.O. Box 14515-775, Tehran (Iran, Islamic Republic of); Radiation Applications Research School, Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of); Engineering Faculty, Science and Research Campus, Islamic Azad University, P.O. Box 14515-775, Tehran (Iran, Islamic Republic of); SSDL Group, Agricultural, Medical and Industrial Research School, Karaj (Iran, Islamic Republic of)
2008-04-15T23:59:59.000Z
This article presents a brachytherapy source, {sup 103}Pd adsorbed onto a cylindrical silver rod, developed by the Agricultural, Medical, and Industrial Research School for permanent implant applications. The dosimetric characteristics (radial dose function, anisotropy function, and anisotropy factor) of this source were determined experimentally and theoretically following the updated AAPM Task Group 43 (TG-43U1) recommendations. Monte Carlo simulations were used to calculate the dose rate constant. Measurements were performed with TLD-GR200A circular chip dosimeters using standard thermoluminescent-dosimetry methods in a Perspex phantom. Precision-machined bores in the phantom located the dosimeters and the source in a reproducible fixed geometry, providing transverse-axis and angular dose profiles over a range of distances from 0.5 to 5 cm. The Monte Carlo N-Particle (MCNP) code, version 4C, was used to evaluate the dose-rate distributions around this model {sup 103}Pd source in water and Perspex phantoms. The Monte Carlo calculated dose rate constant of the IRA-{sup 103}Pd source in water was found to be 0.678 cGy h{sup -1} U{sup -1} with an approximate uncertainty of {+-}0.1%. The anisotropy function, F(r,{theta}), and the radial dose function, g(r), of the IRA-{sup 103}Pd source were also measured in a Perspex phantom and calculated in both Perspex and liquid water phantoms.
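The TG-43U1 quantities named in this abstract (dose rate constant Λ, radial dose function g(r), anisotropy function F(r,θ)) combine into a single dose-rate equation. A minimal sketch in the point-source approximation follows; the g and F functions here are hypothetical placeholders for illustration, and only Λ = 0.678 cGy h⁻¹ U⁻¹ comes from the abstract:

```python
import math

def tg43_dose_rate(S_K, Lam, r, theta, g, F):
    """TG-43 point-source approximation:
    dose rate = S_K * Lambda * (r0/r)^2 * g(r) * F(r, theta),
    with the reference point at r0 = 1 cm, theta0 = 90 degrees."""
    r0 = 1.0  # cm
    return S_K * Lam * (r0 / r) ** 2 * g(r) * F(r, theta)

# Hypothetical (not measured) radial dose and anisotropy functions,
# normalized to 1 at the reference point as TG-43 requires:
g = lambda r: math.exp(-0.5 * (r - 1.0))         # g(1) = 1
F = lambda r, th: 1.0 - 0.3 * abs(math.cos(th))  # F(., pi/2) = 1

# 1 U of air-kerma strength with the Lambda reported above:
d1 = tg43_dose_rate(1.0, 0.678, 1.0, math.pi / 2, g, F)
```

At the reference point (r = 1 cm, θ = 90°) all normalized factors equal 1, so the dose rate reduces to S_K·Λ, which is how the dose rate constant is defined.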
Toulouse, Julien; Reinhardt, Peter; Hoggan, Philip E; Umrigar, C J
2010-01-01T23:59:59.000Z
We report state-of-the-art quantum Monte Carlo calculations of the singlet $n \\to \\pi^*$ (CO) vertical excitation energy in the acrolein molecule, extending the recent study of Bouab\\c{c}a {\\it et al.} [J. Chem. Phys. {\\bf 130}, 114107 (2009)]. We investigate the effect of using a Slater basis set instead of a Gaussian basis set, and of using state-average versus state-specific complete-active-space (CAS) wave functions, with or without reoptimization of the coefficients of the configuration state functions (CSFs) and of the orbitals in variational Monte Carlo (VMC). It is found that, with the Slater basis set used here, both state-average and state-specific CAS(6,5) wave functions give an accurate excitation energy in diffusion Monte Carlo (DMC), with or without reoptimization of the CSF and orbital coefficients in the presence of the Jastrow factor. In contrast, the CAS(2,2) wave functions require reoptimization of the CSF and orbital coefficients to give a good DMC excitation energy. Our best estimates of ...
Da, B.; Sun, Y.; Ding, Z. J. [Hefei National Laboratory for Physical Sciences at Microscale and Department of Physics, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026, People's Republic of China (China)]; Mao, S. F. [School of Nuclear Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026, People's Republic of China (China)]; Zhang, Z. M. [Centre of Physical Experiments, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026, People's Republic of China (China)]; Jin, H.; Yoshikawa, H.; Tanuma, S. [Advanced Surface Chemical Analysis Group, National Institute for Materials Science, 1-2-1 Sengen Tsukuba, Ibaraki 305-0047 (Japan)]
2013-06-07T23:59:59.000Z
A reverse Monte Carlo (RMC) method is developed to obtain the energy loss function (ELF) and optical constants from a measured reflection electron energy-loss spectroscopy (REELS) spectrum by an iterative Monte Carlo (MC) simulation procedure. The method combines the simulated annealing method, i.e., a Markov chain Monte Carlo (MCMC) sampling of oscillator parameters, surface and bulk excitation weighting factors, and band gap energy, with a conventional MC simulation of electron interaction with solids, which acts as a single step of MCMC sampling in this RMC method. To examine the reliability of this method, we have verified that the output data of the dielectric function are essentially independent of the initial values of the trial parameters, which is a basic property of a MCMC method. The optical constants derived for SiO{sub 2} in the energy loss range of 8-90 eV are in good agreement with other available data, and relevant bulk ELFs are checked by oscillator strength-sum and perfect-screening-sum rules. Our results show that the dielectric function can be obtained by the RMC method even with a wide range of initial trial parameters. The RMC method is thus a general and effective method for determining the optical properties of solids from REELS measurements.
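The core of such a reverse Monte Carlo loop is a Metropolis acceptance test applied while the temperature is annealed. A toy sketch of that machinery follows; the quadratic cost function, step size, and cooling schedule are illustrative assumptions, not the authors' parameterization of oscillator strengths and weighting factors:

```python
import math
import random

def metropolis_anneal(cost, x0, step, temps, rng=random.Random(0)):
    """Simulated-annealing Markov chain: at each temperature T, propose
    a uniform random move and accept it with probability
    min(1, exp(-(cost increase)/T)); track the best state seen."""
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    for T in temps:
        x_new = x + rng.uniform(-step, step)
        c_new = cost(x_new)
        if c_new < c or rng.random() < math.exp((c - c_new) / T):
            x, c = x_new, c_new
            if c < best_c:
                best_x, best_c = x, c
    return best_x, best_c

# Toy "RMC" objective: squared mismatch between a single model
# parameter and a target value (standing in for the spectrum misfit)
target = 2.5
cost = lambda p: (p - target) ** 2
temps = [1.0 * 0.995 ** k for k in range(4000)]  # geometric cooling
best, best_cost = metropolis_anneal(cost, 0.0, 0.5, temps)
```

The key MCMC property checked in the paper, insensitivity of the result to the initial trial parameters, corresponds here to `best` landing near `target` regardless of `x0`.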
Kim, Beop-Min
2012-06-07T23:59:59.000Z
An Analysis of the Effect of Coupling Between Temperature Rise and Light Distribution in Laser Irradiated Tissue Using Finite Element and Monte Carlo Methods. A thesis by Beop-Min Kim, chaired by Sohi Rastegar; August 1991.
Silva-Rodríguez, Jesús, E-mail: jesus.silva.rodriguez@sergas.es; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Santiago de Compostela, Galicia (Spain); Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain); Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain)]; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor [Servicio de Radiofísica y Protección Radiológica, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain)]; Cortés, Julia; Garrido, Miguel [Servicio de Medicina Nuclear, Complexo Hospitalario Universitario de Santiago de Compostela, 15706, Galicia, Spain and Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain)]; Pombar, Miguel [Servicio de Radiofísica y Protección Radiológica, Complexo Hospitalario Universitario de Santiago de Compostela, 15706, Galicia (Spain)]; Ruibal, Álvaro [Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain); Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain); Fundación Tejerina, 28003, Madrid (Spain)]
2014-05-15T23:59:59.000Z
Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work focuses on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) measurements for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI-based method. In addition, the 50 patients with the highest extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate for the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%), with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify the fraction of patients that should be corrected for the paravenous injection effect. Conclusions: The authors propose the use of a manual ROI-based method for estimating the effectively administered FDG dose and then correcting SUV quantification in those patients fulfilling the proposed criterion.
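The correction described in this abstract amounts to rescaling SUV by the effectively delivered activity. A minimal sketch of that arithmetic follows; the simple linear rescaling is an assumption for illustration, not the authors' exact ROI-based estimator:

```python
def corrected_suv(suv_measured, injected_dose, extravasated_dose):
    """Rescale a measured SUV by the effectively delivered activity.
    SUV is normalized by the injected dose, so dividing by the
    delivered fraction compensates for activity left at the
    injection site (assumption: the bias is purely this rescaling)."""
    delivered_fraction = (injected_dose - extravasated_dose) / injected_dose
    return suv_measured / delivered_fraction

# Worst case reported in the abstract: 22% of the dose extravasated.
# Doses in MBq are illustrative; an SUV of 3.0 rises by a factor 1/0.78.
suv = corrected_suv(3.0, 370.0, 0.22 * 370.0)
```

This makes concrete why a 22% extravasation matters clinically: it depresses the uncorrected SUV by more than the 5% estimation error quoted above.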
Quantum Monte Carlo benchmark of exchange-correlation functionals for bulk water
Morales, Miguel A [Lawrence Livermore National Laboratory (LLNL); Gergely, John [University of Illinois, Urbana-Champaign; McMinis, Jeremy [Lawrence Livermore National Laboratory (LLNL); McMahon, Jeffrey [University of Illinois, Urbana-Champaign; Kim, Jeongnim [ORNL; Ceperley, David M. [University of Illinois, Urbana-Champaign
2014-01-01T23:59:59.000Z
The accurate description of the thermodynamic and dynamical properties of liquid water from first principles is an important challenge for the theoretical community. It represents not only a critical test of the predictive capabilities of first-principles methods, but will also shed light on the microscopic properties of this important substance. Density Functional Theory, the main workhorse in the field of first-principles methods, has so far been unable to properly describe water and its unusual properties in the liquid state. With the recent introduction of exact exchange and an improved description of dispersion interactions, an accurate description of the liquid is finally within reach. Unfortunately, there is still no way to systematically improve exchange-correlation functionals, and the number of available functionals is very large. In this article we use highly accurate quantum Monte Carlo calculations to benchmark a selection of exchange-correlation functionals typically used in Density Functional Theory simulations of bulk water. This allows us to test the predictive capabilities of these functionals in water, giving us a way not only to choose optimal functionals for first-principles simulations, but also a route for optimizing functionals for the system at hand. We compare and contrast the importance of different features of the functionals, including the hybrid component, the vdW component, and their importance within different aspects of the PES. In addition, we test a recently introduced scheme that combines Density Functional Theory with Coupled Cluster calculations through a many-body expansion of the energy, in order to correct the inaccuracies in the description of short-range interactions in the liquid.
Adsorption of branched and dendritic polymers onto flat surfaces: A Monte Carlo study
Sommer, J.-U. [Leibniz Institute of Polymer Research Dresden e. V., 01069 Dresden (Germany); Institute for Theoretical Physics, Technische Universität Dresden, 01069 Dresden (Germany)]; Kłos, J. S. [Leibniz Institute of Polymer Research Dresden e. V., 01069 Dresden (Germany); Faculty of Physics, A. Mickiewicz University, Umultowska 85, 61-614 Poznań (Poland)]; Mironova, O. N. [Leibniz Institute of Polymer Research Dresden e. V., 01069 Dresden (Germany)]
2013-12-28T23:59:59.000Z
Using Monte Carlo simulations based on the bond fluctuation model, we study the adsorption of starburst dendrimers with flexible spacers onto a flat surface. The calculations are performed for various generation numbers G and spacer lengths S over a wide range of the reduced temperature ε, taken as the measure of the interaction strength between the monomers and the surface. Our simulations indicate a two-step adsorption scenario. Below the critical point of adsorption, ε{sub c}, a weakly adsorbed state of the dendrimer is found. Here, the dendrimer retains its shape but sticks to the surface by adsorbed spacers. By lowering the temperature below a spacer-length dependent value, ε*(S) < ε{sub c}, a step-like transition into a strongly adsorbed state takes place. In the flatly adsorbed state the shape of the dendrimer is well described by a mean field model of a dendrimer in two dimensions. We also performed simulations of star polymers, which display a simple crossover behavior in full analogy to linear chains. By analyzing the order parameter of the adsorption transition, we determine the critical point of adsorption of the dendrimers, which is located close to the critical point of adsorption for star polymers. While the order parameter for the adsorbed spacers displays a critical crossover scaling, the overall order parameter, which combines both critical and discontinuous transition effects, does not display simple scaling. The step-like transition from the weakly into the strongly adsorbed regime is confirmed by analyzing the shape anisotropy of the dendrimers. We present a mean-field model based on the concept of spacer adsorption which predicts a discontinuous transition of dendrimers due to an excluded volume barrier. The latter results from an increased density of the dendrimer in the flatly adsorbed state which has to be overcome before this state is thermodynamically stable.
Monte Carlo simulation of nitrogen dissociation based on state-resolved cross sections
Kim, Jae Gang, E-mail: jaegkim@umich.edu; Boyd, Iain D., E-mail: iainboyd@umich.edu [Department of Aerospace Engineering, University of Michigan, 1320 Beal Avenue, Ann Arbor, Michigan 48109-2140 (United States)
2014-01-15T23:59:59.000Z
State-resolved analyses of N + N{sub 2} are performed using the direct simulation Monte Carlo (DSMC) method. To describe elastic collisions in a state-resolved manner, a state-specific total cross section is proposed. The state-resolved method is constructed from the state-specific total cross section and the rovibrational state-to-state transition cross sections for bound-bound and bound-free transitions taken from a NASA database. This approach makes it possible to analyze the rotational-to-translational, vibrational-to-translational, and rotational-to-vibrational energy transfers and the chemical reactions without relying on macroscopic properties or phenomenological models. In nonequilibrium heat bath calculations, the results of the present state-resolved DSMC calculations are validated against master equation calculations and existing shock-tube experimental data for bound-bound and bound-free transitions. In various equilibrium and nonequilibrium heat bath conditions and 2D cylindrical flows, the DSMC calculations by the state-resolved method are compared with those obtained with previous phenomenological DSMC models: the variable soft sphere, phenomenological Larsen-Borgnakke, quantum kinetic, and total collision energy models. From these studies, it is concluded that the state-resolved method can accurately describe the rotational-to-translational, vibrational-to-translational, and rotational-to-vibrational transfers and the quasi-steady state of rotational and vibrational energies in nonequilibrium chemical reactions by state-to-state kinetics.
Analytical, experimental, and Monte Carlo system response matrix for pinhole SPECT reconstruction
Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Medicina Nuclear, CHUS, Spain and Grupo de Imaxe Molecular, IDIS, Santiago de Compostela 15706 (Spain)]; Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Spain and Servei de Física Médica i Protecció Radiológica, Institut Catalá d'Oncologia, Barcelona 08036 (Spain)]; Silva-Rodríguez, Jesús [Fundación Ramón Domínguez, Medicina Nuclear, CHUS, Santiago de Compostela 15706 (Spain)]; Pavía, Javier [Servei de Medicina Nuclear, Hospital Clínic, Barcelona (Spain); Institut d'Investigacions Biomèdiques August Pí i Sunyer (IDIBAPS) (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain)]; Ros, Doménec [Unitat de Biofísica, Facultat de Medicina, Casanova 143 (Spain); Institut d'Investigacions Biomèdiques August Pí i Sunyer (IDIBAPS) (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain)]; Ruibal, Álvaro [Servicio Medicina Nuclear, CHUS (Spain); Grupo de Imaxe Molecular, Facultade de Medicina (USC), IDIS, Santiago de Compostela 15706 (Spain); Fundación Tejerina, Madrid (Spain)]; and others
2014-03-15T23:59:59.000Z
Purpose: To assess the performance of two approaches to system response matrix (SRM) calculation in pinhole single photon emission computed tomography (SPECT) reconstruction. Methods: Evaluation was performed using experimental data from a low-magnification pinhole SPECT system that consisted of a rotating flat detector with a monolithic scintillator crystal. The SRM was computed following two approaches, based on Monte Carlo simulations (MC-SRM) and on analytical techniques in combination with an experimental characterization (AE-SRM). The spatial response of the system obtained with the two approaches was compared with experimental data. The effect of the MC-SRM and AE-SRM approaches on the reconstructed image was assessed in terms of image contrast, signal-to-noise ratio (SNR), image quality, and spatial resolution. To this end, acquisitions were carried out using a hot cylinder phantom (consisting of five fillable rods with diameters of 5, 4, 3, 2, and 1 mm and a uniform cylindrical chamber) and a custom-made Derenzo phantom with center-to-center distances between adjacent rods of 1.5, 2.0, and 3.0 mm. Results: Good agreement was found for the spatial response of the system between measured data and results derived from MC-SRM and AE-SRM. Only minor differences were found for point sources at distances smaller than the radius of rotation and large incidence angles. Assessment of the effect on the reconstructed image showed a similar contrast for both approaches, with values higher than 0.9 for rod diameters greater than 1 mm and higher than 0.8 for a rod diameter of 1 mm. The comparison in terms of image quality showed that all rods in the different sections of the custom-made Derenzo phantom could be distinguished. The spatial resolution (FWHM) was 0.7 mm at iteration 100 using both approaches.
The SNR was lower for images reconstructed using MC-SRM than for those reconstructed using AE-SRM, indicating that AE-SRM deals better with the projection noise than MC-SRM. Conclusions: The authors' findings show that both approaches provide good solutions to the problem of calculating the SRM in pinhole SPECT reconstruction. The AE-SRM was faster to create and handled the projection noise better than the MC-SRM, but it required a tedious experimental characterization of the intrinsic detector response. Creation of the MC-SRM required longer computation times and handled the projection noise worse than the AE-SRM; on the other hand, the MC-SRM inherently incorporates extensive modeling of the system, so experimental characterization was not required.
A novel approach in electron beam radiation therapy of lips carcinoma: A Monte Carlo study
Shokrani, Parvaneh [Medical Physics and Medical Engineering Department, School of Medicine, Isfahan University of Medical Sciences, Isfahan 81746-73461 (Iran, Islamic Republic of); Baradaran-Ghahfarokhi, Milad [Medical Physics and Medical Engineering Department, School of Medicine, Isfahan University of Medical Sciences, Isfahan 81746-73461, Iran and Medical Radiation Engineering Department, Faculty of Advanced Sciences and Technologies, Isfahan University, Isfahan 81746-73441 (Iran, Islamic Republic of); Zadeh, Maryam Khorami [Medical Physics Department, School of Medicine, Ahwaz Jundishapour University of Medical Sciences, Ahwaz 15794-61357 (Iran, Islamic Republic of)
2013-04-15T23:59:59.000Z
Purpose: Squamous cell carcinoma (SCC) is commonly treated by electron beam radiotherapy (EBRT) followed by a boost via brachytherapy. Considering the limitations associated with brachytherapy, this study evaluated a novel boosting technique in EBRT of lip carcinoma using an internal shield as an internal dose enhancer tool (IDET). An IDET is a partially covered internal shield located behind the lip: while the backscattered electrons are absorbed in the portion covered with a low-atomic-number material, they enhance the target dose in the uncovered area. Methods: Monte Carlo models of 6 and 8 MeV electron beams were developed using the BEAMnrc code and validated against experimental measurements. Using the developed models, dose distributions in a lip phantom were calculated and the effect of an IDET on target dose enhancement was evaluated. Typical lip thicknesses of 1.5 and 2.0 cm were considered. A 5 × 5 cm{sup 2} lead sheet covered by 0.5 cm of polystyrene was used as the internal shield, while a 4 × 4 cm{sup 2} uncovered area of the shield served as the dose enhancer. Results: Using the IDET, the maximum dose enhancement as a percentage of the dose at d{sub max} of the unshielded field was 157.6% and 136.1% for the 6 and 8 MeV beams, respectively. The best outcome was achieved for a lip thickness of 1.5 cm and a target thickness of less than 0.8 cm. For lateral dose coverage of the planning target volume, the 80% isodose curve at the lip-IDET interface showed a 1.2 cm expansion compared to the unshielded field. Conclusions: This study showed that a boost concomitant with EBRT of the lip is possible by modifying an internal shield into an IDET. This boosting method is especially applicable to cases in which brachytherapy faces limitations, such as small lip thicknesses and targets located at the buccal surface of the lip.
Characteristics of elliptical sources in BEAMnrc Monte Carlo system: Implementation and application
Kim, Sangroh [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States)
2009-04-15T23:59:59.000Z
Recently, several papers noted that the electron focal spot of a linear accelerator (linac) can be elliptical, which causes dosimetric discrepancies between measurements and Monte Carlo simulations. To resolve the mismatch, two elliptical source models were developed in the BEAMnrc code. The first was a parallel-beam elliptical source with uniform distribution, where the shape of the source was the primary consideration. The other was a parallel-beam elliptical source with Gaussian distribution, whose source distribution follows a normal distribution. To validate the elliptical source models, uniform and Gaussian electron beams were impinged on a thin air target. Both models successfully reproduced the elliptical shapes and source distributions. This study then investigated the characteristics of the elliptical Gaussian source for a 6 MV photon beam in a Varian 21EX linac. The linac head model was implemented in the BEAMnrc/EGSnrc system and commissioned by comparing the lateral and depth dose profiles to ion chamber measurements acquired during annual quality assurance (QA). It was found that a circular Gaussian beam of 6 MeV with 0.2 cm full width at half maximum (FWHM) produces the best match to the QA data. To explore the characteristics of the elliptical Gaussian source, this study employed an elliptical Gaussian electron source with 0.1 cm FWHM in the x axis and 0.2 cm FWHM in the y axis, incident on the target of the linac head. Two circular Gaussian beams with 0.1 and 0.2 cm FWHM were employed to compare the differences between circular and elliptical sources. For all the sources, planar and energy fluences were acquired and analyzed. This study also compared the lateral and depth dose profiles in a water phantom using the DOSXYZnrc user code. In the results, a constricted shoulder effect was observed in both the planar and energy fluence plots when the FWHM value was increased and the field size was larger than 30 × 30 cm{sup 2}.
The same effect was also noticed in the lateral dose profiles, while the depth dose profile did not vary much.
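An elliptical Gaussian focal spot like the one studied above is fully specified by two FWHM values. A minimal sampling sketch follows; the FWHM-to-sigma conversion is the standard relation FWHM = 2√(2 ln 2)·σ, while the function name and random seed are illustrative assumptions:

```python
import math
import random

def sample_elliptical_gaussian(fwhm_x, fwhm_y, rng):
    """Sample a position (cm) from an elliptical Gaussian focal spot.
    FWHM = 2*sqrt(2*ln 2)*sigma, so sigma = FWHM / 2.3548..."""
    k = 2.0 * math.sqrt(2.0 * math.log(2.0))
    return rng.gauss(0.0, fwhm_x / k), rng.gauss(0.0, fwhm_y / k)

# The 0.1 cm x 0.2 cm FWHM spot considered in the study above:
rng = random.Random(42)
pts = [sample_elliptical_gaussian(0.1, 0.2, rng) for _ in range(20000)]
```

A large sample should show the y-spread roughly twice the x-spread, mirroring the 2:1 FWHM ratio of the source model.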
Long, Daniel J.; Lee, Choonsik; Tien, Christopher; Fisher, Ryan; Hoerner, Matthew R.; Hintenlang, David; Bolch, Wesley E. [J Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611-6131 (United States); National Cancer Institute, National Institute of Health, Bethesda, Maryland 20892-1502 (United States); J Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611-6131 (United States); Department of Radiology, University of Florida, Gainesville, Florida 32610-0374 (United States); J Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611-6131 (United States)
2013-01-15T23:59:59.000Z
Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX, version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomography (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of the University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of the University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at the University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated from these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms.
Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or helical CT examinations on the Siemens SOMATOM Sensation 16 scanner.
Stoller, Roger E [ORNL; Golubov, Stanislav I [ORNL; Becquart, C. S. [Universite de Lille; Domain, C. [EDF R&D, Clamart, France
2007-08-01T23:59:59.000Z
The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory, Monte Carlo, or phase field models. These mesoscale models are appropriate for application to problems that involve intermediate length scales, and timescales from those characteristic of diffusion to long-term microstructural evolution (~μs to years). Although the rate theory and Monte Carlo models can be used to simulate the same phenomena, some of the details are handled quite differently in the two approaches. Models employing the rate theory have been extensively used to describe radiation-induced phenomena such as void swelling and irradiation creep. The primary approximations in such models are temporal and spatial averaging of the radiation damage source term, and spatial averaging of the microstructure into an effective medium. Kinetic Monte Carlo models can account for these spatial and temporal correlations; their primary limitation is the computational burden, which is related to the size of the simulation cell. A direct comparison of rate theory (RT) and object kinetic Monte Carlo (OKMC) simulations has been made in the domain of point defect cluster dynamics modeling, which is relevant to the evolution (both nucleation and growth) of radiation-induced defect structures. The primary limitations of the OKMC model are computational: even with modern computers, the maximum simulation cell size and the maximum dose (typically much less than 1 dpa) that can be simulated are limited. In contrast, even very detailed RT models can simulate microstructural evolution for doses up to 100 dpa or greater in clock times that are relatively short. Within the context of the effective medium, essentially any defect density can be simulated. 
Overall, the agreement between the two methods is best for irradiation conditions which produce a high density of defects (lower temperature and higher displacement rate), and for materials that have a relatively high density of fixed sinks such as dislocations.
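The rate-theory/KMC comparison above can be illustrated with a toy residence-time (Gillespie-type) kinetic Monte Carlo model of a single defect population with production rate G and per-defect sink-absorption rate k; the rate-theory steady state is N* = G/k. The one-species network and its parameters are illustrative, not taken from the paper:

```python
import math
import random

def kmc_defect_population(G=100.0, k=0.5, t_end=50.0, seed=1):
    """Residence-time (Gillespie) KMC for the rate equation dN/dt = G - k*N.

    Events: defect production at constant rate G, and absorption of an
    existing defect at sinks with total rate k*N.  The rate-theory
    steady state is N* = G/k.
    """
    rng = random.Random(seed)
    t, n = 0.0, 0
    while t < t_end:
        r_tot = G + k * n                         # total event rate
        t += -math.log(1.0 - rng.random()) / r_tot  # exponential waiting time
        if rng.random() * r_tot < G:
            n += 1                                # production event
        else:
            n -= 1                                # absorption at a sink
    return n

n_final = kmc_defect_population()
print(n_final, "vs rate-theory steady state", 100.0 / 0.5)
```

For G = 100 and k = 0.5 the simulated population fluctuates around the rate-theory value N* = 200; the spatial correlations discussed above are of course absent from this zero-dimensional sketch.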
Sanna, R.S.; O'Brien, K.
1987-12-01T23:59:59.000Z
SWIFT is a FORTRAN-77 program written for the VAX-11/750 computer. Its purpose is to unfold neutron spectra from multisphere spectrometer measurements using the Monte Carlo technique. This guide describes the code in sufficient detail to enable a user, with a background in FORTRAN programming, to alter the code for use with other spectrometers and/or to install it on computers other than the VAX-11/750. The code and the required input and resulting output are described. As an aid to its implementation, input and output for a sample problem are also presented. 19 refs., 1 fig., 6 tabs.
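A Monte Carlo unfolding of multisphere data can be sketched as a random search over candidate spectra that minimizes the misfit between folded and measured counts. The 3x3 response matrix and the "measured" counts below are hypothetical placeholders, not SWIFT's data or its actual algorithm:

```python
import random

# Hypothetical 3-sphere response matrix (counts per unit fluence in
# energy bin j for sphere i) and "measured" counts; illustrative only.
R = [[1.0, 0.2, 0.1],
     [0.5, 1.0, 0.3],
     [0.1, 0.4, 1.0]]
measured = [1.3, 1.8, 1.5]

def chi2(phi):
    """Sum of squared residuals between folded and measured counts."""
    return sum((sum(r * p for r, p in zip(row, phi)) - m) ** 2
               for row, m in zip(R, measured))

def mc_unfold(steps=20000, seed=7):
    """Random-search unfolding: perturb one bin at a time, keep the
    trial spectrum whenever the chi-square decreases."""
    rng = random.Random(seed)
    phi = [0.5, 0.5, 0.5]          # flat starting guess
    best = chi2(phi)
    for _ in range(steps):
        j = rng.randrange(len(phi))
        trial = list(phi)
        trial[j] = max(0.0, trial[j] + rng.uniform(-0.1, 0.1))  # keep fluence >= 0
        c = chi2(trial)
        if c < best:
            phi, best = trial, c
    return phi, best

phi, best = mc_unfold()
print("unfolded spectrum:", phi, "chi2:", best)
```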
Search for New Heavy Higgs Boson in B-L model at the LHC using Monte Carlo Simulation
Hesham Mansour; Nady Bakhet
2013-04-24T23:59:59.000Z
The aim of this work is to search for a new heavy Higgs boson in the B-L extension of the Standard Model at the LHC, using data produced by Monte Carlo event generators for simulated proton-proton collisions at different center-of-mass energies, in order to identify new Higgs boson signatures at the LHC. We also study the production and decay channels of the Higgs boson in this model, and its interactions with the other new particles of the model, namely the new massive neutral gauge boson and the new right-handed heavy neutrinos.
Lopez-Pino, N.; Padilla-Cabal, F.; Garcia-Alvarez, J. A.; Vazquez, L.; D'Alessandro, K.; Correa-Alfonso, C. M. [Departamento de Fisica Nuclear, Instituto Superior de Tecnologia y Ciencias Aplicadas (InSTEC) Ave. Salvador Allende y Luaces. Quinta de los Molinos. Habana 10600. A.P. 6163, La Habana (Cuba); Godoy, W.; Maidana, N. L.; Vanin, V. R. [Laboratorio do Acelerador Linear, Instituto de Fisica - Universidade de Sao Paulo Rua do Matao, Travessa R, 187, 05508-900, SP (Brazil)
2013-05-06T23:59:59.000Z
A detailed characterization of an X-ray Si(Li) detector was performed to obtain the energy dependence of its efficiency in the photon energy range 6.4 - 59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters for the detector were used in the simulation. A complete computerized tomography (CT) scan of the detector allowed the correct crystal dimensions and position inside the capsule to be determined. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.
M. V. Ulybyshev; M. I. Katsnelson
2015-02-04T23:59:59.000Z
We study the electronic properties of graphene with a finite concentration of vacancies or other resonant scatterers by straightforward lattice Quantum Monte Carlo calculations. Taking into account the realistic long-range Coulomb interaction, we calculate the distribution of spin density associated with the midgap states and demonstrate antiferromagnetic ordering. Energy gaps open due to interaction effects, both in the bare graphene spectrum and in the vacancy/impurity bands. For a 5% concentration of resonant scatterers, the latter gap is estimated as 0.7 eV and 1.1 eV for graphene on boron nitride and freely suspended graphene, respectively.
Hardiansyah, D.; Haryanto, F. [Nuclear Physics and Biophysics Research Laboratory, Physics Department, Institut Teknologi Bandung (ITB) (Indonesia); Male, S. [Radiotherapy Division, Research Hospital of Hassanudin University (Indonesia)
2014-09-30T23:59:59.000Z
Prism is a non-commercial Radiotherapy Treatment Planning System (RTPS) developed by Ira J. Kalet at the University of Washington. An inhomogeneity factor is included in the Prism TPS dose calculation. The aim of this study is to investigate the sensitivity of the Prism dose calculation using Monte Carlo simulation. A phase-space source from the head of the linear accelerator (LINAC) is implemented for the Monte Carlo simulation. To achieve this aim, the Prism dose calculation is compared with EGSnrc Monte Carlo simulation; the percentage depth dose (PDD) and R50 from both calculations are examined. BEAMnrc simulates electron transport in the LINAC head and produces a phase-space file, which is used as DOSXYZnrc input to simulate electron transport in the phantom. The study begins with a commissioning process in a water phantom, in which the Monte Carlo simulation is adjusted to match the Prism RTPS; the commissioning result is then used for the study of an inhomogeneous phantom. The physical parameters of the inhomogeneous phantom varied in this study are the density, location, and thickness of the tissue. Commissioning shows that the optimum energy of the Monte Carlo simulation for the 6 MeV electron beam is 6.8 MeV, using R50 and PDD with practical range (R{sub p}) as references. From the inhomogeneity study, the average deviation for all cases in the region of interest is below 5%. Based on ICRU recommendations, Prism has good ability to calculate the radiation dose in inhomogeneous tissue.
Axel Hoefer; Oliver Buss; Maik Hennebach; Michael Schmid; Dieter Porsch
2014-11-12T23:59:59.000Z
MOCABA is a combination of Monte Carlo sampling and Bayesian updating algorithms for the prediction of integral functions of nuclear data, such as reactor power distributions or neutron multiplication factors. Similarly to the established Generalized Linear Least Squares (GLLS) methodology, MOCABA offers the capability to utilize integral experimental data to reduce the prior uncertainty of integral observables. The MOCABA approach, however, does not involve any series expansions and, therefore, does not suffer from the breakdown of first-order perturbation theory for large nuclear data uncertainties. This is related to the fact that, in contrast to the GLLS method, the updating mechanism within MOCABA is applied directly to the integral observables without having to "adjust" any nuclear data. A central part of MOCABA is the nuclear data Monte Carlo program NUDUNA, which performs random sampling of nuclear data evaluations according to their covariance information and converts them into libraries for transport code systems like MCNP or SCALE. What is special about MOCABA is that it can be applied to any integral function of nuclear data, and any integral measurement can be taken into account to improve the prediction of an integral observable of interest. In this paper we present two example applications of the MOCABA framework: the prediction of the neutron multiplication factor of a water-moderated PWR fuel assembly based on 21 criticality safety benchmark experiments and the prediction of the power distribution within a toy model reactor containing 100 fuel assemblies.
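The Monte Carlo plus Bayesian-updating idea can be illustrated in a scalar toy problem: sampling a shared nuclear-data perturbation induces correlated uncertainties in a benchmark k_eff and an application k_eff, and a linear-Gaussian update with the benchmark measurement reduces the application uncertainty without adjusting the underlying data. All numbers below are invented for illustration, not MOCABA/NUDUNA results:

```python
import random
import statistics

def mocaba_update(n=5000, seed=3):
    """Toy MOCABA-style update for one benchmark and one application.

    A shared sampled perturbation xs makes the two observables
    correlated; the posterior is a linear-Gaussian (Kalman-type) update
    of the application mean/variance given the benchmark measurement.
    """
    rng = random.Random(seed)
    ks_bench, ks_appl = [], []
    for _ in range(n):
        xs = rng.gauss(0.0, 1.0)  # shared nuclear-data perturbation
        ks_bench.append(1.000 + 0.006 * xs + rng.gauss(0.0, 0.001))
        ks_appl.append(0.950 + 0.005 * xs + rng.gauss(0.0, 0.001))
    mb = statistics.fmean(ks_bench)
    ma = statistics.fmean(ks_appl)
    vb = statistics.pvariance(ks_bench)
    va = statistics.pvariance(ks_appl)
    cov = statistics.fmean([(b - mb) * (a - ma)
                            for b, a in zip(ks_bench, ks_appl)])
    b_meas, var_meas = 1.002, 0.0005 ** 2   # benchmark experiment
    gain = cov / (vb + var_meas)
    post_mean = ma + gain * (b_meas - mb)   # update the observable directly
    post_var = va - gain * cov              # always smaller than the prior
    return ma, va, post_mean, post_var

prior_mean, prior_var, post_mean, post_var = mocaba_update()
```

The key MOCABA feature this mimics is that the update acts on the integral observable itself, so no series expansion in the nuclear data is needed.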
O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09T23:59:59.000Z
Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the results as the simulation runs.
Dominik Smith; Lorenz von Smekal
2014-03-14T23:59:59.000Z
We report on Hybrid-Monte-Carlo simulations of the tight-binding model with long-range Coulomb interactions for the electronic properties of graphene. We investigate the spontaneous breaking of sublattice symmetry corresponding to a transition from the semimetal to an antiferromagnetic insulating phase. Our short-range interactions thereby include the partial screening due to electrons in higher energy states from ab initio calculations based on the constrained random phase approximation [T.O.Wehling {\\it et al.}, Phys.Rev.Lett.{\\bf 106}, 236805 (2011)]. In contrast to a similar previous Monte-Carlo study [M.V.Ulybyshev {\\it et al.}, Phys.Rev.Lett.{\\bf 111}, 056801 (2013)] we also include a phenomenological model which describes the transition to the unscreened bare Coulomb interactions of graphene at half filling in the long-wavelength limit. Our results show, however, that the critical coupling for the antiferromagnetic Mott transition is largely insensitive to the strength of these long-range Coulomb tails. They hence confirm the prediction that suspended graphene remains in the semimetal phase when a realistic static screening of the Coulomb interactions is included.
Shell Model Monte Carlo method in the $pn$-formalism and applications to the Zr and Mo isotopes
C. Ozen; D. J. Dean
2005-08-05T23:59:59.000Z
We report on the development of a new shell-model Monte Carlo algorithm which uses the proton-neutron formalism. Shell-model Monte Carlo methods, within the isospin formulation, have been successfully used in large-scale shell-model calculations. The motivation for this work is to extend the feasibility of these methods to shell-model studies involving non-identical proton and neutron valence spaces. We show the viability of the new approach with some test results. Finally, we use a realistic nucleon-nucleon interaction in the model space described by (1p_1/2,0g_9/2) proton and (1d_5/2,2s_1/2,1d_3/2,0g_7/2,0h_11/2) neutron orbitals above the Sr-88 core to calculate ground-state energies, binding energies, B(E2) strengths, and to study pairing properties of the even-even 90-104 Zr and 92-106 Mo isotope chains.
Griesheimer, D. P. [Bettis Atomic Power Laboratory, P.O. Box 79, West Mifflin, PA 15122 (United States); Stedry, M. H. [Knolls Atomic Power Laboratory, P.O. Box 1072, Schenectady, NY 12301 (United States)
2013-07-01T23:59:59.000Z
A rigorous treatment of energy deposition in a Monte Carlo transport calculation, including coupled transport of all secondary and tertiary radiations, increases the computational cost of a simulation dramatically, making fully-coupled heating impractical for many large calculations, such as 3-D analysis of nuclear reactor cores. However, in some cases, the added benefit from a full-fidelity energy-deposition treatment is negligible, especially considering the increased simulation run time. In this paper we present a generalized framework for the in-line calculation of energy deposition during steady-state Monte Carlo transport simulations. This framework gives users the ability to select among several energy-deposition approximations with varying levels of fidelity. The paper describes the computational framework, along with derivations of four energy-deposition treatments. Each treatment uses a unique set of self-consistent approximations, which ensure that energy balance is preserved over the entire problem. By providing several energy-deposition treatments, each with different approximations for neglecting the energy transport of certain secondary radiations, the proposed framework provides users the flexibility to choose between accuracy and computational efficiency. Numerical results are presented, comparing heating results among the four energy-deposition treatments for a simple reactor/compound shielding problem. The results illustrate the limitations and computational expense of each of the four energy-deposition treatments. (authors)
Zen, Andrea; Sorella, Sandro; Guidoni, Leonardo
2013-01-01T23:59:59.000Z
Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with system size and their efficient parallelization, particularly suited to modern high-performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial both for the accurate description of molecular properties and for the ability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely: the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice-regularized diffusion Monte Carlo calculations are also reported. Throu...
Particle-In-Cell/Monte Carlo Simulation of Ion Back Bombardment in Photoinjectors
Qiang, Ji
2009-01-01T23:59:59.000Z
Ion back-bombardment rates on the photocathode in rf guns are an order of magnitude lower than that in a dc gun. A higher rf frequency helps reduce ion back bombardment of the cathode in rf guns. Keywords: particle-in-cell/Monte Carlo
Statistical Exploration of Electronic Structure of Molecules from Quantum Monte-Carlo Simulations
Prabhat, Mr; Zubarev, Dmitry; Lester, Jr., William A.
2010-12-22T23:59:59.000Z
In this report, we present results from analysis of Quantum Monte Carlo (QMC) simulation data with the goal of determining internal structure of a 3N-dimensional phase space of an N-electron molecule. We are interested in mining the simulation data for patterns that might be indicative of the bond rearrangement as molecules change electronic states. We examined simulation output that tracks the positions of two coupled electrons in the singlet and triplet states of an H2 molecule. The electrons trace out a trajectory, which was analyzed with a number of statistical techniques. This project was intended to address the following scientific questions: (1) Do high-dimensional phase spaces characterizing electronic structure of molecules tend to cluster in any natural way? Do we see a change in clustering patterns as we explore different electronic states of the same molecule? (2) Since it is hard to understand the high-dimensional space of trajectories, can we project these trajectories to a lower dimensional subspace to gain a better understanding of patterns? (3) Do trajectories inherently lie in a lower-dimensional manifold? Can we recover that manifold? After extensive statistical analysis, we are now in a better position to respond to these questions. (1) We definitely see clustering patterns, and differences between the H2 and H2tri datasets. These are revealed by the pamk method in a fairly reliable manner and can potentially be used to distinguish bonded and non-bonded systems and get insight into the nature of bonding. (2) Projecting to a lower dimensional subspace ({approx}4-5) using PCA or Kernel PCA reveals interesting patterns in the distribution of scalar values, which can be related to the existing descriptors of electronic structure of molecules. 
Also, these results can be immediately used to develop robust tools for the analysis of noisy data obtained during QMC simulations. (3) All dimensionality reduction and estimation techniques that we tried seem to indicate that one needs 4 or 5 components to account for most of the variance in the data; hence this 5D dataset does not necessarily lie on a well-defined, low-dimensional manifold. In terms of specific clustering techniques, K-means was generally useful in exploring the dataset. The partition around medoids (pam) technique produced the most definitive results for our data, showing distinctive patterns for both a sample of the complete data and time-series. The gap statistic with the Tibshirani criterion did not provide any distinction between the two datasets. The gap statistic with the DandF criterion, model-based clustering, and hierarchical modeling simply failed to run on our datasets. Thankfully, the vanilla PCA technique was successful in handling our entire dataset. PCA revealed some interesting patterns in the scalar value distribution. Kernel PCA techniques (vanilladot, RBF, polynomial) and MDS failed to run on the entire dataset, or even a significant fraction of it, so we resorted to creating an explicit feature map followed by conventional PCA. Clustering using K-means and PAM in the new basis set seems to produce promising results. Understanding the new basis set in the scientific context of the problem is challenging, and we are currently working to further examine and interpret the results.
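As a minimal stand-in for the clustering workflow described above (plain k-means rather than pam/pamk, on synthetic two-blob data instead of the actual QMC trajectories):

```python
import random

def kmeans2(points, iters=20):
    """Two-cluster k-means (Lloyd's algorithm) with deterministic
    extreme-point seeding; a simple stand-in for pam/pamk clustering."""
    centers = [min(points), max(points)]   # far-apart initial seeds
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            # assign each point to the nearer center (squared distance)
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[0 if d[0] <= d[1] else 1].append(p)
        # recompute centers as group means (keep old center if group empty)
        centers = [tuple(sum(xs) / len(g) for xs in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

# Synthetic stand-in for low-dimensional projections of electron-pair
# coordinates: two well-separated Gaussian blobs.
rng = random.Random(42)
pts = ([(rng.gauss(0.0, 0.3), rng.gauss(0.0, 0.3)) for _ in range(100)] +
       [(rng.gauss(3.0, 0.3), rng.gauss(3.0, 0.3)) for _ in range(100)])
centers, groups = kmeans2(pts)
```

On clearly separated blobs like these, k-means recovers the two clusters and their centers; on the real trajectory data the report found medoid-based methods more robust.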
An anatomically realistic lung model for Monte Carlo-based dose calculations
Liang Liang; Larsen, Edward W.; Chetty, Indrin J. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, Michigan 48109-2104 (United States); Department of Radiation Oncology, University of Nebraska Medical Center, Omaha, Nebraska 68198-7521 (United States)
2007-03-15T23:59:59.000Z
Treatment planning for disease sites with large variations of electron density in neighboring tissues requires an accurate description of the geometry. This self-evident statement is especially true for the lung, a highly complex organ having structures whose sizes range from about 10{sup -4} to 1 cm. In treatment planning, the lung is commonly modeled by a voxelized geometry obtained using computed tomography (CT) data at various resolutions. The simplest such model, which is often used for QA and validation work, is the atomic mix or mean density model, in which the entire lung is homogenized and given a mean (volume-averaged) density. The purpose of this paper is (i) to describe a new heterogeneous random lung model, which is based on morphological data of the human lung, and (ii) to use this model to assess the differences in dose calculations between an actual lung (as represented by our model) and a mean density (homogenized) lung. Eventually, we plan to use the random lung model to assess the accuracy of CT-based treatment plans of the lung. For this paper, we have used Monte Carlo methods to make accurate comparisons between dose calculations for the random lung model and the mean density model. For four realizations of the random lung model, we used a single photon beam, with two different energies (6 and 18 MV) and four field sizes (1x1, 5x5, 10x10, and 20x20 cm{sup 2}). We found a maximum difference of 34% of D{sub max} with the 1x1, 18 MV beam along the central axis (CAX). A ''shadow'' region distal to the lung, with dose reduction up to 7% of D{sub max}, exists for the same realization. The dose perturbations decrease for larger field sizes, but the magnitude of the differences in the shadow region is nearly independent of the field size. 
We also observe that, compared to the mean density model, the random structures inside the heterogeneous lung can alter the shape of the isodose lines, leading to a broadening or shrinking of the penumbra region. For small field sizes, the mean lung doses significantly depend on the structures' relative locations to the beam. In addition to these comparisons between the random lung and mean density models, we also provide a preliminary comparison between dose calculations for the random lung model and a voxelized version of this model at 0.4x0.4x0.4 cm{sup 3} resolution. Overall, this study is relevant to treatment planning for lung tumors, especially in situations where small field sizes are used. Our results show that for such situations, the mean density model of the lung is inadequate, and a more accurate CT model of the lung is required. Future work with our model will involve patient motion, setup errors, and recommendations for the resolution of CT models.
{sup 103}Pd strings: Monte Carlo assessment of a new approach to brachytherapy source design
Rivard, Mark J., E-mail: mark.j.rivard@gmail.com [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Reed, Joshua L.; DeWerd, Larry A. [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States)] [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States)
2014-01-15T23:59:59.000Z
Purpose: A new type of {sup 103}Pd source (CivaString and CivaThin by CivaTech Oncology, Inc.) is examined. The source contains {sup 103}Pd and Au radio-opaque marker(s), all contained within low-Z{sub eff} organic polymers that permit source flexibility. The CivaString source is available in lengths L of 10, 20, 30, 40, 50, and 60 mm, referred to in the current study as CS10–CS60, respectively. A thinner design, CivaThin, has sources designated as CT10–CT60, respectively. The CivaString and CivaThin sources are 0.85 and 0.60 mm in diameter, respectively. The source design is novel and offers an opportunity to examine its interesting dosimetric properties in comparison to conventional {sup 103}Pd seeds. Methods: The MCNP5 radiation transport code was used to estimate air-kerma rate and dose rate distributions with polar and cylindrical coordinate systems. Doses in water and prostate tissue phantoms were compared to determine differences between the TG-43 formalism and realistic clinical circumstances. The influence of Ti encapsulation and 2.7 keV photons was examined. The accuracy of superposition of dose distributions from shorter sources to create longer source dose distributions was also assessed. Results: The normalized air-kerma rate was not highly dependent on L or the polar angle θ, with results being nearly identical between the CivaString and CivaThin sources for common L. The air-kerma strength was also weakly dependent on L. The uncertainty analysis established a standard uncertainty of 1.3% for the dose-rate constant Λ, where the largest contributors were μ{sub en}/ρ and μ/ρ. The Λ values decreased with increasing L, which was largely explained by differences in solid angle. The radial dose function did not substantially vary among the CivaString and CivaThin sources for r ≥ 1 cm. However, behavior for r < 1 cm indicated that the Au marker(s) shielded radiation for the sources having L = 10, 30, and 50 mm. 
The 2D anisotropy function exhibited peaks and valleys that corresponded to positions adjacent to {sup 103}Pd wells and Au markers, respectively. Dose distributions of both source types had minimal anisotropy in comparison to conventional {sup 103}Pd seeds. Contributions by 2.7 keV photons comprised ≤0.1% of the dose from all photons at positions farther than 0.13 mm from the polymer source surface. Differences between absorbed dose to water and prostate became more substantial as distance from the sources increased, with prostate dose being about 13% lower for r = 5 cm. Using a cylindrical coordinate system, dose superposition of small-length sources to replicate the dose distribution of a long source proved to be a robust technique; the volume failing a 2.0% tolerance compared with the reference dose distribution did not exceed 0.1 cm{sup 3} for any of the examined source combinations. Conclusions: By design, the CivaString and CivaThin sources have novel dosimetric characteristics in comparison to Ti-encapsulated {sup 103}Pd seeds. The dosimetric characterization has determined the reasons for these differences through analysis using Monte Carlo-based radiation transport simulations.
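The superposition check reported in the Results can be mimicked with a toy point-kernel model: the dose from a long uniform source should equal the strength-weighted sum of the doses from abutting shorter sources. The kernel below (inverse-square falloff with a simple exponential attenuation) is purely illustrative, not {sup 103}Pd dosimetry data:

```python
import math

def seed_dose(z, r, z0=0.0):
    """Toy point kernel: dose at (r, z) from a point source at (0, z0),
    inverse-square with exponential attenuation (illustrative only)."""
    d2 = r * r + (z - z0) ** 2
    return math.exp(-0.1 * math.sqrt(d2)) / d2

def segment_dose(z, r, length, z0=0.0, n=400):
    """Dose per unit total strength from a uniform segment of the given
    length centred at z0, via midpoint quadrature."""
    step = length / n
    return sum(seed_dose(z, r, z0 - length / 2 + (i + 0.5) * step)
               for i in range(n)) / n

# Superposition check: one 40 mm source vs two abutting 20 mm sources,
# each carrying half the total strength.
r, z = 10.0, 3.0
d_long = segment_dose(z, r, 40.0)
d_sum = 0.5 * (segment_dose(z, r, 20.0, z0=-10.0) +
               segment_dose(z, r, 20.0, z0=10.0))
```

In this linear-superposition picture the two values agree by construction (up to quadrature error); the interesting physics in the paper is that the agreement survives even with the real heterogeneous source internals.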
Liew, S.L.; Ku, L.P. (Princeton Univ., NJ (United States). Plasma Physics Lab.)
1991-02-01T23:59:59.000Z
This paper reports on the delayed gamma dose rate problem formulated in terms of the effective delayed gamma production cross section. The coupled neutron-delayed gamma transport equations take the same form as the coupled neutron-prompt gamma transport equations and they can, therefore, be solved directly in the same manner. This eliminates the flux coupling step required in conventional calculations and makes it easier to handle complex, multidimensional problems, especially those that call for Monte Carlo calculations. Mathematical formulation and solution algorithms are derived. The advantages of this method in complex geometry are illustrated by its application in the Monte Carlo solution of a practical design problem.
Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V [Department of Optics and Biomedical Physics, N.G.Chernyshevskii Saratov State University (Russian Federation)
2006-12-31T23:59:59.000Z
Based on digital image analysis and the inverse Monte-Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)
MiniDFT Description: MiniDFT is a plane-wave density functional theory (DFT) mini-app for modeling materials. Given a set of atomic coordinates and pseudopotentials,...
Karney, Charles
This is the user's manual for DEGAS 2, a Monte Carlo code for the study of neutral atom transport. The theory of neutral particle kinetics [1] treats the transport of mass, momentum, and energy in a plasma; Monte Carlo neutral transport codes can build on the techniques developed for neutron transport.
Tattersall, W J; Boyle, G J; White, R D
2015-01-01T23:59:59.000Z
We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly non-equilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 3--4 (1992)], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially-varying electric fields. All of the results are found to be in excellent agreement with an independent multi-term Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems.
Nicolas Puech; Serge Mora; Ty Phou; Gregoire Porte; Jacques Jestin; Julian Oberdisse
2010-12-04T23:59:59.000Z
The effect of silica nanoparticles on transient microemulsion networks made of microemulsion droplets and telechelic copolymer molecules in water is studied, as a function of droplet size and concentration, amount of copolymer, and nanoparticle volume fraction. The phase diagram is found to be affected, and in particular the percolation threshold characterized by rheology is shifted upon addition of nanoparticles, suggesting participation of the particles in the network. This leads to a peculiar reinforcement behaviour of such microemulsion nanocomposites, the silica influencing both the modulus and the relaxation time. The reinforcement is modelled based on nanoparticles connected to the network via droplet adsorption. Contrast-variation Small Angle Neutron Scattering coupled to a reverse Monte Carlo approach is used to analyse the microstructure. The rather surprising intensity curves are shown to be in good agreement with the adsorption of droplets on the nanoparticle surface.
Chang, Q; Herbst, E
2007-01-01T23:59:59.000Z
AIM: We have recently developed a microscopic Monte Carlo approach to study surface chemistry on interstellar grains and the morphology of ice mantles. The method is designed to eliminate the problems inherent in the rate-equation formalism as applied to surface chemistry. Here we report the first use of this method in a chemical model of cold interstellar cloud cores that includes both gas-phase and surface chemistry. The surface chemical network consists of a small number of diffusive reactions that can produce molecular oxygen, water, carbon dioxide, formaldehyde, methanol, and assorted radicals. METHOD: The simulation is started by running a gas-phase model including accretion onto grains but no surface chemistry or evaporation. The starting surface consists of either flat or rough olivine. We introduce the surface chemistry of the three species H, O, and CO in an iterative manner using our stochastic technique. Under the conditions of the simulation, only atomic hydrogen can evaporate to a significant extent. Althoug...
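The microscopic (site-by-site) Monte Carlo idea can be sketched for the simplest surface process, H + H -> H2 on a periodic lattice of adsorption sites. The lattice size, coverage, and the omission of accretion, desorption, and the O/CO chemistry are illustrative simplifications, not the paper's olivine model:

```python
import random

def grain_surface_mc(n_sites=20, n_h=20, steps=5000, seed=5):
    """Microscopic MC sketch of H2 formation on a grain surface.

    H atoms hop between sites of an n_sites x n_sites periodic lattice;
    when a hop lands on an occupied site, the two atoms recombine to H2
    (Langmuir-Hinshelwood mechanism) and leave the surface.
    """
    rng = random.Random(seed)
    all_sites = [(i, j) for i in range(n_sites) for j in range(n_sites)]
    atoms = rng.sample(all_sites, n_h)      # distinct adsorption sites
    h2 = 0
    for _ in range(steps):
        if len(atoms) < 2:
            break
        k = rng.randrange(len(atoms))       # pick a random adatom
        i, j = atoms[k]
        di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        new = ((i + di) % n_sites, (j + dj) % n_sites)
        if new in atoms:                    # encounter -> recombine to H2
            atoms.remove(new)
            atoms.remove((i, j))
            h2 += 1
        else:
            atoms[k] = new                  # ordinary diffusive hop
    return h2, len(atoms)

h2, remaining = grain_surface_mc()
```

Because positions (not just populations) are tracked, this style of simulation naturally captures the discrete-occupancy and correlation effects that the rate-equation treatment averages away.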
Domin, D.; Braida, Benoit; Lester Jr., William A.
2008-05-30T23:59:59.000Z
This study explores the use of breathing orbital valence bond (BOVB) trial wave functions for diffusion Monte Carlo (DMC). The approach is applied to the computation of the carbon-hydrogen (C-H) bond dissociation energy (BDE) of acetylene. DMC with BOVB trial wave functions yields a C-H BDE of 132.4 {+-} 0.9 kcal/mol, which is in excellent accord with the recommended experimental value of 132.8 {+-} 0.7 kcal/mol. These values are to be compared with DMC results obtained with single determinant trial wave functions, using Hartree-Fock orbitals (137.5 {+-} 0.5 kcal/mol) and local spin density (LDA) Kohn-Sham orbitals (135.6 {+-} 0.5 kcal/mol).
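The fixed-node diffusion Monte Carlo technique named above can be illustrated with a toy model. The sketch below is hypothetical and unrelated to the BOVB trial wave functions of the study: it projects out the ground state of a 1D harmonic oscillator by random walks with stochastic branching, with the averaged reference energy converging toward the exact value of 0.5 (atomic units).

```python
import math
import random

def dmc_harmonic(n_walkers=500, n_steps=1200, dt=0.01, seed=1):
    """Toy diffusion Monte Carlo for the 1D harmonic oscillator, V(x) = x^2/2.

    Walkers diffuse by Gaussian steps of width sqrt(dt); a branching weight
    exp(-(V - E_ref)*dt) clones or kills walkers, steering the population
    toward the ground state. The averaged E_ref approaches the exact
    ground-state energy of 0.5 (atomic units).
    """
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_walkers)]
    e_ref, e_sum, n_avg = 0.5, 0.0, 0
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))           # free diffusion
            weight = math.exp(-(0.5 * x * x - e_ref) * dt)
            for _ in range(int(weight + rng.random())):  # stochastic rounding
                new.append(x)
        walkers = new or [0.0]                           # guard against extinction
        # gentle feedback keeps the population near its target size
        e_ref += 0.1 * math.log(n_walkers / len(walkers))
        if step >= n_steps // 2:                         # average after equilibration
            e_sum += e_ref
            n_avg += 1
    return e_sum / n_avg
```

Real DMC calculations like those in the abstract add importance sampling with a trial wave function and a fixed-node constraint; this sketch keeps only the diffuse-and-branch core of the algorithm.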
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
Pecchia, M.; D'Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy); Mazzantini, O. [Nucleoelectrica Argentina Sociedad Anonima NA-SA, Buenos Aires (Argentina)]
2012-07-01T23:59:59.000Z
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of an obliquely inserted control rod on the neutron flux in order to validate the RELAP5-3D{sup C}/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)
Zink, K., E-mail: klemens.zink@kmub.thm.de [Institute of Medical Physics and Radiation Protection (IMPS), University of Applied Sciences Giessen, Giessen D-35390, Germany and Department of Radiotherapy and Radiooncology, University Medical Center Giessen-Marburg, Marburg D-35043 (Germany); Czarnecki, D.; Voigts-Rhetz, P. von [Institute of Medical Physics and Radiation Protection (IMPS), University of Applied Sciences Giessen, Giessen D-35390 (Germany); Looe, H. K. [Clinic for Radiation Therapy, Pius-Hospital, Oldenburg D-26129, Germany and WG Medical Radiation Physics, Carl von Ossietzky University, Oldenburg D-26129 (Germany); Harder, D. [Prof. em., Medical Physics and Biophysics, Georg August University, Göttingen D-37073 (Germany)
2014-11-01T23:59:59.000Z
Purpose: The electron fluence inside a parallel-plate ionization chamber positioned in a water phantom and exposed to a clinical electron beam deviates from the unperturbed fluence in water in the absence of the chamber. One reason for the fluence perturbation is the well-known “inscattering effect,” whose physical cause is the lack of electron scattering in the gas-filled cavity. Correction factors for this effect have long been recommended. However, more recent Monte Carlo calculations have led to some doubt about the range of validity of these corrections. Therefore, the aim of the present study is to reanalyze the development of the fluence perturbation with depth and to review the function of the guard rings. Methods: Spatially resolved Monte Carlo simulations of the dose profiles within gas-filled cavities with various radii in clinical electron beams have been performed in order to determine the radial variation of the fluence perturbation in a coin-shaped cavity, to study the influences of the radius of the collecting electrode and of the width of the guard ring upon the indicated value of the ionization chamber formed by the cavity, and to investigate the development of the perturbation as a function of the depth in an electron-irradiated phantom. The simulations were performed for a primary electron energy of 6 MeV. Results: The Monte Carlo simulations clearly demonstrated a surprisingly large in- and outward electron transport across the lateral cavity boundary. This results in a strong influence of the depth-dependent development of the electron field in the surrounding medium upon the chamber reading. In the buildup region of the depth-dose curve, the in–out balance of the electron fluence is positive and shows the well-known dose oscillation near the cavity/water boundary. At the depth of the dose maximum the in–out balance is equilibrated, and in the falling part of the depth-dose curve it is negative, as shown here for the first time. 
The influences of both the collecting electrode radius and the width of the guard ring reflect the deep radial penetration of the electron transport processes into the gas-filled cavities and the need for appropriate corrections of the chamber reading. New values for these corrections have been established in two forms, one converting the indicated value into the absorbed dose to water in the front plane of the chamber, the other converting it into the absorbed dose to water at the depth of the effective point of measurement of the chamber. In the Appendix, the in–out imbalance of electron transport across the lateral cavity boundary is demonstrated in the approximation of classical small-angle multiple scattering theory. Conclusions: The in–out electron transport imbalance at the lateral boundaries of parallel-plate chambers in electron beams has been studied with Monte Carlo simulation over a range of depths in water, and new correction factors, covering all depths and implementing the effective point of measurement concept, have been developed.
Bznuni, S A; Zhamkochyan, V M; Polanski, A; Sosnin, A N; Khudaverdyan, A H
2001-01-01T23:59:59.000Z
Parameters are presented for a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder reactor (MSBR) core, and a booster reactor analogous to the core of the BN-350 liquid-metal-cooled fast breeder reactor (LMFBR). It is shown by means of Monte Carlo modeling that the reactor under study provides safe operation modes (k_{eff}=0.94-0.98), is capable of effectively transmuting radioactive nuclear waste, and reduces by an order of magnitude the requirements on the accelerator beam current. Calculations show that the maximal neutron flux in the thermal zone is 10^{14} cm^{-2}\cdot s^{-1} and in the fast booster zone is 5.12\cdot 10^{15} cm^{-2}\cdot s^{-1} at k_{eff}=0.98 and a proton beam current I=2.1 mA.
Morozov, Alexey A., E-mail: morozov@itp.nsc.ru [Institute of Thermophysics SB RAS, 1 Lavrentyev Ave., 630090 Novosibirsk (Russian Federation)
2013-12-21T23:59:59.000Z
A theoretical study of the time-of-flight (TOF) distributions under pulsed laser evaporation in vacuum has been performed. A database of TOF distributions has been calculated by the direct simulation Monte Carlo (DSMC) method. It is shown that describing experimental TOF signals using the calculated TOF database, combined with a simple analysis of evaporation, allows the irradiated surface temperature and the rate of evaporation to be determined. Analysis of experimental TOF distributions under laser ablation of niobium, copper, and graphite has been performed, with the evaluated surface temperature agreeing well with the results of the thermal model calculations. General empirical dependences are proposed that allow identifying the regime of laser-induced thermal ablation from the TOF distributions for neutral particles without invoking the DSMC-calculated database.
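The connection between surface temperature and TOF signal can be illustrated with a deliberately simplified sketch. The model below is collisionless (it ignores exactly the plume collisions that the DSMC database of the abstract accounts for) and uses an assumed source-detector distance; it only shows that hotter surfaces produce shorter flight times.

```python
import math
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_tof(temp_k, mass_kg, dist_m=0.1, n=20000, seed=9):
    """Mean arrival time at a detector `dist_m` from the source for a
    collisionless Maxwell-Boltzmann vapor at temperature `temp_k`.

    Each particle's speed is the norm of three Gaussian velocity components;
    its flight time is distance over speed (no collisions, no flow velocity --
    a deliberate simplification of a real ablation plume).
    """
    rng = random.Random(seed)
    sigma = math.sqrt(K_B * temp_k / mass_kg)   # per-axis thermal speed, m/s
    total = 0.0
    for _ in range(n):
        v = math.sqrt(sum(rng.gauss(0.0, sigma) ** 2 for _ in range(3)))
        total += dist_m / v                     # arrival time at the detector
    return total / n

NB_MASS = 92.9 * 1.66054e-27  # mass of a niobium atom, kg
# hotter surface -> faster atoms -> shorter mean flight time
print(mean_tof(5000, NB_MASS) < mean_tof(2000, NB_MASS))  # prints True
```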
Morris, M. F. [Motorola, Mesa, Arizona 85202 (United States); Tian, S. [Avante, Fremont, California 94538 (United States); Chen, Y.; Tasch, A. [Department of Electrical and Computer Engineering, University of Texas, Austin, Texas 78723 (United States); Baumann, S. [Evans Texas, Round Rock, Texas 78681 (United States); Kirchhoff, J. F. [Charles Evans and Assoc., California 94603 (United States); Hummel, R. [Department of Materials Science and Engineering, University of Florida, Gainesville, Florida 32611 (United States); Prussin, S. [Electrical Engineering Department, UCLA, Los Angeles, California, 90024 (United States); Kamenitsa, D. [Eaton Corporation, Austin, Texas 78717 (United States); Jackson, J. [Eaton Corporation, Beverly, Massachusetts 01915 (United States)
1999-06-10T23:59:59.000Z
The Monte Carlo ion implant simulator UT-MARLOWE has previously been verified using a large array of Secondary Ion Mass Spectroscopy (SIMS) data ({approx}200 profiles per ion species) (1). A model has recently been developed (1) to explicitly simulate defect production, diffusion, and their interactions during the picosecond 'defect production stage' of ion implantation. In order to thoroughly validate this model, both SIMS and various damage measurements were obtained (primarily channeling-Rutherford Backscattering Spectroscopy, Differential Reflectometry, and Tapered Groove Profilometry, supported by SEM and XTEM data). In general, the data from the various experimental techniques were consistent, and the Kinetic Accumulation Damage Model (KADM) was developed and validated using these data. This paper discusses the gathering of damage data in conjunction with SIMS in support of the development of an ion implantation simulator.
Paolini, Stefano; Ancilotto, Francesco; Toigo, Flavio [Dipartimento di Fisica 'G. Galilei', Universita' di Padova, via Marzolo 8, I-35131 Padova, Italy and CNR-INFM-DEMOCRITOS National Simulation Center, Trieste (Italy)
2007-03-28T23:59:59.000Z
The local order around alkali (Li{sup +} and Na{sup +}) and alkaline-earth (Be{sup +}, Mg{sup +}, and Ca{sup +}) ions in {sup 4}He clusters has been studied using ground-state path integral Monte Carlo calculations. The authors apply a criterion based on multipole dynamical correlations to discriminate between solidlike and liquidlike behaviors of the {sup 4}He shells coating the ions. As it was earlier suggested by experimental measurements in bulk {sup 4}He, their findings indicate that Be{sup +} produces a solidlike ('snowball') structure, similar to alkali ions and in contrast to the more liquidlike {sup 4}He structure embedding heavier alkaline-earth ions.
Monte Carlo study for optimal conditions in single-shot imaging with femtosecond x-ray laser pulses
Park, Jaehyun; Ishikawa, Tetsuya; Song, Changyong [RIKEN SPring-8 Center, 1-1-1 Kouto, Sayo, Hyogo 679-5148 (Japan)] [RIKEN SPring-8 Center, 1-1-1 Kouto, Sayo, Hyogo 679-5148 (Japan); Joti, Yasumasa [Japan Synchrotron Radiation Research Institute, 1-1-1 Kouto, Sayo, Hyogo 679-5198 (Japan)] [Japan Synchrotron Radiation Research Institute, 1-1-1 Kouto, Sayo, Hyogo 679-5198 (Japan)
2013-12-23T23:59:59.000Z
Intense x-ray pulses from x-ray free electron lasers (XFELs) enable the unveiling of atomic structure in material and biological specimens via ultrafast single-shot exposures. As the radiation is intense enough to destroy the sample, a new sample must be provided for each x-ray pulse. These single-particle delivery schemes require careful optimization, though systematic study to find such optimal conditions has been lacking. We have investigated two major single-particle delivery methods: particle injection as flying objects and membrane-mounting as fixed targets. Monte Carlo simulations were used to search for the optimal experimental parameters, showing that the maximum achievable single-particle hit rate is close to 40%.
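The kind of hit-rate optimization described above can be sketched with a toy geometric Monte Carlo. All parameters below (beam focus, jet radius, occupancy) are hypothetical and not those of the study; the point is only how a hit rate falls out of sampled particle positions.

```python
import math
import random

def hit_rate(n_pulses=200_000, beam_fwhm=1.0, jet_radius=2.0, occupancy=0.8, seed=7):
    """Toy Monte Carlo of the single-shot hit rate for particle injection.

    Illustrative sketch only: with probability `occupancy` a single particle
    sits at a uniformly random point of the jet cross-section (a disc of
    radius `jet_radius`); a 'hit' requires it inside the beam focus (FWHM/2).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_pulses):
        if rng.random() > occupancy:               # empty pulse: no particle
            continue
        r = jet_radius * math.sqrt(rng.random())   # uniform over the disc
        if r < beam_fwhm / 2:
            hits += 1
    return hits / n_pulses

# analytic check: occupancy * (FWHM/2 / jet_radius)^2 = 0.8 * 0.0625 = 0.05
print(round(hit_rate(), 3))  # close to the analytic 0.05
```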
Structure of Cu64.5Zr35.5 metallic glass by reverse Monte Carlo simulations
Fang, Xikui W. [Ames Laboratory; Huang, Li [Ames Laboratory; Wang, Cai-Zhuang [Ames Laboratory; Ho, Kai-Ming [Ames Laboratory; Ding, Z. J. [University of Science and Technology of China
2014-02-07T23:59:59.000Z
Reverse Monte Carlo (RMC) simulations have been widely used to generate three-dimensional (3D) atomistic models of glass systems. To examine the reliability of the method for metallic glass, we use RMC to predict the atomic configurations of a “known” structure from molecular dynamics (MD) simulations, and then compare the structure obtained from RMC with the target structure from MD. We show that when the structure factors and partial pair correlation functions from the MD simulations are used as inputs for RMC simulations, the 3D atomistic structure of the glass obtained from RMC gives short- and medium-range order in good agreement with that of the target structure from the MD simulation. These results suggest that the 3D atomistic structure of metallic glass alloys can be reasonably well reproduced by the RMC method with a proper choice of input constraints.
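The RMC idea itself is compact enough to sketch. The toy below works in a hypothetical 1D periodic system rather than a 3D alloy, but it follows the same validation pattern as the abstract: build the "experimental" observable from a known configuration, then let RMC drive a random configuration toward it.

```python
import math
import random

def pair_hist(xs, box, bins):
    """Histogram of periodic pair distances -- the 'experimental' observable."""
    h = [0] * bins
    half = box / 2
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            d = abs(xs[i] - xs[j])
            d = min(d, box - d)                  # minimum-image convention
            h[min(int(d / half * bins), bins - 1)] += 1
    return h

def chi2(h, target):
    return sum((a - b) ** 2 for a, b in zip(h, target))

def rmc_fit(target, n_atoms=30, box=10.0, bins=10, n_moves=3000, step=0.5, seed=3):
    """Toy 1D reverse Monte Carlo: single-atom trial displacements are accepted
    if they lower the misfit to `target`, or with probability exp(-d_chi2/2)
    otherwise (the RMC analogue of a Metropolis test)."""
    rng = random.Random(seed)
    xs = [rng.uniform(0, box) for _ in range(n_atoms)]
    cur = chi2(pair_hist(xs, box, bins), target)
    for _ in range(n_moves):
        i = rng.randrange(n_atoms)
        old = xs[i]
        xs[i] = (old + rng.uniform(-step, step)) % box
        new = chi2(pair_hist(xs, box, bins), target)
        if new < cur or rng.random() < math.exp((cur - new) / 2.0):
            cur = new                            # accept the move
        else:
            xs[i] = old                          # reject: restore the atom
    return xs, cur

# mimic the paper's validation: build the target from a 'known' configuration,
# then check that RMC drives the misfit down toward it
_rng = random.Random(1)
known = [_rng.uniform(0, 10.0) for _ in range(30)]
target = pair_hist(known, 10.0, 10)
fitted, misfit = rmc_fit(target)
```

Production RMC codes fit measured structure factors with experimental error bars in the acceptance rule and impose additional constraints (density, coordination), which is the "proper choice of input constraints" the abstract refers to.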
Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K. [Department of Diagnostic and Interventional Imaging, University of Texas Medical School at Houston, Houston, Texas 77030 (United States); Brateman, Libby F. [Department of Radiology, University of Florida, Gainesville, Florida 32610 (United States)
2014-11-01T23:59:59.000Z
Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28–0.51, with absolute differences averaging about 0.01. 
The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the physical measurements published previously and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of a scatter-correction software program can be simplified by ignoring the variations in density among breast tissues.
Böcklin, Christoph, E-mail: boecklic@ethz.ch; Baumann, Dirk; Fröhlich, Jürg [Institute of Electromagnetic Fields, ETH Zurich, 8092 Zurich (Switzerland)
2014-02-14T23:59:59.000Z
A novel way to attain three dimensional fluence rate maps from Monte-Carlo simulations of photon propagation is presented in this work. The propagation of light in a turbid medium is described by the radiative transfer equation and formulated in terms of radiance. For many applications, particularly in biomedical optics, the fluence rate is a more useful quantity and directly derived from the radiance by integrating over all directions. Contrary to the usual way which calculates the fluence rate from absorbed photon power, the fluence rate in this work is directly calculated from the photon packet trajectory. The voxel based algorithm works in arbitrary geometries and material distributions. It is shown that the new algorithm is more efficient and also works in materials with a low or even zero absorption coefficient. The capabilities of the new algorithm are demonstrated on a curved layered structure, where a non-scattering, non-absorbing layer is sandwiched between two highly scattering layers.
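The distinction the abstract draws (tallying path length rather than absorbed power) can be shown in a few lines. The sketch below is a hypothetical 2D toy, not the paper's voxel algorithm: it deposits each free-flight segment into the voxel containing its midpoint instead of clipping segments against voxel boundaries, but it already exhibits the key property that the tally stays finite and meaningful when absorption is zero.

```python
import math
import random

def fluence_map(n_photons=5000, mu_s=2.0, mu_a=0.0, size=10, seed=5):
    """Track-length fluence estimator on a 2D voxel grid (a sketch only).

    Each free-flight segment's length is tallied into the voxel containing
    its midpoint -- a crude approximation; a production code clips the
    segment against every voxel it crosses. Because the tally is path length
    rather than absorbed energy, it remains well defined even for mu_a = 0.
    Assumes mu_s > 0.
    """
    rng = random.Random(seed)
    tally = [[0.0] * size for _ in range(size)]
    mu_t = mu_s + mu_a
    for _ in range(n_photons):
        x, y = size / 2, size / 2                       # launch at the centre
        theta = rng.uniform(0, 2 * math.pi)
        while True:
            step = -math.log(1.0 - rng.random()) / mu_t  # exponential free path
            nx = x + step * math.cos(theta)
            ny = y + step * math.sin(theta)
            mx, my = (x + nx) / 2, (y + ny) / 2
            if 0 <= mx < size and 0 <= my < size:
                tally[int(my)][int(mx)] += step         # track-length tally
            if not (0 <= nx < size and 0 <= ny < size):
                break                                   # photon left the grid
            if rng.random() < mu_a / mu_t:
                break                                   # absorbed
            x, y = nx, ny
            theta = rng.uniform(0, 2 * math.pi)         # isotropic scatter
    # normalise to per-photon path length per unit voxel area
    return [[t / n_photons for t in row] for row in tally]
```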
Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes
Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.
2002-09-11T23:59:59.000Z
The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C, and PITS, for evaluating dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder, equivalent for the three codes, but with different internal scoring geometries: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with hollow-cylinder scoring volumes was initially selected for PENELOPE and MCNP because it better represents the actual shape and dimensions of a cell and improves computer-time efficiency compared to spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may fall in the space between spheres, outside the spherical scoring volumes. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time for this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation even with such small geometries and low energies, far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes, and microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.
Sadeghi, Mahdi; Taghdiri, Fatemeh; Hamed Hosseini, S.; Tenreiro, Claudio [Agricultural, Medical and Industrial School, P.O. Box 31485-498, Karaj (Iran, Islamic Republic of); Engineering Faculty, Research and Science Campus, Islamic Azad University, Tehran (Iran, Islamic Republic of); Department of Energy Science, SungKyunKwan University, 300 Cheoncheon-dong, Suwon (Korea, Republic of)
2010-10-15T23:59:59.000Z
Purpose: The formalism recommended by Task Group 60 (TG-60) of the American Association of Physicists in Medicine (AAPM) is applicable for {beta} sources. Radioactive biocompatible and biodegradable {sup 153}Sm glass seed without encapsulation is a {beta}{sup -} emitter radionuclide with a short half-life and delivers a high dose rate to the tumor in the millimeter range. This study presents the results of Monte Carlo calculations of the dosimetric parameters for the {sup 153}Sm brachytherapy source. Methods: Version 5 of the (MCNP) Monte Carlo radiation transport code was used to calculate two-dimensional dose distributions around the source. The dosimetric parameters of AAPM TG-60 recommendations including the reference dose rate, the radial dose function, the anisotropy function, and the one-dimensional anisotropy function were obtained. Results: The dose rate value at the reference point was estimated to be 9.21{+-}0.6 cGy h{sup -1} {mu}Ci{sup -1}. Due to the low energy beta emitted from {sup 153}Sm sources, the dose fall-off profile is sharper than the other beta emitter sources. The calculated dosimetric parameters in this study are compared to several beta and photon emitting seeds. Conclusions: The results show the advantage of the {sup 153}Sm source in comparison with the other sources because of the rapid dose fall-off of beta ray and high dose rate at the short distances of the seed. The results would be helpful in the development of the radioactive implants using {sup 153}Sm seeds for the brachytherapy treatment.
Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu [Department of Biomedical Engineering, University of Kentucky, Lexington, Kentucky 40506 (United States); Li, Ting [Department of Biomedical Engineering, University of Kentucky, Lexington, Kentucky 40506 (United States); State Key Laboratory for Electronic Thin Film and Integrated Device, University of Electronic Science and Technology of China, Chengdu 610054 (China); Chen, Lei; Toborek, Michal [Department of Neurosurgery, University of Kentucky, Lexington, Kentucky 40536 (United States)
2014-05-12T23:59:59.000Z
The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI ({alpha}D{sub B}) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of {alpha}D{sub B}. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, overcoming the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo stroke model of mouse. Computer simulations show that the high-order (N >= 5) linear algorithm was more accurate in extracting {alpha}D{sub B} (errors...
Paris-Sud XI, UniversitÃ© de
Goal Tree Success Tree - Dynamic Master Logic Diagram and Monte Carlo Simulation for the Safety ... produced by an earthquake and its aftershocks (the external events) on a nuclear power plant (the critical plant) embedded in the connected power and water distribution and transportation networks which support ...
Meirovitch, Hagai
Lower and upper bounds for the absolute free energy by the hypothetical scanning Monte Carlo method
The hypothetical scanning (HS) method is a general approach for calculating the absolute entropy S and free energy F ... to provide the free energy through the analysis of a single configuration. © 2004 American Institute
Gu, Heng
2010-01-14T23:59:59.000Z
by winding continuous fiber on a spinning mold (mandrel) with high-speed, computer-programmed precision, injecting liquid resin as the part is formed, and then curing the resin. This process is used for forming rocket motor casings and spherical containers...
2.1 Percolation theory and Monte Carlo method
2.2 Model generation
2.3 Connection criteria...
MONTE CARLO SIMULATIONS OF SMALL H2SO4-H2O CLUSTERS
B. N. Hale and S. M. Kathmann
Hale, Barbara N.
Abstract - Small binary clusters of water and sulfuric acid are simulated with effective atom-atom pair potentials ... rain, and ozone depletion mechanisms involving sulfuric acid tetrahydrate (SAT) ice ... for the free energy differences are given. Keywords - Monte Carlo, binary nucleation, sulfuric acid and water
Nabben, Reinhard
Light-induced oxygen-ordering dynamics in (Y,Pr)Ba2Cu3O6.7: A Raman spectroscopy and Monte Carlo study ... energy barrier which impedes oxygen movement in the plane unless the oxygen atoms are excited by light ... oxygen reordering in the chain plane being at the origin of Raman photobleaching and related effects.
Phase Behavior of the Restricted Primitive Model and Square-Well Fluids from Monte Carlo Simulations
School of Chemical Engineering, Cornell University, Ithaca, NY 14853-5201, and Institute for Physical Science and Technology and Department of Chemical Engineering, University of Maryland, College Park, MD 20742
Glyde, Henry R.
Bose-Einstein condensation in trapped bosons: A variational Monte Carlo analysis
J. L. DuBois and H. R. Glyde
... describes the whole gas well. Effects of atoms excited above the condensate have been incorporated ... correlations is used to study the sensitivity of condensate and noncondensate properties to the hard-sphere ...
Glyde, Henry R.
Natural orbitals and Bose-Einstein condensates in traps: A diffusion Monte Carlo analysis
J. L. DuBois ...
... of the atoms in an ideal Bose gas can condense into a single quantum state. London postulated ... in harmonic traps over a wide range of densities. Bose-Einstein condensation is formulated using the one...
Optimal sampling efficiency in Monte Carlo sampling with an approximate potential
Coe, Joshua D [Los Alamos National Laboratory; Shaw, M Sam [Los Alamos National Laboratory; Sewell, Thomas D [U MISSOURI
2009-01-01T23:59:59.000Z
Building on the work of Iftimie et al., Boltzmann sampling of an approximate potential (the 'reference' system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is evaluated at a higher level of approximation (the 'full' system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory (DFT) potentials are discussed.
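The composite-move construction described above can be sketched directly. The code below is a toy NVT stand-in, not the paper's isothermal-isobaric scheme, and the two potentials are invented for illustration: a sub-chain samples the cheap reference potential, and the full potential is evaluated only at the endpoints of each excursion.

```python
import math
import random

def u_full(x):
    """Stand-in 'full' potential -- expensive in a real application."""
    return 0.5 * x * x + 0.1 * x ** 4

def u_ref(x):
    """Cheap 'reference' approximation to u_full."""
    return 0.5 * x * x

def nested_mc(n_composite=3000, k_sub=25, step=0.5, temp=1.0, seed=11):
    """Sketch of nested Markov-chain Monte Carlo with an approximate potential.

    A sub-chain of `k_sub` ordinary Metropolis steps runs on the reference
    potential; the whole excursion is then accepted at the full level with the
    composite criterion min(1, exp(-(dU_full - dU_ref)/kT)), so u_full is
    evaluated only once per k_sub cheap steps.
    """
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_composite):
        y = x
        for _ in range(k_sub):                   # cheap reference sub-chain
            t = y + rng.uniform(-step, step)
            if rng.random() < math.exp(min(0.0, -(u_ref(t) - u_ref(y)) / temp)):
                y = t
        d_full = u_full(y) - u_full(x)           # one expensive evaluation
        d_ref = u_ref(y) - u_ref(x)
        if rng.random() < math.exp(min(0.0, -(d_full - d_ref) / temp)):
            x = y                                # accept the composite move
        samples.append(x)
    return samples
```

The paper's refinement, shifting the reference system's pressure and temperature to maximize the composite acceptance probability, would enter here as modified parameters of the sub-chain only; the composite criterion is unchanged.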
Monte Carlo Fundamentals
F. B. Brown and T. M. Sutton
Office of Scientific and Technical Information (OSTI)
Chandler, David [ORNL; Maldonado, G Ivan [ORNL; Primm, Trent [ORNL
2010-01-01T23:59:59.000Z
The purpose of this study is to validate a Monte Carlo based depletion methodology by comparing calculated post-irradiation uranium isotopic compositions in the fuel elements of the High Flux Isotope Reactor (HFIR) core to values measured using uranium mass-spectrographic analysis. Three fuel plates were analyzed: two from the outer fuel element (OFE) and one from the inner fuel element (IFE). Fuel plates O-111-8, O-350-1, and I-417-24 from outer fuel elements 5-O and 21-O and inner fuel element 49-I, respectively, were selected for examination. Fuel elements 5-O, 21-O, and 49-I were loaded into HFIR during cycles 4, 16, and 35, respectively (mid to late 1960s). Approximately one year after irradiation, each element was transferred to the High Radiation Level Examination Laboratory (HRLEL), where samples from these fuel plates were sectioned and examined via uranium mass-spectrographic analysis. The isotopic composition of each of the samples was used to determine the atomic percent of the uranium isotopes. A Monte Carlo based depletion computer program, ALEPH, which couples the MCNP and ORIGEN codes, was utilized to calculate the nuclide inventory at the end-of-cycle (EOC). A current ALEPH/MCNP input for HFIR fuel cycle 400 was modified to replicate cycles 4, 16, and 35. The control element withdrawal curves and flux trap loadings were revised, as well as the radial zone boundaries and nuclide concentrations in the MCNP model. The calculated EOC uranium isotopic compositions for the analyzed plates were found to be in good agreement with measurements, which reveals that ALEPH/MCNP can accurately calculate burn-up dependent uranium isotopic concentrations for the HFIR core. The spatial power distribution in HFIR changes significantly as irradiation time increases due to control element movement. 
Accurate calculation of the end-of-life uranium isotopic inventory is a good indicator that the power distribution variation as a function of space and time is accurately calculated, i.e. an integral check. Hence, the time dependent heat generation source terms needed for reactor core thermal hydraulic analysis, if derived from this methodology, have been shown to be accurate for highly enriched uranium (HEU) fuel.
Chibani, Omar, E-mail: omar.chibani@fccc.edu; C-M Ma, Charlie [Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States)] [Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States)
2014-05-15T23:59:59.000Z
Purpose: To present a new accelerated Monte Carlo code for CT-based dose calculations in high dose rate (HDR) brachytherapy. The new code (HDRMC) accounts for both tissue and nontissue heterogeneities (applicator and contrast medium). Methods: HDRMC uses a fast ray-tracing technique and detailed physics algorithms to transport photons through a 3D mesh of voxels representing the patient anatomy with applicator and contrast medium included. A precalculated phase space file for the {sup 192}Ir source is used as the source term. HDRMC is calibrated to calculate absolute dose for real plans. A postprocessing technique is used to include the exact density and composition of nontissue heterogeneities in the 3D phantom. Dwell positions and angular orientations of the source are reconstructed using data from the treatment planning system (TPS). Structure contours are also imported from the TPS to recalculate dose-volume histograms. Results: HDRMC was first benchmarked against the MCNP5 code for a single source in homogeneous water and for a loaded gynecologic applicator in water. The accuracy of the voxel-based applicator model used in HDRMC was also verified by comparing 3D dose distributions and dose-volume parameters obtained using 1-mm{sup 3} versus 2-mm{sup 3} phantom resolutions. HDRMC can calculate the 3D dose distribution for a typical HDR cervix case with 2-mm resolution in 5 min on a single CPU. Examples of heterogeneity effects for two clinical cases (cervix and esophagus) were demonstrated using HDRMC. The neglect of tissue heterogeneity for the esophageal case leads to overestimates of CTV D90, CTV D100, and spinal cord maximum dose by 3.2%, 3.9%, and 3.6%, respectively. Conclusions: A fast Monte Carlo code for CT-based dose calculations which does not require a prebuilt applicator model is developed for those HDR brachytherapy treatments that use CT-compatible applicators. 
Tissue and nontissue heterogeneities should be taken into account in modern HDR brachytherapy planning.
Reverse Monte Carlo simulation of Se{sub 80}Te{sub 20} and Se{sub 80}Te{sub 15}Sb{sub 5} glasses
Abdel-Baset, A. M.; Rashad, M. [Physics Department, Faculty of Science , Assiut University, Assiut, P.O. Box 71516 (Egypt); Moharram, A. H. [Faculty of Science, King Abdul Aziz Univ., Rabigh Branch, P.O. Box 433 (Saudi Arabia)
2013-12-16T23:59:59.000Z
The total pair distribution function g(r) was determined for Se{sub 80}Te{sub 20} and Se{sub 80}Te{sub 15}Sb{sub 5} alloys and then used to assemble three-dimensional atomic configurations using the reverse Monte Carlo simulation. The partial pair distribution functions g{sub ij}(r) indicate that the basic structural unit in the Se{sub 80}Te{sub 15}Sb{sub 5} glass is a di-antimony tri-selenide unit connected through Se-Se and Se-Te chains. The structure of the Se{sub 80}Te{sub 20} alloy consists of Se-Te and Se-Se chains in addition to some rings of Se atoms.
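The reverse Monte Carlo procedure used above can be illustrated with a minimal sketch: atoms are moved at random, and a move is accepted whenever it reduces the chi-squared disagreement between the computed and target pair histograms (uphill moves are accepted with probability exp(-delta/2)). The box size, atom count, move width, and "experimental" target below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed): a small periodic box of N atoms.
N, L = 64, 10.0
bins = np.linspace(0.5, 4.0, 36)          # r-histogram edges
pos = rng.uniform(0.0, L, size=(N, 3))

def pair_histogram(p):
    """Histogram of pair distances under the minimum-image convention."""
    d = p[:, None, :] - p[None, :, :]
    d -= L * np.round(d / L)              # periodic wrap
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(N, 1)]
    return np.histogram(r, bins=bins)[0].astype(float)

# Fabricated "experimental" target: the histogram of a reference configuration.
target = pair_histogram(rng.uniform(0.0, L, size=(N, 3)))
SIGMA = 3.0                               # assumed experimental uncertainty

def chi2(h):
    return float(((h - target) ** 2).sum()) / SIGMA ** 2

chi2_start = chi2(pair_histogram(pos))
cur = chi2_start
for _ in range(2000):
    i = rng.integers(N)
    trial = pos.copy()
    trial[i] = (trial[i] + rng.normal(0.0, 0.3, 3)) % L   # single-atom move
    new = chi2(pair_histogram(trial))
    # RMC acceptance: downhill moves always, uphill with exp(-delta/2).
    if new < cur or rng.random() < np.exp(-(new - cur) / 2.0):
        pos, cur = trial, new
```

In a real RMC refinement the target is a measured g(r) (or structure factor) and the moves are constrained by density and closest-approach distances; the acceptance rule is the same.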
Kuss, M.; Markel, T.; Kramer, W.
2011-01-01T23:59:59.000Z
Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis reviews the distribution transformer load models developed in the IEC 60076 standard and applies the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy and thus increasing costs to utilities and consumers. A Monte Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate is developed, based on the energy needs of plug-in vehicles loading a residential transformer.
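The kind of Monte Carlo sweep described can be sketched as follows. The relative ageing rate V = 2^((theta_h - 98)/6) is the IEC 60076-7 expression for non-thermally-upgraded paper; the transformer rating, base load, charging rate, and the crude steady-state hot-spot model (rise scaling with load^1.6) are illustrative assumptions, not parameters from the report.

```python
import random

random.seed(1)

# Illustrative parameters (assumed, not from the report).
RATED_KVA = 25.0          # transformer rating
BASE_LOAD_KVA = 12.0      # mean non-EV household load on this transformer
EV_CHARGE_KVA = 3.3       # per-vehicle charging rate
AMBIENT_C = 25.0
HOTSPOT_RISE_C = 80.0     # rated hot-spot rise over ambient

def hotspot(load_kva):
    """Crude steady-state hot-spot model: rise scales with load^1.6 (assumed)."""
    k = load_kva / RATED_KVA
    return AMBIENT_C + HOTSPOT_RISE_C * k ** 1.6

def aging_rate(theta_h):
    """IEC 60076-7 relative ageing rate, 98 C reference hot-spot temperature."""
    return 2.0 ** ((theta_h - 98.0) / 6.0)

def simulate(n_vehicles, n_scenarios=100):
    """Average relative ageing rate over random overnight charging scenarios."""
    total = 0.0
    for _ in range(n_scenarios):
        charging = sum(random.random() < 0.8 for _ in range(n_vehicles))
        load = random.gauss(BASE_LOAD_KVA, 2.0) + charging * EV_CHARGE_KVA
        total += aging_rate(hotspot(max(load, 0.0)))
    return total / n_scenarios
```

The key qualitative feature survives the simplifications: because the ageing rate doubles for every 6 C of hot-spot rise, a handful of simultaneously charging vehicles can raise the expected ageing rate by orders of magnitude.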
An OpenCL-based Monte Carlo dose calculation engine (oclMC) for coupled photon-electron transport
Tian, Zhen; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-01-01T23:59:59.000Z
The Monte Carlo (MC) method is widely recognized as the most accurate dose calculation method for radiotherapy. However, its extremely long computation time impedes clinical applications. Recently, considerable effort has been made to realize fast MC dose calculation on GPUs. Nonetheless, most GPU-based MC dose engines were developed in the NVIDIA CUDA environment. This limits code portability to other platforms, hindering the introduction of GPU-based MC simulations into clinical practice. The objective of this paper is to develop a fast cross-platform MC dose engine, oclMC, using the OpenCL environment for external beam photon and electron radiotherapy in the MeV energy range. Coupled photon-electron MC simulation was implemented with analogue simulations for photon transport and a Class II condensed history scheme for electron transport. To test the accuracy and efficiency of our dose engine oclMC, we compared dose calculation results of oclMC and gDPM, our previously developed GPU-based MC code, for a 15 MeV electron ...
Yangsen Yao; S. Nan Zhang; Xiaoling Zhang; Yuxin Feng; Craig R. Robinson
2004-10-10T23:59:59.000Z
Understanding the properties of the hot corona is important for studying the accretion disks in black hole X-ray binary systems. Using the Monte-Carlo technique to simulate the inverse Compton scattering between photons emitted from the cold disk and electrons in the hot corona, we have produced two table models in the $XSPEC$ format for the spherical corona case and the disk-like (slab) corona case. All parameters in our table models are physical properties of the system and can be derived from data fitting directly. Applying the models to broad-band spectra of the black hole candidate XTE J2012+381 observed with BeppoSAX, we estimated the size of the corona and the inner radius of the disk. The size of the corona in this system is several tens of gravitational radii, and no substantial increase of the inner disk radius during the transition from the hard state to the soft state is found.
Wes Armour; Simon Hands; Costas Strouthos
2013-02-07T23:59:59.000Z
We formulate a model of N_f=4 flavors of relativistic fermion in 2+1d in the presence of a chemical potential mu coupled to two flavor doublets with opposite sign, akin to isospin chemical potential in QCD. This is argued to be an effective theory for low energy electronic excitations in bilayer graphene, in which an applied voltage between the layers ensures equal populations of particles on one layer and holes on the other. The model is then reformulated on a spacetime lattice using staggered fermions, and in the absence of a sign problem, simulated using an orthodox hybrid Monte Carlo algorithm. With the coupling strength chosen to be close to a quantum critical point believed to exist for N_f
Umbreit, Stefan; Rasio, Frederic A. [Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) and Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Rd., Evanston, IL 60208 (United States); Fregeau, John M. [Kavli Institute of Theoretical Physics, University of California, Santa Barbara, CA 93106 (United States); Chatterjee, Sourav, E-mail: s-umbreit@northwestern.edu [Department of Astronomy, University of Florida, 211 Bryant Space Science Center, Gainesville, FL (United States)
2012-05-01T23:59:59.000Z
We present results from a series of Monte Carlo (MC) simulations investigating the imprint of a central intermediate-mass black hole (IMBH) on the structure of a globular cluster. We investigate the three-dimensional and projected density profiles, and stellar disruption rates for idealized as well as realistic cluster models, taking into account a stellar mass spectrum and stellar evolution, and allowing for a larger, more realistic number of stars than was previously possible with direct N-body methods. We compare our results to other N-body and Fokker-Planck simulations published previously. We find, in general, very good agreement for the overall cluster structure and dynamical evolution between direct N-body simulations and our MC simulations. Significant differences exist in the number of stars that are tidally disrupted by the IMBH, and this is most likely caused by the wandering motion of the IMBH, not included in the MC scheme. These differences, however, are negligible for the final IMBH masses in realistic cluster models, as the disruption rates are generally much lower than for single-mass clusters. As a direct comparison to observations we construct a detailed model for the cluster NGC 5694, which is known to possess a central surface brightness cusp consistent with the presence of an IMBH. We find that not only the inner slope but also the outer part of the surface brightness profile agree well with observations. However, there is only a slight preference for models harboring an IMBH compared to models without.
Monte Carlo Studies of Identified Two-particle Correlations in p-p and Pb-Pb Collisions
G. Bencedi; G. G. Barnaföldi; L. Molnar
2014-03-21T23:59:59.000Z
Azimuthal particle correlations have been extensively studied in the past at various collider energies in p-p, p-A, and A-A collisions. Hadron-correlation measurements in heavy-ion collisions have mainly focused on studies of collective (flow) effects at low-$p_T$ and parton energy loss via jet quenching in the high-$p_T$ regime. This was usually done without event-by-event particle identification. In this paper, we present two-particle correlations with identified trigger hadrons and identified associated hadrons at mid-rapidity in Monte Carlo generated events. The primary purpose of this study was to investigate the effect of quantum number conservation and the flavour balance during parton fragmentation and hadronization. The simulated p-p events were generated with PYTHIA 6.4 with the Perugia-0 tune at $\\sqrt{s}=7$ TeV. HIJING was used to generate $0-10\\%$ central Pb-Pb events at $\\sqrt{s_{\\rm NN}}=2.76$ TeV. We found that the extracted identified associated hadron spectra for charged pion, kaon, and proton show identified trigger-hadron dependent splitting. Moreover, the identified trigger-hadron dependent correlation functions vary in different $p_T$ bins, which may show the presence of collective/nuclear effects.
Garrod, Robin T
2013-01-01T23:59:59.000Z
The first off-lattice Monte Carlo kinetics model of interstellar dust-grain surface chemistry is presented. The positions of all surface particles are determined explicitly, according to the local potential minima resulting from the pair-wise interactions of contiguous atoms and molecules, rather than by a pre-defined lattice structure. The model is capable of simulating chemical kinetics on any arbitrary dust-grain morphology, as determined by the user-defined positions of each individual dust-grain atom. A simple method is devised for the determination of the most likely diffusion pathways and their associated energy barriers for surface species. The model is applied to a small, idealized dust grain, adopting various gas densities and using a small chemical network. Hydrogen and oxygen atoms accrete onto the grain, to produce H2O, H2, O2 and H2O2. The off-lattice method allows the ice structure to evolve freely; ice mantle porosity is found to be dependent on the gas density, which controls the accretion ra...
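The event-selection core shared by lattice and off-lattice kinetic Monte Carlo models can be sketched as follows: each candidate hop is assigned an Arrhenius rate from its diffusion barrier, one event is chosen with probability proportional to its rate, and time advances by an exponentially distributed increment. The attempt frequency, grain temperature, and barrier values are illustrative assumptions, not the paper's fitted parameters.

```python
import math
import random

random.seed(2)

K_B = 8.617e-5            # Boltzmann constant, eV/K
NU = 1e12                 # attempt frequency, 1/s (assumed)
T = 10.0                  # grain temperature, K (typical cold-core value)

def rate(barrier_ev):
    """Arrhenius hopping rate for a diffusion barrier in eV."""
    return NU * math.exp(-barrier_ev / (K_B * T))

def kmc_step(barriers):
    """Rejection-free KMC step: pick one event with probability proportional
    to its rate and return (chosen index, time increment)."""
    rates = [rate(b) for b in barriers]
    total = sum(rates)
    x = random.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if x < acc:
            break
    dt = -math.log(random.random()) / total   # exponential waiting time
    return i, dt
```

In the off-lattice model the barrier list itself is recomputed after every move from the local potential minima, which is what distinguishes it from a fixed-lattice code; the selection step above is unchanged.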
Stoller, Roger E [ORNL; Golubov, Stanislav I [ORNL; Becquart, C. S. [Universite de Lille; Domain, C. [EDF R& D, Clamart, France
2006-09-01T23:59:59.000Z
The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory (RT), Monte Carlo (MC), or phase field models. These mesoscale models are appropriate for application to problems that involve intermediate length scales (μm to >mm), and timescales from diffusion (~μs) to long-term microstructural evolution (~years). Phenomena at this scale have the most direct impact on mechanical properties in structural materials of interest to nuclear energy systems, and are also the most accessible to direct comparison between the results of simulations and experiments. Recent advances in computational power have substantially expanded the range of application for MC models. Although the RT and MC models can be used to simulate the same phenomena, many of the details are handled quite differently in the two approaches. A direct comparison of the RT and MC descriptions has been made in the domain of point defect cluster dynamics modeling, which is relevant to both the nucleation and evolution of radiation-induced defect structures. The relative merits and limitations of the two approaches are discussed, and the predictions of the two approaches are compared for specific irradiation conditions.
Sarkadi, L
2015-01-01T23:59:59.000Z
The three-body dynamics of the ionization of atomic hydrogen by 30 keV antiproton impact has been investigated by calculating fully differential cross sections (FDCS) using the classical trajectory Monte Carlo (CTMC) method. The results of the calculations are compared with the predictions of quantum mechanical descriptions: the semi-classical time-dependent close-coupling theory, the fully quantal, time-independent close-coupling theory, and the continuum-distorted-wave-eikonal-initial-state model. In the analysis, particular emphasis was put on the role played by the nucleus-nucleus (NN) interaction in the ionization process. For low-energy electron ejection, CTMC predicts a large NN interaction effect on the FDCS, in agreement with the quantum mechanical descriptions. By examining individual particle trajectories it was found that the relative motion between the electron and the nuclei is coupled very weakly with that between the nuclei; consequently, the two motions can be treated independently. A simple ...
Lin, J. Y. Y. [California Institute of Technology, Pasadena] [California Institute of Technology, Pasadena; Aczel, Adam A [ORNL] [ORNL; Abernathy, Douglas L [ORNL] [ORNL; Nagler, Stephen E [ORNL] [ORNL; Buyers, W. J. L. [National Research Council of Canada] [National Research Council of Canada; Granroth, Garrett E [ORNL] [ORNL
2014-01-01T23:59:59.000Z
Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A.A. Aczel et al, Nature Communications 3, 1124 (2012)]. These modes are well described by 3D isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for the nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states (PDOS), and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximately Q-independent background to the spectrum at the oscillator mode positions. Temperature dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T-dependence of the scattering from these modes is strongly influenced by the uranium lattice.
Lopez, M. A.; Broggio, D.; Capello, K.; Cardenas-Mendez, E.; El-Faramawy, N.; Franck, D.; James, Anthony C.; Kramer, Gary H.; Lacerenza, G.; Lynch, Timothy P.; Navarro, J. F.; Navarro, T.; Perez, B.; Ruhm, W.; Tolmachev, Sergei Y.; Weitzenegger, E.
2011-03-01T23:59:59.000Z
United States Transuranium and Uranium Registries (USTUR) Case 0102 was the first whole-body donation to the USTUR (1979), of a worker affected by a substantial accidental 241Am intake(1). Half of this man’s skeleton, encased in tissue-equivalent plastic, provides a unique human ‘phantom’ for calibrating in vivo counting systems. In this case, the 241Am skeletal activity was measured 25 y after the intake. Approximately 82% of the 241Am remaining in the body was found in the bones and teeth. The 241Am activity concentration throughout the skeleton (in all types of bone) was fairly uniform(2). A protocol has been proposed by a group of in vivo laboratories from Europe [CIEMAT-Spain, IRSN-France and Helmholtz Zentrum München (HMGU)-Germany] and Canada (HML) participating in this DOS/USTUR intercomparison. The focus areas for the study included: (1) the efficiency pattern along the leg phantom using germanium detectors (experimental and computational), (2) the comparison of Monte Carlo (MC) results with experimental values for counting efficiency data, and (3) the influence of americium distribution in the bone material (volume or surface).
Kim, Jeongnim [ORNL] [ORNL; Reboredo, Fernando A [ORNL] [ORNL
2014-01-01T23:59:59.000Z
The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo J. Chem. Phys. {\bf 136}, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. {\bf 89}, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the one spanned by the lower energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with Quantum Monte Carlo.
Andrea Zen; Ye Luo; Sandro Sorella; Leonardo Guidoni
2013-09-02T23:59:59.000Z
Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with system size and their efficient parallelization, particularly suited to modern high performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely: the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal Power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets.
Belec, Jason; Ploquin, Nicolas; La Russa, Daniel J.; Clark, Brenda G. [Department of Medical Physics, Ottawa Hospital Cancer Centre, 501 Smyth Road, Box 927, Ottawa, Ontario K1H 8L6 (Canada) and Carleton University, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada)]
2011-02-15T23:59:59.000Z
Purpose: The commercial release of volumetric modulated arc therapy techniques using a conventional linear accelerator and the growing number of helical tomotherapy users have triggered renewed interest in dose verification methods, and also in tools for exploring the impact of machine tolerance and patient motion on dose distributions without the need to approximate time-varying parameters such as gantry position, MLC leaf motion, or patient motion. To this end we have developed a Monte Carlo-based calculation method capable of simulating a wide variety of treatment techniques without the need to resort to discretization approximations. Methods: The ability to perform complete position-probability-sampled Monte Carlo dose calculations was implemented in the BEAMnrc/DOSXYZnrc user codes of EGSnrc. The method includes full accelerator head simulations of our tomotherapy and Elekta linacs, and a realistic representation of continuous motion via the sampling of a time variable. The functionality of this algorithm was tested via comparisons with both measurements and treatment planning dose distributions for four types of treatment techniques: 3D conformal, step-and-shoot intensity modulated radiation therapy, helical tomotherapy, and volumetric modulated arc therapy. Results: For static fields, the absolute dose agreement between the EGSnrc Monte Carlo calculations and measurements is within 2%/1 mm. Absolute dose agreement between Monte Carlo calculations and the treatment planning system for the four different treatment techniques is within 3%/3 mm. Discrepancies with the tomotherapy TPS on the order of 10%/5 mm were observed for the extreme example of a small target located 15 cm off-axis and planned with a low modulation factor. The increase in simulation time associated with using position-probability sampling, as opposed to the discretization approach, was less than 2% in most cases.
Conclusions: A single Monte Carlo simulation method can be used to calculate patient dose distribution for various types of treatment techniques delivered with either tomotherapy or a conventional linac. The method simplifies the simulation process, improves dose calculation accuracy, and involves an acceptably small change in computation time.
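The position-probability sampling idea above can be sketched simply: instead of splitting an arc into static fields, each particle history draws a time fraction and the machine parameters are interpolated continuously at that instant. The control-point format and values below are hypothetical illustrations, not the actual BEAMnrc/DOSXYZnrc implementation.

```python
import random

random.seed(5)

# Hypothetical plan fragment: (cumulative MU fraction, gantry angle in degrees).
CONTROL_POINTS = [(0.0, 180.0), (0.5, 270.0), (1.0, 359.0)]

def gantry_angle(t):
    """Linearly interpolate the gantry angle at cumulative MU fraction t,
    so delivery is treated as truly continuous rather than discretized."""
    for (t0, a0), (t1, a1) in zip(CONTROL_POINTS, CONTROL_POINTS[1:]):
        if t <= t1:
            return a0 + (a1 - a0) * (t - t0) / (t1 - t0)
    return CONTROL_POINTS[-1][1]

def sample_history_angle():
    """Each history samples its own delivery time: t ~ U(0, 1) because dose
    rate is assumed uniform in MU; the source is then placed at gantry_angle(t)."""
    return gantry_angle(random.random())
```

Because every history sees a different instant of the delivery, the ensemble of histories reproduces the continuous motion exactly, which is why the paper reports almost no simulation-time penalty relative to the discretized approach.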
Gohar, Y.; Zhong, Z.; Talamo, A.; Nuclear Engineering Division
2009-06-09T23:59:59.000Z
Argonne National Laboratory (ANL) of USA and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukraine nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is {approx}375 kW including the fission power of {approx}260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during the operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during the operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. The recent developments of Monte Carlo computer codes, the high speed capability of computer processors, and parallel computation techniques have made it possible to perform three-dimensional detailed burnup simulations. A full detailed three-dimensional geometrical model is used for the burnup simulations with continuous energy nuclear data libraries for the transport calculations and 63-multigroup or one-group cross-section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study.
MCNPX transports the electrons and the produced neutrons and photons, but the current version of MCNPX doesn't support depletion/burnup calculations of a subcritical system with the neutron source generated from the target. MCB can perform neutron transport and burnup calculations for a subcritical system using an external neutron source; however, it cannot perform electron transport calculations. To solve this problem, a hybrid procedure was developed by coupling these two computer codes. A user tally subroutine of MCNPX was developed and utilized to record the information for each neutron generated by the photonuclear reactions resulting from the electron beam interactions. MCB reads the recorded information for each generated neutron through the user source subroutine. In this way, the neutron source generated by electron reactions can be utilized in MCB calculations without the need for MCB to transport the electrons. Using the source subroutines, MCB obtains the external neutron source prepared by MCNPX and performs the depletion calculation for the driven subcritical facility.
Q. Chang; H. M. Cuppen; E. Herbst
2007-05-24T23:59:59.000Z
AIM: We have recently developed a microscopic Monte Carlo approach to study surface chemistry on interstellar grains and the morphology of ice mantles. The method is designed to eliminate the problems inherent in the rate-equation formalism to surface chemistry. Here we report the first use of this method in a chemical model of cold interstellar cloud cores that includes both gas-phase and surface chemistry. The surface chemical network consists of a small number of diffusive reactions that can produce molecular oxygen, water, carbon dioxide, formaldehyde, methanol and assorted radicals. METHOD: The simulation is started by running a gas-phase model including accretion onto grains but no surface chemistry or evaporation. The starting surface consists of either flat or rough olivine. We introduce the surface chemistry of the three species H, O and CO in an iterative manner using our stochastic technique. Under the conditions of the simulation, only atomic hydrogen can evaporate to a significant extent. Although it has little effect on other gas-phase species, the evaporation of atomic hydrogen changes its gas-phase abundance, which in turn changes the flux of atomic hydrogen onto grains. The effect on the surface chemistry is treated until convergence occurs. We neglect all non-thermal desorptive processes. RESULTS: We determine the mantle abundances of assorted molecules as a function of time through 2x10^5 yr. Our method also allows determination of the abundance of each molecule in specific monolayers. The mantle results can be compared with observations of water, carbon dioxide, carbon monoxide, and methanol ices in the sources W33A and Elias 16. Other than a slight underproduction of mantle CO, our results are in very good agreement with observations.
Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)
2012-07-01T23:59:59.000Z
The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPUs), one of the latest parallel computing techniques under development. Porting a regular transport code to a GPU is usually very straightforward due to the 'embarrassingly parallel' nature of MC code. However, the situation is different for eigenvalue calculations, which are performed on a generation-by-generation basis, so thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of {approx}2 on a dual-GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)
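The generation-by-generation structure that complicates GPU porting can be seen in a toy serial sketch: each generation of histories must finish and be reduced to a fission count before the next generation starts, which is the synchronization point the paper's thread coordination handles. The physics here is a deliberately minimal infinite-medium model (every neutron absorbed at its first collision, analytic k = NU * P_FISSION), not the paper's transport code.

```python
import random

random.seed(3)

# Toy infinite homogeneous medium (illustrative, not the paper's model):
# each neutron is absorbed at its first collision; with probability
# P_FISSION the absorption is a fission emitting NU new neutrons on average.
P_FISSION = 0.4
NU = 2.5                  # neutrons per fission; analytic k = NU * P_FISSION = 1.0

def run_generation(n_start):
    """Track one generation; return the number of fission neutrons produced."""
    produced = 0
    for _ in range(n_start):
        if random.random() < P_FISSION:
            # integer sampling of NU that preserves its mean
            produced += int(NU) + (random.random() < NU - int(NU))
    return produced

def estimate_k(n_per_gen=20000, generations=30, skip=5):
    """Power iteration: k for each generation is produced/started; the
    first `skip` (inactive) generations are discarded from the average."""
    ks = []
    for g in range(generations):
        k = run_generation(n_per_gen) / n_per_gen
        if g >= skip:
            ks.append(k)
    return sum(ks) / len(ks)
```

On a GPU each generation's histories map naturally onto threads, but the per-generation reduction and fission-bank rebuild force a device-wide synchronization between generations, which is exactly where the thread coordination discussed in the paper enters.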
Su, L.; Du, X.; Liu, T.; Xu, X. G. [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States)
2013-07-01T23:59:59.000Z
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energy considered ranges from a few hundred keV to 30 MeV. Moller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the Continuously Slowing Down Approximation (CSDA). Photon transport was dealt with using the delta tracking method. Photoelectric effect, Compton scattering and pair production were modeled. Voxelised geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well tested MC codes. Using six GPU cards, 6x10{sup 6} histories of electrons were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a CPU with a single core used. (authors)
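The delta tracking (Woodcock) method mentioned above for photon transport can be sketched in one dimension: free-flight distances are sampled everywhere with the majorant cross section, and candidate collisions are rejected as "virtual" with probability 1 - sigma(x)/sigma_majorant, so no voxel-boundary crossings ever need to be computed. The two-region slab and its cross sections are illustrative, not ARCHER's voxelized 3-D geometry.

```python
import math
import random

random.seed(4)

# Illustrative 1-D slab: region 0 on [0, 2) cm, region 1 on [2, 4) cm.
SIGMA = [0.2, 1.0]        # total macroscopic cross sections, 1/cm (assumed)
BOUNDARY, SLAB_END = 2.0, 4.0
SIGMA_MAJ = max(SIGMA)    # majorant used for every flight

def sigma_at(x):
    return SIGMA[0] if x < BOUNDARY else SIGMA[1]

def first_real_collision(x=0.0):
    """Depth of the first real collision, or None if the photon leaks out."""
    while True:
        x += -math.log(random.random()) / SIGMA_MAJ   # flight with majorant
        if x >= SLAB_END:
            return None
        if random.random() < sigma_at(x) / SIGMA_MAJ:
            return x      # real collision; otherwise virtual, keep flying

depths = [first_real_collision() for _ in range(20000)]
```

The rejection step makes the sampled collision density exact for the true sigma(x); analytically, the collision probability in region 0 is 1 - exp(-0.4) (about 0.33) and the leakage probability is exp(-2.4) (about 0.09), which the histogram of `depths` reproduces.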
Talamo, A.; Gohar, Y. (Nuclear Engineering Division) [Nuclear Engineering Division
2011-05-12T23:59:59.000Z
This study investigates the performance of the YALINA Booster subcritical assembly, located in Belarus, during operation with high (90%), medium (36%), and low (21%) enriched uranium fuels in the assembly's fast zone. The YALINA Booster is a zero-power, subcritical assembly driven by a conventional neutron generator. It was constructed for the purpose of investigating the static and dynamic neutronics properties of accelerator driven subcritical systems, and to serve as a fast neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor-actinides. The first part of this study analyzes the assembly's performance with several fuel types. The MCNPX and MONK Monte Carlo codes were used to determine effective and source neutron multiplication factors, effective delayed neutron fraction, prompt neutron lifetime, neutron flux profiles and spectra, and neutron reaction rates produced from the use of three neutron sources: californium, deuterium-deuterium, and deuterium-tritium. In the latter two cases, the external neutron source operates in pulsed mode. The results discussed in the first part of this report show that the use of low enriched fuel in the fast zone of the assembly diminishes neutron multiplication. Therefore, the discussion in the second part of the report focuses on finding alternative fuel loading configurations that enhance neutron multiplication while using low enriched uranium fuel. It was found that arranging the interface absorber between the fast and the thermal zones in a circular rather than a square array is an effective method of operating the YALINA Booster subcritical assembly without downgrading neutron multiplication relative to the original value obtained with the use of the high enriched uranium fuels in the fast zone.
Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division
2008-09-11T23:59:59.000Z
This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility, and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity, without any geometrical homogenization, using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, an extension of the MCNP code with burnup capability, because of its additional features for analyzing source-driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.
Muir, B. R., E-mail: bmuir@physics.carleton.ca; Rogers, D. W. O., E-mail: drogers@physics.carleton.ca [Physics Department, Carleton Laboratory for Radiotherapy Physics, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada)]
2013-12-15T23:59:59.000Z
Purpose: To investigate recommendations for reference dosimetry of electron beams and gradient effects for the NE2571 chamber and to provide beam quality conversion factors using Monte Carlo simulations of the PTW Roos and NE2571 ion chambers. Methods: The EGSnrc code system is used to calculate the absorbed dose-to-water and the dose to the gas in fully modeled ion chambers as a function of depth in water. Electron beams are modeled using realistic accelerator simulations as well as beams modeled as collimated point sources from realistic electron beam spectra or monoenergetic electrons. Beam quality conversion factors are calculated with ratios of the doses to water and to the air in the ion chamber in electron beams and a cobalt-60 reference field. The overall ion chamber correction factor is studied using calculations of water-to-air stopping power ratios. Results: The use of an effective point of measurement shift of 1.55 mm from the front face of the PTW Roos chamber, which places the point of measurement inside the chamber cavity, minimizes the difference between R{sub 50}, the beam quality specifier, calculated from chamber simulations compared to that obtained using depth-dose calculations in water. A similar shift minimizes the variation of the overall ion chamber correction factor with depth to the practical range and reduces the root-mean-square deviation of a fit to calculated beam quality conversion factors at the reference depth as a function of R{sub 50}. Similarly, an upstream shift of 0.34 r{sub cav} allows a more accurate determination of R{sub 50} from NE2571 chamber calculations and reduces the variation of the overall ion chamber correction factor with depth. The determination of the gradient correction using a shift of 0.22 r{sub cav} optimizes the root-mean-square deviation of a fit to calculated beam quality conversion factors if all beams investigated are considered.
However, if only clinical beams are considered, a good fit to results for beam quality conversion factors is obtained without explicitly correcting for gradient effects. The inadequacy of R{sub 50} to uniquely specify beam quality for the accurate selection of k{sub Q} factors is discussed. Systematic uncertainties in beam quality conversion factors are analyzed for the NE2571 chamber and amount to between 0.4% and 1.2% depending on assumptions used. Conclusions: The calculated beam quality conversion factors for the PTW Roos chamber obtained here are in good agreement with literature data. These results characterize the use of an NE2571 ion chamber for reference dosimetry of electron beams even in low-energy beams.
Lane, R. A.; Ordonez, C. A. [Department of Physics, University of North Texas, Denton, Texas (United States)
2013-04-19T23:59:59.000Z
A computational tool is described that can be used for designing magnetic focusing or defocusing systems. A fully three-dimensional classical trajectory Monte Carlo simulation has been developed. Ion trajectories are simulated in the presence of magnetic elements that can be modeled as any combination of current loops and current lines. Each current loop or line may be located anywhere in the system and oriented along any of the three coordinate axes. The configuration need not be axisymmetric. The solutions are obtained using normalized parameters, which can be used for easily scaling the results. Examples are provided of the utility of the code.
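A minimal illustration of the kind of trajectory integration such a tool performs: the Boris rotation advancing an ion in the analytic field of a single infinite current line along z. The actual code handles arbitrary combinations of loops and lines with normalized parameters; the current, charge-to-mass ratio, and initial conditions here are purely illustrative.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (SI)

def b_line(pos, current=1.0e4):
    """Field of an infinite line current on the z-axis:
    B = mu0*I/(2*pi*r) in the azimuthal direction."""
    x, y, _ = pos
    r2 = x * x + y * y
    return MU0 * current / (2.0 * np.pi * r2) * np.array([-y, x, 0.0])

def boris_push(pos, vel, q_over_m, dt, nsteps):
    """Advance a charged particle with the Boris rotation
    (speed-preserving in a pure magnetic field)."""
    traj = [pos.copy()]
    for _ in range(nsteps):
        t = 0.5 * q_over_m * dt * b_line(pos)
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = vel + np.cross(vel, t)
        vel = vel + np.cross(v_prime, s)
        pos = pos + vel * dt
        traj.append(pos.copy())
    return np.array(traj), vel

pos0 = np.array([0.1, 0.0, 0.0])       # start 10 cm from the line
vel0 = np.array([0.0, 1.0e5, 2.0e5])   # m/s
traj, vel1 = boris_push(pos0, vel0, 9.58e7, 1.0e-9, 200)  # proton q/m
```

The Boris step is a pure rotation of the velocity vector, so kinetic energy is conserved to machine precision, a useful sanity check when validating such a simulation.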
Pasciak, Alexander Samuel
2009-05-15T23:59:59.000Z
.4 describes the efficient method for sampling the polar scattering angle, where x is a uniformly distributed random number between 0 and 1 (15):

cos(J) = 1 - 2ax/(1 + a - x)

A Monte Carlo code utilizing the screened Rutherford... the frequency mean. (3.5)

s-bar = [sum from J = 0 deg to 180 deg of N{sub elastic}(J) s(J) sin(J) delta-J] / [sum from J = 0 deg to 180 deg of N{sub elastic}(J) sin(J) delta-J]

where J is the polar angle of collision. Preservation of higher order moments is also...
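Assuming the standard inverse-CDF form for the screened Rutherford angular distribution, cos(J) = 1 - 2ax/(1 + a - x) with a the screening parameter, the sampling reduces to a one-liner. The value of a below is hypothetical:

```python
import random

def sample_cos_theta(a, rng):
    """Inverse-CDF sample of cos(J) = 1 - 2*a*x/(1 + a - x), x uniform on [0, 1)."""
    x = rng.random()
    return 1.0 - 2.0 * a * x / (1.0 + a - x)

rng = random.Random(1)
a = 0.05  # hypothetical screening parameter; small a -> forward-peaked scattering
samples = [sample_cos_theta(a, rng) for _ in range(10000)]
```

At x = 0 the sample is cos(J) = 1 (no deflection) and at x -> 1 it approaches -1 (backscatter), so the map covers the full angular range while concentrating samples in the forward direction for small a.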
Park, H. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Densmore, J. D. [Bettis Atomic Power Laboratory, West Mifflin, PA 15122 (United States); Wollaber, A. B.; Knoll, D. A.; Rauenzahn, R. M. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2013-07-01T23:59:59.000Z
We have developed a moment-based scale-bridging algorithm for thermal radiative transfer problems. The algorithm takes the form of well-known nonlinear-diffusion acceleration which utilizes a low-order (LO) continuum problem to accelerate the solution of a high-order (HO) kinetic problem. The coupled nonlinear equations that form the LO problem are efficiently solved using a preconditioned Jacobian-free Newton-Krylov method. This work demonstrates the applicability of the scale-bridging algorithm with a Monte Carlo HO solver and reports the computational efficiency of the algorithm in comparison to the well-known Fleck-Cummings algorithm. (authors)
Singleterry, R.C. Jr. [Argonne National Lab., Idaho Falls, ID (United States); Jahshan, S. [SNJ Consulting, Idaho Falls, ID (United States)
1996-04-01T23:59:59.000Z
The F{sub N} basis function expansion solution to the Boltzmann transport equation in Cartesian geometry is summarized and evaluated for several heterogeneous slabs of interest. The resultant scalar and angular fluxes and the critical slab thickness (when applicable) are compared to Monte Carlo transport evaluations by MCNP. A correspondence is made between the one-group macroscopic cross sections used in the F{sub N} code and energy-independent synthetic MCNP microscopic cross sections. The F{sub N} method produces results comparable to MCNP and requires fewer computer resources, but is limited to specific problem types.
B. M. Abramov; P. N. Alexeev; Yu. A. Borodin; S. A. Bulychjov; I. A. Dukhovskoy; A. P. Krutenkova; V. V. Kulikov; M. A. Martemianov; M. A. Matsyuk; E. N. Turdakina; A. I. Khanov; S. G. Mashnik
2015-02-05T23:59:59.000Z
Momentum spectra of hydrogen isotopes have been measured at 3.5 deg from C12 fragmentation on a Be target. Momentum spectra cover both the region of fragmentation maximum and the cumulative region. Differential cross sections span five orders of magnitude. The data are compared to predictions of four Monte Carlo codes: QMD, LAQGSM, BC, and INCL++. There are large differences between the data and predictions of some models in the high momentum region. The INCL++ code gives the best and almost perfect description of the data.
Barbosa, Marcia C. B.
Despite the lack of consensus concerning the origin of water-like anomalies, the thermodynamic behavior of the Bell-Lavis model for liquid water (Carlos E. Fiore, Departamento de Física, Universidade Federal do Paraná) is investigated through numerical simulations of the lattice-gas model on a triangular lattice.
Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.
2000-03-01T23:59:59.000Z
The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.
Jinaphanh, A.; Miss, J.; Richet, Y. [Inst. for Radiological Protection and Nuclear Safety IRSN, BP 17, 92262 Fontenay-Aux-Roses Cedex (France); Jacquet, O. [Independent Consultant (France)
2012-07-01T23:59:59.000Z
Monte Carlo (MC) criticality calculations are based on an iterative method. It requires a converged fission source distribution before tallying of the effective multiplication factor (K{sub eff}) or other quantities of interest can begin. However, it is difficult to detect, while the calculation is running, the end of source convergence, and scores may be biased by the initial transient. This paper deals with a method that locates and suppresses the transient due to the initialization in an output series, applied here to K{sub eff} and Shannon entropy. It relies on modeling stationary series by an order-1 autoregressive process and applying statistical tests based on a Student Bridge statistic. It should be noted that the initial transient suppression only aims at obtaining stationary output series and cannot guarantee any kind of convergence. The truncation method is applied to both K{sub eff} and Shannon entropy on three test cases. (authors)
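The transient-truncation idea can be sketched on a synthetic K{sub eff}-like series. The paper fits an order-1 autoregressive model and applies a Student Bridge test; the proxy below is deliberately simpler (and ignores autocorrelation when estimating the standard error): it discards leading iterations until the two halves of the remaining series have statistically compatible means. All numbers are illustrative.

```python
import random

def truncate_transient(series, step=10):
    """Return the first index from which the two halves of the remaining
    series have statistically compatible means (crude two-sigma test)."""
    n = len(series)
    for start in range(0, n // 2, step):
        tail = series[start:]
        half = len(tail) // 2
        a, b = tail[:half], tail[half:]
        ma = sum(a) / half
        mb = sum(b) / len(b)
        var = sum((x - ma) ** 2 for x in a) / (half - 1)
        se = (2.0 * var / half) ** 0.5  # std. error of the mean difference
        if abs(ma - mb) < 2.0 * se:
            return start
    return n // 2

# Synthetic k_eff-like output: decaying initial transient plus AR(1) noise.
rng = random.Random(7)
phi, x, series = 0.5, 0.0, []
for i in range(2000):
    x = phi * x + rng.gauss(0.0, 0.02)
    series.append(1.0 + 0.2 * (0.99 ** i) + x)
cut = truncate_transient(series)  # index where tallying could safely begin
```

As the abstract cautions, such truncation only yields a stationary-looking tail; it is not by itself a proof of source convergence.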
A comparison of WIMS-D4 and WIMS-D4m generated cross-section data with Monte Carlo
Woodruff, W.L.; Deen, J.R. [Argonne National Lab., IL (United States); Costescu, C.I. [Illinois Univ., Urbana, IL (United States)
1992-11-01T23:59:59.000Z
Cross-section and related data generated by a modified version of the WIMS-D4 code for both plate and rod type research reactor fuel are compared with Monte Carlo data from the VIM code. The modifications include the introduction of a capability for generating broad group microscopic data and to write selected microscopic cross-sections to an ISOTXS file format. The original WIMS-D4 library with H in ZrH, and {sup 166}Er and {sup 167}Er added gives processed microscopic cross-section data that agree well with VIM ENDF/B-V based data for both plate and TRIGA cells. Additional improvements are in progress including the capability to generate an ENDF/B-V based library.
A. K. Fomin; A. P. Serebrov
2010-05-17T23:59:59.000Z
We performed a detailed analysis and a Monte Carlo simulation of the neutron lifetime experiment [S. Arzumanov et al., Phys. Lett. B 483 (2000) 15] because of the strong disagreement, by 5.6 standard deviations, between the results of this experiment and our experiment [A. Serebrov et al., Phys. Lett. B 605 (2005) 72]. We found a few effects which were not taken into account in the experiment [S. Arzumanov et al., Phys. Lett. B 483 (2000) 15]. The possible correction is -5.5 s, with an uncertainty of 2.4 s that stems from limited knowledge of the initial data. We assume that, after taking this correction into account, the result of [S. Arzumanov et al., Phys. Lett. B 483 (2000) 15] for the neutron lifetime, 885.4 +/- 0.9(stat) +/- 0.4(syst) s, could be corrected to 879.9 +/- 0.9(stat) +/- 2.4(syst) s.
Chan, V. S.; Turnbull, A. D.; Choi, M.; Chu, M. S.; Lao, L. L. [General Atomics, P.O. Box 85608, San Diego, California 92186-5608 (United States)
2006-11-30T23:59:59.000Z
Experimentally, during fast wave (FW) radio frequency (rf) heating in DIII-D L-mode discharges, strong acceleration of neutral beam (NB) deuterium beam ions has been observed. Significant effects on the n/m = 1/1 sawtooth stability are also seen. Simulations using the Monte-Carlo Hamiltonian code ORBIT-RF, coupled to the TORIC full wave code, predict beam ion tails up to a few hundred keV, in agreement with the experiment. The simulations and experiment both clearly show a much greater efficiency for 4th harmonic FW heating than for 8th harmonic heating. Simple analyses of the kinetic contribution to the ideal magnetohydrodynamic (MHD) potential energy from energetic beam ions generated by FW heating yields reasonable consistency with the observations. A more detailed analysis shows a more complicated picture, however. Other physics effects such as geometry, plasma rotation, and the presence of a free boundary, play a significant role.
Hu, Z. M.; Xie, X. F.; Chen, Z. J.; Peng, X. Y.; Du, T. F.; Cui, Z. Q.; Ge, L. J.; Li, T.; Yuan, X.; Zhang, X.; Li, X. Q.; Zhang, G. H.; Chen, J. X.; Fan, T. S., E-mail: tsfan@pku.edu.cn [State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871 (China); Hu, L. Q.; Zhong, G. Q.; Lin, S. Y.; Wan, B. N. [Institute of Plasma Physics, CAS, Hefei 230031 (China); Gorini, G. [Dipartimento di Fisica, Università di Milano-Bicocca, Milano 20126 (Italy); Istituto di Fisica del Plasma “P. Caldirola,” Milano 20126 (Italy)
2014-11-15T23:59:59.000Z
To assess the neutron energy spectra and the neutron dose at different positions around the Experimental Advanced Superconducting Tokamak (EAST) device, a Bonner Sphere Spectrometer (BSS) was developed at Peking University, with nine polyethylene spheres in total and a SP9 {sup 3}He counter. The response functions of the BSS were calculated by the Monte Carlo codes MCNP and GEANT4 with dedicated models, and good agreement was found between these two codes. A feasibility study was carried out with a simulated neutron energy spectrum around EAST, and the simulated “experimental” result of each sphere was obtained by calculating the response with MCNP, which used the simulated neutron energy spectrum as the input spectrum. With the deconvolution of the “experimental” measurements, the neutron energy spectrum was retrieved and compared with the preset one. Good consistency was found, which offers confidence for the application of the BSS system for dose and spectrum measurements around a fusion device.
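Multi-sphere unfolding of this kind solves counts ≈ R @ phi for the spectrum phi given a response matrix R. The abstract does not state which unfolding algorithm was used, so the sketch below uses the generic iterative MLEM estimator on a made-up 3-sphere, 3-group response matrix, not the paper's actual BSS data:

```python
import numpy as np

def mlem_unfold(R, counts, n_iter=500):
    """Iterative MLEM estimate of the spectrum phi from counts ~= R @ phi."""
    phi = np.ones(R.shape[1])  # flat starting spectrum
    for _ in range(n_iter):
        predicted = R @ phi
        phi *= (R.T @ (counts / predicted)) / R.sum(axis=0)
    return phi

# Made-up 3-sphere, 3-energy-group response matrix (illustrative only).
R = np.array([[1.0, 0.5, 0.1],
              [0.4, 1.0, 0.4],
              [0.1, 0.5, 1.0]])
true_phi = np.array([2.0, 1.0, 3.0])
counts = R @ true_phi          # noiseless synthetic "measurements"
phi_est = mlem_unfold(R, counts)
```

The multiplicative update keeps the estimated spectrum nonnegative at every iteration, which is one reason MLEM-style estimators are popular for spectrum unfolding.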
Raychaudhuri, Subhadip
2015-01-01T23:59:59.000Z
Death ligand mediated apoptotic activation is a mode of programmed cell death that is widely used in cellular and physiological situations. Interest in studying death ligand induced apoptosis has increased due to the promising role of recombinant soluble forms of death ligands (mainly recombinant TRAIL) in anti-cancer therapy. A clear elucidation of how death ligands activate the type 1 and type 2 apoptotic pathways in healthy and cancer cells may help develop better chemotherapeutic strategies. In this work, we use kinetic Monte Carlo simulations to address the problem of type 1/type 2 choice in death ligand mediated apoptosis of cancer cells. Our study provides insights into the activation of the membrane proximal death module that results from complex interplay between death and decoy receptors. Relative abundance of death and decoy receptors was shown to be a key parameter for activation of the initiator caspases in the membrane module. Increased concentration of death ligands frequently increased the type 1...
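The competition between death and decoy receptors described above is the kind of process a kinetic Monte Carlo (Gillespie-type) simulation handles naturally. The toy model below is purely hypothetical, not the paper's reaction network: ligands bind irreversibly to one of two receptor pools, with the choice weighted by receptor abundance.

```python
import math
import random

def gillespie(n_ligand, n_death, n_decoy, k_on, t_end, rng):
    """Stochastic simulation of irreversible ligand binding to two receptor pools."""
    t, bound_death, bound_decoy = 0.0, 0, 0
    while t < t_end and n_ligand > 0:
        a1 = k_on * n_ligand * n_death  # propensity: ligand + death receptor
        a2 = k_on * n_ligand * n_decoy  # propensity: ligand + decoy receptor
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        if rng.random() < a1 / a0:
            n_death -= 1
            bound_death += 1
        else:
            n_decoy -= 1
            bound_decoy += 1
        n_ligand -= 1
    return bound_death, bound_decoy

rng = random.Random(11)
bound_death, bound_decoy = gillespie(200, 300, 100, 1e-4, 1e6, rng)
```

With death receptors initially three times as abundant as decoys, most ligands end up on death receptors, mirroring the abstract's point that the relative abundance of the two receptor types gates initiator caspase activation.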
Joint International Conference on Supercomputing in Nuclear Applications and Monte Carlo 2013 (SNA + MC 2013) ... are expected to operate at temperatures exceeding 350°C, and possibly approaching 750°C; there is a genuine ... recoil atom is transferred, through interatomic interactions, to the local environment of the recoil atom
Choi, Myunghee [Retired]; Chan, Vincent S. [General Atomics]
2014-02-28T23:59:59.000Z
This final report describes the work performed under U.S. Department of Energy Cooperative Agreement DE-FC02-08ER54954 for the period April 1, 2011 through March 31, 2013. The goal of this project was to perform iterated finite-orbit Monte Carlo simulations with full-wall fields for modeling tokamak ICRF wave heating experiments. In year 1, the finite-orbit Monte-Carlo code ORBIT-RF and its iteration algorithms with the full-wave code AORSA were improved to enable systematical study of the factors responsible for the discrepancy in the simulated and the measured fast-ion FIDA signals in the DIII-D and NSTX ICRF fast-wave (FW) experiments. In year 2, ORBIT-RF was coupled to the TORIC full-wave code for a comparative study of ORBIT-RF/TORIC and ORBIT-RF/AORSA results in FW experiments.
Bretscher, M.M.
1993-12-31T23:59:59.000Z
The WIMS-D4 code has been modified (WIMS-D4M) to produce microscopic isotopic cross sections in ISOTXS format for use in diffusion and transport calculations. Beginning with 69-group libraries based on ENDF/B-V data, numerous cell calculations have been made to prepare a set of broad group cross sections for use in diffusion calculations. Global calculations have been made for two control rod states of the Romanian steady state TRIGA reactor with 29 fresh HEU fuel clusters. Detailed Monte Carlo calculations also have been performed for the same reactor configurations using data based on ENDF/B-V. Results from these global calculations are compared with each other and with the measured excess reactivities. Although region-averaged macroscopic principal cross sections obtained from WIMS-D4M are in good agreement with the corresponding Monte Carlo values, problems exist with the high energy (E > 10 keV) microscopic hydrogen transport cross sections.
Kadoura, Ahmad; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; Salama, Amgad
2014-08-01T23:59:59.000Z
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models was proposed for methane, nitrogen and carbon monoxide.
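The core of canonical-ensemble reweighting is that samples generated at inverse temperature beta0 can be reused at a neighboring beta1 by weighting each sample with exp(-(beta1 - beta0) E). The sketch below demonstrates this on a trivial one-dimensional "system" with E = x^2 (where the exact answer is <E> = 1/(2*beta)), not on a Lennard-Jones fluid as in the paper:

```python
import math
import random

def reweight_mean(energies, values, beta0, beta1):
    """Estimate <A> at beta1 from canonical samples generated at beta0."""
    emin = min(energies)  # shift the exponent for numerical stability
    w = [math.exp(-(beta1 - beta0) * (e - emin)) for e in energies]
    return sum(wi * a for wi, a in zip(w, values)) / sum(w)

# Toy system with E = x^2: canonical samples at beta0 are Gaussian with
# variance 1/(2*beta0), and the exact <E> at any beta is 1/(2*beta).
rng = random.Random(3)
beta0, beta1 = 1.0, 1.2
xs = [rng.gauss(0.0, math.sqrt(0.5 / beta0)) for _ in range(200000)]
energies = [x * x for x in xs]
e_avg = reweight_mean(energies, energies, beta0, beta1)  # reweighted <E> at beta1
```

The technique is only reliable for neighboring conditions: as |beta1 - beta0| grows, the weights degenerate and a few samples dominate the estimate, which is why the paper restricts the extrapolation to nearby thermodynamic states.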
Santana Leitner, Mario
2010-09-14T23:59:59.000Z
In 2009 the Linac Coherent Light Source (LCLS) at the SLAC National Accelerator Laboratory started free electron laser (FEL) operation. In order to continue to produce the bright and short-pulsed x-ray laser demanded by FEL scientists, this pioneering hard x-ray FEL requires a perfectly tailored magnetic field at the undulators, so that the photons generated at the electron wiggling path interact at the right phase with the electron beam. In such a precise system, small (>0.01%) radiation-induced alterations of the magnetic field in the permanent magnets could affect FEL performance. This paper describes the simulation studies of radiation fields in permanent magnets and the expected signal in the detectors. The transport of particles from the radiation sources (i.e. diagnostic insert) to the undulator magnets and to the beam loss monitors (BLM) was simulated with the intranuclear cascade codes FLUKA and MARS15. In order to accurately reproduce the optics of LCLS, lattice capabilities and magnetic fields were enabled in FLUKA and betatron oscillations were validated against reference data. All electron events entering the BLMs were printed in data files. The paper also introduces the Radioactive Ion Beam Optimizer (RIBO) Monte Carlo 3-D code, which was used to read from the event files, to compute Cerenkov production and then to simulate the optical coupling of the BLM detectors, accounting for the transmission of light through the quartz.
McGrath, Matthew; Kuo, I-F W.; Ngouana, Brice F.; Ghogomu, Julius N.; Mundy, Christopher J.; Marenich, Aleksandr; Cramer, Christopher J.; Truhlar, Donald G.; Siepmann, Joern I.
2013-08-28T23:59:59.000Z
The free energy of solvation and dissociation of hydrogen chloride in water is calculated through a combined molecular simulation and quantum chemical approach at four temperatures between T = 300 and 450 K. The free energy is first decomposed into the sum of two components: the Gibbs free energy of transfer of molecular HCl from the vapor to the aqueous liquid phase and the standard-state free energy of acid dissociation of HCl in aqueous solution. The former quantity is calculated from Gibbs ensemble Monte Carlo simulations using either Kohn-Sham density functional theory or a molecular mechanics force field to determine the system's potential energy. The latter free energy contribution is computed using a continuum solvation model utilizing either experimental reference data or micro-solvated clusters. The predicted combined solvation and dissociation free energies agree very well with available experimental data. CJM was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.
Pastore, S. [University of South Carolina; Wiringa, Robert B. [ANL]; Pieper, Steven C. [ANL]; Schiavilla, Rocco [Old Dominion U., JLAB]
2014-08-01T23:59:59.000Z
We report quantum Monte Carlo calculations of electromagnetic transitions in $^8$Be. The realistic Argonne $v_{18}$ two-nucleon and Illinois-7 three-nucleon potentials are used to generate the ground state and nine excited states, with energies that are in excellent agreement with experiment. A dozen $M1$ and eight $E2$ transition matrix elements between these states are then evaluated. The $E2$ matrix elements are computed only in impulse approximation, with those transitions from broad resonant states requiring special treatment. The $M1$ matrix elements include two-body meson-exchange currents derived from chiral effective field theory, which typically contribute 20-30% of the total expectation value. Many of the transitions are between isospin-mixed states; the calculations are performed for isospin-pure states and then combined with the empirical mixing coefficients to compare to experiment. In general, we find that transitions between states that have the same dominant spatial symmetry are in decent agreement with experiment, but those transitions between different spatial symmetries are often significantly underpredicted.
Icarus: A 2D direct simulation Monte Carlo (DSMC) code for parallel computers. User's manual - V.3.0
Bartel, T.; Plimpton, S.; Johannes, J.; Payne, J.
1996-10-01T23:59:59.000Z
Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird and models from free-molecular to continuum flowfields in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they have collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modelled. A new trace species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas phase chemistry is modelled using steric factors derived from Arrhenius reaction rates. Surface chemistry is modelled with surface reaction probabilities. The electron number density is either a fixed externally generated field or determined using a local charge neutrality assumption. Ion chemistry is modelled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can either be externally input or internally generated using a Langmuir-Tonks model. The Icarus software package includes the grid generation, parallel processor decomposition, postprocessing, and restart software. The commercial graphics package, Tecplot, is used for graphics display. The majority of the software packages are written in standard Fortran.
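The heart of Bird's DSMC method is the per-cell collision step, usually implemented with the no-time-counter (NTC) scheme: a majorant number of candidate pairs is drawn, and each is accepted with probability proportional to its collision cross section times relative speed. The sketch below shows one such step for a single cell of hard-sphere particles; all numerical values (particle weight, cross section, cell volume, time step) are illustrative, not from Icarus.

```python
import random

def ntc_collisions(velocities, f_num, sigma, dt, volume, sg_max, rng):
    """Return the number of accepted collision pairs in one DSMC cell this step."""
    n = len(velocities)
    # Bird's NTC candidate count: (1/2) N (N-1) F_N (sigma*g)_max dt / V
    n_cand = int(0.5 * n * (n - 1) * f_num * sg_max * dt / volume)
    accepted = 0
    for _ in range(n_cand):
        i, j = rng.sample(range(n), 2)
        g = sum((a - b) ** 2 for a, b in zip(velocities[i], velocities[j])) ** 0.5
        # Accept with probability (sigma*g) / (sigma*g)_max.
        if rng.random() < (sigma * g) / sg_max:
            accepted += 1
    return accepted

rng = random.Random(5)
# 100 simulated particles with Maxwellian-ish velocities (m/s, illustrative).
velocities = [[rng.gauss(0.0, 300.0) for _ in range(3)] for _ in range(100)]
accepted = ntc_collisions(velocities, 1e12, 1e-19, 1e-6, 1e-9, 2e-16, rng)
```

Because the candidate count uses the cell-wide maximum of sigma*g, the scheme reproduces the correct collision rate on average without ever computing all N(N-1)/2 pair speeds, which keeps the cost per cell linear in the number of candidates.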
Kolbun, N.; Leveque, Ph.; Abboud, F.; Bol, A.; Vynckier, S.; Gallez, B. [Biomedical Magnetic Resonance Unit, Louvain Drug Research Institute, Universite catholique de Louvain, Avenue Mounier 73.40, B-1200 Brussels (Belgium); Molecular Imaging and Experimental Radiotherapy Unit, Institute of Experimental and Clinical Research, Universite catholique de Louvain, Avenue Hippocrate 55, B-1200 Brussels (Belgium)]
2010-10-15T23:59:59.000Z
Purpose: The experimental determination of doses at proximal distances from radioactive sources is difficult because of the steepness of the dose gradient. The goal of this study was to determine the relative radial dose distribution for a low dose rate {sup 192}Ir wire source using electron paramagnetic resonance imaging (EPRI) and to compare the results to those obtained using Gafchromic EBT film dosimetry and Monte Carlo (MC) simulations. Methods: Lithium formate and ammonium formate were chosen as the EPR dosimetric materials and were used to form cylindrical phantoms. The dose distribution of the stable radiation-induced free radicals in the lithium formate and ammonium formate phantoms was assessed by EPRI. EBT films were also inserted inside in ammonium formate phantoms for comparison. MC simulation was performed using the MCNP4C2 software code. Results: The radical signal in irradiated ammonium formate is contained in a single narrow EPR line, with an EPR peak-to-peak linewidth narrower than that of lithium formate ({approx}0.64 and 1.4 mT, respectively). The spatial resolution of EPR images was enhanced by a factor of 2.3 using ammonium formate compared to lithium formate because its linewidth is about 0.75 mT narrower than that of lithium formate. The EPRI results were consistent to within 1% with those of Gafchromic EBT films and MC simulations at distances from 1.0 to 2.9 mm. The radial dose values obtained by EPRI were about 4% lower at distances from 2.9 to 4.0 mm than those determined by MC simulation and EBT film dosimetry. Conclusions: Ammonium formate is a suitable material under certain conditions for use in brachytherapy dosimetry using EPRI. In this study, the authors demonstrated that the EPRI technique allows the estimation of the relative radial dose distribution at short distances for a {sup 192}Ir wire source.
Aryal, Prakash; Molloy, Janelle A. [Department of Radiation Medicine, University of Kentucky, Lexington, Kentucky 40536 (United States)]; Rivard, Mark J., E-mail: mark.j.rivard@gmail.com [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States)]
2014-02-15T23:59:59.000Z
Purpose: To investigate potential causes for differences in TG-43 brachytherapy dosimetry parameters in the existent literature for the model IAI-125A {sup 125}I seed and to propose new standard dosimetry parameters. Methods: The MCNP5 code was used for Monte Carlo (MC) simulations. Sensitivity of dose distributions, and subsequently TG-43 dosimetry parameters, was explored to reproduce historical methods upon which American Association of Physicists in Medicine (AAPM) consensus data are based. Twelve simulation conditions varying {sup 125}I coating thickness, coating mass density, photon interaction cross-section library, and photon emission spectrum were examined. Results: Varying {sup 125}I coating thickness, coating mass density, photon cross-section library, and photon emission spectrum for the model IAI-125A seed changed the dose-rate constant by up to 0.9%, about 1%, about 3%, and 3%, respectively, in comparison to the proposed standard value of 0.922 cGy·h{sup -1}·U{sup -1}. The dose-rate constant values by Solberg et al. [“Dosimetric parameters of three new solid core {sup 125}I brachytherapy sources,” J. Appl. Clin. Med. Phys. 3, 119–134 (2002)], Meigooni et al. [“Experimental and theoretical determination of dosimetric characteristics of IsoAid ADVANTAGE™ {sup 125}I brachytherapy source,” Med. Phys. 29, 2152–2158 (2002)], and Taylor and Rogers [“An EGSnrc Monte Carlo-calculated database of TG-43 parameters,” Med. Phys. 35, 4228–4241 (2008)] for the model IAI-125A seed and Kennedy et al. [“Experimental and Monte Carlo determination of the TG-43 dosimetric parameters for the model 9011 THINSeed™ brachytherapy source,” Med. Phys. 37, 1681–1688 (2010)] for the model 6711 seed were +4.3% (0.962 cGy·h{sup -1}·U{sup -1}), +6.2% (0.98 cGy·h{sup -1}·U{sup -1}), +0.3% (0.925 cGy·h{sup -1}·U{sup -1}), and -0.2% (0.921 cGy·h{sup -1}·U{sup -1}), respectively, in comparison to the proposed standard value. 
Differences in the radial dose functions between the current study and both Solberg et al. and Meigooni et al. were <10% for r ≤ 5 cm, and increased for r > 5 cm with a maximum difference of 29% at r = 9 cm. In comparison to Taylor and Rogers, these differences were lower (maximum of 2% at r = 9 cm). For the similarly designed model 6711 {sup 125}I seed, differences did not exceed 0.5% for 0.5 ≤ r ≤ 10 cm. Radial dose function values varied by 1% as coating thickness and coating density were changed. Varying the cross-section library and source spectrum altered the radial dose function by 25% and 12%, respectively, but these differences occurred at r = 10 cm where the dose rates were very low. The 2D anisotropy function results were most similar to those of Solberg et al. and most different from those of Meigooni et al. The observed order of simulation condition variables from most to least important for influencing the 2D anisotropy function was spectrum, coating thickness, coating density, and cross-section library. Conclusions: Several MC radiation transport codes are available for calculation of the TG-43 dosimetry parameters for brachytherapy seeds. The physics models in these codes and their related cross-section libraries have been updated and improved since publication of the 2007 AAPM TG-43U1S1 report. Results using modern data indicated statistically significant differences in these dosimetry parameters in comparison to data recommended in the TG-43U1S1 report. Therefore, professional societies such as the AAPM should consider reevaluating the consensus data for this and other seeds and establishing a process of regular evaluations in which consensus data are based upon methods that remain state-of-the-art.
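The TG-43 parameters discussed above enter the standard 2D dose-rate equation, D(r,θ) = S_K · Λ · [G_L(r,θ)/G_L(r0,θ0)] · g_L(r) · F(r,θ). A minimal sketch of a transverse-axis (θ = 90°) evaluation with the line-source geometry function is given below; the seed length and the g(2 cm) value are hypothetical placeholders, not data from this study:

```python
import math

def G_L(r, L=0.3):
    """Line-source geometry function at theta = 90 deg: beta / (L * r),
    where beta = 2*atan(L / 2r) is the angle subtended by the source line."""
    beta = 2.0 * math.atan(L / (2.0 * r))
    return beta / (L * r)

def dose_rate(S_K, Lambda, r, g_r, F=1.0, L=0.3, r0=1.0):
    """TG-43 2D dose rate on the transverse axis (F = 1 there by definition)."""
    return S_K * Lambda * (G_L(r, L) / G_L(r0, L)) * g_r * F

# Hypothetical inputs: a 1 U source, the proposed dose-rate constant of
# 0.922 cGy/(h U), and an assumed g(2 cm) of 0.84.
rate = dose_rate(S_K=1.0, Lambda=0.922, r=2.0, g_r=0.84)
print(round(rate, 4))
```

At the reference point (r = 1 cm, g = 1) the expression reduces to S_K · Λ, which is a quick sanity check on any implementation.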
Advanced Monte Carlo
Nakano, Aiichiro
University of Southern California
Coleman, Joy [Department of Radiation Oncology, University of California, San Francisco, San Francisco, CA (United States); Park, Catherine [Department of Radiation Oncology, University of California, San Francisco, San Francisco, CA (United States); Villarreal-Barajas, J. Eduardo [Department of Radiation Oncology, University of California, San Francisco, San Francisco, CA (United States); Petti, Paula [Department of Radiation Oncology, University of California, San Francisco, San Francisco, CA (United States); Faddegon, Bruce [Department of Radiation Oncology, University of California, San Francisco, San Francisco, CA (United States)]. E-mail: faddegon@radonc17.ucsf.edu
2005-02-01T23:59:59.000Z
Purpose: Electrons are commonly used in the treatment of breast cancer primarily to deliver a tumor bed boost. We compared the use of the Monte Carlo (MC) method and the Fermi-Eyges-Hogstrom (FEH) algorithm to calculate the dose distribution of electron treatment to normal tissues. Methods and materials: Ten patients with left-sided breast cancer treated with breast-conservation therapy at the University of California, San Francisco, were included in this study. Each patient received an electron boost to the surgical bed to a dose of 1,600 cGy in 200 cGy fractions prescribed to 80% of the maximum. Doses to the left ventricle (LV) and the ipsilateral lung (IL) were calculated using the EGS4 MC system and the FEH algorithm implemented on the commercially available Pinnacle treatment planning system. An anthropomorphic phantom was irradiated with radiochromic film in place to verify the accuracy of the MC system. Results: Dose distributions calculated with the MC algorithm agreed with the film measurements within 3% or 3 mm. For all patients in the study, the dose to the LV and IL was relatively low as calculated by MC. That is, the maximum dose received by up to 98% of the LV volume was < 100 cGy/day. Less than half of the IL received a dose in excess of 30 cGy/day. When compared with MC, FEH tended to show reduced penetration of the electron beam in lung, and FEH tended to overestimate the bremsstrahlung dose in regions well beyond the electron practical range. These differences were likely to be of little clinical significance, comprising differences of less than one-tenth of the LV and IL volume at doses > 30 cGy and differences in maximum dose of < 35 cGy/day to the LV and 80 cGy/day to the IL. Conclusions: From our series, using clinical judgment to prescribe the boost to the surgical bed after breast-conserving treatment results in low doses to the underlying LV and IL. 
When calculated dose distributions are desired, MC is the most accurate, but FEH can still be used.
Cheng, C.-W.; Sang, Hyun Cho; Taylor, Michael; Das, Indra J. [Department of Radiation Oncology, Morristown Memorial Hospital, Morristown, New Jersey 07962 (United States); Department of Radiation Physics, University of Texas M. D. Anderson Cancer Center, Houston, Texas 77030 (United States); Department of Radiation Oncology, Portland Kaiser Permanente, Portland, Oregon 97227 (United States); Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)
2007-08-15T23:59:59.000Z
In this study, zero-field percent depth dose (PDD) and tissue maximum ratio (TMR) for 6 MV x rays have been determined by extrapolation from dosimetric measurements over the field size range 1x1-10x10 cm{sup 2}. The key to small field dosimetry is the selection of a proper dosimeter for the measurements, as well as the alignment of the detector with the central axis (CAX) of the beam. The measured PDD results are compared with those obtained from Monte Carlo (MC) simulation to examine the consistency and integrity of the measured data from which the zero-field PDD is extrapolated. Of the six most commonly used dosimeters in the clinic, the stereotactic diode field detector (SFD), the PTW Pinpoint, and the Exradin A14 are the most consistent and produce results within 2% of each other over the entire field size range 1x1-40x40 cm{sup 2}. Although the diamond detector has the smallest sensitive volume, it is the least stable and tends to disagree with all other dosimeters by more than 10%. The zero-field PDD data extrapolated from larger field measurements obtained with the SFD are in good agreement with the MC results. The extrapolated and MC data agree within 2.5% over the clinical depth range (d{sub max}-30 cm), when the MC data for the zero field are derived from a 1x1 cm{sup 2} field simulation using a miniphantom (1x1x48 cm{sup 3}). The agreement between the measured PDD and the MC data based on a full phantom (48x48x48 cm{sup 3}) simulation is fairly good, ranging from within 1% at shallow depths to approximately 5% at 30 cm. Our results indicate that zero-field TMR can be accurately calculated from PDD measurements with a proper choice of detector and a careful alignment of the detector axis with the CAX.
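The zero-field extrapolation described above can be sketched as fitting the measured PDD against field side length and evaluating the fit at zero. The values below are hypothetical placeholders for illustration, not the paper's measured data, and the low-order polynomial is only one plausible fitting choice:

```python
import numpy as np

# Hypothetical PDD values at d = 10 cm for square fields of side s (cm),
# illustrating extrapolation of the zero-field PDD from small-field data.
s = np.array([1.0, 2.0, 3.0, 4.0])          # field side length (cm)
pdd = np.array([61.0, 63.5, 65.2, 66.4])    # measured PDD(10) in percent

# Fit PDD as a low-order polynomial in s and evaluate at s = 0.
coeffs = np.polyfit(s, pdd, 2)
pdd_zero_field = np.polyval(coeffs, 0.0)
print(round(pdd_zero_field, 1))
```

In practice the fit would be repeated at each depth of interest to build the full zero-field PDD curve before converting to TMR.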
Barrera, C A; Moran, M J
2007-08-21T23:59:59.000Z
The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are: EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. 
The simulations show that all LOS configurations have a resolution of 7 microns or better. The 28 m LOS with a 7 x 7 array of 100-micron mini-penumbral apertures or 50-micron square pinholes meets the design requirements and is a very good design alternative.
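The subtraction scheme described above (removing a decayed copy of the primary-peak image from the later recording) can be sketched with synthetic arrays. The image contents and the residual-light fraction are hypothetical, and the subsequent regularized frequency-space deconvolution step is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recorded images (arbitrary units): the scintillator's slow
# decay leaks a fraction of the primary-peak image into the later,
# downscattered recording window.
primary = rng.random((64, 64))
downscatter = rng.random((64, 64)) * 0.1
decay_frac = 0.3                            # assumed residual light fraction
recorded_late = downscatter + decay_frac * primary

# Recover the downscattered image by subtracting a decayed version of the
# primary-peak image from the late recording.
recovered = recorded_late - decay_frac * primary
print(np.allclose(recovered, downscatter))
```

The real system would then deconvolve `recovered` with the aperture point spread function before quantifying resolution and SNR.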
Handley, G. R.; Masters, L. C.; Stachowiak, R. V.
1981-04-10T23:59:59.000Z
Validation of the Monte Carlo criticality code, KENO IV, and the Hansen-Roach sixteen-energy-group cross sections was accomplished by calculating the effective neutron multiplication constant, k/sub eff/, of 29 experimentally critical assemblies which had uranium enrichments of 92.6% or higher in the uranium-235 isotope. The experiments were chosen so that a large variety of geometries and of neutron energy spectra were covered. Problems calculating the k/sub eff/ of minimally reflected or unreflected systems containing high-uranium-concentration uranyl nitrate solution led to the separate examination of five cases.
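Validation studies of this kind typically summarize the calculated k/sub eff/ values by their mean bias relative to unity (the true value for a critical assembly) and their spread. A small sketch with hypothetical values, not the report's results:

```python
import statistics

# Hypothetical calculated k_eff values for a set of experimentally critical
# assemblies (true k_eff = 1.0 for each); the bias is the deviation of the
# mean from unity.
k_eff = [0.998, 1.003, 0.995, 1.001, 0.999, 1.004, 0.997]

mean_k = statistics.mean(k_eff)
bias = mean_k - 1.0                  # negative bias: code underpredicts
spread = statistics.stdev(k_eff)     # sample standard deviation
print(round(mean_k, 4), round(bias, 4), round(spread, 4))
```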
Kelly, Thomas P.; Greer, James C., E-mail: jim.greer@tyndall.ie [Tyndall National Institute, University College Cork, Dyke Parade, Lee Maltings, Cork (Ireland); Perera, Ajith; Bartlett, Rodney J. [Quantum Theory Project, 2234 New Physics Building 92, PO Box 118435, University of Florida at Gainesville, Gainesville, Florida 32611-8435 (United States)] [Quantum Theory Project, 2234 New Physics Building 92, PO Box 118435, University of Florida at Gainesville, Gainesville, Florida 32611-8435 (United States)
2014-02-28T23:59:59.000Z
Dissociation energies for the diatomic molecules C{sub 2}, N{sub 2}, O{sub 2}, CO, and NO are estimated using the Monte Carlo configuration interaction (MCCI) and augmented by a second order perturbation theory correction. The calculations are performed using the correlation consistent polarized valence “triple zeta” atomic orbital basis and resulting dissociation energies are compared to coupled cluster calculations including up to triple excitations (CCSDT) and Full Configuration Interaction Quantum Monte Carlo (FCIQMC) estimates. It is found that the MCCI method readily describes the correct behavior for dissociation for the diatomics even when capturing only a relatively small fraction ({approx}80%) of the correlation energy. At this level only a small number of configurations, typically O(10{sup 3}) from a FCI space of dimension O(10{sup 14}), are required to describe dissociation. Including the perturbation correction to the MCCI estimates, the difference in dissociation energies with respect to CCSDT ranges between 1.2 and 3.1 kcal/mol, and the difference when comparing to FCIQMC estimates narrows to between 0.5 and 1.9 kcal/mol. Discussions on MCCI's ability to recover static and dynamic correlations and on the form of correlations in the electronic configuration space are presented.
Ali, Imad, E-mail: iali@ouhsc.edu [Department of Radiation Oncology, University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States); Ahmad, Salahuddin [Department of Radiation Oncology, University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States)
2013-10-01T23:59:59.000Z
To compare the doses calculated using the BrainLAB pencil beam (PB) and Monte Carlo (MC) algorithms for tumors located in various sites including the lung and evaluate quality assurance procedures required for the verification of the accuracy of dose calculation. The dose-calculation accuracy of PB and MC was also assessed quantitatively with measurement using ionization chamber and Gafchromic films placed in solid water and heterogeneous phantoms. The dose was calculated using PB convolution and MC algorithms in the iPlan treatment planning system from BrainLAB. The dose calculation was performed on the patient's computed tomography images with lesions in various treatment sites including 5 lungs, 5 prostates, 4 brains, 2 head and necks, and 2 paraspinal tissues. A combination of conventional, conformal, and intensity-modulated radiation therapy plans was used in dose calculation. The leaf sequence from intensity-modulated radiation therapy plans or beam shapes from conformal plans and monitor units and other planning parameters calculated by the PB were identical for calculating dose with MC. Heterogeneity correction was considered in both PB and MC dose calculations. Dose-volume parameters such as V95 (volume covered by 95% of prescription dose), dose distributions, and gamma analysis were used to evaluate the calculated dose by PB and MC. The measured doses by ionization chamber and EBT Gafchromic film in solid water and heterogeneous phantoms were used to quantitatively assess the accuracy of dose calculated by PB and MC. The dose-volume histograms and dose distributions calculated by PB and MC in the brain, prostate, paraspinal, and head and neck were in good agreement with one another (within 5%) and provided acceptable planning target volume coverage. However, dose distributions of the patients with lung cancer had large discrepancies. 
For a plan optimized with PB, the dose coverage was shown as clinically acceptable, whereas in reality, the MC showed a systematic lack of dose coverage. The dose calculated by PB for lung tumors was overestimated by up to 40%. An interesting feature that was observed is that despite large discrepancies in dose-volume histogram coverage of the planning target volume between PB and MC, the point doses at the isocenter (center of the lesions) calculated by both algorithms were within 7% even for lung cases. The dose distributions measured with EBT Gafchromic films in heterogeneous phantoms were nearly 15% lower than the PB calculations at interfaces between heterogeneous media, and these lower measured doses were in agreement with those calculated by MC. The doses (V95) calculated by MC and PB agreed within 5% for treatment sites with small tissue heterogeneities such as the prostate, brain, head and neck, and paraspinal tumors. Considerable discrepancies, up to 40%, were observed in the dose-volume coverage between MC and PB in lung tumors, which may affect clinical outcomes. The discrepancies between MC and PB increased for 15 MV compared with 6 MV, indicating the importance of implementing accurate dose-calculation algorithms such as MC in clinical treatment planning. The comparison of point doses is not representative of the discrepancies in dose coverage and might be misleading in evaluating the accuracy of dose calculation between PB and MC. Thus, the clinical quality assurance procedures required to verify the accuracy of dose calculation using PB and MC need to consider measurements of 2- and 3-dimensional dose distributions, using heterogeneous rather than homogeneous water-equivalent phantoms, instead of a single point measurement.
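The gamma analysis used above to compare PB and MC distributions combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1D sketch with hypothetical profiles and 3%/3 mm criteria follows; no interpolation between grid points is performed, which a production implementation would add:

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, x, dta=3.0, dd=0.03):
    """Minimal 1D gamma analysis: for each reference point, take the minimum
    over evaluated points of sqrt((dx/DTA)^2 + (dD/(dd*Dmax))^2).
    A point passes when gamma <= 1."""
    d_max = dose_ref.max()
    gammas = []
    for xr, dr in zip(x, dose_ref):
        dist = (x - xr) / dta                   # distance-to-agreement term
        diff = (dose_eval - dr) / (dd * d_max)  # dose-difference term
        gammas.append(np.sqrt(dist ** 2 + diff ** 2).min())
    return np.array(gammas)

# Hypothetical profiles: the evaluated dose is the reference shifted by 0.5 mm.
x = np.arange(0.0, 50.0, 0.5)                # position (mm)
ref = np.exp(-((x - 25.0) / 10.0) ** 2)      # reference profile
ev = np.exp(-((x - 25.5) / 10.0) ** 2)       # shifted evaluated profile
g = gamma_index(ref, ev, x)
pass_rate = float((g <= 1.0).mean())
print(pass_rate)
```

A sub-millimeter spatial shift passes easily under 3%/3 mm; the large PB/MC lung discrepancies reported above would instead produce gamma values well above 1 over extended regions.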
Cai, Zhongli; Chattopadhyay, Niladri; Kwon, Yongkyu Luke [Department of Pharmaceutical Sciences, University of Toronto, Toronto, Ontario M5S 3M2 (Canada)] [Department of Pharmaceutical Sciences, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Pignol, Jean-Philippe [Department of Radiation Oncology, University of Toronto, Toronto, Ontario M4N 3M5, Canada and Department of Medical Biophysics, University of Toronto, Toronto, Ontario M4N 3M5 (Canada)] [Department of Radiation Oncology, University of Toronto, Toronto, Ontario M4N 3M5, Canada and Department of Medical Biophysics, University of Toronto, Toronto, Ontario M4N 3M5 (Canada); Lechtman, Eli [Department of Medical Biophysics, University of Toronto, Toronto, Ontario M4N 3M5 (Canada)] [Department of Medical Biophysics, University of Toronto, Toronto, Ontario M4N 3M5 (Canada); Reilly, Raymond M. [Department of Pharmaceutical Sciences, University of Toronto, Toronto, Ontario M5S 3M2 (Canada) [Department of Pharmaceutical Sciences, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Department of Medical Imaging, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Toronto General Research Institute, University Health Network, Toronto, Ontario M5G 2C4 (Canada)
2013-11-15T23:59:59.000Z
Purpose: The authors’ aims were to model how various factors influence radiation dose enhancement by gold nanoparticles (AuNPs) and to propose a new modeling approach to the dose enhancement factor (DEF). Methods: The authors used Monte Carlo N-particle (MCNP 5) computer code to simulate photon and electron transport in cells. The authors modeled human breast cancer cells as a single cell, a monolayer, or a cluster of cells. Different numbers of 5, 30, or 50 nm AuNPs were placed in the extracellular space, on the cell surface, in the cytoplasm, or in the nucleus. Photon sources examined in the simulation included nine monoenergetic x-rays (10–100 keV), an x-ray beam (100 kVp), and {sup 125}I and {sup 103}Pd brachytherapy seeds. Both nuclear and cellular dose enhancement factors (NDEFs, CDEFs) were calculated. The ability of these metrics to predict the experimental DEF based on the clonogenic survival of MDA-MB-361 human breast cancer cells exposed to AuNPs and x-rays were compared. Results: NDEFs show a strong dependence on photon energies with peaks at 15, 30/40, and 90 keV. Cell model and subcellular location of AuNPs influence the peak position and value of NDEF. NDEFs decrease in the order of AuNPs in the nucleus, cytoplasm, cell membrane, and extracellular space. NDEFs also decrease in the order of AuNPs in a cell cluster, monolayer, and single cell if the photon energy is larger than 20 keV. NDEFs depend linearly on the number of AuNPs per cell. Similar trends were observed for CDEFs. NDEFs using the monolayer cell model were more predictive than either single cell or cluster cell models of the DEFs experimentally derived from the clonogenic survival of cells cultured as a monolayer. The amount of AuNPs required to double the prescribed dose in terms of mg Au/g tissue decreases as the size of AuNPs increases, especially when AuNPs are in the nucleus and the cytoplasm. 
For 40 keV x-rays and a cluster of cells, doubling the prescribed x-ray dose (NDEF = 2) using 30 nm AuNPs would require 5.1 ± 0.2, 9 ± 1, 10 ± 1, and 10 ± 1 mg Au/g tissue in the nucleus, in the cytoplasm, on the cell surface, or in the extracellular space, respectively. Using 50 nm AuNPs, the required amount decreases to 3.1 ± 0.3, 8 ± 1, 9 ± 1, and 9 ± 1 mg Au/g tissue, respectively. Conclusions: NDEF is a new metric that can predict the radiation enhancement of AuNPs for various experimental conditions. Cell model, the subcellular location and size of AuNPs, and the number of AuNPs per cell, as well as the x-ray photon energy all have effects on NDEFs. Larger AuNPs in the nucleus of cluster cells exposed to x-rays of 15 or 40 keV maximize NDEFs.
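Since the abstract reports that NDEF depends linearly on the number of AuNPs per cell, the loading needed to reach a target enhancement can be sketched from a single calibration point. All numbers below are hypothetical illustrations, not the paper's data:

```python
# Assumed linear model: NDEF(c) = 1 + slope * c, where c is the gold loading
# in mg Au / g tissue. With one (hypothetical) calibration measurement, the
# loading that doubles the prescribed dose (NDEF = 2) follows directly.
ndef_measured = 1.2   # NDEF at the calibration loading (assumed)
loading = 1.0         # mg Au / g tissue at calibration (assumed)

slope = (ndef_measured - 1.0) / loading     # enhancement per mg/g
loading_for_double = (2.0 - 1.0) / slope
print(loading_for_double)
```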
Fubiani, G.; Boeuf, J. P. [Université de Toulouse, UPS, INPT, LAPLACE (Laboratoire Plasma et Conversion d'Energie), 118 route de Narbonne, F-31062 Toulouse cedex 9 (France) [Université de Toulouse, UPS, INPT, LAPLACE (Laboratoire Plasma et Conversion d'Energie), 118 route de Narbonne, F-31062 Toulouse cedex 9 (France); CNRS, LAPLACE, F-31062 Toulouse (France)
2013-11-15T23:59:59.000Z
Results from a 3D self-consistent Particle-In-Cell Monte Carlo Collisions (PIC MCC) model of a high power fusion-type negative ion source are presented for the first time. The model is used to calculate the plasma characteristics of the ITER prototype BATMAN ion source developed in Garching. Special emphasis is put on the production of negative ions on the plasma grid surface. The question of the relative roles of the impact of neutral hydrogen atoms and positive ions on the cesiated grid surface has attracted much attention recently, and the 3D PIC MCC model is used to address this question. The results show that the production of negative ions by positive ion impact on the plasma grid is small (less than 10%) with respect to the production by atomic hydrogen or deuterium bombardment.
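The Monte Carlo collision step at the heart of a PIC MCC cycle can be sketched as follows: during each time step dt, a particle collides with probability P = 1 − exp(−ν·dt), where ν is its collision frequency. The numbers below are hypothetical illustrations, not BATMAN source parameters, and ν is taken as constant rather than energy-dependent:

```python
import math
import random

random.seed(42)

# Assumed constant collision frequency and time step (hypothetical values).
nu = 1.0e6        # collision frequency (1/s)
dt = 1.0e-7       # PIC time step (s)
n_particles = 100000

# Per-step collision probability; each particle is tested independently.
p_collide = 1.0 - math.exp(-nu * dt)
n_collisions = sum(1 for _ in range(n_particles) if random.random() < p_collide)
frac = n_collisions / n_particles
print(round(p_collide, 4))
```

In a full code, colliding particles would then be assigned a collision type (elastic, excitation, ionization, charge exchange) by a second random draw weighted by the partial cross sections.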
Dewberry, R.
2003-02-10T23:59:59.000Z
This report describes an interference observed when acquiring γ-ray holdup data. The interference comes from secondary contaminated surfaces that contribute to the γ-ray signal when acquiring data in the area source configuration. It is often the case that these unwanted contributions cannot be isolated and eliminated, so it is necessary to mathematically correct for the contribution. In this report we propose experiments to acquire the necessary data to determine the experimental correction factor specifically for highly enriched uranium holdup measurements. We then propose to use the MCNP Monte Carlo computer code to model the contribution in several acquisition configurations and for multiple interfering γ-ray energies. Results will provide a model for calculation of this secondary source correction factor for future holdup measurements. We believe the results of the experiments and modeling of the data acquired in this proposal will have a significant impact on deactivation and decommissioning activities throughout the DOE weapons complex.
Bergström, Ida; Elfgren, Erik
2013-06-11T23:59:59.000Z
At the particle physics laboratory CERN in Geneva, Switzerland, the Neutron Time-of-Flight facility has recently started the construction of a second experimental line. The new neutron beam line will unavoidably induce radiation in both the experimental area and in nearby accessible areas. Computer simulations for the minimization of the background were carried out using the FLUKA Monte Carlo simulation package. The background radiation in the new experimental area needs to be kept to a minimum during measurements. This was studied with focus on the contributions from backscattering in the beam dump. The beam dump was originally designed for shielding the outside area using a block of iron covered in concrete. However, the backscattering was never studied in detail. In this thesis, the fluences (i.e. the flux integrated over time) of neutrons and photons were studied in the experimental area while the beam dump design was modified. An optimized design was obtained by stopping the fast neutrons in a high Z mat...
Sahu, Nityananda; Gadre, Shridhar R.; Bandyopadhyay, Pradipta; Miliordos, Evangelos; Xantheas, Sotiris S.
2014-10-28T23:59:59.000Z
We report new global minimum candidate structures for the (H2O)25 cluster that are lower in energy than the ones reported previously and correspond to hydrogen bonded networks with 42 hydrogen bonds and an interior, fully coordinated water molecule. These were obtained as a result of a hierarchical approach based on initial Monte Carlo Temperature Basin Paving (MCTBP) sampling of the cluster’s Potential Energy Surface (PES) with the Effective Fragment Potential (EFP), subsequent geometry optimization using the Molecular Tailoring fragmentation Approach (MTA) and final refinement at the second order Møller-Plesset perturbation (MP2) level of theory. The MTA geometry optimizations used between 14 and 18 main fragments with maximum sizes between 11 and 14 water molecules and average size of 10 water molecules, whose energies and gradients were computed at the MP2 level. The MTA-MP2 optimized geometries were found to be quite close (within < 0.5 kcal/mol) to the ones obtained from the MP2 optimization of the whole cluster. The grafting of the MTA-MP2 energies yields electronic energies that are within < 5×10{sup -4} a.u. from the MP2 results for the whole cluster while preserving their energy order. The MTA-MP2 method was also found to reproduce the MP2 harmonic vibrational frequencies in both the HOH bending and the OH stretching regions.
Sahu, Nityananda; Gadre, Shridhar R., E-mail: gadre@iitk.ac.in, E-mail: sotiris.xantheas@pnnl.gov [Department of Chemistry, Indian Institute of Technology Kanpur, Kanpur 208016 (India); Rakshit, Avijit; Bandyopadhyay, Pradipta [School of Computational and Integrative Sciences, Jawaharlal Nehru University, New Delhi 110067 (India); Miliordos, Evangelos; Xantheas, Sotiris S., E-mail: gadre@iitk.ac.in, E-mail: sotiris.xantheas@pnnl.gov [Physical Sciences Division, Pacific Northwest National Laboratory, 902 Battelle Boulevard, P.O. Box 999, MS K1-83, Richland, Washington 99352 (United States)
2014-10-28T23:59:59.000Z
We report new global minimum candidate structures for the (H{sub 2}O){sub 25} cluster that are lower in energy than the ones reported previously and correspond to hydrogen bonded networks with 42 hydrogen bonds and an interior, fully coordinated water molecule. These were obtained as a result of a hierarchical approach based on initial Monte Carlo Temperature Basin Paving sampling of the cluster's Potential Energy Surface with the Effective Fragment Potential, subsequent geometry optimization using the Molecular Tailoring Approach with the fragments treated at the second-order Møller-Plesset perturbation (MP2) level (MTA-MP2), and final refinement of the entire cluster at the MP2 level of theory. The MTA-MP2 optimized cluster geometries, constructed from the fragments, were found to be within <0.5 kcal/mol from the minimum geometries obtained from the MP2 optimization of the entire (H{sub 2}O){sub 25} cluster. In addition, the grafting of the MTA-MP2 energies yields electronic energies that are within <0.3 kcal/mol from the MP2 energies of the entire cluster while preserving their energy rank order. Finally, the MTA-MP2 approach was found to reproduce the MP2 harmonic vibrational frequencies, constructed from the fragments, quite accurately when compared to the MP2 ones of the entire cluster in both the HOH bending and the OH stretching regions of the spectra.
Sharma, Diksha; Badano, Aldo [Division of Imaging and Applied Mathematics, Center for Devices and Radiological Health, Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993 (United States)
2013-03-15T23:59:59.000Z
Purpose: hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry based on a hybrid concept that maximizes the utilization of available CPU and graphics processing unit processors in a workstation. Methods: The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. Results: The comparison suggests that hybridMANTIS better matches the experimental data as compared to MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS, with speed-ups of up to 5260. Conclusions: hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
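The Swank factor mentioned above characterizes the information loss due to optical pulse-height broadening in a scintillator; it is commonly computed from the moments m{sub n} of the pulse-height distribution as I = m1²/(m0·m2). A small sketch with a hypothetical distribution:

```python
import numpy as np

def swank_factor(pulse_heights, probs):
    """Swank information factor I = m1^2 / (m0 * m2), computed from the
    moments m_n = sum(p * h^n) of the optical pulse-height distribution."""
    h = np.asarray(pulse_heights, dtype=float)
    p = np.asarray(probs, dtype=float)
    p = p / p.sum()                 # normalize to a probability distribution
    m0 = p.sum()
    m1 = (p * h).sum()
    m2 = (p * h ** 2).sum()
    return m1 ** 2 / (m0 * m2)

# Hypothetical pulse-height distribution for a scintillator screen.
heights = [100, 200, 300, 400]
probs = [0.1, 0.4, 0.4, 0.1]
print(round(swank_factor(heights, probs), 3))
```

A delta-function distribution (every x-ray producing the same light output) gives I = 1; broader distributions give I < 1 and correspondingly degraded detective quantum efficiency.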
http://chem.ps.uci.edu/~kieron/dft/book/ The ABC of DFT
Burke, Kieron
Bujila, R; Nowik, P; Poludniowski, G [Karolinska University Hospital, Stockholm, Stockholm (Sweden)
2014-06-01T23:59:59.000Z
Purpose: ImpactMC (CT Imaging, Erlangen, Germany) is a Monte Carlo (MC) software package that offers a GPU enabled, user definable and validated method for 3D dose distribution calculations for radiography and Computed Tomography (CT). ImpactMC, in and of itself, offers limited capabilities to perform batch simulations. The aim of this work was to develop a framework for the batch simulation of absorbed organ dose distributions from CT scans of computational voxel phantoms. Methods: The ICRP 110 adult Reference Male and Reference Female computational voxel phantoms were formatted into compatible input volumes for MC simulations. A Matlab (The MathWorks Inc., Natick, MA) script was written to loop through a user defined set of simulation parameters and 1) generate input files required for the simulation, 2) start the MC simulation, 3) segment the absorbed dose for organs in the simulated dose volume and 4) transfer the organ doses to a database. A demonstration of the framework is made where the glandular breast dose to the adult Reference Female phantom, for a typical Chest CT examination, is investigated. Results: A batch of 48 contiguous simulations was performed with variations in the total collimation and spiral pitch. The demonstration of the framework showed that the glandular dose to the right and left breast will vary depending on the start angle of rotation, total collimation and spiral pitch. Conclusion: The developed framework provides a robust and efficient approach to performing a large number of user defined MC simulations with computational voxel phantoms in CT (minimal user interaction). The resulting organ doses from each simulation can be accessed through a database which greatly increases the ease of analyzing the resulting organ doses. The framework developed in this work provides a valuable resource when investigating different dose optimization strategies in CT.
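The four-step batch loop described in the abstract (generate input files, run the MC simulation, segment organ doses, store them in a database) can be sketched as follows. The original framework used a Matlab script driving ImpactMC; here the simulation and segmentation steps are collapsed into a stub function, and all names, parameter values, and the schema are hypothetical:

```python
import itertools
import sqlite3

def run_mc_and_segment(collimation, pitch):
    """Stub standing in for steps 1-3: write the MC input files, run the
    simulation, and segment organ doses from the resulting dose volume."""
    return {"left breast": 10.0 / pitch, "right breast": 9.5 / pitch}

collimations = [19.2, 38.4]   # total collimation (mm), hypothetical
pitches = [0.6, 1.0]          # spiral pitch, hypothetical

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doses (collimation REAL, pitch REAL, organ TEXT, dose REAL)")

# Step 4: loop over the parameter grid and transfer organ doses to the database.
for coll, pitch in itertools.product(collimations, pitches):
    for organ, dose in run_mc_and_segment(coll, pitch).items():
        conn.execute("INSERT INTO doses VALUES (?, ?, ?, ?)", (coll, pitch, organ, dose))
conn.commit()

n = conn.execute("SELECT COUNT(*) FROM doses").fetchone()[0]
print(n)
```

Keeping results in a database, as the abstract notes, makes it easy to query organ doses across the whole batch afterwards.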
Near quantitative agreement of model-free DFT-MD predictions...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Near quantitative agreement of model-free DFT-MD predictions with XAFS observations of the hydration structure of highly ...
Statistical assessment of Monte Carlo distributional tallies
Kiedrowski, Brian C [Los Alamos National Laboratory; Solomon, Clell J [Los Alamos National Laboratory
2010-12-09T23:59:59.000Z
Four tests are developed to assess the statistical reliability of distributional or mesh tallies. To this end, the relative variance density function is developed and its moments are studied using simplified, non-transport models. The statistical tests are performed upon the results of MCNP calculations of three different transport test problems and appear to show that the tests are appropriate indicators of global statistical quality.
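The idea of judging the statistical quality of a distributional (mesh) tally bin by bin can be illustrated with a toy example. The sketch below computes a per-bin relative error from batch statistics and a simple global pass fraction; it is a generic illustration, not the four specific tests developed in the paper, and the threshold of 0.10 is an arbitrary assumption.

```python
import random
import statistics

# Toy mesh tally: score batches into B spatial bins and compute the
# per-bin relative error R = s / (mean * sqrt(n)) from batch statistics.
random.seed(1)
n_batches, n_bins = 100, 8
batch_scores = [[random.expovariate(1.0 + b) for b in range(n_bins)]
                for _ in range(n_batches)]

rel_err = []
for b in range(n_bins):
    scores = [batch[b] for batch in batch_scores]
    mean = statistics.fmean(scores)
    sdev = statistics.stdev(scores)
    rel_err.append(sdev / (mean * n_batches ** 0.5))

# A simple global quality indicator: fraction of bins with R < 0.10.
well_converged = sum(r < 0.10 for r in rel_err) / n_bins
print(well_converged)
```

A real assessment would also examine how the distribution of these per-bin relative errors behaves as more histories are run, which is closer in spirit to the relative variance density function studied in the paper.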
Monte Carlo techniques applied to PERT networks
McGowan, Lawrence Lee
1964-01-01T23:59:59.000Z
conceived by Booz, Allen and Hamilton. This concept was replaced by the three-estimate concept during the Polaris Submarine Project, when it was found that the one-estimate concept did not provide accurate solutions. However, in certain cases where union... on its way to more profitable adaptations. The PERT system is essentially an outgrowth of the Gantt or bar chart concept of controlling time elements in a program. It was first formalized in 1956 by the management consultant firm of Booz, Allen...
VIM continuous energy Monte Carlo transport code
Blomquist, R.N. [Argonne National Lab., IL (United States)
1995-12-31T23:59:59.000Z
VIM is a continuous energy neutron and photon transport code. VIM solves the steady-state neutron or photon transport problem in any detailed three-dimensional geometry using either continuous energy-dependent ENDF nuclear data or multigroup cross sections. Neutron transport is carried out in a criticality mode, or in a fixed source mode (optionally incorporating subcritical multiplication). Photon transport is simulated in the fixed source mode. The geometry options are infinite medium, combinatorial geometry, and hexagonal or rectangular lattices of combinatorial geometry unit cells, and rectangular lattices of cells of assembled plates. Boundary conditions include vacuum, specular and white reflection, and periodic boundaries for reactor cell calculations. VIM was developed primarily as a reactor criticality code. Its tally and edit features are very easy to use, and automatically provide fission, fission production, absorption, capture, elastic scattering, inelastic scattering, and (n,2n) reaction rates for each edit region, edit energy group, and isotope, as well as the corresponding macroscopic information, including group scalar fluxes. Microscopic and macroscopic cross sections, including microscopic P_N group-to-group cross sections, are also easily produced.
Multiple quadrature by Monte Carlo techniques
Voss, John Dietrich
2012-06-07T23:59:59.000Z
manner in a Fortran program the statement Y = FRN(a) will cause a floating-point random number X (0 &lt; X &lt; 1) to be placed in location Y. The argument a may be a constant or variable of any mode; it is employed to allow reference to FRN as a... function subprogram. By using FRN(a) we can generate a sequence of uniformly distributed pseudo-random numbers between zero and one. Since the distribution is uniform, the probability of a number of the sequence lying between X and (X+DX) is DX, i...
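The behavior of an FRN-style routine can be sketched with a minimal linear congruential generator. The MINSTD constants below are an illustrative assumption, not the constants of the original FRN; the check at the end verifies the stated uniformity property, that the probability of a deviate falling in an interval of width DX is DX.

```python
# Minimal LCG standing in for the Fortran FRN routine: each call
# returns a uniform pseudo-random float in (0, 1).
_state = 12345

def frn(_a=None):
    """Return the next uniform deviate in (0, 1); the argument is
    ignored, mirroring FRN's dummy argument."""
    global _state
    _state = (16807 * _state) % 2147483647   # MINSTD LCG step
    return _state / 2147483647

# Uniformity means P(X0 < X < X0 + DX) = DX for any interval in (0, 1).
n = 100_000
x0, dx = 0.30, 0.05
hits = sum(x0 < frn() < x0 + dx for _ in range(n))
frac = hits / n   # should be close to dx = 0.05
print(frac)
```

The empirical fraction converges to DX as the sample grows, which is exactly the property the abstract appeals to when using FRN for Monte Carlo quadrature.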
Amalfi/Positano, Monte Carlo, Marseille
Spence, Harlan Ernest
Next program manager · Comprehensive pre-departure information · Air- and cruise-related ... Ephesus, a veritable museum of Greek and Roman history with impeccably preserved relics from its ancient ... --the largest island in the Mediterranean--abounds with natural and architectural beauty. AMALFI/POSITANO, ITALY
Monte Carlo simulation in systems biology
Schellenberger, Jan
2010-01-01T23:59:59.000Z
B.O. Candidate states of Helicobacter pylori's genome-scale reconstruction of Helicobacter pylori (iIT341 GSM/GPR): ... coli core model [80] 3) Helicobacter pylori iIT341 [37] 4)
Adaptive Non-Boltzmann Monte Carlo
Fitzgerald, M.; Picard, R.R.; Silver, R.N.
1998-06-01T23:59:59.000Z
This manuscript generalizes the use of transition probabilities (TPs) between states, which are efficient relative to histogram procedures in deriving system properties. The empirical TPs of a simulation depend on the importance weights and are temperature-specific, so they are not conducive to accumulating statistics as weights change or to extrapolating in temperature. To address these issues, the authors provide a method for inferring Boltzmann-weighted TPs for one temperature from simulations run at other temperatures and/or at different adaptively varying importance weights. They refer to these as canonical transition probabilities (CTPs). System properties are estimated from CTPs. Statistics on CTPs are gathered by inserting a low-cost, easily implemented bookkeeping step into the Metropolis algorithm for non-Boltzmann sampling. The CTP method is inherently adaptive, can take advantage of partitioning of the state space into small regions using either serial or (embarrassingly) parallel architectures, and reduces variance by avoiding histogramming. The authors also demonstrate how system properties may be extrapolated in temperature from CTPs without the extra memory required when using energy as a microstate label, and without the solution of the non-linear equations used in histogram methods.
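The low-cost bookkeeping step can be sketched on a toy problem: a Metropolis walk on a small one-dimensional energy ladder that tallies empirical transition counts between states. This illustrates only the counting step; the full CTP machinery (reweighting counts gathered at other temperatures or importance weights) is not reproduced, and the ladder energies are arbitrary.

```python
import math
import random

random.seed(7)
energies = [0.0, 1.0, 2.0, 3.0]          # toy microstate energies
beta = 1.0                                # inverse temperature
n_states = len(energies)
counts = [[0] * n_states for _ in range(n_states)]

state = 0
for _ in range(50_000):
    # Propose a neighboring state, clipping at the ends of the ladder.
    prop = min(max(state + random.choice((-1, 1)), 0), n_states - 1)
    # Metropolis acceptance for the Boltzmann weight exp(-beta * E).
    if random.random() < min(1.0, math.exp(-beta * (energies[prop] - energies[state]))):
        new = prop
    else:
        new = state
    counts[state][new] += 1               # the low-cost bookkeeping step
    state = new

# Empirical transition probabilities out of state 0.
row0 = counts[0]
tp = [c / sum(row0) for c in row0]
print([round(p, 3) for p in tp])
```

Accumulating the full matrix of counts costs one increment per step, which is why the bookkeeping can be inserted into an existing Metropolis loop essentially for free.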
Advanced Monte Carlo Methods: Direct Simulation
Mascagni, Michael
assessment of investment portfolios, computer games, roadway design simulation, war gaming ... Direct Simulation ... and is lost from the solar system; Kepler's Third Law: the time taken to describe an orbit with energy -z is z^-3...
Feedback-optimized parallel tempering Monte Carlo
Katzgraber, H G; Trebst, S; Huse, D A; Troyer, M
2006-01-01T23:59:59.000Z
briefly discuss possible feedback schemes for systems that ...
Park, Su-Jung; /Bonn U.
2004-02-01T23:59:59.000Z
The measurement of the tt̄ production cross section at √s = 1.96 TeV using the final state with an electron and jets is studied with Monte Carlo event samples. All methods used in the real data analysis to measure efficiencies and to estimate the background contributions are examined. The studies focus on measuring the electron reconstruction efficiencies as well as on improving the electron identification and background suppression. With a generated input cross section of 7 pb the following result is obtained: σ(tt̄) = (7 ± 1.63 (stat) +0.94/−1.14 (syst)) pb.
Screened Hybrid and DFT + U Studies of the Structural, Electronic...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Screened Hybrid and DFT + U Studies of the Structural, Electronic, and Optical Properties of U3O8 ...
Density Functional Theory (DFT) Rob Parrish
Sherrill, David
Density Functional Theory (DFT). Rob Parrish, robparrish@gmail.com. Agenda · The mechanism ... Because of Hermitian Operators: Kinetic Energy Density ... The density completely defines the observable state of the system; the way in which it does so (the functional
RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS
RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS. A. Kersch, W. Morokoff ... high-accuracy modeling of the radiative heat transfer from the heater to the wafer. Figure 1 shows the ... Monte Carlo simulation is often used to solve radiative transfer problems where complex physical phenomena ...
Inelastic neutron scattering, Raman and DFT investigations of...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Inelastic neutron scattering, Raman and DFT investigations of the adsorption of phenanthrenequinone on onion-like carbon Daniela M. Anjos a , Alexander I. Kolesnikov a , Zili Wu a...
approximate dft method: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
method for the calculation of the electronic ... the success of DFT ... The optimization of new functionals depends on two factors: the functional form must ... of the...
New Development of Self-Interaction Corrected DFT for Extended...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
DFT-SIC calculation can be carried out efficiently even for extended systems. Using this new development the formation energies of defects in 3C-SiC were calculated and compared...
Session #1: Cutting Edge Methodologies--Beyond Current DFT
Broader source: Energy.gov (indexed) [DOE]
Comparisons of PBE, LDA, CCSD(T) and experiment for the benzene dimer and for the interaction between H2 and carbon (graphene); van der Waals (vdW) interactions are omitted in the LDA and GGA; vdW-DFT: Langreth,...
Propagation of uncertainties in the nuclear DFT models
Markus Kortelainen
2014-09-04T23:59:59.000Z
Parameters of nuclear density functional theory (DFT) models are usually adjusted to experimental data. As a result, they carry a certain theoretical error which, as a consequence, propagates to the predicted quantities. In this work we address the propagation of theoretical error, within nuclear DFT models, from the model parameters to the predicted observables. In particular, the focus is set on the Skyrme energy density functional models.
CONVERGENCE OF MARKOV CHAIN MONTE CARLO ALGORITHMS WITH APPLICATIONS TO
Rosenthal, Jeffrey S.
Jeffrey Rosenthal, my thesis advisor, for his guidance in this project and for teaching me so much. Jeff Carter, Sylvia Williams, and Tom Glinos who have always been available to sort out my administrative
A Monte Carlo Approach To Generator Portfolio Planning And Carbon...
... requirement of 1 day in 10 years. The present study includes wind, centralized solar thermal, and rooftop photovoltaics, as well as hydroelectric, geothermal, and natural...
A Monte Carlo based spent fuel analysis safeguards strategy assessment
Fensin, Michael L [Los Alamos National Laboratory; Tobin, Stephen J [Los Alamos National Laboratory; Swinhoe, Martyn T [Los Alamos National Laboratory; Menlove, Howard O [Los Alamos National Laboratory; Sandoval, Nathan P [Los Alamos National Laboratory
2009-01-01T23:59:59.000Z
Safeguarding nuclear material involves the detection of diversions of significant quantities of nuclear materials, and the deterrence of such diversions by the risk of early detection. There are a variety of motivations for quantifying plutonium in spent fuel assemblies by means of nondestructive assay (NDA), including the following: strengthening the International Atomic Energy Agency's ability to safeguard nuclear facilities, shipper/receiver difference, input accountability at reprocessing facilities, and burnup credit at repositories. Many NDA techniques exist for measuring signatures from spent fuel; however, no single NDA technique can, in isolation, quantify elemental plutonium and other actinides of interest in spent fuel. A study has been undertaken to determine the best integrated combination of cost-effective techniques for quantifying plutonium mass in spent fuel for nuclear safeguards. A standardized assessment process was developed to compare the effective merits and faults of 12 different detection techniques in order to integrate a few techniques and to down-select among them in preparation for experiments. The process involves generating a burnup/enrichment/cooling-time-dependent spent fuel assembly library, creating diversion scenarios, developing detector models, and quantifying the capability of each NDA technique. Because hundreds of input and output files must be managed in the couplings of data transitions for the different facets of the assessment process, a graphical user interface (GUI) was developed that automates the process. This GUI allows users to visually create diversion scenarios with varied replacement materials and to generate an MCNPX fixed-source detector assessment input file. The end result of the assembly library assessment is to select a set of common source terms and diversion scenarios for quantifying the capability of each of the 12 NDA techniques.
We present here the generalized assessment process, the techniques employed to automate the coupled facets of the assessment process, and the standard burnup/enrichment/cooling-time-dependent spent fuel assembly library. We also clearly define the diversion scenarios that will be analyzed during the standardized assessments. Though this study is currently limited to generic PWR assemblies, it is expected that the results of the assessment will yield adequate knowledge of spent fuel analysis strategies to help the down-select process for other reactor types.
Boosting Monte Carlo Rendering by Ray Histogram Fusion MAURICIO DELBRACIO
Kazhdan, Michael
and Universidad de la República, Uruguay; PABLO MUSÉ, Universidad de la República, Uruguay; ANTONI BUADES ... (ENS), ENS-Cachan, France, mdelbra@fing.edu.uy; P. Musé, Universidad de la República, Montevideo, Uruguay
Monte Carlo Modeling of Delayed Neutrons from Photofission
Pozzi, Sara A [ORNL; Monville, Maura E [ORNL; Padovani, Enrico [Nuclear Engineering Department Politecnico di Milano, Milan, Italy
2007-01-01T23:59:59.000Z
We describe the implementation of algorithms related to the delayed neutron production in photon-induced fission on actinides. The algorithms are based on data from experiments and have been implemented in the MCNP-PoliMi code. The modified code is being used to design and analyze methods to identify concealed highly enriched uranium with a system based on the use of photon interrogation and scintillation detectors.
MONTE CARLO STUDY OF THE BAXTER-WU MODEL
Adler, Joan
by averaging over nine runs. The fluctuations are, in general, three orders of magnitude smaller than ... ln scaling for second-order transitions ... 3.2.2 The reweighting method ... 3.3.1 LW flow-chart ... 3.3.2 Calculation
Efficient Monte Carlo methods for light transport in scattering media
Jarosz, Wojciech
2008-01-01T23:59:59.000Z
to the radiative transport equation. These techniques ... These gradients take into account the full radiative transport equation, which leads
Monte Carlo simulations of channeling spectra recorded for samples...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
dislocation has been discussed. Several examples of the analysis performed at different energies of analyzing ions are presented. The results obtained demonstrate that the new...
A Monte Carlo tool for multi-node reliability evaluation
Thalasila, Chander Pravin
1993-01-01T23:59:59.000Z
Hybrid Radiosity/Monte Carlo Methods Peter Shirley
Shirley, Peter
important of which are: continuous random variable, probability density function, expected value ... an elementary probability theory book (particularly the sections on continuous, rather than discrete, random variables) ... density function, p, associated with x (the relationship is denoted x ~ p). If x ranges over some region S
Visualizing Quantum Monte Carlo Study of Photoprotection via...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
project is to increase understanding of the complex processes that occur during photosynthesis. Photosynthesis, which is an efficient energy transfer system, is an example of...
Monte Carlo Computation of Optimal in Complete Markets
Cvitanic, Jaksa
E-mail: goukasia@usc.edu. FBE, Marshall School of Business, USC, Los Angeles, CA 90089.