While these samples are representative of the content of NLE, they are not comprehensive, nor are they the most current set. We encourage you to perform a real-time search of NLE to obtain the most current and comprehensive results.

1

Addendum to “Event-chain Monte Carlo algorithms for hard-sphere systems”

We extend the event-chain Monte Carlo algorithm from hard-sphere interactions to general potentials. This event-driven Monte Carlo algorithm is nonlocal and rejection free and allows for the breaking of detailed balance. ...

Bernard, Etienne
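
As an illustration of the event-chain idea in its original hard-sphere setting, the sketch below implements a single event-chain move for hard rods on a one-dimensional ring. The function name, the ring geometry, and the parameter names are illustrative assumptions, not the authors' code.

```python
import random

def event_chain_move(positions, sigma, L, ell, rng):
    """One event-chain move for hard rods of length sigma on a ring of
    circumference L. A chain of total displacement ell starts at a random
    rod; each rod moves right until it touches its neighbor, which then
    carries the remaining displacement (the "lifting" step). The move is
    nonlocal and rejection free. `positions` must be in circular order."""
    n = len(positions)
    i = rng.randrange(n)
    remaining = ell
    while remaining > 1e-12:
        j = (i + 1) % n                                # neighbor to the right
        gap = (positions[j] - positions[i] - sigma) % L
        step = min(gap, remaining)
        positions[i] = (positions[i] + step) % L
        remaining -= step
        i = j                                          # transfer the motion
    return positions
```

Because every proposed displacement is carried out (possibly shared among several rods), no move is ever rejected; the hard-core constraint is enforced by the collision events themselves.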

2

The Monte Carlo generator MERADGEN for the simulation of QED radiative events in polarized Moller scattering has been developed. Analytical integration, performed wherever possible, provides rather fast generation. Some numerical tests and histograms are presented.

Andrei Afanasev; Eugene Chudakov; Alexander Ilyichev; Vladimir Zykunov

2006-04-04T23:59:59.000Z

3

This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.

Brown, F.B.; Sutton, T.M.

1996-02-01T23:59:59.000Z

4

Population Monte Carlo algorithms

We give a cross-disciplinary survey of "population" Monte Carlo algorithms. In these algorithms, a set of "walkers" or "particles" is used as a representation of a high-dimensional vector. The computation is carried out by a random walk and the split/deletion of these objects. Such algorithms have been developed in various fields of physics and the statistical sciences under many different names: "quantum Monte Carlo", "transfer-matrix Monte Carlo", "Monte Carlo filter (particle filter)", "sequential Monte Carlo", "PERM", etc. Here we discuss them in a coherent framework. We also touch on related algorithms: genetic algorithms and annealed importance sampling.

Yukito IBA

2000-08-16T23:59:59.000Z
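
The split/deletion step at the heart of these population algorithms can be sketched as follows. The function name and the integer-plus-Bernoulli splitting rule are illustrative assumptions; it preserves the weighted population sum in expectation.

```python
import random

def resample_split_delete(walkers, rng):
    """One split/deletion (resampling) step: a walker with nonnegative
    weight w is replaced by int(w) unit-weight copies, plus one more with
    probability w - int(w) (Russian roulette). The weighted sum over the
    population is preserved in expectation."""
    out = []
    for state, w in walkers:
        n = int(w)
        if rng.random() < w - n:
            n += 1
        out.extend((state, 1.0) for _ in range(n))
    return out
```

Starting from two walkers with total weight 3.0, for example, the resampled population size is 2, 3, or 4 on any given step, but 3 on average.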

5

NLE Websites -- All DOE Office Websites (Extended Search)

Quantum Monte Carlo for the Electronic Structure of Atoms and Molecules. Brian Austin, Lester Group, U.C. Berkeley. BES Requirements Workshop, Rockville, MD, February 9, 2010. Outline...

6

An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm

Science Conference Proceedings (OSTI)

A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles with hard-core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms; rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level; however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. Results do not confirm the existence of periodic (cycling) motion of the polymer chain.

Donev, A; Garcia, A L; Alder, B J

2007-07-30T23:59:59.000Z

7

Monte Carlo next-event point flux estimation for RCP01

Two next-event point estimators have been developed and programmed into the RCP01 Monte Carlo program for solving neutron transport problems in three-dimensional geometry with detailed energy description. These estimators use a simplified but accurate flux-at-a-point tallying technique. Anisotropic scattering in the lab system at the collision site is accounted for by determining the exit energy that corresponds to the angle between the location of the collision and the point detector. Elastic, inelastic, and thermal kernel scattering events are included in this formulation. An averaging technique is used in both estimators to eliminate the well-known problem of infinite variance due to collisions close to the point detector. In a novel approach to improve the estimator's efficiency, a Russian roulette scheme based on anticipated flux fall-off is employed where averaging is not appropriate. A second estimator successfully uses a simple rejection technique in conjunction with detailed tracking where averaging isn't needed. Test results show good agreement with known numeric solutions. Efficiencies are examined as a function of input parameter selection and problem difficulty.

Martz, R.L.; Gast, R.C.; Tyburski, L.J.

1991-12-31T23:59:59.000Z

8

The path forward: Monte Carlo Convergence discussion

This is a summary of 'the path forward' discussion session of the NuInt09 workshop, which focused on Monte Carlo event generators. The main questions raised as part of this discussion are: how can Monte Carlo generators be made more reliable, and how important is it to work on a universal Monte Carlo event generator? In this contribution, several experts in the field summarize their views, as presented at the workshop.

Andreopoulos, Costas [Rutherford Appleton Laboratory, STFC, Oxfordshire OX11 0QX (United Kingdom)]; Gallagher, Hugh [Tufts University, Medford, Massachusetts (United States)]; Hayato, Yoshinari [Kamioka Observatory, ICRR, University of Tokyo, Higashi-Mozumi 456, Kamioka-cho, Hida-city, Gifu 506-1205 (Japan)]; Sobczyk, Jan T. [Institute of Theoretical Physics, University of Wroclaw (Poland)]; Walter, Chris [Department of Physics, Duke University, Durham, NC 27708 (United States)]; Zeller, Sam [Los Alamos National Laboratory, Los Alamos, NM (United States)]

2009-11-25T23:59:59.000Z

9

Monte Carlo Neutrino Oscillations

We demonstrate that the effects of matter upon neutrino propagation may be recast as the scattering of the initial neutrino wavefunction. Exchanging the differential Schrodinger equation for an integral equation for the scattering matrix S permits a Monte Carlo method for the computation of S that removes many of the numerical difficulties associated with direct integration techniques.

James P. Kneller; Gail C. McLaughlin

2005-09-29T23:59:59.000Z

10

Monte Carlo Methods and Partial Differential Equations ...

Science Conference Proceedings (OSTI)

... Monte Carlo Methods and Partial Differential Equations: Algorithms and Implications for High-Performance Computing. ...

2013-08-16T23:59:59.000Z

11

Monte Carlo Methods in Chemistry

Science Conference Proceedings (OSTI)

Monte Carlo methods fulfil an important dual role. At a specific level, they provide a general-purpose numerical approach to problems in a wide range of topics. Using such methods, we can explore the characteristics of specific systems without introducing ...

Jim Doll; David L. Freeman

1994-03-01T23:59:59.000Z

12

Optimal generalized truncated sequential Monte Carlo test

Science Conference Proceedings (OSTI)

When it is not possible to obtain the analytical null distribution of a test statistic U, Monte Carlo hypothesis tests can be used to perform the test. Monte Carlo tests are commonly used in a wide variety of applications, including spatial statistics, ... Keywords: 62L05, 62L15, 65C05, Execution time, Power loss, Resampling risk, p-value density

Ivair R. Silva, Renato M. Assunção

2013-10-01T23:59:59.000Z
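
A minimal sketch of the basic (non-sequential) Monte Carlo test that the abstract builds on: simulate the statistic B times under the null hypothesis and report the standard (1 + #exceedances)/(B + 1) p-value. The statistic and the names below are illustrative assumptions.

```python
import random

def monte_carlo_p_value(u_obs, simulate_null, B, rng):
    """Monte Carlo p-value for an upper-tailed test: simulate the test
    statistic B times under the null and count how often it reaches the
    observed value u_obs. The +1 terms make the test exact."""
    exceed = sum(1 for _ in range(B) if simulate_null(rng) >= u_obs)
    return (1 + exceed) / (B + 1)

rng = random.Random(42)
# Toy null model: the statistic is Uniform(0, 1), so an observed value of
# 0.999 should yield a small p-value.
p = monte_carlo_p_value(0.999, lambda r: r.random(), 999, rng)
```

The truncated-sequential variants studied in the paper stop this loop early once the p-value's fate is decided, trading a controlled power loss for shorter execution time.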

13

An alternative Monte Carlo approach to the thermal radiative transfer problem

The usual Monte Carlo approach to the thermal radiative transfer problem is to view Monte Carlo as a solution technique for the nonlinear thermal radiative transfer equations. The equations contain time derivatives which are approximated by introducing small time steps. An alternative approach avoids time steps by using Monte Carlo to directly sample the time at which the next event occurs. That is, the time is advanced on a natural event-by-event basis rather than by introducing an artificial time step.

Booth, Thomas E., E-mail: teb@lanl.go [Mail Stop A143, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

2011-02-20T23:59:59.000Z
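
For a constant total event rate, the event-by-event time advance described above amounts to sampling exponentially distributed waiting times. The sketch below uses a constant rate as a simplifying assumption (the actual thermal radiative transfer rates vary), and the names are illustrative.

```python
import math
import random

def advance_by_events(t0, t_end, rate, rng):
    """Advance time on a natural event-by-event basis: sample the
    exponentially distributed waiting time to the next event directly
    instead of marching with an artificial time step. Returns the event
    times in (t0, t_end]."""
    t, times = t0, []
    while True:
        t += -math.log(1.0 - rng.random()) / rate   # exponential waiting time
        if t > t_end:
            return times
        times.append(t)
```

With rate 2.0, the sampled events form a Poisson process: about two events per unit time, with no time-step discretization error.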

14

The MC21 Monte Carlo Transport Code

MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.

Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H

2007-01-09T23:59:59.000Z

15

Data Decomposition of Monte Carlo Particle Transport Simulations...

NLE Websites -- All DOE Office Websites (Extended Search)

Data Decomposition of Monte Carlo Particle Transport Simulations via Tally Servers...

16

Fission Matrix Capability for MCNP Monte Carlo

Science Conference Proceedings (OSTI)

In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP[1], addresses these problems. When a Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we use the random walk information not only to build the next-iteration fission source, but also to build a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. 
As a consequence of discretization we get a spatially low-order kernel, the fundamental eigenvector of which should converge faster than that of the continuous kernel. We can then redistribute the fission bank to match the fundamental fission matrix eigenvector, effectively eliminating all higher modes. For all computations here, biasing is not used, with the intention of comparing the unaltered, conventional Monte Carlo process with the fission matrix results. The source convergence of standard Monte Carlo criticality calculations is, to some extent, always subject to the characteristics of the problem. This method seeks to partially eliminate this problem-dependence by directly calculating the spatial coupling. The primary cost of this, which has prevented widespread use since its inception [2,3,4], is the extra storage required. To account for the coupling of all N spatial regions to every other region requires storing N{sup 2} values. For realistic problems, where a fine resolution is required for the suppression of discretization error, the storage becomes inordinate. Two factors lead to a renewed interest here: the larger memory available on modern computers and the development of a better storage scheme based on physical intuition. When the distance between source and fission events is short compared with the size of the entire system, saving memory by accounting for only local coupling introduces little extra error. We can gain other information from directly tallying the fission kernel: higher eigenmodes and eigenvalues. Conventional Monte Carlo cannot calculate these data - here we have a way to get new information for multiplying systems. In Ref. [5], higher mode eigenfunctions are analyzed for a three-region 1-dimensional problem and a 2-dimensional homogeneous problem. We analyze higher modes for more realistic problems. 
There is also the question of practical use of this information; here we examine a way of using eigenmode information to address the negative confidence interval bias due to inter-cycle correlation. We apply this method mainly to four problems: 2D pressurized water reactor (PWR) [6],

Carney, Sean E. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory

2012-09-05T23:59:59.000Z
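
The power-iteration behavior the abstract invokes is easy to see on a small deterministic analogue. Below, a 2x2 stand-in for the discretized fission matrix (illustrative numbers, not from the paper) converges geometrically at its dominance ratio.

```python
def power_iteration(A, iters):
    """Power iteration: repeatedly apply A and renormalize. The error in
    the dominant-eigenpair estimate shrinks geometrically, by a factor of
    the dominance ratio |lambda_1 / lambda_0| per iteration."""
    x = [1.0] + [0.0] * (len(A) - 1)
    lam = 1.0
    for _ in range(iters):
        y = [sum(a * v for a, v in zip(row, x)) for row in A]
        lam = max(abs(v) for v in y)      # infinity-norm eigenvalue estimate
        x = [v / lam for v in y]
    return lam, x

# The eigenvalues of this matrix are 3 and 1, so the dominance ratio is 1/3
# and convergence is fast; a ratio near 1 would need many more iterations.
lam, vec = power_iteration([[2.0, 1.0], [1.0, 2.0]], 60)
```

This is the same mechanism that makes high-dominance-ratio reactor problems slow to converge: each Monte Carlo source iteration damps the higher modes only by that ratio.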


18

Monte Carlo simulation in financial engineering

Science Conference Proceedings (OSTI)

This paper reviews the use of Monte Carlo simulation in the field of financial engineering. It focuses on several interesting topics and introduces their recent development, including path generation, pricing American-style derivatives, evaluating Greeks ...

Nan Chen; L. Jeff Hong

2007-12-01T23:59:59.000Z

19

Monte Carlo simulation in systems biology

2. The history of Monte Carlo sampling in systems biology ... 1.1 Simulation tools: the Systems Biology Workbench and BioSPICE ... Cellular and Molecular Biology, ASM Press, Washington

Schellenberger, Jan

2010-01-01T23:59:59.000Z

20

Exponential convergence with adaptive Monte Carlo

Science Conference Proceedings (OSTI)

For over a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. Now, exponential convergence has been empirically demonstrated on a spatially continuous problem.

Booth, T.E.

1997-11-01T23:59:59.000Z

21

The Rational Hybrid Monte Carlo Algorithm

The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.

M. A. Clark

2006-10-06T23:59:59.000Z

22

A First-Passage Kinetic Monte Carlo algorithm for complex diffusion-reaction systems

Science Conference Proceedings (OSTI)

We develop an asynchronous event-driven First-Passage Kinetic Monte Carlo (FPKMC) algorithm for continuous time and space systems involving multiple diffusing and reacting species of spherical particles in two and three dimensions. The FPKMC algorithm ... Keywords: Asynchronous algorithms, Diffusion-reaction, First-passage, Kinetic Monte Carlo

Aleksandar Donev; Vasily V. Bulatov; Tomas Oppelstrup; George H. Gilmer; Babak Sadigh; Malvin H. Kalos

2010-05-01T23:59:59.000Z

23

Monte Carlo Renormalization Group: a review

Science Conference Proceedings (OSTI)

The logic and the methods of Monte Carlo Renormalization Group (MCRG) are reviewed. A status report of results for 4-dimensional lattice gauge theories derived using MCRG is presented. Existing methods for calculating the improved action are reviewed and evaluated. The Gupta-Cordery improved MCRG method is described and compared with the standard one. 71 refs., 8 figs.

Gupta, R.

1985-01-01T23:59:59.000Z

24

Relevance of accurate Monte Carlo modeling in nuclear medical imaging

Science Conference Proceedings (OSTI)

Monte Carlo techniques have become popular in different areas of medical physics with the advantage of powerful computing systems. In particular

Habib Zaidi

1999-01-01T23:59:59.000Z

25

Solving Systems of Linear Equations with Relaxed Monte Carlo Method

Science Conference Proceedings (OSTI)

The problem of solving systems of linear algebraic equations by parallel Monte Carlo numerical methods is considered. A parallel Monte Carlo method with relaxation is presented. This is a report of research in progress, showing the effectiveness of ... Keywords: Monte Carlo method, linear solver, parallel algorithms, systems of linear algebraic equations

Chih Jeng Kenneth Tan

2002-05-01T23:59:59.000Z
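
The classical random-walk idea underlying such Monte Carlo linear solvers can be sketched for a single solution component. This is a generic Neumann-series estimator under the assumption that the iteration matrix has spectral radius below 1, not the paper's relaxed parallel method; all names are illustrative.

```python
import random

def mc_solve_component(H, b, i, n_walks, rng):
    """Estimate component i of the solution of x = H x + b by sampling
    terms of the Neumann series x = b + H b + H^2 b + ...  Each walk
    moves to a uniformly random next index and is killed with probability
    1/2 per step (Russian roulette); the weight factor n * 2 corrects for
    both choices, so the estimator is unbiased when the spectral radius
    of H is below 1."""
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        state, weight, est = i, 1.0, b[i]
        while rng.random() >= 0.5:            # survive with probability 1/2
            nxt = rng.randrange(n)            # uniform transition
            weight *= H[state][nxt] * n * 2   # importance correction
            state = nxt
            est += weight * b[state]
        total += est
    return total / n_walks
```

For H = [[0.1, 0.2], [0.2, 0.1]] and b = [1, 1], the exact solution gives x_0 = 1.1/0.77, about 1.429, and the walks estimate each component independently, which is what makes the method easy to parallelize.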

26

Condensed history Monte Carlo methods for photon transport problems

Science Conference Proceedings (OSTI)

We study methods for accelerating Monte Carlo simulations that retain most of the accuracy of conventional Monte Carlo algorithms. These methods - called Condensed History (CH) methods - have been very successfully used to model the transport of ionizing ... Keywords: Condensed history models, Monte Carlo methods, Radiative transport equation

Katherine Bhan; Jerome Spanier

2007-08-01T23:59:59.000Z

27

Monte Carlo: in the beginning and some great expectations

The central theme will be on the historical setting and origins of the Monte Carlo Method. The scene was post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo Event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the Method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest, perhaps, is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences.

Metropolis, N.

1985-01-01T23:59:59.000Z

28

Status of Monte Carlo at Los Alamos

At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.

Thompson, W.L.; Cashwell, E.D.

1980-01-01T23:59:59.000Z

29

Monte Carlo Simulation for Particle Detectors

Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...

Pia, Maria Grazia

2012-01-01T23:59:59.000Z

30

Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method

This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and the cross section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of the fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes and the parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

2002-01-01T23:59:59.000Z

31

QWalk: A quantum Monte Carlo program for electronic structure

Science Conference Proceedings (OSTI)

We describe QWalk, a new computational package capable of performing quantum Monte Carlo electronic structure calculations for molecules and solids with many electrons. We describe the structure of the program and its implementation of quantum Monte ... Keywords: Monte Carlo, Quantum mechanics, Stochastic methods

Lucas K. Wagner; Michal Bajdich; Lubos Mitas

2009-05-01T23:59:59.000Z

32

Monte Carlo Particle Transport: Algorithm and Performance Overview

National Nuclear Security Administration (NNSA)

Monte Carlo Particle Transport: Algorithm and Performance Overview. N. A. Gentile, R. J. Procassini and H. A. Scott, Lawrence Livermore National Laboratory, Livermore, California, 94551. Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that...

33

Quantum Monte Carlo for vibrating molecules

Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H{sub 2}O and C{sub 3} vibrational states, using 7 PESs, 3 trial wavefunction forms, two methods of non-linear basis function parameter optimization, and both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H{sub 2}O and C{sub 3}. For C{sub 3}, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data was collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C{sub 3} PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.

Brown, W.R. [Univ. of California, Berkeley, CA (United States). Chemistry Dept.]|[Lawrence Berkeley National Lab., CA (United States). Chemical Sciences Div.

1996-08-01T23:59:59.000Z

34

Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

Densmore, Jeffrey D [Los Alamos National Laboratory]; Thompson, Kelly G [Los Alamos National Laboratory]; Urbatsch, Todd J [Los Alamos National Laboratory]

2010-11-17T23:59:59.000Z

35

Science Conference Proceedings (OSTI)

This presentation gives an overview of (1) exascale computing: different technologies and how to get there; (2) the high-performance proof-of-concept MCMini: features and results; and (3) the Oatmeal (OpenCL Automatic Memory Allocation Library) toolkit: purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility of GPGPU-based Monte Carlo methods: it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

Marcus, Ryan C. [Los Alamos National Laboratory

2012-07-24T23:59:59.000Z

36

Monte Carlo simulations on Graphics Processing Units

Implementation of basic local Monte Carlo algorithms on ATI Graphics Processing Units (GPUs) is investigated. The Ising model and pure SU(2) gluodynamics simulations are realized with the Compute Abstraction Layer (CAL) of the ATI Stream environment using the Metropolis and heat-bath algorithms, respectively. We present an analysis of both the CAL programming model and the efficiency of the corresponding simulation algorithms on the GPU. In particular, a significant performance speed-up of these algorithms in comparison with serial execution is observed.

Vadim Demchik; Alexei Strelchenko

2009-03-17T23:59:59.000Z
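As an illustration of the Metropolis algorithm that this paper ports to GPUs, here is a minimal serial single-spin-flip sketch for the 2D Ising model. It is plain Python rather than CAL kernels, the parameter values are illustrative, and a cold start is used to avoid metastable stripe states:

```python
import math
import random

def ising_metropolis(L=16, T=1.5, sweeps=400, seed=7):
    """Mean |magnetization| per spin of the 2D Ising model after
    single-spin-flip Metropolis updates (J = 1, k_B = 1, cold start)."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]        # cold start: all spins up
    beta = 1.0 / T
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # sum of the four nearest neighbours (periodic boundaries)
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nb          # energy cost of flipping
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spin[i][j] = -spin[i][j]      # Metropolis accept
        if sweep >= sweeps // 2:              # measure after equilibration
            mags.append(abs(sum(map(sum, spin))) / (L * L))
    return sum(mags) / len(mags)
```

Below the critical temperature (T_c approximately 2.27) the magnetization stays near 1; well above it, it collapses toward 0. Each lattice site depends only on its four neighbours, which is what makes this local update amenable to the massively parallel checkerboard scheme used on GPUs.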

37

Numerical study of error propagation in Monte Carlo depletion simulations.

Improving computer technology and the desire to more accurately model the heterogeneity of the nuclear reactor environment have made the use of Monte Carlo depletion… (more)

Wyant, Timothy Joseph

2012-01-01T23:59:59.000Z

38

Visualizing Quantum Monte Carlo Study of Photoprotection via...

NLE Websites -- All DOE Office Websites (Extended Search)

Quantum Monte Carlo Study of Photoprotection via Carotenoids in Photosynthetic Centers. The...

39

HILO: Quasi Diffusion Accelerated Monte Carlo on Hybrid Architectures

NLE Websites -- All DOE Office Websites (Extended Search)

HILO: Quasi Diffusion Accelerated Monte Carlo on Hybrid Architectures The Boltzmann transport equation provides high fidelity simulation of a diverse range of kinetic systems....

40

A Kinetic Monte Carlo Model for Material Aging: Simulations of ...

Science Conference Proceedings (OSTI)

In this paper, we develop a kinetic Monte Carlo framework aiming at ... A Controlled Stress Energy Minimization Method for Coarse-grained Atomistic Simulation.


41

THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons, and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory

2007-01-10T23:59:59.000Z

42

Quantum Monte Carlo for atoms and molecules

The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time, and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies, recovering 90--100% of the correlation energy, have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.

Barnett, R.N.

1989-11-01T23:59:59.000Z

43

Energy Monte Carlo (EMCEE) | Open Energy Information

Tool Summary: Name: EMCEE and Emc2; Agency/Company/Organization: United States Geological Survey; Sector: Energy; Focus Area: Non-renewable Energy; Topics: Resource assessment; Resource Type: Software/modeling tools; User Interface: Spreadsheet; Website: pubs.usgs.gov/pp/pp1713/26/; Country: United States; Cost: Free

44

Quantum Monte Carlo Endstation for Petascale Computing

The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining, and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has at present become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments build upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures, including the petaflop ORNL Jaguar machine. We developed and redesigned a number of code modules, such as evaluation of wave functions and orbitals, calculation of pfaffians, and introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We addressed several bottlenecks, such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from conventional electronic structure codes such as Gamess, Gaussian, Crystal, and others.
This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the duration of the grant; it has resulted in 13 published papers and 15 invited talks and lectures nationally and internationally. My former graduate student and postdoc Dr. Michal Bajdich, who was supported by this grant, is currently a postdoc at ORNL in the group of Dr. F. Reboredo and Dr. P. Kent and is using the developed tools in a number of DOE projects. The QWalk package has become a truly important research tool used by the electronic structure community and has attracted several new developers in other research groups. Our tools implement several types of correlated-wavefunction approaches (variational, diffusion, and reptation methods) and large-scale optimization methods for wavefunctions, and make it possible to calculate energy differences such as cohesion and electronic gaps, as well as densities and other properties; using multiple runs, one can obtain equations of state for given structures and beyond. Our codes use efficient numerical and Monte Carlo strategies (high-accuracy numerical orbitals, multi-reference wave functions, highly accurate correlation factors, pairing orbitals, force-biased and correlated-sampling Monte Carlo), are robustly parallelized, and run very efficiently on tens of thousands of cores. Our demonstration applications focused on challenging research problems in several fields of materials science, such as transition metal solids. We note that our study of the FeO solid was the first QMC calculation of transition metal oxides at high pressures.

Lubos Mitas

2011-01-26T23:59:59.000Z

45

Quantum Monte Carlo simulations of solids W. M. C. Foulkes

Quantum Monte Carlo simulations of solids. W. M. C. Foulkes, CMTH Group, Department of Physics. Describes the variational and fixed-node diffusion quantum Monte Carlo methods and how they may be used to calculate the properties of solids, including quantum many-body effects, and serve as benchmarks against which other techniques may be compared.

Wu, Zhigang

46

Efficient, automated Monte Carlo methods for radiation transport

Science Conference Proceedings (OSTI)

Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence ... Keywords: Computational efficiency, Geometrically convergent Monte Carlo algorithms, Transport equation

Rong Kong; Martin Ambrose; Jerome Spanier

2008-11-01T23:59:59.000Z

47

Adjoint electron-photon transport Monte Carlo calculations with ITS

Science Conference Proceedings (OSTI)

A general adjoint coupled electron-photon Monte Carlo code for solving the Boltzmann-Fokker-Planck equation has recently been created. It is a modified version of ITS 3.0, a coupled electron-photon Monte Carlo code that has world-wide distribution. The applicability of the new code to radiation-interaction problems of the type found in space environments is demonstrated.

Lorence, L.J.; Kensek, R.P.; Halbleib, J.A. [Sandia National Labs., Albuquerque, NM (United States); Morel, J.E. [Los Alamos National Lab., NM (United States)

1995-02-01T23:59:59.000Z

48

Bold diagrammatic Monte Carlo for the resonant Fermi gas

We provide a comprehensive description of the Bold Diagrammatic Monte Carlo algorithm for the normal resonant Fermi gas that was briefly reported and used in [Nature Phys. 8, 366 (2012)] and [arXiv:1303.6245]. Details are given on all key aspects of the scheme: diagrammatic framework, Monte Carlo moves, ultraviolet asymptotics, and resummation techniques.

Van Houcke, K; Prokof'ev, N; Svistunov, B

2013-01-01T23:59:59.000Z

49

Hybrid algorithms in quantum Monte Carlo

With advances in algorithms and growing computing powers, quantum Monte Carlo (QMC) methods have become a leading contender for high accuracy calculations for the electronic structure of realistic systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores of an SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per element have not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem size it can handle. OpenMP/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze its performance on current HPC platforms characterized by various memory and communication hierarchies.

Esler, Kenneth P [ORNL; Mcminis, Jeremy [University of Illinois, Urbana-Champaign; Morales, Miguel A [Lawrence Livermore National Laboratory (LLNL); Clark, Bryan K. [Princeton University; Shulenburger, Luke [Sandia National Laboratory (SNL); Ceperley, David M [ORNL

2012-01-01T23:59:59.000Z

50

Quantum Ice : a quantum Monte Carlo study

Ice states, in which frustrated interactions lead to a macroscopic ground-state degeneracy, occur in water ice, in problems of frustrated charge order on the pyrochlore lattice, and in the family of rare-earth magnets collectively known as spin ice. Of particular interest at the moment are "quantum spin ice" materials, where large quantum fluctuations may permit tunnelling between a macroscopic number of different classical ground states. Here we use zero-temperature quantum Monte Carlo simulations to show how such tunnelling can lift the degeneracy of a spin or charge ice, stabilising a unique "quantum ice" ground state --- a quantum liquid with excitations described by the Maxwell action of 3+1-dimensional quantum electrodynamics. We further identify a competing ordered "squiggle" state, and show how both squiggle and quantum ice states might be distinguished in neutron scattering experiments on a spin ice material.

Nic Shannon; Olga Sikora; Frank Pollmann; Karlo Penc; Peter Fulde

2011-05-20T23:59:59.000Z

51

Monte Carlo calculations of channeling radiation

Science Conference Proceedings (OSTI)

Results of classical Monte Carlo calculations are presented for the radiation produced by ultra-relativistic positrons incident in a direction parallel to the (110) plane of Si in the energy range 30 to 100 MeV. The results all show the characteristic CR (channeling radiation) peak in the energy range 20 keV to 100 keV. Plots of the centroid energies, widths, and total yields of the CR peaks as a function of energy show power-law dependences of gamma^1.5, gamma^1.7, and gamma^2.5, respectively. Except for the centroid energies, the power-law dependence is only approximate. Agreement with experimental data is good for the centroid energies and only rough for the widths. Adequate experimental data for verifying the yield dependence on gamma does not yet exist.

Bloom, S.D.; Berman, B.L.; Hamilton, D.C.; Alguard, M.J.; Barrett, J.H.; Datz, S.; Pantell, R.H.; Swent, R.H.

1981-01-01T23:59:59.000Z

52

FREYA-a new Monte Carlo code for improved modeling of fission chains

Science Conference Proceedings (OSTI)

A new simulation capability for modeling of individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events, providing correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy-sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general-purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.

Hagmann, C A; Randrup, J; Vogt, R L

2012-06-12T23:59:59.000Z

53

National Nuclear Security Administration (NNSA)

Monte Carlo Simulation of Joint Transport of Neutrons and Photons. Zhitnik A.K., Artemeva E.V., Bakanov V.V., Donskoy E.N., Zalyalov A.N., Ivanov N.V., Ognev S.P., Ronzhin A.B., Roslov V.I., Semenova T.V. RFNC-VNIIEF, 607190, Sarov, Nizhni Novgorod region. The approaches used at VNIIEF to simulate transport of neutrons and photons in standard (with surface description of region interfaces) and grid geometries are considered in the paper.

54

The Monte Carlo method provides powerful geometric modeling capabilities for large problem domains in 3-D; therefore, the Monte Carlo method is becoming popular for 3-D… (more)

Newell, Quentin Thomas

2011-01-01T23:59:59.000Z

55

Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

Science Conference Proceedings (OSTI)

If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

Urbatsch, T.J.

1995-11-01T23:59:59.000Z
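The fission matrix idea underlying this thesis can be sketched in a few lines: given a matrix of region-to-region fission neutron production, extract the dominant eigenpair (k-effective and the fission source shape) by power iteration. The function and the 2-region matrix below are made-up illustrations, not data or code from the thesis:

```python
def power_iteration(F, n_iter=200):
    """Dominant eigenpair of a fission matrix F, where F[i][j] is the
    expected number of next-generation fission neutrons born in region i
    per fission neutron born in region j.  Returns (k_eff, source_shape)
    with the source normalized to sum to 1."""
    n = len(F)
    source = [1.0 / n] * n
    k_eff = 1.0
    for _ in range(n_iter):
        nxt = [sum(F[i][j] * source[j] for j in range(n)) for i in range(n)]
        k_eff = sum(nxt)          # source sums to 1, so k is total production
        source = [x / k_eff for x in nxt]
    return k_eff, source
```

For F = [[0.9, 0.2], [0.3, 0.8]] this converges to k_eff = 1.1 with an even source split. The convergence rate of the iteration is set by the dominance ratio (second-to-first eigenvalue ratio), which is precisely the quantity that makes the slow problems described above slow and that the thesis's acceleration methods attack.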

56

Independent Pixel and Monte Carlo Estimates of Stratocumulus Albedo

Science Conference Proceedings (OSTI)

Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid ...

Robert F. Cahalan; William Ridgway; Warren J. Wiscombe; Steven Gollmer; Harshvardhan

1994-12-01T23:59:59.000Z

57

Parallel Fission Bank Algorithms in Monte Carlo Criticality Calculations

In this work we describe a new method for parallelizing the source iterations in a Monte Carlo criticality calculation. Instead of having one global fission bank that needs to be synchronized, as is traditionally done, our ...

Romano, Paul Kollath
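A core primitive behind fission-bank algorithms such as the one developed in this thesis is renormalizing the bank to a fixed number of sites between generations. Below is a minimal systematic ("comb") resampling sketch, an illustrative stand-in rather than the author's parallel algorithm:

```python
import random

def resample_bank(bank, n_target, seed=0):
    """Systematic resampling of a weighted fission bank to a fixed size:
    lay n_target equally spaced ticks over the cumulative weight and emit
    the site under each tick.  The output sites carry equal weight and
    preserve the input spatial distribution in expectation."""
    rng = random.Random(seed)
    total = sum(w for _, w in bank)
    step = total / n_target
    tick = rng.uniform(0.0, step)     # one random offset for the whole comb
    out = []
    it = iter(bank)
    site, w = next(it)
    cum = w
    for _ in range(n_target):
        while cum < tick:             # advance to the site under this tick
            site, w = next(it)
            cum += w
        out.append(site)
        tick += step
    return out
```

For example, a bank of site "a" with weight 1 and site "b" with weight 3, resampled to 4 sites, yields one copy of "a" and three of "b". In a parallel setting, the expensive part is doing this without the global synchronization that the thesis aims to remove.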

58

Bayesian inverse problems with Monte Carlo forward models

The full application of Bayesian inference to inverse problems requires exploration of a posterior distribution that typically does not possess a standard form. In this context, Markov chain Monte Carlo (MCMC) methods are ...

Bal, Guillaume
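The MCMC exploration of a nonstandard posterior mentioned above can be sketched with a random-walk Metropolis sampler for a toy linear inverse problem. The forward model, flat prior, unit noise, and all names below are assumptions for illustration:

```python
import math
import random

def metropolis_posterior(data, n_samples=20000, step=0.5, seed=3):
    """Random-walk Metropolis samples from p(theta | data) for the toy
    forward model y_i = theta * x_i + N(0, 1) with a flat prior, so the
    log-posterior is just the Gaussian log-likelihood."""
    rng = random.Random(seed)

    def log_post(theta):
        return -0.5 * sum((y - theta * x) ** 2 for x, y in data)

    theta, lp = 0.0, log_post(0.0)
    samples = []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)       # symmetric proposal
        lp_prop = log_post(prop)
        # Metropolis accept/reject on the log scale
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples
```

With data generated as y = 2x, the sampled posterior concentrates around theta = 2 with standard deviation 1/sqrt(sum of x^2). When the forward model is itself a Monte Carlo simulation, as in the work above, each log_post evaluation is noisy and expensive, which is exactly the difficulty the paper addresses.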

59

Enhancements in Continuous-Energy Monte Carlo Capabilities in SCALE

Monte Carlo tools in SCALE are commonly used in criticality safety calculations as well as sensitivity and uncertainty analysis, depletion, and criticality alarm system analyses. Recent improvements in the continuous-energy data generated by the AMPX code system and significant advancements in the continuous-energy treatment in the KENO Monte Carlo eigenvalue codes facilitate the use of SCALE Monte Carlo codes to model geometrically complex systems with enhanced solution fidelity. The addition of continuous-energy treatment to the SCALE Monaco code, which can be used with automatic variance reduction in the hybrid MAVRIC sequence, provides significant enhancements, especially for criticality alarm system modeling. This paper describes some of the advancements in continuous-energy Monte Carlo codes within the SCALE code system.

Bekar, Kursat B [ORNL; Celik, Cihangir [ORNL; Wiarda, Dorothea [ORNL; Peplow, Douglas E. [ORNL; Rearden, Bradley T [ORNL; Dunn, Michael E [ORNL

2013-01-01T23:59:59.000Z

60

Low Dose Radiation Research Program: Monte Carlo Track Structure...

NLE Websites -- All DOE Office Websites (Extended Search)

Monte Carlo Track Structure Simulations for Low-LET Selected Cell Radiation Studies Walt Wilson Washington State University Tri-Cities Why This Project There are many types of...


61

Kinetic Monte Carlo simulations of nanocrystalline film deposition

A full diffusion kinetic Monte Carlo algorithm is used to model nanocrystalline film deposition, and study the mechanisms of grain nucleation and microstructure formation in such films. The major finding of this work is ...

Ruan, Shiyun

62

Quiet direct simulation Monte-Carlo with random timesteps

Science Conference Proceedings (OSTI)

Use of a high-order deterministic sampling technique in direct simulation Monte-Carlo (DSMC) simulations eliminates statistical noise and improves computational performance by orders of magnitude. In this paper it is also shown that if a random timestep ... Keywords: 02.50.Ey, 02.70.Tt, 47.11.+j, 52.65.-y, Direct simulation Monte-Carlo, Particle-in-cell methods, Stochastic processes

William Peter

2007-01-01T23:59:59.000Z

63

High-Performance Quasi-Monte Carlo Financial Simulation: FPGA vs. GPP vs. GPU

Science Conference Proceedings (OSTI)

Quasi-Monte Carlo simulation is a special Monte Carlo simulation method that uses quasi-random or low-discrepancy numbers as random sample sets. In many applications, this method has proved advantageous compared to the traditional Monte Carlo simulation ... Keywords: CPU, FPGA, GPU, Maxwell, Quasi-Monte Carlo simulations, option pricing

Xiang Tian; Khaled Benkrid

2010-11-01T23:59:59.000Z
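The low-discrepancy sequences that distinguish quasi-Monte Carlo from ordinary Monte Carlo are easy to generate. Below is a minimal Halton-sequence sketch in pure Python (not the FPGA/GPU kernels the paper benchmarks), used to integrate x*y over the unit square:

```python
def van_der_corput(n, base):
    """n-th term of the base-b van der Corput sequence: reflect the
    base-b digits of n about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, remainder = divmod(n, base)
        denom *= base
        q += remainder / denom
    return q

def halton_2d(n_points):
    """First n_points of the 2-D Halton sequence (coprime bases 2 and 3)."""
    return [(van_der_corput(i, 2), van_der_corput(i, 3))
            for i in range(1, n_points + 1)]

def qmc_estimate(f, n_points):
    """Quasi-Monte Carlo estimate of the integral of f over [0, 1]^2."""
    points = halton_2d(n_points)
    return sum(f(x, y) for x, y in points) / n_points
```

The exact integral of x*y over the unit square is 1/4. For smooth integrands the quasi-Monte Carlo error decays roughly like (log N)^2 / N, versus N^(-1/2) for pseudorandom sampling, which is the advantage the paper's option-pricing benchmarks exploit.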

64

A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations

Science Conference Proceedings (OSTI)

Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting ... Keywords: Hybrid transport-diffusion, Monte Carlo, Radiative transfer

Jeffery D. Densmore; Todd J. Urbatsch; Thomas M. Evans; Michael W. Buksas

2007-03-01T23:59:59.000Z

65

Quantum Monte Carlo Endstation for Petascale Computing

Science Conference Proceedings (OSTI)

The major achievements enabled by the QMC Endstation grant include:
* Performance improvements on clusters of x86 multi-core systems, especially on Cray XT systems
* New and improved methods for wavefunction optimization
* New forms of trial wavefunctions
* Implementation of the full application on NVIDIA GPUs using CUDA
The scaling studies of QMCPACK on large-scale systems show excellent parallel efficiency up to 216K cores on Jaguarpf (Cray XT5). The GPU implementation shows speedups of 10-15x over the CPU implementation on older-generation x86. We have implemented a hybrid OpenMP/MPI scheme in QMC to take advantage of the multi-core shared-memory processors of petascale systems. Our hybrid scheme has several advantages over the standard MPI-only scheme:
* Memory optimized: large read-only data to store one-body orbitals and other shared properties to represent the trial wave function and many-body Hamiltonian can be shared among threads, which reduces the memory footprint of a large-scale problem.
* Cache optimized: the data associated with an active Walker are in cache during the compute-intensive drift-diffusion process, and the operations on a Walker are optimized for cache reuse. Thread-local objects are used to ensure data affinity to a thread.
* Load balanced: Walkers in an ensemble are evenly distributed among threads and MPI tasks. The two-level parallelism reduces the population imbalance among MPI tasks and reduces the number of point-to-point communications of large messages (serialized objects) for the Walker exchange.
* Communication optimized: the communication overhead, especially for the collective operations necessary to determine E_T and measure the properties of an ensemble, is significantly lowered by using fewer MPI tasks.
The multiple forms of parallelism afforded by QMC algorithms make them ideal candidates for acceleration in the many-core paradigm.
We presented the results of our effort to port the QMCPACK simulation code to the NVIDIA CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector, finding an accurate measure of the renormalization factor that we compared with quantum Monte Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas.
Our calculated results are in good agreement with the experiment. We have been studying the heat of formation for various Kubas complexes of molecular hydrogen on Ti(1,2)ethylene-nH2 using Diffusion Monte Carlo. This work has been started and is o

David Ceperley

2011-03-02T23:59:59.000Z

66

A Monte Carlo Assessment of Uncertainties in Heavy Precipitation Frequency Variations

Science Conference Proceedings (OSTI)

A Monte Carlo analysis was used to assess the effects of missing data and limited station density on the uncertainties in the temporal variations of U.S. heavy precipitation event frequencies observed for 1895–2004 using data from the U.S. ...

Kenneth E. Kunkel; Thomas R. Karl; David R. Easterling

2007-10-01T23:59:59.000Z

67

DOE Science Showcase - Monte Carlo Methods | OSTI, US Dept of Energy,

Office of Scientific and Technical Information (OSTI)

Monte Carlo Methods. Monte Carlo calculation methods are algorithms for solving various kinds of computational problems by using (pseudo)random numbers. Developed in the 1940s during the Manhattan Project, the Monte Carlo method signified a radical change in how scientists solved problems. Learn about the ways these methods are used in DOE's research endeavors today in "Monte Carlo Methods" by Dr. William Watson, Physicist, OSTI staff. Effects of static particle dispersions on grain growth are studied using SPPARKS simulations (image credit: Sandia National Laboratory). Monte Carlo results in DOE databases: Lab biophysicist invents improvement to Monte Carlo technique (LLNL News); Monte Carlo Benchmark software (ESTSC); Improved Monte Carlo Renormalization Group Method (DOE R&D)
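The (pseudo)random-number idea described in this showcase is captured by the textbook hit-or-miss estimate of pi, a minimal self-contained illustration rather than anything from the cited DOE codes:

```python
import random

def estimate_pi(n_samples, seed=42):
    """Estimate pi by sampling the unit square: the fraction of points
    inside the inscribed quarter circle converges to pi/4, with the
    statistical error shrinking like 1/sqrt(n_samples)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples
```

The slow 1/sqrt(N) convergence of this estimator is the basic trade-off of every Monte Carlo method in the entries above: trivially parallel sampling in exchange for statistical rather than deterministic error.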

68

An Overview of Geometry Representation in Monte Carlo Codes

National Nuclear Security Administration (NNSA)

Geometry Representation in Monte Carlo Codes. R.P. Kensek,* B.C. Franke,* T.W. Laub,* L.J. Lorence,* M.R. Martin,* S. Warren† (*Sandia National Laboratories, P.O. Box 5800, MS 1179, Albuquerque, NM 87185; †Kansas State University, Manhattan, KS 66506). Geometry representations in production Monte Carlo radiation transport codes used for linear-transport simulations are traditionally limited to combinatorial geometry (CG) topologies. While CG representations of input geometries are efficient to query, they are difficult to construct. In the Integrated-TIGER-Series (ITS) Monte Carlo code suite, a new approach for radiation transport geometry engines has been implemented that allows Computer-Aided Design (CAD), facetted approximations, and other geometry types to simultaneously define an input geometry.

69

The Monte Carlo Independent Column Approximation Model Intercomparison

NLE Websites -- All DOE Office Websites (Extended Search)

The Monte Carlo Independent Column Approximation Model Intercomparison Project (McMIP). Barker, Howard (Meteorological Service of Canada); Cole, Jason (Meteorological Service of Canada); Raisanen, Petri (Finnish Meteorological Institute); Pincus, Robert (NOAA-CIRES Climate Diagnostics Center); Morcrette, Jean-Jacques (European Centre for Medium-Range Weather Forecasts); Li, Jiangnan (Canadian Center for Climate Modelling); Stephens, Graeme (Colorado State University); Vaillancourt, Paul (Environment Canada); Oreopoulos, Lazaros (JCET/UMBC and NASA/GSFC); Siebesma, Pier (KNMI); Los, Alexander (KNMI); Clothiaux, Eugene (The Pennsylvania State University); Randall, David (Colorado State University); Iacono, Michael (Atmospheric & Environmental Research, Inc.). Category: Radiation. The Monte Carlo Independent Column Approximation (McICA) method for

70

Monte Carlo Simulations for Mine Detection

During January 1998, a collaboration between LLNL, UCI and Exdet, Ltd. arranged for the testing and evaluation of a Russian-developed antitank mine detection system at the Buried Objects Detection Facility (BODF) located at the Nevada Test Site. BODF is a secured 30-acre facility with approximately 300 live antitank mines that were buried in 1993 and 1994. The burial depths range from a few cm to 15 cm, and the various metal- and plastic-case antitank mines each contain 6-12 kg of high explosive. Contractors who have tested their mine detection equipment at BODF include SAIC, SRI, ERIM, MIT/Lincoln Laboratory and Loral Defense Systems. In addition, LLNL researchers have used BODF to test antitank mine detection systems based on dual-band infrared imaging, hyper-spectral imaging, synthetic aperture impulse radar and micro-impulse radar. In a blind test, the Russian-operated system obtained the highest score of any technology tested to date at BODF. The system is based on combining information from two separate sensors: one to detect anomalous concentrations of hydrogen and the other to detect whether such anomalies also have the correct nitrogen-to-carbon ratio for high explosives. The detection sensitivity is set by the geometry and type of neutron moderator and filters surrounding the neutron source and detectors. Detection of hydrogen anomalies is a rapid process based on neutron scattering. The handheld instrument on the end of a wand could scan a large area at a rate of 4-5 square meters per minute. Once the hydrogen anomalies were located, a second sensor was used to measure the thermal-neutron-excited gamma-ray spectrum at each hydrogen anomaly to determine whether that location also contained high concentrations of nitrogen. The second process was slower, taking up to 5 minutes for each location. The information from both sensors was then examined by the operator and a declaration was made as to whether or not the anomaly was a buried antitank mine.
Although the system worked extremely well on all classes of antitank mines, the Russian hardware components were inferior to those that are commercially available in the United States: the NaI(Tl) crystals had significantly higher background levels and poorer resolution than their U.S. counterparts, the electronics appeared to be decades old, and the photomultiplier tubes were noisy and lacked gain stabilization circuitry. During the evaluation of this technology, the question that came to mind was: could state-of-the-art sensors and electronics and improved software algorithms lead to a neutron-based system that could reliably detect much smaller buried mines, namely antipersonnel mines containing 30-40 grams of high explosive? Our goal in this study was to conduct Monte Carlo simulations to gain a better understanding of both phases of the mine detection system and to develop an understanding of the system's overall capabilities and limitations. In addition, we examined possible extensions of this technology to see whether or not state-of-the-art improvements could lead to a reliable antipersonnel mine detection system.

Toor, A.; Marchetti, A.A.

2000-03-14T23:59:59.000Z

71

Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there ...

Romano, Paul K. (Paul Kollath)

2013-01-01T23:59:59.000Z

72

Forecasting Hotel Arrivals and Occupancy Using Monte Carlo Simulation

Forecasting Hotel Arrivals and Occupancy Using Monte Carlo Simulation. Athanasius Zakhary, Faculty of Computers and Information, Cairo University, Giza, Egypt (n.elgayar@fci-cu.edu.eg). Abstract: We present a Monte Carlo simulation approach for the hotel arrivals and occupancy forecasting problem. In this approach we simulate
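The simulation approach described in this abstract can be illustrated with a toy model; the Poisson arrival process, fixed length of stay, and all parameter values below are invented for illustration and are not from the paper:

```python
import math
import random

def poisson(rng, lam):
    """Draw a Poisson variate by Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_occupancy(arrival_rate, stay_nights, capacity, n_days, n_runs, seed=0):
    """Monte Carlo occupancy forecast: each run draws daily arrivals from a
    Poisson distribution, books every arrival for a fixed number of nights,
    caps rooms at capacity, and the runs are averaged day by day."""
    rng = random.Random(seed)
    totals = [0.0] * n_days
    for _ in range(n_runs):
        occupied = [0] * n_days
        for day in range(n_days):
            arrivals = poisson(rng, arrival_rate)
            for night in range(day, min(day + stay_nights, n_days)):
                occupied[night] = min(capacity, occupied[night] + arrivals)
        for day in range(n_days):
            totals[day] += occupied[day]
    return [t / n_runs for t in totals]
```

Because each run is a full stochastic scenario, percentiles of the per-run occupancies (not shown) give prediction intervals as well as the mean forecast.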

Atiya, Amir

73

Monte Carlo Simulation of Solar Reflectances for Cloudy Atmospheres

Science Conference Proceedings (OSTI)

Monte Carlo simulations of solar radiative transfer were performed for a well-resolved, large, three-dimensional (3D) domain of boundary layer cloud simulated by a cloud-resolving model. In order to represent 3D distributions of optical ...

H. W. Barker; R. K. Goldstein; D. E. Stevens

2003-08-01T23:59:59.000Z

74

Nested rollout policy adaptation for Monte Carlo tree search

Science Conference Proceedings (OSTI)

Monte Carlo tree search (MCTS) methods have had recent success in games, planning, and optimization. MCTS uses results from rollouts to guide search; a rollout is a path that descends the tree with a randomized decision at each ply until reaching a leaf. ...
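The rollout step described above can be sketched for a toy game; the game, its moves, and its reward function are invented for illustration, and this is generic MCTS machinery rather than the paper's nested policy-adaptation algorithm:

```python
import random

def rollout(state, moves, is_terminal, reward, rng):
    """One MCTS rollout: descend by uniformly random moves until a terminal
    state (a leaf) is reached, then return that leaf's reward."""
    while not is_terminal(state):
        state = rng.choice(moves(state))
    return reward(state)

# Toy game: start at 0, each ply adds 1 or 2, play stops at >= 5, and the
# reward is 1 for landing on 5 exactly.  Averaging rollout results
# estimates the value of the root state, as MCTS does at each node.
rng = random.Random(3)
est = sum(rollout(0, lambda s: [s + 1, s + 2], lambda s: s >= 5,
                  lambda s: 1.0 if s == 5 else 0.0, rng)
          for _ in range(4000)) / 4000
```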

Christopher D. Rosin

2011-07-01T23:59:59.000Z

75

Monte Carlo simulation of neutral beam injection into fusion reactors

Motivations and techniques for the Monte Carlo computer simulation of energetic neutral beam injection for fusion reactors are described. The versatility of this approach allows a significantly more sophisticated treatment of charge transfer collision phenomena and consequent effects on engineering design than available from prior work. Exemplary results for a mirror Fusion Engineering Research Facility (FERF) are discussed. (auth)

Miller, R.L.

1975-09-15T23:59:59.000Z

76

Monte Carlo study of self-heating in nanoscale devices

Science Conference Proceedings (OSTI)

Progress in device miniaturization, combined with the increase in integrated circuit packing density described by Moore's law, has been accompanied by an exponential increase in on-chip heat generation. In this context, there is an increasing demand ... Keywords: Electron transport, Electrothermal modeling, Monte Carlo, Nanoscale semiconductor devices, Nanowire MISFETs, Self-heating, Si/III-V heterostructure FETs, Thermal transport

Toufik Sadi; Robert W. Kelsall; Neil J. Pilgrim; Jean-Luc Thobel; François Dessenne

2012-03-01T23:59:59.000Z

77

Monte Carlo algorithms for evaluating Sobol' sensitivity indices

Science Conference Proceedings (OSTI)

Sensitivity analysis is a powerful technique used to determine the robustness, reliability and efficiency of a model. The main problem in this procedure is evaluating the total sensitivity indices that measure a parameter's main effect and all the interactions ... Keywords: Adaptive Monte Carlo algorithm, Global sensitivity indices, Multidimensional numerical integration, Sensitivity analysis
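A minimal Monte Carlo estimator for first-order Sobol' indices can be sketched with the standard pick-freeze scheme; this is a generic textbook estimator, not the adaptive algorithm of the paper, and the test function is invented for illustration:

```python
import random

def sobol_first_order(f, d, n, seed=0):
    """First-order Sobol' indices by the pick-freeze Monte Carlo estimator
    V_i ~ (1/n) sum f(B) * (f(A_B^i) - f(A)), normalized by Var f, where
    A and B are independent samples on [0,1]^d and A_B^i is A with its
    i-th column taken from B."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA + fB) / (2 * n)
    var = sum((y - mean) ** 2 for y in fA + fB) / (2 * n)
    indices = []
    for i in range(d):
        fAB = [f(A[j][:i] + [B[j][i]] + A[j][i + 1:]) for j in range(n)]
        v_i = sum(fB[j] * (fAB[j] - fA[j]) for j in range(n)) / n
        indices.append(v_i / var)
    return indices
```

For the additive test model f(x) = x_1 + 2*x_2 on the unit square, the exact indices are 0.2 and 0.8, which the estimator recovers to sampling accuracy.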

I. Dimov; R. Georgieva

2010-11-01T23:59:59.000Z

78

Monte-Carlo calculations for some problems of quantum mechanics

Science Conference Proceedings (OSTI)

The Monte-Carlo technique for the calculation of functional integrals was applied to two one-dimensional quantum-mechanical problems. The energies of the bound states in some potential wells were obtained using this method. Some peculiarities in the calculation of the kinetic energy in the ground state were also studied.

Novoselov, A. A., E-mail: novoselov@goa.bog.msu.ru; Pavlovsky, O. V.; Ulybyshev, M. V. [Moscow State University (Russian Federation)

2012-09-15T23:59:59.000Z

79

User Manual - Crystal Ball Monte Carlo POD Simulator

Science Conference Proceedings (OSTI)

This report provides a user manual for a Monte Carlo simulator using Crystal Ball, a spreadsheet add-in, that can be used to predict a noise-dependent structural probability of detection (POD) for steam generator tube integrity assessments. The simulator uses plant noise as one of its inputs and provides a plant-specific POD for condition monitoring and operational assessment.

2006-09-18T23:59:59.000Z

80

A Monte Carlo electron transport code for the desktop computer

Science Conference Proceedings (OSTI)

A Monte Carlo electron transport code for the desktop computer is applied to the problem of determining the radiation dose throughout an oblique-surface heterogeneous medium. Absorbed-dose distributions are obtained for 10 MeV electrons incident upon flat-surface and oblique-surface water phantoms.

1992-01-01T23:59:59.000Z


81

Science Conference Proceedings (OSTI)

We develop here a highly efficient variant of the Monte Carlo method for direct evaluation of the partition function

Jiro Sadanobu; William A. Goddard III

1997-01-01T23:59:59.000Z

82

Shadow hybrid Monte Carlo: an efficient propagator in phase space of macromolecules

Science Conference Proceedings (OSTI)

Shadow hybrid Monte Carlo (SHMC) is a new method for sampling the phase space of large molecules, particularly biological molecules. It improves sampling of hybrid Monte Carlo (HMC) by allowing larger time steps and system sizes in the molecular dynamics ... Keywords: conformational sampling, hybrid Monte Carlo, modified Hamiltonian, sampling methods, symplectic integrator
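Plain hybrid Monte Carlo, the method SHMC improves upon, can be sketched for a one-dimensional target; this is a generic textbook HMC with leapfrog integration and a Metropolis test, not the shadow-Hamiltonian propagator of the paper:

```python
import math
import random

def hmc_sample(logp_grad, x0, n_samples, eps=0.2, n_leapfrog=10, seed=0):
    """Hybrid (Hamiltonian) Monte Carlo for a 1-D target distribution.

    logp_grad(x) returns (log p(x), d log p / dx).  Each proposal draws a
    fresh Gaussian momentum, integrates Hamiltonian dynamics with the
    leapfrog scheme, and accepts or rejects by a Metropolis test, which
    keeps the chain exact despite the discretization error.
    """
    rng = random.Random(seed)
    x = x0
    lp, g = logp_grad(x)
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)
        h_old = -lp + 0.5 * p * p
        x_new, lp_new, g_new = x, lp, g
        p += 0.5 * eps * g_new           # initial half step for momentum
        for step in range(n_leapfrog):
            x_new += eps * p             # full step for position
            lp_new, g_new = logp_grad(x_new)
            if step < n_leapfrog - 1:
                p += eps * g_new         # full step for momentum
        p += 0.5 * eps * g_new           # final half step for momentum
        h_new = -lp_new + 0.5 * p * p
        if math.log(rng.random()) < h_old - h_new:
            x, lp, g = x_new, lp_new, g_new
        samples.append(x)
    return samples
```

For a standard normal target (log p = -x^2/2), the sample mean and variance should approach 0 and 1.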

Jesús A. Izaguirre; Scott S. Hampton

2004-11-01T23:59:59.000Z

83

A sequential Monte Carlo/Quantum Mechanics study of the dipole polarizability of liquid benzene

Science Conference Proceedings (OSTI)

Metropolis Monte Carlo classical simulation and quantum mechanical calculations are performed to obtain the dipole polarizability of liquid benzene. Super-molecular configurations are sampled from NVT Monte Carlo simulation of liquid benzene at room ... Keywords: Monte Carlo simulation, density-functional theory, intermediate-neglect of differential overlap (INDO), liquid benzene, polarizability

Eudes E. Fileti; Sylvio Canuto

2004-12-01T23:59:59.000Z

84

Data decomposition of Monte Carlo particle transport simulations via tally servers

Science Conference Proceedings (OSTI)

An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute ... Keywords: Data decomposition, Exascale, Monte Carlo, Neutron transport, Tally server

Paul K. Romano, Andrew R. Siegel, Benoit Forget, Kord Smith

2013-11-01T23:59:59.000Z

85

Science Conference Proceedings (OSTI)

A domain decomposed Monte Carlo communication kernel is used to carry out performance tests to establish the feasibility of using Monte Carlo techniques for practical Light Water Reactor (LWR) core analyses. The results of the prototype code are interpreted ... Keywords: Monte Carlo, Neutron transport, Performance modeling, Reactor analysis

A. Siegel; K. Smith; P. Fischer; V. Mahadevan

2012-04-01T23:59:59.000Z

86

Quantum Monte Carlo with Coupled-Cluster wave functions

We introduce a novel many body method which combines two powerful many body techniques, viz., quantum Monte Carlo and coupled cluster theory. Coupled cluster wave functions are introduced as importance functions in a Monte Carlo method designed for the configuration interaction framework to provide rigorous upper bounds to the ground state energy. We benchmark our method on the homogeneous electron gas in momentum space. The importance function used is the coupled cluster doubles wave function. We show that the computational resources required in our method scale polynomially with system size. Our energy upper bounds are in very good agreement with previous calculations of similar accuracy, and they can be systematically improved by including higher order excitations in the coupled cluster wave function.

Alessandro Roggero; Abhishek Mukherjee; Francesco Pederiva

2013-04-04T23:59:59.000Z

87

Ab initio Monte Carlo investigation of small lithium clusters.

Science Conference Proceedings (OSTI)

Structural and thermal properties of small lithium clusters are studied using ab initio-based Monte Carlo simulations. The ab initio scheme uses a Hartree-Fock/density functional treatment of the electronic structure combined with a jump-walking Monte Carlo sampling of nuclear configurations. Structural forms of Li₈ and Li₉⁺ clusters are obtained and their thermal properties analyzed in terms of probability distributions of the cluster potential energy, average potential energy and configurational heat capacity, all considered as a function of the cluster temperature. Details of the gradual evolution with temperature of the structural forms sampled are examined. Temperatures characterizing the onset of structural changes and isomer coexistence are identified for both clusters.

Srinivas, S.

1999-06-16T23:59:59.000Z

88

Quantum Monte Carlo with Coupled-Cluster wave functions

We introduce a novel many body method which combines two powerful many body techniques, viz., quantum Monte Carlo and coupled cluster theory. Coupled cluster wave functions are introduced as importance functions in a Monte Carlo method designed for the configuration interaction framework to provide rigorous upper bounds to the ground state energy. We benchmark our method on the homogeneous electron gas in momentum space. The importance function used is the coupled cluster doubles wave function. We show that the computational resources required in our method scale polynomially with system size. Our energy upper bounds are in very good agreement with previous calculations of similar accuracy, and they can be systematically improved by including higher order excitations in the coupled cluster wave function.

Roggero, Alessandro; Pederiva, Francesco

2013-01-01T23:59:59.000Z

89

The dynamics of quantum criticality: Quantum Monte Carlo and holography

Understanding the real time dynamics of systems near quantum critical points at non-zero temperatures constitutes an important yet challenging problem, especially in two spatial dimensions where interactions are strong. We present detailed quantum Monte Carlo results for two separate realizations of the superfluid-insulator transition of bosons on a lattice: their low-frequency conductivities are found to have the same universal dependence on imaginary frequency and temperature. We then use the structure of the real time dynamics of conformal field theories described by the holographic gauge/gravity duality to make progress on the difficult problem of analytically continuing the Monte Carlo data to real time. Our method yields quantitative and experimentally testable results on the frequency-dependent conductivity at the quantum critical point, and on the spectrum of quasinormal modes in the vicinity of the superfluid-insulator quantum phase transition. Extensions to other observables and universality classes are discussed.

William Witczak-Krempa; Erik Sorensen; Subir Sachdev

2013-09-11T23:59:59.000Z

90

Monte Carlo Simulations of Neutron Oil well Logging Tools

Monte Carlo simulations of simple neutron oil well logging tools in typical geological formations are presented. The simulated tools consist of both 14 MeV pulsed and continuous Am-Be neutron sources with time-gated and continuous gamma ray detectors, respectively. The geological formation consists of pure limestone with 15% absolute porosity over a wide range of oil saturation. The particle transport was performed with the Monte Carlo N-Particle Transport Code System, MCNP-4B. Several gamma ray spectra were obtained at the detector position that allow composition analysis of the formation. In particular, the C/O ratio was analyzed as an indicator of oil saturation. Further calculations are proposed to simulate actual detector responses in order to help understand the relation between the detector response and the formation composition.

Azcurra, M

2002-01-01T23:59:59.000Z

91

Load Balancing Of Parallel Monte Carlo Transport Calculations

National Nuclear Security Administration (NNSA)

Load Balancing Of Parallel Load Balancing Of Parallel Monte Carlo Transport Calculations R.J. Procassini, M. J. O'Brien and J.M. Taylor Lawrence Livermore National Laboratory, P. O. Box 808, Livermore, CA 94551 The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, each cycle this algorithm determines if dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality

92

Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model

The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.

D.P. Stotler

2005-06-09T23:59:59.000Z

93

MC++: Parallel, portable, Monte Carlo neutron transport in C++

Science Conference Proceedings (OSTI)

We have developed an implicit Monte Carlo neutron transport code in C++ using the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and parallelism. Current capabilities of MC++ are discussed, along with future plans and physics and performance results on many different platforms.

Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A& M Univ., College Station, TX (United States). Dept. of Nuclear Engineering

1997-02-01T23:59:59.000Z

94

Multi-Determinant Wave-functions in Quantum Monte Carlo

Science Conference Proceedings (OSTI)

Quantum Monte Carlo methods have received considerable attention over the last decades due to the great promise they hold for the direct solution of the many-body Schrodinger equation for electronic systems. Thanks to a low scaling with the number of particles, they present one of the best alternatives for the accurate study of large systems and solid-state calculations. In spite of such promise, the method has not become popular in the quantum chemistry community, mainly due to the lack of control over the fixed-node error, which can be large in many cases. In this article we present the application of large multi-determinant expansions in quantum Monte Carlo, studying their performance with first-row dimers and the 55 molecules of the G1 test set. We demonstrate the potential of the wave function to systematically reduce the fixed-node error in the calculations, achieving chemical accuracy in almost all cases studied. When compared to traditional methods in quantum chemistry, the results show a marked improvement over most methods, including MP2, CCSD(T) and DFT with various functionals; in fact the only method able to produce better results is the explicitly correlated CCSD(T) method with a large basis set. With recent developments in trial wave functions and algorithmic improvements in quantum Monte Carlo, we are quickly approaching a time where the method can become the standard in the study of large molecular systems and solids.

Morales, Miguel A [Lawrence Livermore National Laboratory (LLNL); Mcminis, Jeremy [University of Illinois, Urbana-Champaign; Clark, Bryan K. [Princeton University; Kim, Jeongnim [ORNL; Scuseria, Gustavo E [Rice University

2012-01-01T23:59:59.000Z

95

Coupled MHD-Monte Carlo transport model for dense plasmas

A two-dimensional, two-fluid model of the MHD equations has been coupled to a Monte Carlo transport model of high-energy, non-Maxwellian ions. The MHD part of the model assumes complete ionization and includes a perfect gas law for a scalar pressure, a tensor artificial viscosity, electron and ion thermal conduction, electron-ion coupling, and a radiation loss term. A simple Ohm's law is used with a B_θ magnetic field. The MHD equations were solved in Lagrangian coordinates. The conservation equations were differenced explicitly and the diffusion-type equations implicitly using the splitting technique. The Monte Carlo model solves the equation of motion for high-energy ions moving through and suffering small- and large-angle collisions with the fluid Maxwellian plasma. The source of high-energy ions is the thermonuclear reactions of the hydrogen isotopes, or it may be an externally injected beam of neutralized ions. In addition to using the usual Maxwell-averaged thermonuclear cross sections for calculating the number of reactions taking place within the Maxwellian plasma, the high-energy ions may suffer collisions resulting in a reaction. In the Monte Carlo model all neutrons are assumed to escape, and all energetic ions of Z less than or equal to 2 are followed. (auth)

Chandler, W.P.

1975-06-01T23:59:59.000Z

96

A new class of accelerated kinetic Monte Carlo algorithms

Science Conference Proceedings (OSTI)

Kinetic (aka dynamic) Monte Carlo (KMC) is a powerful method for numerical simulations of time-dependent evolution applied in a wide range of contexts including biology, chemistry, physics, nuclear sciences, financial engineering, etc. Generally, in a KMC the time evolution takes place one event at a time, where the sequence of events and the time intervals between them are selected (or sampled) using random numbers. While details of the method implementation vary depending on the model and context, there exist certain common issues that limit KMC applicability in almost all applications. Among these is the notorious 'flicker problem', where the same states of the system are repeatedly visited but otherwise no essential evolution is observed. In its simplest form the flicker problem arises when two states are connected to each other by transitions whose rates far exceed the rates of all other transitions out of the same two states. In such cases, the model will endlessly hop between the two states, otherwise producing no meaningful evolution. In most situations of practical interest, the trapping cluster includes more than two states, making the flicker somewhat more difficult to detect and to deal with. Several methods have been proposed to overcome or mitigate the flicker problem, exactly [1-3] or approximately [4,5]. Of the exact methods, the one proposed by Novotny [1] is perhaps most relevant to our research. Novotny formulates the problem of escaping from a trapping cluster as a Markov system with absorbing states. Given an initial state inside the cluster, it is in principle possible to solve the Master Equation for the time-dependent probabilities of finding the walker in a given state (transient or absorbing) of the cluster at any time in the future. Novotny then proceeds to demonstrate implementation of his general method for trapping clusters containing the initial state plus one or two transient states and all of their absorbing states.
Similar methods have subsequently been proposed in [refs] but applied in a different context. The most serious deficiency of the earlier methods is that the size of the trapping cluster is fixed and often too small to bring substantial simulation speedup. Furthermore, the overhead associated with solving for the probability distribution on the trapping cluster sometimes makes such simulations less efficient than standard KMC. Here we report on a general and exact accelerated kinetic Monte Carlo algorithm applicable to arbitrary Markov models. Two different implementations are attempted, both based on incremental expansion of a trapping subset of Markov states: (1) numerical solution of the Master Equation with absorbing states and (2) incremental graph reduction followed by randomization. Of the two implementations, the second performs better, allowing, for the first time, trapping basins spanning several million Markov states to be overcome. The new method is used for simulations of anomalous diffusion on a 2D substrate and of the kinetics of diffusive first-order phase transformations in binary alloys. Depending on temperature and (alloy) super-saturation conditions, speedups of 3 to 7 orders of magnitude are demonstrated, with no compromise of simulation accuracy.
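The standard one-event-at-a-time KMC loop that the accelerated algorithm is measured against can be sketched as follows; the toy random-walk model used to exercise it is invented for illustration:

```python
import math
import random

def kmc_run(rates, transition, state0, t_max, seed=0):
    """Standard one-event-at-a-time KMC (residence-time algorithm).

    rates(state) returns a list of (event, rate) pairs.  One event is
    selected with probability proportional to its rate, the state is
    updated, and the clock advances by an exponential waiting time with
    mean 1 / (total rate)."""
    rng = random.Random(seed)
    state, t = state0, 0.0
    path = [state0]
    while t < t_max:
        events = rates(state)
        total = sum(r for _, r in events)
        if total == 0.0:
            break  # absorbing state: no outgoing transitions
        u = rng.random() * total
        for event, r in events:
            u -= r
            if u <= 0.0:
                break
        state = transition(state, event)
        t += -math.log(1.0 - rng.random()) / total
        path.append(state)
    return state, t, path
```

When two states are linked by rates far larger than all others, this loop spends almost all of its events hopping between them, which is exactly the flicker problem the abstract describes.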

Bulatov, V V; Oppelstrup, T; Athenes, M

2011-11-30T23:59:59.000Z

97

Kinetic Lattice Monte Carlo Simulation of Polycrystalline ... The first application of this code is to the growth of Mo tips, including chemical ... and grain orientations. Sponsors: Motorola; National Science Foundation; National Center for Supercomputer Applications.

Adams, James B

98

Continuous-Estimator Representation for Monte Carlo Criticality Diagnostics

Science Conference Proceedings (OSTI)

An alternate means of computing diagnostics for Monte Carlo criticality calculations is proposed. Overlapping spherical regions or estimators are placed covering the fissile material with a minimum center-to-center separation of the 'fission distance', which is defined herein, and a radius that is some multiple thereof. Fission neutron production is recorded based upon a weighted average of proximities to centers for all the spherical estimators. These scores are used to compute the Shannon entropy, and shown to reproduce the value, to within an additive constant, determined from a well-placed mesh by a user. The spherical estimators are also used to assess statistical coverage.
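The Shannon entropy diagnostic referred to above is straightforward to compute once per-bin scores exist; this sketch assumes a plain tally list rather than the paper's overlapping spherical estimators:

```python
import math

def shannon_entropy(scores):
    """Base-2 Shannon entropy of a binned fission-source distribution.

    scores holds non-negative tallies, one per spatial bin; empty bins
    contribute nothing, and a uniform distribution over n bins gives the
    maximum entropy log2(n).  Convergence of this number over cycles is a
    standard diagnostic for Monte Carlo criticality calculations."""
    total = sum(scores)
    h = 0.0
    for s in scores:
        if s > 0.0:
            p = s / total
            h -= p * math.log2(p)
    return h
```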

Kiedrowski, Brian C. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory

2012-06-18T23:59:59.000Z

99

Adaptively Learning an Importance Function Using Transport Constrained Monte Carlo

Science Conference Proceedings (OSTI)

It is well known that a Monte Carlo estimate can be obtained with zero-variance if an exact importance function for the estimate is known. There are many ways that one might iteratively seek to obtain an ever more exact importance function. This paper describes a method that has obtained ever more exact importance functions that empirically produce an error that is dropping exponentially with computer time. The method described herein constrains the importance function to satisfy the (adjoint) Boltzmann transport equation. This constraint is provided by using the known form of the solution, usually referred to as the Case eigenfunction solution.
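The zero-variance property mentioned above can be demonstrated on a one-dimensional integral: sampling from a density proportional to the integrand makes every weight identical, so the sample variance vanishes. A minimal sketch, with an integrand invented for illustration (not the transport-constrained scheme of the paper):

```python
import random

def mc_estimate(f, sampler, pdf, n, seed=0):
    """Monte Carlo estimate of the integral of f over [0, 1] by importance
    sampling: draw x ~ pdf and average the weights f(x) / pdf(x)."""
    rng = random.Random(seed)
    draws = [sampler(rng) for _ in range(n)]
    weights = [f(x) / pdf(x) for x in draws]
    mean = sum(weights) / n
    var = sum((w - mean) ** 2 for w in weights) / (n - 1)
    return mean, var

f = lambda x: 3.0 * x * x  # integral over [0, 1] is exactly 1

# Naive estimator: uniform sampling, pdf = 1.
est_u, var_u = mc_estimate(f, lambda rng: rng.random(), lambda x: 1.0, 2000)

# Exact importance function: sample from p(x) = 3x^2 by inversion
# (x = u^(1/3)), so every weight f(x)/p(x) is identically 1 and the
# sample variance is exactly zero.
est_i, var_i = mc_estimate(f, lambda rng: rng.random() ** (1.0 / 3.0),
                           lambda x: 3.0 * x * x, 2000)
```

Iterative schemes like the one in the abstract work toward this ideal: the closer the sampling density tracks the true importance function, the smaller the variance.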

Booth, T.E.

1998-06-22T23:59:59.000Z

100

Validation of Phonon Physics in the CDMS Detector Monte Carlo

The SuperCDMS collaboration is a dark matter search effort aimed at detecting the scattering of WIMP dark matter from nuclei in cryogenic germanium targets. The CDMS Detector Monte Carlo (CDMS-DMC) is a simulation tool aimed at achieving a deeper understanding of the performance of the SuperCDMS detectors and aiding the dark matter search analysis. We present results from validation of the phonon physics described in the CDMS-DMC and outline work towards utilizing it in future WIMP search analyses.

McCarthy, K.A.; Leman, S.W.; Anderson, A.J.; /MIT; Brandt, D.; /SLAC; Brink, P.L.; Cabrera, B.; Cherry, M.; /Stanford U.; Do Couto E Silva, E.; /SLAC; Cushman, P.; /Minnesota U.; Doughty, T.; /UC, Berkeley; Figueroa-Feliciano, E.; /MIT; Kim, P.; /SLAC; Mirabolfathi, N.; /UC, Berkeley; Novak, L.; /Stanford U.; Partridge, R.; /SLAC; Pyle, M.; /Stanford U.; Reisetter, A.; /Minnesota U. /St. Olaf Coll.; Resch, R.; /SLAC; Sadoulet, B.; Serfass, B.; Sundqvist, K.M.; /UC, Berkeley /Stanford U.

2012-06-06T23:59:59.000Z


101

Bounded limit for the Monte Carlo point-flux-estimator

Science Conference Proceedings (OSTI)

In a Monte Carlo random walk the kernel K(R,E) is used as an expected-value estimator at every collision for the collided flux φ_c(r,E) at the detector point. A limiting value for the kernel is derived from a diffusion approximation for the probability current at a radius R_1 from the detector point. The variance of the collided flux at the detector point is thus bounded using this asymptotic form for K(R,E). The bounded point flux estimator is derived. (WHK)

Grimesey, R.A.

1981-01-01T23:59:59.000Z

102

Process Characterisation with Monte-Carlo Wave-Functions

We present a numerically efficient method for the characterisation of a quantum process subject to dissipation and noise. The master equation evolution of a maximally entangled state of the quantum system and a non-evolving ancilla system is simulated by Monte-Carlo wave-functions. We show how each stochastic state vector provides quantities that are readily combined into an average process χ-matrix. Our method significantly reduces the computational complexity in comparison with standard characterisation methods. It also readily provides an upper bound on the trace distance between the ideal and simulated process based on the evolution of only a single wave function of the entangled system.
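The Monte-Carlo wave-function idea can be sketched for the simplest dissipative system, spontaneous decay of a two-level atom; this generic unravelling is for illustration only and is not the paper's process-characterisation code:

```python
import math
import random

def mcwf_excited_population(gamma, t_final, dt, n_traj, seed=0):
    """Monte-Carlo wave-function unravelling of two-level spontaneous decay.

    Each trajectory damps the excited amplitude under the non-Hermitian
    effective Hamiltonian and applies a quantum jump to the ground state
    with probability gamma * |c_e|^2 * dt per step.  Averaging |c_e|^2
    over trajectories recovers the master-equation excited population
    |c_e(0)|^2 * exp(-gamma * t)."""
    rng = random.Random(seed)
    n_steps = int(round(t_final / dt))
    total = 0.0
    for _ in range(n_traj):
        c_e = c_g = 1.0 / math.sqrt(2.0)  # equal superposition to start
        for _ in range(n_steps):
            p_jump = gamma * c_e * c_e * dt
            if rng.random() < p_jump:
                c_e, c_g = 0.0, 1.0       # quantum jump: photon emitted
                break
            # no-jump: non-Hermitian damping, then renormalization
            c_e *= 1.0 - 0.5 * gamma * dt
            norm = math.hypot(c_e, c_g)
            c_e /= norm
            c_g /= norm
        total += c_e * c_e
    return total / n_traj
```

Only one state vector is evolved per trajectory, which is the memory advantage the abstract exploits relative to propagating the full density matrix.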

Jake Gulliksen; D. D. Bhaktavatsala Rao; Klaus Mølmer

2013-09-19T23:59:59.000Z

103

In this note I illustrate the program MINT, a FORTRAN program for Monte Carlo adaptive integration and generation of unweighted distributions.

Nason, P

2007-01-01T23:59:59.000Z

104

In this note I illustrate the program MINT, a FORTRAN program for Monte Carlo adaptive integration and generation of unweighted distributions.

P. Nason

2007-09-13T23:59:59.000Z

105

Monte Carlo parameter studies and uncertainty analyses with MCNP5

A software tool called mcnp-pstudy has been developed to automate the setup, execution, and collection of results from a series of MCNP5 Monte Carlo calculations. This tool provides a convenient means of performing parameter studies, total uncertainty analyses, parallel job execution on clusters, stochastic geometry modeling, and other types of calculations where a series of MCNP5 jobs must be performed with varying problem input specifications. Monte Carlo codes are being used for a wide variety of applications today due to their accurate physical modeling and the speed of today's computers. In most applications for design work, experiment analysis, and benchmark calculations, it is common to run many calculations, not just one, to examine the effects of design tolerances, experimental uncertainties, or variations in modeling features. We have developed a software tool for use with MCNP5 to automate this process. The tool, mcnp-pstudy, is used to automate the operations of preparing a series of MCNP5 input files, running the calculations, and collecting the results. Using this tool, parameter studies, total uncertainty analyses, or repeated (possibly parallel) calculations with MCNP5 can be performed easily. Essentially no extra user setup time is required beyond that of preparing a single MCNP5 input file.
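The expand-and-collect pattern described above can be sketched as follows; the template text and parameter names are hypothetical and do not reproduce mcnp-pstudy's actual input syntax or real MCNP5 deck semantics:

```python
import itertools

# Hypothetical study template -- placeholder names only, not MCNP5 syntax.
TEMPLATE = """\
c  hypothetical parameter-study deck
c  sphere radius {radius} cm, material density {density} g/cc
radius  {radius}
density {density}
"""

def expand_study(params):
    """Expand parameter ranges into one input deck per point of their
    Cartesian product, keyed by a file-name-like label, the way a
    parameter-study driver prepares its series of runs."""
    names = sorted(params)
    decks = {}
    for values in itertools.product(*(params[n] for n in names)):
        point = dict(zip(names, values))
        label = "_".join(f"{n}{v}" for n, v in sorted(point.items()))
        decks[f"inp_{label}"] = TEMPLATE.format(**point)
    return decks

decks = expand_study({"radius": [8.0, 8.5], "density": [18.7, 19.0]})
```

A driver would then submit each generated deck as a separate job and collect the tallies; only the deck-generation step is sketched here.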

Brown, F. B. (Forrest B.); Sweezy, J. E. (Jeremy E.); Hayes, R. B. (Robert B.)

2004-01-01T23:59:59.000Z

106

A comparison of generalized hybrid Monte Carlo methods with and without momentum flip

Science Conference Proceedings (OSTI)

The generalized hybrid Monte Carlo (GHMC) method combines Metropolis corrected constant energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta are negated upon ... Keywords: Langevin dynamics, Molecular dynamics, Monte Carlo methods, Sampling

Elena Akhmatskaya; Nawaf Bou-Rabee; Sebastian Reich

2009-04-01T23:59:59.000Z

107

Science Conference Proceedings (OSTI)

The aim of this study was to apply the Monte-Carlo techniques to develop a probabilistic risk assessment. The risk resulting from the occupational exposure during the remediation activities of a uranium tailings disposal, in an abandoned uranium mining ... Keywords: Monte Carlo simulation, occupational exposure, risk and dose assessment, uranium tailings disposal

M. L. Dinis; A. Fiúza

2010-08-01T23:59:59.000Z

108

Sequential Monte Carlo for Bayesian sequentially designed experiments for discrete data

Science Conference Proceedings (OSTI)

In this paper we present a sequential Monte Carlo algorithm for Bayesian sequential experimental design applied to generalised non-linear models for discrete data. The approach is computationally convenient in that the information of newly observed data ... Keywords: Clinical trials, Generalised linear model, Generalised non-linear model, Sequential Monte Carlo, Sequential design, Target stimulus

Christopher C. Drovandi; James M. Mcgree; Anthony N. Pettitt

2013-01-01T23:59:59.000Z

109

Monte Carlo modeling of spin FETs controlled by spin-orbit interaction

Science Conference Proceedings (OSTI)

A method for Monte Carlo simulation of 2D spin-polarized electron transport in III-V semiconductor heterojunction field-effect transistors (FETs) is presented. In the simulation, the dynamics of the electrons in coordinate and momentum space is treated semiclassically. The density ... Keywords: FET, Monte Carlo, spin orbit, spintronics

Min Shen; Semion Saikin; Ming-C. Cheng; Vladimir Privman

2004-05-01T23:59:59.000Z

110

An efficient, robust, domain-decomposition algorithm for particle Monte Carlo

Science Conference Proceedings (OSTI)

A previously described algorithm [T.A. Brunner, T.J. Urbatsch, T.M. Evans, N.A. Gentile, Comparison of four parallel algorithms for domain decomposed implicit Monte Carlo, Journal of Computational Physics 212 (2) (2006) 527-539] for doing domain decomposed ... Keywords: Monte Carlo methods, Neutron transport, Parallel computation, Radiative transfer

Thomas A. Brunner; Patrick S. Brantley

2009-06-01T23:59:59.000Z

111

Methods for coupling radiation, ion, and electron energies in grey Implicit Monte Carlo

Science Conference Proceedings (OSTI)

We present three methods for extending the Implicit Monte Carlo (IMC) method to treat the time-evolution of coupled radiation, electron, and ion energies. The first method splits the ion and electron coupling and conduction from the standard IMC radiation-transport ... Keywords: Implicit Monte Carlo, Thermal radiation transport, Three-temperature model

T. M. Evans; J. D. Densmore

2007-08-01T23:59:59.000Z

112

Asymptotic diffusion limit of the symbolic Monte-Carlo method for the transport equation

Science Conference Proceedings (OSTI)

We use asymptotic analysis to study the diffusion limit of the Symbolic Implicit Monte-Carlo (SIMC) method for the transport equation. For standard SIMC with piecewise constant basis functions, we demonstrate mathematically that the solution converges ... Keywords: Monte-Carlo method, radiative transfer

J.-F. Clouët; G. Samba

2004-03-01T23:59:59.000Z

113

Optimization of Monte Carlo transport simulations in stochastic media

Science Conference Proceedings (OSTI)

This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, such as Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)

Liang, C.; Ji, W. [Dept. of Mechanical, Aerospace and Nuclear Engineering, Rensselaer Polytechnic Inst., 110 8th street, Troy, NY (United States)

2012-07-01T23:59:59.000Z
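The baseline that the paper accelerates, Random Sequential Addition, is simple to state: propose sphere centers at random and keep only those that do not overlap an already-accepted sphere. A naive sketch follows; the O(N) overlap scan per trial is exactly what a fast nearest-neighbor search would replace, and the packing parameters here are arbitrary toy values:

```python
import random

def rsa_pack(n_spheres, radius, box=1.0, max_tries=200000, seed=0):
    """Naive Random Sequential Addition: propose random centers, keep only
    those that do not overlap any accepted sphere.  The overlap check here
    scans all accepted spheres (O(N) per trial); a nearest-neighbor search
    structure would make this the fast step instead of the bottleneck."""
    rng = random.Random(seed)
    lo, hi = radius, box - radius           # keep spheres fully inside the box
    centers = []
    tries = 0
    while len(centers) < n_spheres and tries < max_tries:
        tries += 1
        c = (rng.uniform(lo, hi), rng.uniform(lo, hi), rng.uniform(lo, hi))
        if all(sum((a - b) ** 2 for a, b in zip(c, d)) >= (2 * radius) ** 2
               for d in centers):
            centers.append(c)
    return centers

centers = rsa_pack(n_spheres=50, radius=0.04)
print(len(centers))  # 50: this dilute packing is reached easily
```

At the dilute packing fraction used here every proposal is cheap to place; the cost of the naive scan only becomes painful at the high packing fractions typical of TRISO-fueled configurations, which motivates the paper's optimization.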

114

Bayesian Inference in Monte-Carlo Tree Search

Monte-Carlo Tree Search (MCTS) methods are drawing great interest after yielding breakthrough results in computer Go. This paper proposes a Bayesian approach to MCTS that is inspired by distribution-free approaches such as UCT [13], yet significantly differs in important respects. The Bayesian framework allows potentially much more accurate (Bayes-optimal) estimation of node values and node uncertainties from a limited number of simulation trials. We further propose propagating inference in the tree via fast analytic Gaussian approximation methods: this can make the overhead of Bayesian inference manageable in domains such as Go, while preserving high accuracy of expected-value estimates. We find substantial empirical outperformance of UCT in an idealized bandit-tree test environment, where we can obtain valuable insights by comparing with known ground truth. Additionally we rigorously prove on-policy and off-policy convergence of the proposed methods.

Tesauro, Gerald; Segal, Richard

2012-01-01T23:59:59.000Z
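The distribution-free baseline the paper compares against, UCT child selection, can be sketched in a few lines. This shows only the standard UCT rule (mean value plus an exploration bonus), not the paper's Bayesian machinery; the dictionary fields and example counts are illustrative:

```python
import math

def uct_select(children, c=1.4):
    """UCT child selection: maximize mean value plus an exploration bonus
    that shrinks as a child accumulates visits."""
    total = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")     # unvisited children are tried first
        exploit = ch["value"] / ch["visits"]
        explore = c * math.sqrt(math.log(total) / ch["visits"])
        return exploit + explore
    return max(children, key=score)

children = [
    {"name": "a", "visits": 10, "value": 6.0},  # mean 0.60, well explored
    {"name": "b", "visits": 2, "value": 1.0},   # mean 0.50, barely explored
]
best = uct_select(children)
print(best["name"])  # b: the exploration bonus outweighs its lower mean
```

The Bayesian alternative in the paper replaces these visit-count statistics with posterior distributions over node values, which is what enables better value estimates from few trials.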

115

The Quantum Energy Density: Improved Efficiency for Quantum Monte Carlo

We establish a physically meaningful representation of a quantum energy density for use in Quantum Monte Carlo calculations. The energy density operator, defined in terms of Hamiltonian components and density operators, returns the correct Hamiltonian when integrated over a volume containing a cluster of particles. This property is demonstrated for a helium-neon "gas," showing that atomic energies obtained from the energy density correspond to eigenvalues of isolated systems. The formation energies of defects or interfaces are typically calculated as total energy differences. Using a model of delta-doped silicon (where dopant atoms form a thin plane) we show how interfacial energies can be calculated more efficiently with the energy density, since the region of interest is small. We also demonstrate how the energy density correctly transitions to the bulk limit away from the interface where the correct energy is obtainable from a separate total energy calculation.

Krogel, Jaron T; Kim, Jeongnim; Ceperley, David M

2013-01-01T23:59:59.000Z

116

Correlated wavefunction quantum Monte Carlo approach to solids

A method for calculating the electronic and structural properties of solids using correlated wavefunctions together with quantum Monte Carlo techniques is described. The approach retains the exact Coulomb interaction between the electrons and employs a many-electron wavefunction of the Jastrow-Slater form. Several examples are given to illustrate the utility of the method. Topics discussed include the cohesive properties of bulk semiconductors, the magnetic-field- induced Wigner crystal in two dimensions, and the magnetic structure of bcc hydrogen. Landau level mixing is shown to be important in determining the transition between the fractional quantum Hall liquid and the Wigner crystal. Information on electron correlations such as the pair correlation functions which are not accessible to one- electron theories is also obtained. 24 refs, 5 figs, 1 tab.

Louie, S.G.

1992-10-01T23:59:59.000Z

117

Multi-Determinant Wave-functions in Quantum Monte Carlo

Quantum Monte Carlo (QMC) methods have received considerable attention over the last decades due to their great promise for providing a direct solution to the many-body Schrodinger equation in electronic systems. Thanks to their low scaling with number of particles, QMC methods present a compelling competitive alternative for the accurate study of large molecular systems and solid state calculations. In spite of such promise, the method has not permeated the quantum chemistry community broadly, mainly because of the fixed-node error, which can be large and whose control is difficult. In this Perspective, we present a systematic application of large scale multi-determinant expansions in QMC, and report on its impressive performance with first row dimers and the 55 molecules of the G1 test set. We demonstrate the potential of this strategy for systematically reducing the fixed-node error in the wave function and for achieving chemical accuracy in energy predictions. When compared to traditional quantum chemistr...

Morales, M A; Clark, B K; Kim, J; Scuseria, G; 10.1021/ct3003404

2013-01-01T23:59:59.000Z

118

Monte Carlo Simulation Tool Installation and Operation Guide

Science Conference Proceedings (OSTI)

This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document, and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.

Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.

2013-09-02T23:59:59.000Z

119

Quantum Monte Carlo for electronic structure: Recent developments and applications

Quantum Monte Carlo (QMC) methods have been found to give excellent results when applied to chemical systems. The main goal of the present work is to use QMC to perform electronic structure calculations. In QMC, a Monte Carlo simulation is used to solve the Schroedinger equation, taking advantage of its analogy to a classical diffusion process with branching. In the present work the author focuses on how to extend the usefulness of QMC to more meaningful molecular systems. This study is aimed at questions concerning polyatomic and large atomic number systems. The accuracy of the solution obtained is determined by the accuracy of the trial wave function's nodal structure. Efforts in the group have given great emphasis to finding optimized wave functions for the QMC calculations. Little work had been done by systematically looking at a family of systems to see how the best wave functions evolve with system size. In this work the author presents a study of trial wave functions for C, CH, C{sub 2}H and C{sub 2}H{sub 2}. The goal is to study how to build wave functions for larger systems by accumulating knowledge from the wave functions of their fragments as well as gaining some knowledge on the usefulness of multi-reference wave functions. In a MC calculation of a heavy atom, for reasonable time steps most moves for core electrons are rejected. For this reason true equilibration is rarely achieved. A method proposed by Batrouni and Reynolds modifies the way the simulation is performed without altering the final steady-state solution. It introduces an acceleration matrix chosen so that all coordinates (i.e., of core and valence electrons) propagate at comparable speeds. A study of the results obtained using their proposed matrix suggests that it may not be the optimum choice. In this work the author has found that the desired mixing of coordinates between core and valence electrons is not achieved when using this matrix. A bibliography of 175 references is included.

Rodriguez, M.M.S. [Univ. of California, Berkeley, CA (United States). Dept. of Chemistry]|[Lawrence Berkeley Lab., CA (United States). Chemical Sciences Div.

1995-04-01T23:59:59.000Z

120

Monte Carlo simulations for generic granite repository studies

In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near- and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a sub-set of radionuclides that are potentially important to repository performance were identified and evaluated for a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of strategy for long-term disposal of high-level radioactive waste in a granite repository.

Chu, Shaoping [Los Alamos National Laboratory; Lee, Joon H [SNL; Wang, Yifeng [SNL

2010-12-08T23:59:59.000Z


121

Non-analog Monte Carlo estimators for radiation momentum deposition

The standard method for calculating radiation momentum deposition in Monte Carlo simulations is the analog estimator, which tallies the change in a particle's momentum at each interaction with the matter. Unfortunately, the analog estimator can suffer from large amounts of statistical error. In this paper, we present three new non-analog techniques for estimating momentum deposition. Specifically, we use absorption, collision, and track-length estimators to evaluate a simple integral expression for momentum deposition that does not contain terms that can cause large amounts of statistical error in the analog scheme. We compare our new non-analog estimators to the analog estimator with a set of test problems that encompass a wide range of material properties and both isotropic and anisotropic scattering. In nearly all cases, the new non-analog estimators outperform the analog estimator. The track-length estimator consistently yields the highest performance gains, improving upon the analog-estimator figure of merit by factors of up to two orders of magnitude.

Densmore, Jeffery D [Los Alamos National Laboratory; Hykes, Joshua M [Los Alamos National Laboratory

2008-01-01T23:59:59.000Z
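The collision and track-length estimators named in this abstract can be illustrated on a much simpler quantity than momentum deposition. The toy problem below estimates the volume-integrated flux in a purely absorbing slab two ways; it is a hedged sketch of the estimator idea, not the paper's momentum-deposition formulation:

```python
import math
import random

def slab_flux(sigma_t=1.0, thickness=0.5, n=200000, seed=2):
    """Toy estimator comparison: monoenergetic particles enter a purely
    absorbing slab at x = 0 moving in +x.  Estimate the volume-integrated
    flux per source particle with a collision estimator and with a
    track-length estimator; both are unbiased for the same integral."""
    rng = random.Random(seed)
    coll = track = 0.0
    for _ in range(n):
        d = -math.log(rng.random()) / sigma_t  # sampled distance to collision
        if d < thickness:
            coll += 1.0 / sigma_t              # collision estimator: 1/sigma_t per collision
            track += d                         # track length up to the absorption site
        else:
            track += thickness                 # streamed through: full chord length
    exact = (1.0 - math.exp(-sigma_t * thickness)) / sigma_t
    return coll / n, track / n, exact

coll, track, exact = slab_flux()
print(round(exact, 4))  # analytic value (1 - e^{-0.5})/1 = 0.3935
```

Both estimators converge to the same analytic answer, but the track-length tally scores on every history rather than only on colliding ones, which is the same variance advantage the paper reports for its track-length momentum-deposition estimator.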

122

Monte Carlo Simulations of Cosmic Rays Hadronic Interactions

Science Conference Proceedings (OSTI)

This document describes the construction and results of the MaCoR software tool, developed to model the hadronic interactions of cosmic rays with different geometries of materials. The ubiquity of cosmic radiation in the environment results in the activation of stable isotopes, referred to as cosmogenic activation. The objective is to use this application in conjunction with a model of the MAJORANA DEMONSTRATOR components, from extraction to deployment, to evaluate cosmogenic activation of such components before and after deployment. The cosmic ray showers include several types of particles with a wide range of energy (MeV to GeV). It is infeasible to compute an exact result with a deterministic algorithm for this problem; Monte Carlo simulations are a more suitable approach to model cosmic ray hadronic interactions. In order to validate the results generated by the application, a test comparing experimental muon flux measurements and those predicted by the application is presented. The experimental and simulated results have a deviation of 3%.

Aguayo Navarrete, Estanislao; Orrell, John L.; Kouzes, Richard T.

2011-04-01T23:59:59.000Z

123

Numerical study of error propagation in Monte Carlo depletion simulations

Science Conference Proceedings (OSTI)

Improving computer technology and the desire to more accurately model the heterogeneity of the nuclear reactor environment have made the use of Monte Carlo depletion codes more attractive in recent years, and feasible (if not practical) even for 3-D depletion simulation. However, in this case statistical uncertainty is combined with error propagating through the calculation from previous steps. In an effort to understand this error propagation, a numerical study was undertaken to model and track individual fuel pins in four 17 x 17 PWR fuel assemblies. By changing the code's initial random number seed, the data produced by a series of 19 replica runs was used to investigate the true and apparent variance in k{sub eff}, pin powers, and number densities of several isotopes. While this study does not intend to develop a predictive model for error propagation, it is hoped that its results can help to identify some common regularities in the behavior of uncertainty in several key parameters. (authors)

Wyant, T.; Petrovic, B. [Nuclear and Radiological Engineering, Georgia Inst. of Technology, 770 State Street, Atlanta, GA 30332-0745 (United States)

2012-07-01T23:59:59.000Z
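The replica-run technique used in this study, rerunning an identical calculation with different random number seeds and reading the true uncertainty off the run-to-run spread, can be mimicked on a trivially simple "calculation". The toy run below estimates a known probability, so the observed spread can be checked against expectation; replica and history counts are illustrative:

```python
import random
import statistics

def replica_estimates(n_replicas=19, histories=5000, p_true=0.3):
    """Replica-run approach: repeat the same calculation with different
    random number seeds and use the run-to-run spread as a direct measure
    of the true statistical uncertainty.  Each toy 'run' estimates a known
    probability p_true from a fixed number of histories."""
    results = []
    for seed in range(n_replicas):
        rng = random.Random(seed)
        hits = sum(rng.random() < p_true for _ in range(histories))
        results.append(hits / histories)
    return statistics.fmean(results), statistics.stdev(results)

mean, spread = replica_estimates()
```

In a depletion calculation the same comparison is made per tally (k-eff, pin powers, number densities), and a mismatch between this observed spread and the code's reported uncertainty is the signature of propagated error.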

124

Reduced Variance for Material Sources in Implicit Monte Carlo

Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.

Urbatsch, Todd J. [Los Alamos National Laboratory

2012-06-25T23:59:59.000Z

125

In this thesis research, a coherent scattering model for microwave remote sensing of vegetation canopy is developed on the basis of Monte Carlo simulations. An accurate model of vegetation structure is essential for the ...

Wang, Li-Fang, Ph. D. Massachusetts Institute of Technology

2007-01-01T23:59:59.000Z

126

Estimation of Wind Speed Distribution Using Markov Chain Monte Carlo Techniques

Science Conference Proceedings (OSTI)

The Weibull distribution is the most commonly used statistical distribution for describing wind speed data. Maximum likelihood has traditionally been the main method of estimation for Weibull parameters. In this paper, Markov chain Monte Carlo ...

Wan-Kai Pang; Jonathan J. Forster; Marvin D. Troutt

2001-08-01T23:59:59.000Z
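The MCMC estimation of Weibull parameters that this abstract describes can be sketched with a plain random-walk Metropolis sampler. This is a minimal stand-in under simplifying assumptions (flat priors, synthetic data, hand-tuned proposal widths), not the authors' method:

```python
import math
import random

def log_lik(data, k, lam):
    # Weibull log-likelihood for shape k and scale lam
    if k <= 0.0 or lam <= 0.0:
        return float("-inf")
    return sum(math.log(k / lam) + (k - 1.0) * math.log(x / lam) - (x / lam) ** k
               for x in data)

def metropolis_weibull(data, steps=8000, seed=3):
    """Random-walk Metropolis over (shape, scale) with flat priors."""
    rng = random.Random(seed)
    k, lam = 1.0, 1.0
    ll = log_lik(data, k, lam)
    trace = []
    for _ in range(steps):
        k2 = k + rng.gauss(0.0, 0.1)        # proposal widths are hand-tuned
        lam2 = lam + rng.gauss(0.0, 0.2)
        ll2 = log_lik(data, k2, lam2)
        if ll2 > ll or math.log(rng.random()) < ll2 - ll:
            k, lam, ll = k2, lam2, ll2      # Metropolis accept
        trace.append((k, lam))
    kept = trace[steps // 2:]               # discard burn-in
    k_hat = sum(t[0] for t in kept) / len(kept)
    lam_hat = sum(t[1] for t in kept) / len(kept)
    return k_hat, lam_hat

# Synthetic "wind speeds" drawn from a Weibull(shape=2, scale=6) distribution
rng = random.Random(0)
speeds = [6.0 * (-math.log(rng.random())) ** 0.5 for _ in range(200)]
k_hat, lam_hat = metropolis_weibull(speeds)
```

With data generated from a known Weibull, the posterior means should land near the true shape and scale, which is the basic check that the sampler targets the right distribution.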

127

Remote Sounding of High Clouds. III: Monte Carlo Calculations of Multiple-Scattered Lidar Returns

Science Conference Proceedings (OSTI)

Monte Carlo calculations of multiple-scattered contributions to the total energy received in a lidar beam have been made for a representative cirrus ice-cloud scattering phase function. The phase function is varied arbitrarily near the back ...

C. M. R. Platt

1981-01-01T23:59:59.000Z

128

Science Conference Proceedings (OSTI)

This paper explores the temporal evolution of cloud microphysical parameter uncertainty using an idealized 1D model of deep convection. Model parameter uncertainty is quantified using a Markov chain Monte Carlo (MCMC) algorithm. A new form of the ...

Derek J. Posselt; Craig H. Bishop

2012-06-01T23:59:59.000Z

129

Science Conference Proceedings (OSTI)

A new, time-domain, non-Monte Carlo method for computer simulation of electrical noise in nonlinear dynamic circuits with arbitrary excitations is presented. This time-domain noise simulation ...

Alper Demir; Edward W. Y. Liu; Alberto L. Sangiovanni-Vincentelli

1994-11-01T23:59:59.000Z

130

Clustering and Short-Range Order in Fe-Cr Alloys: A Monte Carlo Study

Science Conference Proceedings (OSTI)

May 1, 2007 ... Clustering and Short-Range Order in Fe-Cr Alloys: A Monte Carlo Study by Mikhail Lavrentiev, Duc Nguyen-Manh, Sergei Dudarev, Ralf Drautz, ...

131

A Monte Carlo Approach To Generator Portfolio Planning And Carbon Emissions Assessments Of Systems With Large Penetrations Of Variable Renewables

Journal Article

A new generator portfolio planning model is described that is capable of quantifying the carbon emissions associated with systems that include very high penetrations of variable renewables. The model combines a deterministic renewable portfolio planning module with a Monte Carlo simulation of system operation that determines the expected least-cost ...

132

The rigorous 2-step (R2S) method uses three-dimensional Monte Carlo transport simulations to calculate the shutdown dose rate (SDDR) in fusion reactors. Accurate full-scale R2S calculations are impractical in fusion reactors because they require calculating space- and energy-dependent neutron fluxes everywhere inside the reactor. The use of global Monte Carlo variance reduction techniques was suggested for accelerating the neutron transport calculation of the R2S method. The prohibitive computational costs of these approaches, which increase with the problem size and amount of shielding materials, inhibit their use in the accurate full-scale neutronics analyses of fusion reactors. This paper describes a novel hybrid Monte Carlo/deterministic technique that uses the Consistent Adjoint Driven Importance Sampling (CADIS) methodology but focuses on multi-step shielding calculations. The Multi-Step CADIS (MS-CADIS) method speeds up the Monte Carlo neutron calculation of the R2S method using an importance function that represents the importance of the neutrons to the final SDDR. Using a simplified example, preliminary results showed that the use of MS-CADIS enhanced the efficiency of the neutron Monte Carlo simulation of an SDDR calculation by a factor of 550 compared to standard global variance reduction techniques, and that the increase over analog Monte Carlo is higher than 10,000.

Ibrahim, Ahmad M [ORNL; Peplow, Douglas E. [ORNL; Peterson, Joshua L [ORNL; Grove, Robert E [ORNL

2013-01-01T23:59:59.000Z

133

A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

Science Conference Proceedings (OSTI)

Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan [TAPIR, California Institute of Technology, MC 350-17, 1200 E California Blvd., Pasadena, CA 91125 (United States); Burrows, Adam; Dolence, Joshua C. [Department of Astrophysical Sciences, Princeton University, Peyton Hall, Ivy Lane, Princeton, NJ 08544 (United States); Loeffler, Frank; Schnetter, Erik, E-mail: abdik@tapir.caltech.edu [Center for Computation and Technology, Louisiana State University, 216 Johnston Hall, Baton Rouge, LA 70803 (United States)

2012-08-20T23:59:59.000Z

134

PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

Iandola, F N; O'Brien, M J; Procassini, R J

2010-11-29T23:59:59.000Z

135

Local Monte Carlo Implementation of the Non-Abelian Landau-Pomeranchuk-Migdal Effect

The non-Abelian Landau-Pomeranchuk-Migdal (LPM) effect arises from the quantum interference between spatially separated, inelastic radiation processes in matter. A consistent probabilistic implementation of this LPM effect is a prerequisite for extending the use of Monte Carlo (MC) event generators to the simulation of jetlike multiparticle final states in nuclear collisions. Here, we propose a local MC algorithm, which is based solely on relating the LPM effect to the probabilistic concept of formation time for virtual quanta. This accounts probabilistically for the characteristic L{sup 2} dependence of average parton energy loss and the characteristic 1/sqrt(omega) modification of the non-Abelian LPM effect. Additional kinematic constraints are found to modify these L{sup 2} and omega dependencies characteristically in accordance with analytical estimates.

Zapp, Korinna [Physikalisches Institut, Universitaet Heidelberg, Philosophenweg 12, D-69120 Heidelberg (Germany); EMMI, GSI Helmholtz-Institut fuer Ionenforschung, Planckstrasse 1, D-64291 Darmstadt (Germany); Stachel, Johanna [Physikalisches Institut, Universitaet Heidelberg, Philosophenweg 12, D-69120 Heidelberg (Germany); Wiedemann, Urs Achim [Physics Department, Theory Unit, CERN, CH-1211 Geneve 23 (Switzerland)

2009-10-09T23:59:59.000Z

136

OPTIMIZATION OF THE HYSPEC DESIGN USING MONTE CARLO SIMULATIONS.

HYSPEC is a direct geometry spectrometer to be installed at the SNS [1] on beamline 14B where it will view a cryogenic coupled hydrogen moderator. The ''hybrid'' design combines time-of-flight spectroscopy with focusing Bragg optics to provide a high monochromatic flux on small single crystal samples, with a very low background at an extended detector bank. The instrument is optimized for an incident energy range of 3-90 meV. It will have a medium energy resolution (2-10%) and will provide a flux on sample of the order of 10{sup 6}-10{sup 7} neutrons/s-cm{sup 2}. The spectrometer will be located in a satellite building outside the SNS experimental hall at the end of a 35m curved supermirror guide. A straight-slotted Fermi chopper will be used to monochromate the neutron beam and to determine the burst width. The 15cm high, 4cm wide beam will be focused onto a 2cm by 2cm area at the sample position using Bragg reflection from one of two crystal arrays. For unpolarized neutron studies these will be Highly Oriented Pyrolytic graphite crystals while for polarized neutron studies these will be replaced with Heusler alloy crystals. These focusing crystal arrays will be placed in a drum shield similar to those used for triple axis spectrometers. HYSPEC will have a movable detector bank housing 160 position sensitive detectors. This detector bank will pivot about the sample axis. It will have a radius of 4.5m, a horizontal range of 60{sup o}, and a vertical range of {+-} 7.5{sup o}. In order to reduce background at the detector bank both a curved guide and a T0 chopper will be used. A bank of 20 supermirror bender polarization analyzers [2] will be used to spatially separate the polarized neutrons in the scattered beam so that both scattered neutron spin states can be measured simultaneously. The results of Monte Carlo simulations performed to optimize the instrument design will be discussed.

GHOSH, V.J.; HAGEN, M.E.; LEONHARDT, W.J.; ZALIZNYAK, I.; SHAPIRO, S.M.; PASSELL, L.

2005-04-25T23:59:59.000Z

137

Radiation heat transfer in an array of fixed discrete surfaces is an important problem that is particularly difficult to analyze because of the nonhomogeneous and anisotropic optical properties involved. This article presents an efficient Monte Carlo method for evaluating radiation heat transfer in arrays of fixed discrete surfaces. This Monte Carlo model has been optimized to take advantage of the regular arrangement of surfaces often encountered in these arrays. Monte Carlo model predictions have been compared with analytical and experimental results.

Drost, M.K. (Pacific Northwest Lab., Richland, WA (United States)); Welty, J.R. (Oregon State Univ., Corvallis, OR (United States))

1992-08-01T23:59:59.000Z

138

Thermal-hydraulic computer codes represent non-linear functions that may contain discontinuities. Monte Carlo methods are an effective means of accurately evaluating the effect of uncertainty on a nonlinear function with discontinuities, but the computational requirements of standard Monte Carlo methods may be prohibitive. The linear variate Monte Carlo method is a means of reducing the computational requirements. The linear variate method combines the linear response surface method with Monte Carlo analysis to obtain an efficient and accurate method of nonlinear uncertainty analysis. The method is applied to the power limit analysis for the Savannah River Site reactors. 7 refs., 2 figs., 3 tabs.

Kubic, W.L. Jr.; White, A.M. (Los Alamos National Lab., NM (USA); Westinghouse Savannah River Co., Aiken, SC (USA))

1989-01-01T23:59:59.000Z
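The linear variate idea described above, pairing a linear response surface with Monte Carlo sampling so that only the nonlinear residual is estimated stochastically, can be illustrated with a control-variate sketch. The model function, nominal point, and input distribution below are invented for illustration and are not taken from the Savannah River analysis.

```python
import math
import random

def expensive_model(x):
    # Stand-in for a thermal-hydraulic code response: nonlinear in x.
    return math.exp(0.3 * x) + 0.1 * x**2

def linear_surface(x, x0=0.0, f0=1.0, slope=0.3):
    # Linear response surface fitted around the nominal point x0.
    # For exp(0.3x) + 0.1x^2 at x0 = 0: f(0) = 1, f'(0) = 0.3.
    return f0 + slope * (x - x0)

random.seed(0)
n = 50_000
mu, sigma = 0.0, 1.0  # input uncertainty: x ~ N(0, 1)

xs = [random.gauss(mu, sigma) for _ in range(n)]

# Plain Monte Carlo estimate of E[f(X)].
plain = sum(expensive_model(x) for x in xs) / n

# Control-variate estimate: E[f] = E[f - g] + E[g], where E[g(X)] is
# known exactly because g is linear (E[g(X)] = f0 + slope * (mu - x0)).
known_mean_g = linear_surface(mu)
cv = sum(expensive_model(x) - linear_surface(x) for x in xs) / n + known_mean_g
```

Because the residual f - g has much smaller variance than f itself, the control-variate estimator reaches a given uncertainty with far fewer model evaluations, which is the efficiency gain the abstract describes.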

139

Crossing the mesoscale no-mans land via parallel kinetic Monte Carlo.

Science Conference Proceedings (OSTI)

The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.

Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan

2009-10-01T23:59:59.000Z

140

Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo.

The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.

Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan

2009-10-01T23:59:59.000Z


141

Revised methods for few-group cross sections generation in the Serpent Monte Carlo code

Science Conference Proceedings (OSTI)

This paper presents new calculation methods, recently implemented in the Serpent Monte Carlo code, and related to the production of homogenized few-group constants for deterministic 3D core analysis. The new methods fall under three topics: 1) Improved treatment of neutron-multiplying scattering reactions, 2) Group constant generation in reflectors and other non-fissile regions and 3) Homogenization in leakage-corrected criticality spectrum. The methodology is demonstrated by a numerical example, comparing a deterministic nodal diffusion calculation using Serpent-generated cross sections to a reference full-core Monte Carlo simulation. It is concluded that the new methodology improves the results of the deterministic calculation, and paves the way for Monte Carlo based group constant generation. (authors)

Fridman, E. [Reactor Safety Div., Helmholz-Zentrum Dresden-Rossendorf, POB 51 01 19, Dresden, 01314 (Germany); Leppaenen, J. [VTT Technical Research Centre of Finland, POB 1000, FI-02044 VTT (Finland)

2012-07-01T23:59:59.000Z

142

Calculation of radiation therapy dose using all particle Monte Carlo transport

The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.

Chandler, William P. (Tracy, CA); Hartmann-Siantar, Christine L. (San Ramon, CA); Rathkopf, James A. (Livermore, CA)

1999-01-01T23:59:59.000Z

143

General purpose dynamic Monte Carlo with continuous energy for transient analysis

For safety assessments, transient analysis is an important tool. It can predict maximum temperatures during regular reactor operation or during an accident scenario. Despite the fact that this kind of analysis is very important, the state of the art still uses rather crude methods, like diffusion theory and point-kinetics. For reference calculations it is preferable to use the Monte Carlo method. In this paper the dynamic Monte Carlo method is implemented in the general-purpose Monte Carlo code Tripoli4, and the method is extended for use with continuous energy. The first results of Dynamic Tripoli demonstrate that this kind of calculation is indeed accurate and that the results are achieved in a reasonable amount of time. With the method implemented in Tripoli it is now possible to do an exact transient calculation in arbitrary geometry. (authors)

Sjenitzer, B. L.; Hoogenboom, J. E. [Delft Univ. of Technology, Dept. of Radiation, Radionuclide and Reactors, Mekelweg 15, 2629JB Delft (Netherlands)

2012-07-01T23:59:59.000Z

144

Tests of Monte Carlo Independent Column Approximation With a Mixed-Layer Ocean Model

NLE Websites -- All DOE Office Websites (Extended Search)

Tests of Monte Carlo Independent Column Approximation With a Mixed-Layer Ocean Model. Petri Räisänen, Simo Järvenoja, Heikki Järvinen, Finnish Meteorological Institute. Figure 1: Root-mean-square sampling errors in local instantaneous total (LW+SW) net flux at the surface and total radiative heating rate for the 1COL, CLDS, and REF approaches; global rms values are given at the upper right-hand corner of the plots. 1. Introduction. The Monte Carlo Independent Column Approximation (McICA) separates the description of unresolved cloud structure from the radiative transfer solver, making it very flexible and unbiased with respect to ICA. However, the radiative fluxes and heating rates contain conditional random errors ("McICA noise"). The topic of this poster: all previous tests of McICA

145

Tests of Monte Carlo Independent Column Approximation in the ECHAM5

NLE Websites -- All DOE Office Websites (Extended Search)

Tests of Monte Carlo Independent Column Approximation in the ECHAM5 Atmospheric GCM. Raisanen, Petri, Finnish Meteorological Institute; Jarvenoja, Simo, Finnish Meteorological Institute; Jarvinen, Heikki, Finnish Meteorological Institute. Category: Modeling. The Monte Carlo Independent Column Approximation (McICA) was recently introduced as a new approach for parametrizing broadband radiative fluxes in global climate models (GCMs). McICA allows a flexible description of unresolved cloud structure, and it is unbiased with respect to the full ICA, but its results contain conditional random errors (i.e., noise). In this work, McICA and a stochastic cloud generator have been implemented in the Max Planck Institute for Meteorology's ECHAM5 atmospheric GCM. The

146

Search for New Heavy Higgs Boson in B-L model at the LHC using Monte Carlo Simulation

The aim of this work is to search for a new heavy Higgs boson in the B-L extension of the Standard Model at the LHC, using data produced from simulated proton-proton collisions at different center-of-mass energies by Monte Carlo event generator programs, in order to find new Higgs boson signatures at the LHC. We also study the production and decay channels of the Higgs boson in this model and its interactions with the other new particles of the model, namely the new massive neutral gauge boson and the new fermionic right-handed heavy neutrinos.

Hesham Mansour; Nady Bakhet

2013-04-24T23:59:59.000Z

147

MonChER: Monte-Carlo generator for CHarge Exchange Reactions. Version 1.1. Physics and Manual

MonChER is a Monte Carlo event generator for simulation of single and double charge exchange reactions in proton-proton collisions at energies from 0.9 to 14 TeV. Such reactions, $pp\\to n+X$ and $pp\\to n+X+n$, are characterized by leading neutron production. They are dominated by $\\pi^+$ exchange and could provide us with more information about total and elastic $\\pi^+ p$ and $\\pi^+\\pi^+$ cross sections and parton distributions in pions in the still unexplored kinematical region.

R. A. Ryutin; A. E. Sobol; V. A. Petrov

2011-06-10T23:59:59.000Z

148

Monte Carlo simulation of the Massachusetts Institute of Technology Research Reactor

The three-dimensional continuous-energy MCNP Monte Carlo code is used to develop a versatile and accurate reactor physics model of the Massachusetts Institute of Technology Research Reactor 2 (MITR-2). The validation of the model against existing experimental data is presented. Core multiplication factors as well as fast neutron in-core flux measurements were used in the validation process. The agreement between the MCNP predictions and the experimentally determined values is very good, which indicates that the Monte Carlo model is correctly simulating the MITR-2.

Redmond, E.L. II; Yanch, J.C.; Harling, O.K. (Massachusetts Inst. of Tech., Cambridge, MA (United States). Nuclear Engineering Dept.)

1994-04-01T23:59:59.000Z

149

Penalized Splines for Smooth Representation of High-dimensional Monte Carlo Datasets

Detector response to a high-energy physics process is often estimated by Monte Carlo simulation. For purposes of data analysis, the results of this simulation are typically stored in large multi-dimensional histograms, which can quickly become both too large to easily store and manipulate and numerically problematic due to unfilled bins or interpolation artifacts. We describe here an application of the penalized spline technique to efficiently compute B-spline representations of such tables and discuss aspects of the resulting B-spline fits that simplify many common tasks in handling tabulated Monte Carlo data in high-energy physics analysis, in particular their use in maximum-likelihood fitting.

Whitehorn, Nathan; Lafebre, Sven

2013-01-01T23:59:59.000Z

150

Pseudo-random number generators for Monte Carlo simulations on Graphics Processing Units

Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPU). The performance results of the implemented generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU are discussed. The obtained speed-up factor is in the hundreds compared with the CPU. The RANLUX generator is found to be the most appropriate for use on GPUs in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is also presented.

Vadim Demchik

2010-03-09T23:59:59.000Z
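One of the benchmarked generators, Marsaglia's XOR-shift in its XOR128 variant, is simple enough to sketch in a few lines. The seed constants follow common convention, and the pi-estimation loop is only a smoke test, not the paper's benchmark.

```python
class XorShift128:
    """Marsaglia's xorshift128 PRNG (the XOR128 variant named above).

    32-bit arithmetic is emulated with explicit masking.
    """
    MASK = 0xFFFFFFFF

    def __init__(self, seed=123456789):
        # Any nonzero state works; these defaults are conventional.
        self.x = (seed & self.MASK) or 1
        self.y = 362436069
        self.z = 521288629
        self.w = 88675123

    def next_u32(self):
        t = (self.x ^ (self.x << 11)) & self.MASK
        self.x, self.y, self.z = self.y, self.z, self.w
        self.w = (self.w ^ (self.w >> 19) ^ t ^ (t >> 8)) & self.MASK
        return self.w

    def uniform(self):
        # Map to [0, 1) for Monte Carlo use.
        return self.next_u32() / 2**32


# Crude Monte Carlo check: estimate pi by sampling the unit square.
rng = XorShift128()
n = 100_000
hits = sum(1 for _ in range(n)
           if rng.uniform()**2 + rng.uniform()**2 < 1.0)
pi_est = 4.0 * hits / n
```

The same skeleton (seed, next word, map to a unit float) is what a GPU port parallelizes, with one independent generator state per thread.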

151

Calculating kinetics parameters and reactivity changes with continuous-energy Monte Carlo

The iterated fission probability interpretation of the adjoint flux forms the basis for a method to perform adjoint weighting of tally scores in continuous-energy Monte Carlo k-eigenvalue calculations. Applying this approach, adjoint-weighted tallies are developed for two applications: calculating point reactor kinetics parameters and estimating changes in reactivity from perturbations. Calculations are performed in the widely-used production code, MCNP, and the results of both applications are compared with discrete ordinates calculations, experimental measurements, and other Monte Carlo calculations.

Kiedrowski, Brian C [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory; Wilson, Paul [UNIV. WISCONSIN

2009-01-01T23:59:59.000Z

152

A First-Passage Kinetic Monte Carlo algorithm for complex diffusion-reaction systems

Science Conference Proceedings (OSTI)

We develop an asynchronous event-driven First-Passage Kinetic Monte Carlo (FPKMC) algorithm for continuous time and space systems involving multiple diffusing and reacting species of spherical particles in two and three dimensions. The FPKMC algorithm presented here is based on the method introduced in Oppelstrup et al. and is implemented in a robust and flexible framework. Unlike standard KMC algorithms such as the n-fold algorithm, FPKMC is most efficient at low densities where it replaces the many small hops needed for reactants to find each other with large first-passage hops sampled from exact time-dependent Green's functions, without sacrificing accuracy. We describe in detail the key components of the algorithm, including the event-loop and the sampling of first-passage probability distributions, and demonstrate the accuracy of the new method. We apply the FPKMC algorithm to the challenging problem of simulation of long-term irradiation of metals, relevant to the performance and aging of nuclear materials in current and future nuclear power plants. The problem of radiation damage spans many decades of time-scales, from picosecond spikes caused by primary cascades, to years of slow damage annealing and microstructure evolution. Our implementation of the FPKMC algorithm has been able to simulate the irradiation of a metal sample for durations that are orders of magnitude longer than any previous simulations using the standard Object KMC or more recent asynchronous algorithms.

Donev, Aleksandar [Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States)], E-mail: aleks.donev@gmail.com; Bulatov, Vasily V. [Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States); Oppelstrup, Tomas [Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States); Royal Institute of Technology (KTH), Stockholm S-10044 (Sweden); Gilmer, George H.; Sadigh, Babak; Kalos, Malvin H. [Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States)

2010-05-01T23:59:59.000Z
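For contrast with the first-passage approach, the standard rejection-free ("n-fold way") KMC loop that the abstract compares against can be sketched as follows; the three event rates are a toy illustration, not values from the paper.

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free (n-fold way) KMC step.

    rates: list of nonnegative event rates. Returns (event_index, dt).
    """
    total = sum(rates)
    # Choose an event with probability rate_i / total.
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    # Time advances by an exponential with mean 1/total.
    dt = -math.log(1.0 - rng.random()) / total
    return i, dt

# Toy system: three competing events (e.g. hop left, hop right, react).
rng = random.Random(1)
rates = [1.0, 2.0, 7.0]
counts = [0, 0, 0]
t = 0.0
for _ in range(20_000):
    i, dt = kmc_step(rates, rng)
    counts[i] += 1
    t += dt
```

At low densities most of these steps are tiny diffusion hops; FPKMC replaces runs of such hops with one large first-passage jump sampled from an exact Green's function, which is where its speedup comes from.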

153

A Monte Carlo model for seeded atomic flows in the transition regime

Science Conference Proceedings (OSTI)

A simple model for the numerical determination of separation effects in seeded atomic gas flows is presented. The model is based on the known possibility of providing a statistically convergent estimate of the exact solution of a linear transport equation ... Keywords: Compressible flows, Monte Carlo simulation, Multi-component flows, Numerical methods, Rarefied flows

S. Longo; P. Diomede

2009-06-01T23:59:59.000Z

154

Benchmark calculation of no-core Monte Carlo shell model in light nuclei

The Monte Carlo shell model is applied for the first time to no-core shell-model calculations in light nuclei. The results are compared with those of the full configuration interaction; the agreement between them is within a few percent at most.

Abe, T; Otsuka, T; Shimizu, N; Utsuno, Y; Vary, J P; 10.1063/1.3584062

2011-01-01T23:59:59.000Z

155

Benchmark calculation of no-core Monte Carlo shell model in light nuclei

The Monte Carlo shell model is applied for the first time to no-core shell-model calculations in light nuclei. The results are compared with those of the full configuration interaction; the agreement between them is within a few percent at most.

Abe, T.; Shimizu, N. [Department of Physics, the University of Tokyo, Hongo, Tokyo 113-0033 (Japan); Maris, P.; Vary, J. P. [Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011 (United States); Otsuka, T. [Department of Physics, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); CNS, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); NSCL, Michigan State University, East Lansing, Michigan 48824 (United States); Utsuno, Y. [ASRC, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1195 (Japan)

2011-05-06T23:59:59.000Z

156

Modeling of Asymmetry between Gasoline and Crude Oil Prices: A Monte Carlo Comparison

Science Conference Proceedings (OSTI)

An Engle-Granger two-step procedure is commonly used to estimate cointegrating vectors and, consequently, asymmetric error-correction models. This study uses Monte Carlo methods and demonstrates that the Engle-Granger two-step method leads to biased ... Keywords: Asymmetry, Gasoline, Modeling, Oil prices

Afshin Honarvar

2010-10-01T23:59:59.000Z

157

Benchmark calculation of no-core Monte Carlo shell model in light nuclei

The Monte Carlo shell model is applied for the first time to no-core shell-model calculations in light nuclei. The results are compared with those of the full configuration interaction; the agreement between them is within a few percent at most.

T. Abe; P. Maris; T. Otsuka; N. Shimizu; Y. Utsuno; J. P. Vary

2011-07-09T23:59:59.000Z

158

Is the Standard Monte Carlo Power Iteration Approach the Wrong Approach? Part 2

Science Conference Proceedings (OSTI)

The recent work 'Is the Standard Monte Carlo Power Iteration Approach the Wrong Approach?' speculated that the second eigenfunction could be built using essentially the same 'building brick' approach that obtained the first eigenfunction in LA-UR-12-21928. This note shows that the speculation was at least partially correct, but not complete.

Booth, Thomas E. [Los Alamos National Laboratory

2012-07-11T23:59:59.000Z
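For context, the power iteration that the Monte Carlo version mimics can be sketched deterministically on a small matrix: repeatedly apply the operator and renormalize until the dominant eigenpair emerges. The 2x2 operator below is a stand-in, not anything from the report.

```python
def power_iteration(mat, n_iter=200):
    """Deterministic analogue of Monte Carlo power iteration.

    Repeatedly applies the operator and renormalizes to converge on the
    dominant eigenpair. `mat` is a list-of-lists square matrix.
    """
    n = len(mat)
    v = [1.0] + [0.0] * (n - 1)  # arbitrary nonzero starting vector
    lam = 0.0
    for _ in range(n_iter):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # normalization = eigenvalue estimate
        v = [x / lam for x in w]
    return lam, v

# Toy "transport operator": eigenvalues 3 (mode [1, 1]) and 1 (mode [1, -1]).
mat = [[2.0, 1.0],
       [1.0, 2.0]]
k1, mode1 = power_iteration(mat)
```

The iteration converges at a rate set by the ratio of the second to the first eigenvalue; obtaining the second eigenfunction requires deflating out the first mode, which is the step the note above investigates in the Monte Carlo setting.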

159

Monte Carlo simulation-based algorithms for estimating the reliability of mobile agent-based systems

Science Conference Proceedings (OSTI)

In this paper, we propose two algorithms for estimating the task route reliability of mobile agent-based systems (MABS), which are based on the conditions of the underlying computer network. In addition, we propose a third algorithm for generating a ... Keywords: Algorithms, Mobile agents, Monte Carlo simulation, Random walk, Reliability

Mosaab Daoud; Qusay H. Mahmoud

2008-01-01T23:59:59.000Z

160

Generation reliability assessment in power markets using Monte Carlo simulation and soft computing

Science Conference Proceedings (OSTI)

Deregulation policy has caused some changes in the concepts of power systems reliability assessment and enhancement. In the present research, generation reliability is considered, and a method for its assessment is proposed using intelligent systems. ... Keywords: Generation reliability, Intelligent systems, Monte Carlo simulation, Power pool market

H. Haroonabadi; M. -R. Haghifam

2011-12-01T23:59:59.000Z


161

The Monte Carlo technique has been implemented to generate the dose distributions in a model prostate patient implanted with iodine-125 ({sup 125}I) brachytherapy sources. The results of these calculations are also compared with the dose distributions calculated by a commercially available treatment planning system. The comparison shows that with the source strength suggested by the Monte Carlo technique, the current clinical planning system is found to provide 100% coverage of the prostate with the prescription dose for the same implant pattern. However, the dose-volume histogram of this investigation shows that the VariSeed{sup TM} treatment planning system has a 29% and 136% larger dose coverage for the 150% and 200% isodose lines, respectively, than the Monte Carlo simulation. These differences are attributed to the oversimplification of the current planning system, which uses the point source approximation, and also to the interseed effects from multiple sources that are neglected in conventional planning systems. The results of this study provide evidence supporting the use of the Monte Carlo technique in treatment planning systems to provide accurate dose calculations in brachytherapy implants.

Zhang Hualin [Department of Radiation Medicine, University of Kentucky Medical Center, Lexington, KY (United States)]. E-mail: hualinzhang@yahoo.com; Baker, Curtis [Department of Radiation Medicine, University of Kentucky Medical Center, Lexington, KY (United States); McKinsey, Rachel [Department of Radiation Medicine, University of Kentucky Medical Center, Lexington, KY (United States); Meigooni, Ali [Department of Radiation Medicine, University of Kentucky Medical Center, Lexington, KY (United States)

2005-06-30T23:59:59.000Z

162

Science Conference Proceedings (OSTI)

This paper reports simulation heuristics of Monte-Carlo Tree Search (MCTS) and shows an application example. MCTS, introduced by Coulom, is a best-first search in which pseudorandom simulations guide the solution of the problem. Recent improvements on MCTS have produced strong computer Go programs

Shimpei Matsumoto; Kosuke Kato; Noriaki Hirosue; Hiroaki Ishii

2010-01-01T23:59:59.000Z
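The selection step of MCTS typically scores children with the UCB1 rule; the sketch below shows that formula and the usual convention that unvisited children are tried first. The child statistics are invented for illustration.

```python
import math

def ucb1(child_value_sum, child_visits, parent_visits, c=math.sqrt(2)):
    """UCB1 score used in the MCTS selection step.

    Unvisited children get +infinity so every action is tried once.
    """
    if child_visits == 0:
        return float("inf")
    exploit = child_value_sum / child_visits          # empirical mean value
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

# Selection: pick the child maximizing UCB1.
children = [
    {"wins": 7.0, "visits": 10},   # strong but already well explored
    {"wins": 3.0, "visits": 4},    # weaker, less explored
    {"wins": 0.0, "visits": 0},    # never tried: selected first
]
parent_visits = sum(ch["visits"] for ch in children)
best = max(range(len(children)),
           key=lambda i: ucb1(children[i]["wins"], children[i]["visits"],
                              parent_visits))
```

The exploration term shrinks as a child accumulates visits, which is how the pseudorandom playouts gradually concentrate on the most promising moves.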

163

Monte Carlo simulation of solar radiation in maize canopies and its visualization

Science Conference Proceedings (OSTI)

The spatial distribution of solar radiation has an important influence on the eco-physiological functions of plant canopies. A simulation model of the three-dimensional distribution of direct and indirect solar radiation in real maize canopies is developed from measured ... Keywords: Monte Carlo algorithm, plant canopy, radiosity, ray tracing, three-dimensional distribution

Zhang Yuan; Lao Cai-lian; Lee Bao-Guo; Chen Yan; Guo Yan; Wang Xi-ping; Ma Yun-tao; Zhao Ming

2007-04-01T23:59:59.000Z

164

Use of single scatter electron monte carlo transport for medical radiation sciences

The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.

Svatos, Michelle M. (Oakland, CA)

2001-01-01T23:59:59.000Z

165

GPU-Based Monte-Carlo Volume Raycasting. Christof Rezk Salama

... producing convincing images, yet flexible enough for digital productions in practice. 1 Introduction. Volume rendering fundamentals can be found in the book by Engel et al. [1]. The first solely GPU-based implementations of volume

Blanz, Volker

166

State space exploration using feedback constraint generation and Monte-Carlo sampling

Science Conference Proceedings (OSTI)

The systematic exploration of the space of all the behaviours of a software system forms the basis of numerous approaches to verification. However, existing approaches face many challenges with scalability and precision. We propose a framework for validating ... Keywords: model-checking, monte-carlo, statistical sampling, verification

Sriram Sankaranarayanan; Richard M. Chang; Guofei Jiang; Franjo Ivančić

2007-09-01T23:59:59.000Z

167

Monte Carlo Study of the Scattering Error of a Quartz Reflective Absorption Tube

Science Conference Proceedings (OSTI)

A Monte Carlo model was used to study the scattering error of an absorption meter with a divergent light beam and a limited acceptance angle of the receiver. Reflections at both ends of the tube were taken into account. Calculations of the effect ...

Jacek Piskozub; Piotr J. Flatau; J. V. Ronald Zaneveld

2001-03-01T23:59:59.000Z

168

Monte Carlo Estimation of Time Mismatch Effect in an OFDM EER Architecture

Monte Carlo Estimation of Time Mismatch Effect in an OFDM EER Architecture. J-F. Bercher, A. Diet, C. ... technique due to non-linearities of the power amplification operation. EER architecture can be used to solve non-linearities in the radio-frequency transmitter. Linearization methods are necessary. EER (Envelope Elimination

Baudoin, GeneviÃ¨ve

169

A Mersenne Twister Hardware Implementation for the Monte Carlo Localization Algorithm

Science Conference Proceedings (OSTI)

Mobile robot localization is the problem of estimating a robot position based on sensor data and a map of the environment. One of the most used methods to address this problem is based on the Monte Carlo Localization (MCL) algorithm, which is a sample ... Keywords: Embedded mobile robotics, FPGA, Mersenne twister, Particle filter

Vanderlei Bonato; Bruno F. Mazzotti; Marcio Merino Fernandes; Eduardo Marques

2013-01-01T23:59:59.000Z
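One place an MCL particle filter consumes the hardware PRNG's output is the resampling step. A common low-variance scheme, systematic resampling, can be sketched as below; the particle weights are made up for illustration.

```python
import random

def systematic_resample(weights, rng):
    """Systematic (low-variance) resampling used in particle filters.

    Returns the indices of the particles to carry into the next step,
    drawn with a single random offset and evenly spaced pointers.
    """
    n = len(weights)
    total = sum(weights)
    step = total / n
    u = rng.random() * step          # one random draw for the whole sweep
    indices, acc, i = [], weights[0], 0
    for _ in range(n):
        while u > acc:               # advance to the particle covering u
            i += 1
            acc += weights[i]
        indices.append(i)
        u += step
    return indices

rng = random.Random(3)
weights = [0.1, 0.1, 0.6, 0.1, 0.1]  # particle 2 matches the sensor best
counts = [0] * 5
for _ in range(2000):
    for idx in systematic_resample(weights, rng):
        counts[idx] += 1
```

Each resampling sweep needs only one uniform variate, so a fast generator matters mostly for the per-particle motion and measurement noise draws rather than for this step.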

170

Acceleration of Monte Carlo Criticality Calculations Using Deterministic-Based Starting Sources

A new automatic approach that uses approximate deterministic solutions to provide the starting fission source for Monte Carlo eigenvalue calculations was evaluated in this analysis. By accelerating the Monte Carlo source convergence and decreasing the number of cycles that have to be skipped before the tally estimation, this approach was found to increase the efficiency of the overall simulation, even with the inclusion of the extra computational time required by the deterministic calculation. This approach was also found to increase the reliability of Monte Carlo criticality calculations of loosely coupled systems, because the use of a better starting source reduces the likelihood of producing an undersampled k{sub eff} due to inadequate source convergence. The efficiency improvement was demonstrated using two of the standard test problems devised by the OECD/NEA Expert Group on Source Convergence in Criticality-Safety Analysis to measure the source convergence in Monte Carlo criticality calculations. For a fixed uncertainty objective, this approach increased the efficiency of the overall simulation by factors between 1.2 and 3, depending on the difficulty of the source convergence in these problems. The reliability improvement was demonstrated in a modified version of the 'k{sub eff} of the world' problem that was specifically designed to demonstrate the limitations of current Monte Carlo power iteration techniques. For this problem, the probability of obtaining a clearly undersampled k{sub eff} decreased from 5% with a uniform starting source to zero with a deterministic starting source when batch sizes with more than 15,000 neutrons/cycle were used.

Ibrahim, A. [University of Wisconsin; Peplow, Douglas E. [ORNL; Wagner, John C [ORNL; Mosher, Scott W [ORNL; Evans, Thomas M [ORNL

2012-01-01T23:59:59.000Z

171

Monte Carlo Simulations for Homeland Security Using Anthropomorphic Phantoms

Science Conference Proceedings (OSTI)

A radiological dispersion device (RDD) is a device which deliberately releases radioactive material for the purpose of causing terror or harm. In the event that a dirty bomb is detonated, there may be airborne radioactive material that can be inhaled, as well as material that settles on individuals, leading to external contamination.

Burns, Kimberly A.

2008-01-01T23:59:59.000Z

172

The energy injection and losses in the Monte Carlo simulations of a diffusive shock

Although diffusive shock acceleration (DSA) can be simulated with several well-established models, the assumed injection rate from the thermal particles to the superthermal population is still a contentious problem. In self-consistent Monte Carlo simulations, however, the particle injection rate is intrinsically defined by the prescribed scattering law rather than by an assumed injection function. We examine the correlation of the energy injection with the prescribed multiple scattering angular distributions. According to the Rankine-Hugoniot conditions, the energy injection and the losses in the simulation system directly determine the slope of the shock energy spectrum. From simulations performed with a multiple scattering law in the dynamical Monte Carlo model, the energy injection and energy loss functions are obtained. As a result, the case applying an anisotropic scattering law produces a small energy injection and large energy losses, leading to a s...

Wang, Xin

2011-01-01T23:59:59.000Z

173

Adaptive kinetic Monte Carlo simulation of methanol decomposition on Cu(100)

The adaptive kinetic Monte Carlo method was used to calculate the dynamics of methanol decomposition on Cu(100) at room temperature over a time scale of minutes. Mechanisms of reaction were found using min-mode following saddle point searches based upon forces and energies from density functional theory. Rates of reaction were calculated with harmonic transition state theory. The dynamics followed a pathway from CH3-OH, CH3-O, CH2-O, CH-O and finally C-O. Our calculations confirm that methanol decomposition starts with breaking the O-H bond followed by breaking C-H bonds in the dehydrogenated intermediates until CO is produced. The bridge site on the Cu(100) surface is the active site for scissoring chemical bonds. Reaction intermediates are mobile on the surface which allows them to find this active reaction site. This study illustrates how the adaptive kinetic Monte Carlo method can model the dynamics of surface chemistry from first principles.

Xu, Lijun; Mei, Donghai; Henkelman, Graeme A.

2009-12-31T23:59:59.000Z

174

Calculating alpha Eigenvalues in a Continuous-Energy Infinite Medium with Monte Carlo

The {alpha} eigenvalue has implications for time-dependent problems where the system is sub- or supercritical. We present methods and results from calculating the {alpha}-eigenvalue spectrum for a continuous-energy infinite medium with a simplified Monte Carlo transport code. We formulate the {alpha}-eigenvalue problem, detail the Monte Carlo code physics, and provide verification and results. We have a method for calculating the {alpha}-eigenvalue spectrum in a continuous-energy infinite-medium. The continuous-time Markov process described by the transition rate matrix provides a way of obtaining the {alpha}-eigenvalue spectrum and kinetic modes. These are useful for the approximation of the time dependence of the system.

Betzler, Benjamin R. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory

2012-09-04T23:59:59.000Z

175

Empirical Analysis of Stochastic Volatility Model by Hybrid Monte Carlo Algorithm

The stochastic volatility (SV) model is one of the volatility models that infer the latent volatility of asset returns. Bayesian inference of the SV model is performed by the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov chain Monte Carlo methods in sampling volatility variables. We perform HMC simulations of the SV model for two liquid stock returns traded on the Tokyo Stock Exchange and measure the volatilities of those stock returns. We then calculate the accuracy of the volatility measurement using the realized volatility as a proxy of the true volatility and compare the SV model with the GARCH model, another widely used volatility model. Using the accuracy calculated with the realized volatility, we find that the SV model empirically performs better than the GARCH model.

Takaishi, Tetsuya

2013-01-01T23:59:59.000Z
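The HMC update at the core of such Bayesian inference can be sketched for a trivial one-dimensional standard-normal target. This is a minimal illustration of the leapfrog-plus-Metropolis mechanics only, not the paper's SV-model sampler, whose target is the joint posterior over volatility variables.

```python
import numpy as np

def hmc_sample(n_samples, eps=0.2, n_leap=10, seed=0):
    """Hybrid (Hamiltonian) Monte Carlo for the toy target exp(-x^2/2)."""
    rng = np.random.default_rng(seed)
    U = lambda x: 0.5 * x**2        # potential energy = -log target
    gradU = lambda x: x
    x = 0.0
    out = np.empty(n_samples)
    for i in range(n_samples):
        p = rng.normal()                       # fresh Gaussian momentum
        x_new, p_new = x, p
        # leapfrog integration of the Hamiltonian dynamics
        p_new -= 0.5 * eps * gradU(x_new)
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new -= eps * gradU(x_new)
        x_new += eps * p_new
        p_new -= 0.5 * eps * gradU(x_new)
        # Metropolis accept/reject on the Hamiltonian error
        dH = (U(x_new) + 0.5 * p_new**2) - (U(x) + 0.5 * p**2)
        if rng.random() < np.exp(-dH):
            x = x_new
        out[i] = x
    return out

samples = hmc_sample(5000)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, acceptance rates stay high even for distant proposals, which is why HMC decorrelates volatility variables faster than single-site Metropolis updates.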

176

Using high performance computing and Monte Carlo simulation for pricing American options

High performance computing (HPC) is a very attractive and relatively new area of research, which gives promising results in many applications. In this paper HPC is used for pricing American options. Although American options are very significant in computational finance, their valuation is very challenging, especially when Monte Carlo simulation techniques are used. To obtain the most accurate price for these types of options we use quasi-Monte Carlo simulation, which gives the best convergence. Furthermore, the algorithm is implemented on both GPU and CPU. The CUDA architecture is used to harness the power of the GPU by executing the algorithm in parallel, and this implementation is then compared with the serial implementation on the CPU. In conclusion, this paper gives the reasons for and the advantages of applying HPC in computational finance.

Cvetanoska, Verche

2012-01-01T23:59:59.000Z
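The quasi-Monte Carlo idea can be shown on a European call, where a closed-form check exists; the parameters below are illustrative, not from the paper. American options additionally require an early-exercise treatment (e.g., regression-based continuation values as in Longstaff-Schwartz) on top of the simulated paths.

```python
import numpy as np
from scipy.stats import norm, qmc

# Illustrative market parameters (not taken from the paper)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

# Quasi-Monte Carlo: scrambled Sobol points mapped to normal draws
sob = qmc.Sobol(d=1, scramble=True, seed=42)
u = sob.random_base2(m=14).ravel()          # 2^14 low-discrepancy points
z = norm.ppf(u)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
qmc_price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

# Closed-form Black-Scholes value for comparison
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
print(qmc_price, bs_price)
```

For smooth payoff integrands like this, the low-discrepancy sequence converges close to O(1/N) rather than the O(1/sqrt(N)) of pseudo-random sampling, which is the convergence advantage the abstract refers to.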

177

Under-Prediction of Localized Tally Uncertainties in Monte Carlo Eigenvalue Calculations

Modeling and simulation using Monte Carlo methods is widely used in nuclear reactor criticality benchmarking applications. However, obtaining good statistics not only takes a large amount of computational time, but it has been shown that localized tally uncertainties may be under-predicted by a factor of five or more in select cases. The primary components of this under-prediction include poor sampling due to improper source convergence and cycle-to-cycle correlations in the fission source. Additional components relate to the flux shape and the size of the tally cells. These issues must be understood and dealt with in order to support the practical use of modern Monte Carlo software packages.

Mervin, Brenden [University of Tennessee, Knoxville (UTK)]; Mosher, Scott W. [ORNL]; Wagner, John C. [ORNL]; Maldonado, G. Ivan [University of Tennessee, Knoxville (UTK)]

2011-01-01T23:59:59.000Z

178

In the OSTI Collections: Monte Carlo Methods | OSTI, US Dept of Energy

Office of Scientific and Technical Information (OSTI)

Monte Carlo Methods "The first thoughts and attempts I made ... were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than 'abstract thinking' might not be to lay it out say one hundred times and simply observe and count the number of successful plays. This was already possible to envisage with the beginning of the new era of fast computers, and I immediately thought of problems of neutron diffusion and other questions of mathematical physics, ..."
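Ulam's "lay it out a hundred times and count" idea fits in a few lines. Since the full Canfield rules are lengthy, the sketch below uses a stand-in card question with a known answer: the chance that a shuffled 52-card deck leaves no card in its original position (a derangement), which tends to 1/e.

```python
import numpy as np

# Monte Carlo in Ulam's original spirit: repeat a random experiment many
# times and count successes. Stand-in game: does a shuffle leave NO card
# in its original position? Exact answer approaches 1/e ~ 0.3679.
rng = np.random.default_rng(1)
n_trials = 20000
hits = 0
for _ in range(n_trials):
    perm = rng.permutation(52)
    if np.all(perm != np.arange(52)):
        hits += 1
estimate = hits / n_trials
print(estimate)
```

The statistical error shrinks as 1/sqrt(n_trials), exactly the trade-off Ulam envisioned for neutron diffusion problems on the first fast computers.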

179

Comments on the use of the Monte Carlo method for criticality calculations

As evidenced by recent papers given at Nuclear Criticality Safety Division meetings, the Monte Carlo method has become a very popular computational tool. Its ease of use has undoubtedly been a primary reason for this popularity. This ease of use, however, may lead to a false sense of security when using the method. Guidance on the effective use of the method and some suggestions on how to avoid some of the pitfalls that can occur are presented. (TFD)

Whitesides, G.E.

1975-01-01T23:59:59.000Z

180

Shielding calculations for door thicknesses for megavoltage radiotherapy facilities with mazes are generally straightforward. To simplify the calculations, the standard formalism adopts several approximations relating to the average beam path, scattering coefficients, and the mean energy of the spectrum of scattered radiation. To test the accuracy of these calculations, the Monte Carlo program ITS was applied to this problem by determining the dose and energy spectrum of the radiation at the door for 4- and 10-MV bremsstrahlung beams incident on a phantom at isocenter. This was performed for two mazes, one termed 'standard' and the other a shorter maze where the primary beam is incident on the wall adjacent to the door. The peak of the photon-energy spectrum at the door was found to be the same for both types of maze, independent of primary beam energy, and also, in the case of the conventional maze, of the primary beam orientation. The spectrum was harder for the short maze and for 10 MV vs. 4 MV. The thickness of the lead door for a short maze configuration was 1.5 cm for 10 MV and 1.2 cm for 4 MV, vs. less than about 1 mm for a conventional maze. For the conventional maze, the Monte Carlo calculation predicts the dose at the door to be lower than given by NCRP 49 and NCRP 51 by about a factor of 2 at 4 MV but to be the same at 10 MV. For the short maze, the Monte Carlo predicts the dose to be a factor of 3 lower for 4 MV and about a factor of 1.5 lower for 10 MV. Experimental results support the Monte Carlo findings for the short maze.

Biggs, P.J. (Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston (United States))

1991-10-01T23:59:59.000Z

181

Radiative equilibrium in Monte Carlo radiative transfer using frequency distribution adjustment

The Monte Carlo method is a powerful tool for performing radiative equilibrium calculations, even in complex geometries. The main drawback of the standard Monte Carlo radiative equilibrium methods is that they require iteration, which makes them numerically very demanding. Bjorkman & Wood recently proposed a frequency distribution adjustment scheme, which allows radiative equilibrium Monte Carlo calculations to be performed without iteration, by choosing the frequency of each re-emitted photon such that it corrects for the incorrect spectrum of the previously re-emitted photons. Although the method appears to yield correct results, we argue that its theoretical basis is not completely transparent, and that it is not completely clear whether this technique is an exact rigorous method, or whether it is just a good and convenient approximation. We critically study the general problem of how an already sampled distribution can be adjusted to a new distribution by adding data points sampled from an adjustment distribution. We show that this adjustment is not always possible, and that it depends on the shape of the original and desired distributions, as well as on the relative number of data points that can be added. Applying this theorem to radiative equilibrium Monte Carlo calculations, we provide a firm theoretical basis for the frequency distribution adjustment method of Bjorkman & Wood, and we demonstrate that this method provides the correct frequency distribution through the additional requirement of radiative equilibrium. We discuss the advantages and limitations of this approach, and show that it can easily be combined with the presence of additional heating sources and the concept of photon weighting. However, the method may fail if small dust grains are included... (abridged)

Maarten Baes; Dimitris Stamatellos; Jonathan I. Davies; Anthony P. Whitworth; Sabina Sabatini; Sarah Roberts; Suzanne M. Linder; Rhodri Evans

2005-04-01T23:59:59.000Z
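The feasibility question the abstract raises can be made concrete: if m points were already sampled from distribution p and k more will be drawn from an adjustment distribution q, the combined sample matches target t in expectation only when (m+k)t - mp is nonnegative in every bin. The discrete distributions below are hypothetical, chosen only to exercise both outcomes.

```python
import numpy as np

def adjustment_distribution(p, t, m, k):
    """Distribution q for k new samples so that m old samples from p plus
    k samples from q follow target t in expectation; None if infeasible."""
    p, t = np.asarray(p, float), np.asarray(t, float)
    q_un = (m + k) * t - m * p          # unnormalized adjustment weights
    if np.any(q_un < -1e-12):           # a bin is already over-represented
        return None
    return q_un / q_un.sum()

# Feasible: enough new samples to shift the mixture onto the target
q = adjustment_distribution([0.5, 0.3, 0.2], [0.4, 0.4, 0.2], m=60, k=40)
# Infeasible: too few new samples can ever correct the first bin
q_bad = adjustment_distribution([0.5, 0.3, 0.2], [0.4, 0.4, 0.2], m=90, k=10)
print(q, q_bad)
```

This is the shape-and-relative-count dependence the authors prove: the adjustment exists only when no bin of the original sample already exceeds its target share of the enlarged sample.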

182

Validation of a Monte Carlo Based Depletion Methodology Using HFIR Post-Irradiation Measurements

Science Conference Proceedings (OSTI)

Post-irradiation uranium isotopic atomic densities within the core of the High Flux Isotope Reactor (HFIR) were calculated and compared to uranium mass spectrographic data measured in the late 1960s and early 70s [1]. This study was performed in order to validate a Monte Carlo based depletion methodology for calculating the burn-up dependent nuclide inventory, specifically the post-irradiation uranium

Chandler, David [ORNL; Maldonado, G Ivan [ORNL; Primm, Trent [ORNL

2009-11-01T23:59:59.000Z

183

Science Conference Proceedings (OSTI)

The daily distribution of sulfate concentration over the eastern United States during August 1977 is simulated by a Monte Carlo model using quantized emissions, positioned in accordance with the 1973 EPA SO2 emission inventory. Horizontal ...

D. E. Patterson; R. B. Husar; W. E. Wilson; L. F. Smith

1981-04-01T23:59:59.000Z

184

Science Conference Proceedings (OSTI)

The dispersion and concentration of particles (fluid elements) that are continuously released into a neutral planetary boundary layer is presented. The velocity fluctuations of the particles are generated using a Markov chain–Monte Carlo (MCMC) ...

R. Avila; S. S. Raza

2005-07-01T23:59:59.000Z

185

Science Conference Proceedings (OSTI)

A methodology combining Bayesian inference with Markov chain Monte Carlo (MCMC) sampling is applied to a real accidental radioactive release that occurred on a continental scale at the end of May 1998 near Algeciras, Spain. The source parameters (...

Luca Delle Monache; Julie K. Lundquist; Branko Kosović; Gardar Johannesson; Kathleen M. Dyer; Roger D. Aines; Fotini K. Chow; Rich D. Belles; William G. Hanley; Shawn C. Larsen; Gwen A. Loosmore; John J. Nitao; Gayle A. Sugiyama; Philip J. Vogt

2008-10-01T23:59:59.000Z

186

Science Conference Proceedings (OSTI)

This paper examines the tradeoffs between computational cost and accuracy for two new state-of-the-art codes for computing three-dimensional radiative transfer: a community Monte Carlo model and a parallel implementation of the Spherical ...

Robert Pincus; K. Franklin Evans

2009-10-01T23:59:59.000Z

187

Spatial homogenization of thermal feedback regions in Monte Carlo reactor calculations

An integrated thermal-hydraulic feedback module has previously been developed for the Monte Carlo transport solver, MC21. The module incorporates a flexible input format that allows the user to describe heat transfer and coolant flow paths within the geometric model at any level of spatial detail desired. The effect that varying levels of spatial homogenization of thermal regions have on the accuracy of the Monte Carlo simulations is examined in this study. Six thermal feedback mappings are constructed from the same geometric model of the Calvert Cliffs core. The spatial homogenization of the thermal regions is varied, giving each scheme a different level of detail, and the adequacy of the spatial homogenization is determined based on the eigenvalue produced by each Monte Carlo calculation. The purpose of these numerical experiments is to determine the level of detail necessary to accurately capture the thermal feedback effect on reactivity. Several different core models are considered: axial-flow only, axial and lateral flow, asymmetry due to control rod insertion, and fuel heating (temperature-dependent cross sections). The thermal results generated by the MC21 thermal feedback module are consistent with expectations. Based upon the numerical experiments conducted, it is concluded that the amount of spatial detail necessary to accurately capture the feedback effect on reactivity is relatively small. Homogenization at the assembly level for the Calvert Cliffs PWR model results in a power defect similar to that calculated with individual pin-cells modeled as explicit thermal regions. (authors)

Hanna, B. R.; Gill, D. F.; Griesheimer, D. P. [Bettis Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P.O. Box 79, West Mifflin, PA 15122 (United States)]

2012-07-01T23:59:59.000Z

188

A comparison of the Monte Carlo and the flux gradient method for atmospheric diffusion

In order to model the dispersal of atmospheric pollutants in the planetary boundary layer, various methods of parameterizing turbulent diffusion have been employed. The purpose of this paper is to use a three-dimensional particle-in-cell transport and diffusion model to compare the Markov chain (Monte Carlo) method of statistical particle diffusion with the deterministic flux gradient (K-theory) method. The two methods are heavily used in the study of atmospheric diffusion under complex conditions, with the Monte Carlo method gaining in popularity partly because of its more direct application of turbulence parameters. The basis of comparison is a data set from night-time drainage flow tracer experiments performed by the US Department of Energy Atmospheric Studies in Complex Terrain (ASCOT) program at the Geysers geothermal region in northern California. The Atmospheric Diffusion Particle-In-Cell (ADPIC) model used is the main model in the Lawrence Livermore National Laboratory emergency response program: Atmospheric Release Advisory Capability (ARAC). As a particle model, it can simulate diffusion in both the flux gradient and Monte Carlo modes. 9 refs., 6 figs.

Lange, R.

1990-05-01T23:59:59.000Z
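For constant eddy diffusivity the two parameterizations the paper compares agree exactly: the flux-gradient (K-theory) solution is a Gaussian plume with variance 2Kt, and the equivalent Markov chain random walk takes steps drawn from N(0, 2K dt). The sketch below checks that agreement with illustrative values, not ASCOT data.

```python
import numpy as np

# Monte Carlo (random-walk) particle diffusion vs. the flux-gradient
# (K-theory) prediction for constant eddy diffusivity K. The diffusivity,
# time step, and particle count are illustrative numbers.
rng = np.random.default_rng(7)
K_eddy = 5.0                 # eddy diffusivity, m^2/s
dt, n_steps = 1.0, 100       # time step (s) and number of steps
n_particles = 20000

x = np.zeros(n_particles)
for _ in range(n_steps):
    # each step is an independent turbulent displacement
    x += rng.normal(0.0, np.sqrt(2.0 * K_eddy * dt), n_particles)

sigma2_mc = x.var()
sigma2_k = 2.0 * K_eddy * dt * n_steps    # K-theory plume variance 2*K*t
print(sigma2_mc, sigma2_k)
```

In complex terrain the methods diverge because the Monte Carlo walk can use locally measured turbulence statistics directly, which is the "more direct application of turbulence parameters" the abstract mentions.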

189

Nonequilibrium candidate Monte Carlo: A new tool for efficient equilibrium simulation

Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from both a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.

Nilmeier, Jerome P.; Crooks, Gavin E.; Minh, David D. L.; Chodera, John D.

2011-11-08T23:59:59.000Z
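The work-based acceptance rule can be shown in its simplest, instantaneous-switching limit, where the "protocol" is a single deterministic shift and the nonequilibrium work W reduces to the energy difference, so the rule min(1, exp(-beta*W)) coincides with ordinary Metropolis. A genuine finite-time protocol would instead accumulate W along the driven trajectory; this toy is only meant to show the acceptance criterion.

```python
import numpy as np

# Nonequilibrium-candidate-style acceptance in the instantaneous-switching
# limit for the toy target exp(-x^2/2) with beta = 1. The proposal is a
# symmetric random shift; W = U(x') - U(x) is the work done by the switch.
rng = np.random.default_rng(3)
U = lambda x: 0.5 * x**2
x = 0.0
samples = np.empty(20000)
for i in range(samples.size):
    x_prop = x + rng.normal(0.0, 1.0)      # driven candidate configuration
    W = U(x_prop) - U(x)                   # protocol work (trivial limit)
    if rng.random() < np.exp(-W):          # accept with min(1, e^{-beta W})
        x = x_prop
    samples[i] = x
```

The paper's contribution is precisely what this toy omits: replacing the instantaneous switch with a finite-time driven trajectory whose accumulated work enters the same acceptance formula, preserving the equilibrium distribution while allowing much bolder moves.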

190

Quantum Monte-Carlo method applied to Non-Markovian barrier transmission

In nuclear fusion and fission, fluctuation and dissipation arise due to the coupling of collective degrees of freedom with internal excitations. Close to the barrier, both quantum, statistical and non-Markovian effects are expected to be important. In this work, a new approach based on quantum Monte-Carlo addressing this problem is presented. The exact dynamics of a system coupled to an environment is replaced by a set of stochastic evolutions of the system density. The quantum Monte-Carlo method is applied to systems with quadratic potentials. In all range of temperature and coupling, the stochastic method matches the exact evolution showing that non-Markovian effects can be simulated accurately. A comparison with other theories like Nakajima-Zwanzig or Time-ConvolutionLess ones shows that only the latter can be competitive if the expansion in terms of coupling constant is made at least to fourth order. A systematic study of the inverted parabola case is made at different temperatures and coupling constants. The asymptotic passing probability is estimated in different approaches including the Markovian limit. Large differences with the exact result are seen in the latter case or when only second order in the coupling strength is considered as it is generally assumed in nuclear transport models. On opposite, if fourth order in the coupling or quantum Monte-Carlo method is used, a perfect agreement is obtained.

G. Hupin; D. Lacroix

2010-01-05T23:59:59.000Z

191

A comparison of generalized hybrid Monte Carlo methods with and without momentum flip

Science Conference Proceedings (OSTI)

The generalized hybrid Monte Carlo (GHMC) method combines Metropolis-corrected constant-energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta are negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display a favorable behavior in terms of sampling efficiency, i.e., the traditional GHMC/GSHMC implementations with momentum flip have the advantage of a higher acceptance rate and faster decorrelation of Monte Carlo samples. The difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions and is therefore to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.

Akhmatskaya, Elena [Fujitsu Laboratories of Europe Ltd (FLE), Hayes Park Central, Hayes End Road, Hayes UB4 8FE (United Kingdom); Bou-Rabee, Nawaf [Department of Mathematics, Freie Universitaet Berlin, Arnimallee 2-6, 14195 Berlin (Germany); Reich, Sebastian [Universitaet Potsdam, Institut fuer Mathematik, Am Neuen Palais 10, D-14469 Potsdam (Germany)], E-mail: s.reich@ic.ac.uk

2009-04-01T23:59:59.000Z
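A minimal GHMC step, with partial momentum refreshment, a short molecular dynamics proposal, and the standard momentum flip on rejection, can be sketched for a one-dimensional Gaussian target. This is an illustrative toy, not the authors' molecular dynamics code.

```python
import numpy as np

# Minimal GHMC for the toy target exp(-x^2/2): partial momentum refresh,
# leapfrog proposal, Metropolis correction, and momentum flip on rejection
# (the standard detailed-balance choice the paper seeks to avoid).
rng = np.random.default_rng(11)
gradU = lambda x: x
H = lambda x, p: 0.5 * x**2 + 0.5 * p**2

x, p = 0.0, rng.normal()
alpha, eps, n_leap = 0.7, 0.5, 5   # refresh mixing, step size, MD steps
samples = np.empty(20000)
for i in range(samples.size):
    # partial refreshment preserves the N(0,1) momentum marginal
    p = alpha * p + np.sqrt(1.0 - alpha**2) * rng.normal()
    xn, pn = x, p
    pn -= 0.5 * eps * gradU(xn)    # leapfrog trajectory
    for _ in range(n_leap - 1):
        xn += eps * pn
        pn -= eps * gradU(xn)
    xn += eps * pn
    pn -= 0.5 * eps * gradU(xn)
    if rng.random() < np.exp(H(x, p) - H(xn, pn)):
        x, p = xn, pn              # accept proposal
    else:
        p = -p                     # reject: flip momentum (trajectory reversal)
    samples[i] = x
```

The `p = -p` branch is exactly the trajectory reversal the paper discusses: harmless for sampling efficiency, but disruptive when GHMC is read as thermostated molecular dynamics, which motivates their modified detailed balance condition.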

192

Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-formatted libraries generated using the TALYS-based system were processed into ACE format with the NJOY99.336 code and used as input into the Serpent Monte Carlo neutron transport code to obtain a distribution in keff. The keff distribution obtained was compared with the latest major nuclear data libraries - JEFF-3.1.2, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on chi-square values obtained using the Pu-239 Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably, from 748 to 443 pcm, by applying a more rigid acceptance criterion for accepting random files.

Alhassan, Erwin; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael

2013-01-01T23:59:59.000Z
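The accept/reject step described here can be sketched directly: each random nuclear-data file yields a keff for the benchmark, a chi-square against the measured value is formed, and files above a cutoff are rejected, narrowing the keff spread. The keff values below are synthetic stand-ins drawn from an assumed distribution, since the real ones come from Serpent transport runs.

```python
import numpy as np

# Total Monte Carlo accept/reject sketch: keff values per random file are
# synthetic stand-ins (normal draws), not actual Serpent results.
rng = np.random.default_rng(5)
k_bench, sigma_bench = 1.0000, 0.0050      # benchmark keff and uncertainty
k_files = rng.normal(1.0005, 0.0075, 500)  # keff from 500 random files

chi2 = ((k_files - k_bench) / sigma_bench) ** 2
accepted = k_files[chi2 <= 1.0]            # rigid criterion: chi-square <= 1

spread_all = k_files.std() * 1e5           # spread in pcm, all files
spread_acc = accepted.std() * 1e5          # spread in pcm, accepted files
print(spread_all, spread_acc, accepted.size)
```

Tightening the cutoff discards files whose Pu-239 evaluation disagrees with the Jezebel measurement, which is how the paper shrinks the keff uncertainty from 748 to 443 pcm.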

193

A second-quantized red herring in full configuration-interaction Monte Carlo

Full configuration-interaction quantum Monte Carlo (FCI-QMC) is a Monte Carlo method that allows for exact solution of the ground state of fermionic Hamiltonians (albeit at exponential cost). FCI-QMC involves stochastic projection to the ground state, working in a basis of second- quantized determinants. While a Fermi sign problem still exists within FCI-QMC, it has been suggested that even without annihilation the sign problem is fundamentally distinct from that of more standard techniques such as diffusion Monte Carlo, as a result of working in determinant space. Furthermore, it is widely believed that this distinction is at least partially responsible for the success of FCI-QMC in mitigating the sign problem. In this paper, we show that second quantization is a red herring; the sign problem of FCI-QMC comes from the conventional instability to a bosonic ground state, and in fact FCI-QMC without annihilation can be equated step-by-step to a first-quantized algorithm where anti-symmetry comes only from initi...

Kolodrubetz, Michael

2012-01-01T23:59:59.000Z

194

MC21 analysis of the Nuclear Energy Agency Monte Carlo performance benchmark problem

Due to the steadily decreasing cost and wider availability of large scale computing platforms, there is growing interest in the prospects for the use of Monte Carlo for reactor design calculations that are currently performed using few-group diffusion theory or other low-order methods. To facilitate the monitoring of the progress being made toward the goal of practical full-core reactor design calculations using Monte Carlo, a performance benchmark has been developed and made available through the Nuclear Energy Agency. A first analysis of this benchmark using the MC21 Monte Carlo code was reported on in 2010, and several practical difficulties were highlighted. In this paper, a newer version of MC21 that addresses some of these difficulties has been applied to the benchmark. In particular, the confidence-interval-determination method has been improved to eliminate source correlation bias, and a fission-source-weighting method has been implemented to provide a more uniform distribution of statistical uncertainties. In addition, the Forward-Weighted, Consistent-Adjoint-Driven Importance Sampling methodology has been applied to the benchmark problem. Results of several analyses using these methods are presented, as well as results from a very large calculation with statistical uncertainties that approach what is needed for design applications. (authors)

Kelly, D. J.; Sutton, T. M. [Knolls Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P. O. Box 1072, Schenectady, NY 12301-1072 (United States)]; Wilson, S. C. [Bettis Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P. O. Box 79, West Mifflin, PA 15122-0079 (United States)]

2012-07-01T23:59:59.000Z

195

Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-formatted libraries generated using the TALYS-based system were processed into ACE format with the NJOY99.336 code and used as input into the Serpent Monte Carlo neutron transport code to obtain a distribution in keff. The keff distribution obtained was compared with the latest major nuclear data libraries - JEFF-3.1.2, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on chi-square values obtained using the Pu-239 Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably, from 748 to 443 pcm, by applying a more rigid acceptance criterion for accepting random files.

Erwin Alhassan; Henrik Sjöstrand; Junfeng Duan; Cecilia Gustavsson; Arjan Koning; Stephan Pomp; Dimitri Rochman; Michael Österlund

2013-03-26T23:59:59.000Z

196

Currently, effective reservoir management systems play a very important part in exploiting reservoirs. Fully exploring all the possible events for a petroleum reservoir is a challenge because of the infinite combinations of reservoir parameters. Much is unknown about the underlying reservoir model, which has many uncertain parameters. MCMC (Markov chain Monte Carlo) is a statistically rigorous sampling method, with a stronger theoretical base than other methods. The performance of the MCMC method on high dimensional problems is a timely topic in the statistics field. This thesis suggests a way to quantify uncertainty for high dimensional problems by using the MCMC sampling process within the Bayesian framework. Based on the improved method, this thesis reports a new approach in the use of the continuous MCMC method for automatic history matching. The assimilation of the data in a continuous process is done sequentially rather than simultaneously. In addition, by doing a continuous process, the MCMC method becomes more applicable for the industry: long run times for a single realization are no longer a major obstacle during the sampling process, and newly observed data are incorporated as soon as they become available, leading to a better estimate. The PUNQ-S3 reservoir model is used to test two methods in this thesis: the STATIC (traditional) SIMULATION PROCESS and the CONTINUOUS SIMULATION PROCESS. The continuous process provides continuously updated probabilistic forecasts of well and reservoir performance, accessible at any time. It can be used to optimize long-term reservoir performance at field scale.

Liu, Chang

2008-12-01T23:59:59.000Z

197

The purpose of this study was to determine how well the Monte Carlo transport code FLUKA can simulate a tissue-equivalent proportional counter (TEPC) and produce the expected delta ray events when exposed to high energy heavy ions (HZE) like in the galactic cosmic ray (GCR) environment. Accurate transport codes are desirable because of the high cost of beam time, the inability to measure the mixed field GCR on the ground and the flexibility they offer in the engineering and design process. A spherical TEPC simulating a 1 um site size was constructed in FLUKA and its response was compared to experimental data for an 56Fe beam at 360 MeV/nucleon. The response of several narrow beams at different impact parameters were used to explain the features of the response of the same detector exposed to a uniform field of radiation. Additionally, an investigation was made into the effect of the wall thickness on the response of the TEPC and the range of delta rays in the tissue-equivalent (TE) wall material. A full impact parameter test (from IP = 0 to IP = detector radius) was performed to show that FLUKA produces the expected wall effect. That is, energy deposition in the gas volume can occur even when the primary beam does not pass through the gas volume. A final comparison to experimental data was made for the simulated TEPC exposed to various broad beams in the energy range of 200 - 1000 MeV/nucleon. FLUKA overestimated energy deposition in the gas volume in all cases. The FLUKA results differed from the experimental data by an average of 25.2 % for yF and 12.4 % for yD. It is suggested that this difference can be reduced by adjusting the FLUKA default ionization potential and density correction factors.

Northum, Jeremy Dell

2010-05-01T23:59:59.000Z

198

TOPAS: An innovative proton Monte Carlo platform for research and clinical applications

Science Conference Proceedings (OSTI)

Purpose: While Monte Carlo particle transport has proven useful in many areas (treatment head design, dose calculation, shielding design, and imaging studies) and has been particularly important for proton therapy (due to the conformal dose distributions and a finite beam range in the patient), the available general purpose Monte Carlo codes in proton therapy have been overly complex for most clinical medical physicists. The learning process has large costs not only in time but also in reliability. To address this issue, we developed an innovative proton Monte Carlo platform and tested the tool in a variety of proton therapy applications. Methods: Our approach was to take one of the already-established general purpose Monte Carlo codes and wrap and extend it to create a specialized user-friendly tool for proton therapy. The resulting tool, TOol for PArticle Simulation (TOPAS), should make Monte Carlo simulation more readily available for research and clinical physicists. TOPAS can model a passive scattering or scanning beam treatment head, model a patient geometry based on computed tomography (CT) images, score dose, fluence, etc., save and restart a phase space, provide advanced graphics, and is fully four-dimensional (4D) to handle variations in beam delivery and patient geometry during treatment. A custom-designed TOPAS parameter control system was placed at the heart of the code to meet requirements for ease of use, reliability, and repeatability without sacrificing flexibility. Results: We built and tested the TOPAS code. We have shown that the TOPAS parameter system provides easy yet flexible control over all key simulation areas such as geometry setup, particle source setup, scoring setup, etc. Through design consistency, we have ensured that user experience gained in configuring one component, scorer or filter applies equally well to configuring any other component, scorer or filter.
We have incorporated key lessons from safety management, proactively removing possible sources of user error such as line-ordering mistakes. We have modeled proton therapy treatment examples including the UCSF eye treatment head, the MGH stereotactic alignment in radiosurgery treatment head and the MGH gantry treatment heads in passive scattering and scanning modes, and we have demonstrated dose calculation based on patient-specific CT data. Initial validation results show agreement with measured data and demonstrate the capabilities of TOPAS in simulating beam delivery in 3D and 4D. Conclusions: We have demonstrated TOPAS accuracy and usability in a variety of proton therapy setups. As we are preparing to make this tool freely available for researchers in medical physics, we anticipate widespread use of this tool in the growing proton therapy community.

Perl, J.; Shin, J.; Schuemann, J.; Faddegon, B.; Paganetti, H. [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025 (United States); University of California San Francisco Comprehensive Cancer Center, 1600 Divisadero Street, San Francisco, California 94143-1708 (United States); Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); University of California San Francisco Comprehensive Cancer Center, 1600 Divisadero Street, San Francisco, California 94143-1708 (United States); Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)

2012-11-15T23:59:59.000Z

199

Science Conference Proceedings (OSTI)

Purpose: GATE is a Monte Carlo simulation toolkit based on the Geant4 package, widely used for many medical physics applications, including SPECT and PET image simulation and more recently CT image simulation and patient dosimetry. The purpose of the current study was to calculate dose point kernels (DPKs) using GATE, compare them against reference data, and finally produce a complete dataset of the total DPKs for the most commonly used radionuclides in nuclear medicine. Methods: Patient-specific absorbed dose calculations can be carried out using Monte Carlo simulations. The latest version of GATE extends its applications to Radiotherapy and Dosimetry. Comparison of the proposed method for the generation of DPKs was performed for (a) monoenergetic electron sources, with energies ranging from 10 keV to 10 MeV, (b) beta emitting isotopes, e.g., {sup 177}Lu, {sup 90}Y, and {sup 32}P, and (c) gamma emitting isotopes, e.g., {sup 111}In, {sup 131}I, {sup 125}I, and {sup 99m}Tc. Point isotropic sources were simulated at the center of a sphere phantom, and the absorbed dose was stored in concentric spherical shells around the source. Evaluation was performed with already published studies for different Monte Carlo codes namely MCNP, EGS, FLUKA, ETRAN, GEPTS, and PENELOPE. A complete dataset of total DPKs was generated for water (equivalent to soft tissue), bone, and lung. This dataset takes into account all the major components of radiation interactions for the selected isotopes, including the absorbed dose from emitted electrons, photons, and all secondary particles generated from the electromagnetic interactions. Results: GATE comparison provided reliable results in all cases (monoenergetic electrons, beta emitting isotopes, and photon emitting isotopes). The observed differences between GATE and other codes are less than 10% and comparable to the discrepancies observed among other packages. 
The produced DPKs are in very good agreement with already published data, which allowed us to produce a unique DPK dataset using GATE. The dataset contains the total DPKs for {sup 67}Ga, {sup 68}Ga, {sup 90}Y, {sup 99m}Tc, {sup 111}In, {sup 123}I, {sup 124}I, {sup 125}I, {sup 131}I, {sup 153}Sm, {sup 177}Lu, {sup 186}Re, and {sup 188}Re generated in water, bone, and lung. Conclusions: In this study, the authors have checked GATE's reliability for absorbed dose calculation when transporting different kinds of particles, which indicates its robustness for dosimetry applications. A novel dataset of DPKs is provided, which can be applied in patient-specific dosimetry using analytical point-kernel convolution algorithms.

Papadimitroulas, Panagiotis; Loudos, George; Nikiforidis, George C.; Kagadis, George C. [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 265 04 (Greece) and Department of Medical Instruments Technology, Technological Educational institute of Athens, Ag. Spyridonos Street, Egaleo GR 122 10, Athens (Greece); Department of Medical Instruments Technology, Technological Educational institute of Athens, Ag. Spyridonos Street, Egaleo GR 122 10, Athens (Greece); Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 265 04 (Greece)

2012-08-15T23:59:59.000Z
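The shell-tally scoring this record describes (energy deposits around a point isotropic source, binned into concentric spherical shells and divided by shell mass) can be sketched as follows. This is an illustrative Python sketch, not GATE's implementation; the function name, arguments, and units are our own assumptions.

```python
import numpy as np

def score_dpk(radii, edep, r_max, n_shells, density=1.0):
    """Bin energy deposits `edep` (MeV) recorded at distances `radii`
    (cm) from a point isotropic source into concentric spherical
    shells and divide by shell mass, giving dose per shell (MeV/g).
    A toy stand-in for the DPK scoring described in the abstract."""
    edges = np.linspace(0.0, r_max, n_shells + 1)
    # total energy deposited in each radial shell
    e_shell, _ = np.histogram(radii, bins=edges, weights=edep)
    # shell mass = density * shell volume
    shell_mass = density * 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    return e_shell / shell_mass
```

Dividing by shell mass (rather than volume) yields dose, and total energy is conserved across the bins by construction.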

200

Imaging air Cherenkov telescopes (IACTs) detect the Cherenkov light from extensive air showers (EAS) initiated by very high energy (VHE) gamma-rays impinging on the Earth's atmosphere. Due to the overwhelming background from hadron-induced EAS, the discrimination of the rare gamma-like events is vital. The influence of the geomagnetic field (GF) on the development of EAS can further complicate the imaging air Cherenkov technique. The amount and the angular distribution of Cherenkov light from EAS can be obtained by means of Monte Carlo (MC) simulations. Here we present the results from dedicated MC studies of GF effects on images from gamma-ray initiated EAS for the MAGIC telescope site, where the GF strength is ~40 microtesla. The results from the MC studies suggest that GF effects degrade not only measurements of very low energy gamma-rays below ~100 GeV but also those at TeV energies.

S. C. Commichau; A. Biland; J. L. Contreras; R. de los Reyes; A. Moralejo; J. Sitarek; D. Sobczynska

2008-02-18T23:59:59.000Z

201

RBU: A COMBINED MONTE CARLO REACTOR-BURNUP PROGRAM FOR THE IBM 709

RBU is a digital computer program for the detailed calculation of the neutron, reactivity, and isotopic history of a reactor in which relatively exact models of the geometry and physical processes are included to permit reliable predictions of fuel costs and reactor performance. The program uses the Monte Carlo method to obtain the fine structure of the neutron flux in three space dimensions and energy. Using this fine structure, cross sections are averaged over space and energy to obtain the neutronic properties for equivalent homogeneous one-dimensional regions of space and ranges of energy. These are used in diffusion calculations to obtain the macroscopic flux distribution throughout the reactor. The consumption and production of isotopes are computed for a time step by the solution of sets of partial differential equations involving both the macroscopic and microscopic fluxes. With the new concentrations, diffusion calculations are performed again to obtain macroscopic fluxes for the next time step. At variable intervals, Monte Carlo calculations are again performed to determine the changes in microscopic flux distributions. The cycle is repeated until conditions on the reactivity or other properties dictate the end of the calculation. Programmed control rod manipulation may be included in the calculation. The Monte Carlo, diffusion, or burnup portions of the program may be used separately. The unresolved resonance range is treated by random selection of resonance parameters from appropriate distributions using the Doppler-broadened single-level Breit-Wigner formula. Resolved resonances are treated similarly with the exception that specific values of the resonance parameters are used. The effects of molecular binding and thermal motion of the nuclei on near-thermal scattering are treated by a simple model capable of incorporating the pertinent physical theory and data. (auth)

Leshan, E.J.; Burr, J.R.; Temme, M.; Thompson, G.T.; Triplett, J.R.

1959-09-30T23:59:59.000Z
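The alternating flux/depletion cycle in the RBU record can be caricatured by a single-nuclide depletion loop. This is a toy sketch under stated assumptions (constant-power flux normalization, one absorber, invented names), not the RBU scheme itself.

```python
import math

def burnup_cycle(n_fuel, sigma_a, power, n_steps, dt):
    """One-nuclide caricature of the flux/depletion cycle: at each
    coarse time step a stand-in flux is chosen to hold power constant,
    then the fuel is depleted over dt via dN/dt = -sigma_a * phi * N."""
    history = [n_fuel]
    for _ in range(n_steps):
        phi = power / (sigma_a * n_fuel)   # flux rises as fuel depletes
        n_fuel *= math.exp(-sigma_a * phi * dt)
        history.append(n_fuel)
    return history
```

In the real code the flux update is a Monte Carlo plus diffusion calculation and the depletion step is a coupled system of isotopic equations; the skeleton above only shows the alternation.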

202

A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System

Science Conference Proceedings (OSTI)

The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma- ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air- over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v 1.5, is the successor to the original MASH v 1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.

C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler

1998-10-01T23:59:59.000Z
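The coupling step described in the MASH record, folding the coupling-surface fluence with the adjoint "dose importance" to obtain the detector response, is in its simplest discretized form an element-wise product summed over surface elements and energy groups. A minimal sketch; the (n_surface, n_groups) array shapes are an illustrative assumption, not MASH's data layout.

```python
import numpy as np

def fold_dose(fluence, importance):
    """Simplest discretized coupling: detector response
    D = sum over surface elements and energy groups of
    (forward fluence) x (adjoint dose importance)."""
    return float(np.sum(np.asarray(fluence) * np.asarray(importance)))
```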

203

First Results From GLAST-LAT Integrated Towers Cosmic Ray Data Taking And Monte Carlo Comparison

The GLAST Large Area Telescope (LAT) is a gamma-ray telescope instrumented with silicon-strip detector planes and sheets of converter, followed by a calorimeter (CAL) and surrounded by an anticoincidence system (ACD). This instrument is sensitive to gamma rays in the energy range between 20 MeV and 300 GeV. At present, the first towers have been integrated and pre-launch data taking with cosmic ray muons is being performed. The results from the data analysis carried out during LAT integration will be discussed and a comparison with the predictions from the Monte Carlo simulation will be shown.

Brigida, M.; Caliandro, A.; Favuzzi, C.; Fusco, P.; Gargano, F.; Giordano, F.; Giglietto, N.; Loparco, F.; Marangelli, B.; Mazziotta, M.N.; Mirizzi, N.; Raino, S.; Spinelli, P.; /Bari U. /INFN, Bari

2007-02-15T23:59:59.000Z

204

A Hybrid (Monte-Carlo/Deterministic) Approach for Multi-Dimensional Radiation Transport

A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.

Guillaume Bal; Anthony Davis; Ian Langmore

2011-05-07T23:59:59.000Z
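The variance-reduction idea in this record, drawing samples from a density shaped by an approximate adjoint solution and reweighting, is standard importance sampling. A minimal sketch under stated assumptions: here the density `g` is user-supplied rather than derived from an adjoint calculation, and all names are our own.

```python
import math
import random

def importance_estimate(f, sample_g, pdf_g, n, rng):
    """Importance-sampled estimate of the integral of f over (0,1):
    draw x from the density g and average f(x)/g(x).  In the hybrid
    scheme above, g would be built from the approximate deterministic
    adjoint; here it is any user-supplied density."""
    return sum(f(x) / pdf_g(x) for x in (sample_g(rng) for _ in range(n))) / n
```

When g is proportional to f the weights are constant and the estimator has zero variance, which is the limiting case an adjoint-shaped density aims to approach.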

205

Extrapolation method in the Monte Carlo Shell Model and its applications

We demonstrate how the energy-variance extrapolation method works using the sequence of approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking {sup 56}Ni in the pf-shell as an example. The extrapolation method is shown to work well even in cases where the MCSM converges slowly, such as {sup 72}Ge in the f5pg9-shell. The structure of {sup 72}Se is also studied, including a discussion of the shape-coexistence phenomenon.

Shimizu, Noritaka; Abe, Takashi [Department of Physics, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); Utsuno, Yutaka [Advanced Science Research Center, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1195 (Japan); Mizusaki, Takahiro [Institute of Natural Sciences, Senshu University, Tokyo, 101-8425 (Japan); Otsuka, Takaharu [Department of Physics, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); Center for Nuclear Study, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, Michigan (United States); Honma, Michio [Center for Mathematical Sciences, Aizu University, Aizu-Wakamatsu, Fukushima 965-8580 (Japan)

2011-05-06T23:59:59.000Z
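The extrapolation step this record relies on can be sketched directly: to first order the energy of an approximate wave function is linear in its energy variance <H^2> - <H>^2, so a straight-line fit over the MCSM sequence, extrapolated to zero variance, estimates the exact energy. A minimal sketch with invented data shapes.

```python
import numpy as np

def extrapolate_energy(variances, energies):
    """First-order energy-variance extrapolation: fit E ~ E0 + c*var
    over a sequence of approximate wave functions and return the
    intercept E0, the estimate at zero energy variance."""
    slope, intercept = np.polyfit(variances, energies, 1)
    return intercept
```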

206

Characterisation of radiation damage in silicon photomultipliers with a Monte Carlo model

Measured response functions and low photon yield spectra of silicon photomultipliers (SiPM) were compared to multi-photoelectron pulse-height distributions generated by a Monte Carlo model. Characteristic parameters for SiPM were derived. The devices were irradiated with 14 MeV electrons at the Mainz microtron MAMI. It is shown that the first noticeable damage consists of an increase in the rate of dark pulses and the loss of uniformity in the pixel gains. Higher radiation doses reduced also the photon detection efficiency. The results are especially relevant for applications of SiPM in fibre detectors at high luminosity experiments.

S. Sanchez Majos; P. Achenbach; J. Pochodzalla

2008-05-27T23:59:59.000Z
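A multi-photoelectron pulse-height distribution of the kind compared against data in this record can be generated with a very small model: Poisson photoelectrons plus Poisson dark pulses, each fired pixel contributing a Gaussian-smeared gain. This is a toy sketch, not the authors' model; parameter names and the omission of crosstalk and afterpulsing are simplifying assumptions.

```python
import numpy as np

def sipm_pulse_heights(mean_pe, gain, gain_sigma, dark_mean, n_events, rng):
    """Toy multi-photoelectron pulse-height generator: the number of
    fired pixels is Poisson signal plus Poisson dark counts, and each
    pixel contributes a Gaussian-smeared single-pixel gain."""
    n_fired = rng.poisson(mean_pe, n_events) + rng.poisson(dark_mean, n_events)
    return np.array([rng.normal(gain, gain_sigma, n).sum() for n in n_fired])
```

Increasing `dark_mean` or `gain_sigma` reproduces, qualitatively, the first radiation-damage effects the abstract reports: a higher dark-pulse rate and a loss of pixel-gain uniformity.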

207

The paper illustrates the development and utilization of an annual chronological load curve for each load bus in a composite generation and transmission system, and a sequential Monte Carlo simulation approach for composite system reliability assessment. Antithetic variates have been applied to the simulation model as a variance reduction technique to increase the efficiency of the simulation. An approximate method using a load duration curve of the system load and an enumeration process have been applied to the developed load model, and the results are compared in this paper.

Sankarakrishnan, A.; Billinton, R. [Univ. of Saskatchewan, Saskatoon, Saskatchewan (Canada). Power Systems Research Group

1995-08-01T23:59:59.000Z
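The antithetic-variates technique named in this record is simple to state: sample in negatively correlated pairs (u, 1-u) so that over- and under-estimates partially cancel, reducing variance for monotone integrands. A minimal sketch; the function names are our own.

```python
import random

def antithetic_mean(f, n_pairs, rng):
    """Antithetic-variates estimator of E[f(U)], U ~ Uniform(0,1):
    average f over the negatively correlated pairs (u, 1-u)."""
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        total += 0.5 * (f(u) + f(1.0 - u))
    return total / n_pairs
```

For a linear f each pair already averages to the exact mean, the extreme case of the variance reduction.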

208

SIM-RIBRAS: A Monte-Carlo simulation package for RIBRAS system

Science Conference Proceedings (OSTI)

SIM-RIBRAS is a Root-based Monte-Carlo simulation tool designed to help RIBRAS users with experiment planning and with enhancing and characterizing the experimental setup. It is divided into two main programs: CineRIBRAS, which addresses beam kinematics, and SolFocus, which addresses beam optics. SIM-RIBRAS replaces other methods and programs used in the past, providing more complete and accurate results while requiring much less manual labour. Moreover, the user can easily modify the codes, adapting them to the specific requirements of an experiment.

Leistenschneider, E.; Lepine-Szily, A.; Lichtenthaeler, R. [Departamento de Fisica Nuclear, Instituto de Fisica, Universidade de Sao Paulo (Brazil)

2013-05-06T23:59:59.000Z

209

ETRANMS: a one-dimensional Monte Carlo electron/photon transport code for multimaterial targets

ETRANMS is an LLL-modified version of the one-dimensional electron/photon transport code ETRAN 15 developed at the National Bureau of Standards. The major modifications include the use of LLL photon cross sections and the application to multislab, multimaterial targets. The code uses Monte Carlo sampling techniques to calculate electron and photon transport and energy and charge deposition within target material subjected to electron or photon radiation. It has been programmed to be a very rapid running, user-oriented code for use on LLL's CDC 7600 computers. (auth)

Kovar, F.R.

1973-11-30T23:59:59.000Z

210

A 3D Monte Carlo Photoionization Code for Modeling Diffuse Ionized Gas

We have developed a three-dimensional Monte Carlo photoionization code tailored for the study of Galactic H II regions and the percolation of ionizing photons in diffuse ionized gas. We describe the code, our calculation of photoionization, heating, and cooling, and the approximations we have employed for the low density H II regions we wish to study. Our code gives results in agreement with the Lexington H II region benchmarks. We show an example of a 2D shadowed region and point out the very significant effect that diffuse radiation produced by recombinations of helium has on the temperature within the shadow.

Wood, K; Ercolano, B

2004-01-01T23:59:59.000Z
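The basic propagation step of a Monte Carlo photoionization code like the one this record describes is sampling a photon packet's free path from the exponential attenuation law. A minimal sketch for a uniform medium; the opacity symbol and names are our own assumptions.

```python
import math
import random

def distance_to_interaction(kappa, rng):
    """Sample a photon packet's free path in a uniform medium with
    total opacity `kappa` (per unit length): optical depth
    tau = -ln(1 - u), path length s = tau / kappa."""
    return -math.log(1.0 - rng.random()) / kappa
```

The sampled paths have mean 1/kappa, the mean free path; in an inhomogeneous grid the same sampled tau is instead accumulated cell by cell.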

211

A lattice-Monte Carlo approach was developed to simulate ferroelectric domain behavior. The model utilizes a Hamiltonian for the total energy that includes electrostatic terms (involving dipole-dipole interactions, local polarization gradients, and applied electric field), and elastic strain energy. The contributions of these energy components to the domain structure and to the overall applied field response of the system were examined. In general, the model exhibited domain structure characteristics consistent with those observed in a tetragonally distorted ferroelectric. Good qualitative agreement between the appearance of simulated electrical hysteresis loops and those characteristic of real ferroelectric materials was found.

POTTER JR.,BARRETT G.; TUTTLE,BRUCE A.; TIKARE,VEENA

2000-04-04T23:59:59.000Z
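The lattice Monte Carlo sampling this record describes rests on the standard Metropolis acceptance criterion for an attempted dipole update against the total-energy Hamiltonian. A minimal sketch of that criterion; the surrounding lattice sweep and the Hamiltonian terms are omitted.

```python
import math
import random

def metropolis_accept(delta_e, kT, rng):
    """Metropolis acceptance test: always accept moves that do not
    raise the total energy, and accept uphill moves with
    probability exp(-delta_e / kT)."""
    return delta_e <= 0.0 or rng.random() < math.exp(-delta_e / kT)
```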

212

We present the achievements of the last years of the experimental and theoretical groups working on hadronic cross section measurements at the low-energy e+e- colliders in Beijing, Frascati, Ithaca, Novosibirsk, Stanford and Tsukuba and on tau decays. We sketch the prospects in these fields for the years to come. We emphasise the status and the precision of the Monte Carlo generators used to analyse the hadronic cross section measurements obtained both with energy scans and with radiative return, to determine luminosities and tau decays. The radiative corrections fully or approximately implemented in the various codes and the contribution of the vacuum polarisation are discussed.

Actis, S; Arbuzov, A; Balossini, G; Beltrame, P; Bignamini, C; Bonciani, R; Carloni Calame, C M; Cherepanov, V; Czakon, M; Czyz, H; Denig, A; Eidelman, S; Fedotovich, G V; Ferroglia, A; Gluza, J; Grzelinska, A; Gunia, M; Hafner, A; Ignatov, F; Jadach, S; Jegerlehner, F; Kalinowski, A; Kluge, W; Korchin, A; Kuhn, J H; Kuraev, E A; Lukin, P; Mastrolia, P; Montagna, G; Muller, S E; Nguyen, F; Nicrosini, O; Nomura, D; Pakhlova, G; Pancheri, G; Passera, M; Penin, A; Piccinini, F; Placzek, W; Przedzinski, T; Remiddi, E; Riemann, T; Rodrigo, G; Roig, P; Shekhovtsova, O; Shen, C P; Sibidanov, A L; Teubner, T; Trentadue, L; Venanzoni, G; van der Bij, J J; Wang, P; Ward, B F L; Was, Z; Worek, M; Yuan, C Z

2010-01-01T23:59:59.000Z

213

The energy deposition characteristics of heavy ions vary substantially compared to those of photons. Many radiation biology studies have compared the damaging effects of different types of radiation to establish relative biological effectiveness among them. These studies are dependent on cell type, biological endpoint, radiation type, dose, and dose rate. The radiation field found in space is much more complicated than that simulated in most experiments, both in terms of dose rate and in the highly mixed field of radiative particles encompassing a broad spectrum of energies. To establish better estimates for radiation risks on long-term, deep space missions, the damaging ability of heavy ions requires further understanding. Track structure studies provide significant details about the spatial distribution of energy deposition events in and around the sensitive targets of a mammalian cell. The damage imparted by one heavy ion relative to another can be established by modeling the track structures of ions that make up the galactic cosmic ray (GCR) spectrum and emphasizing biologically relevant target geometries. This research was undertaken to provide a better understanding of the damaging ability of GCR at the cellular level. By comparing ions with equal stopping power values, the differences in track structure will illuminate variations in cell particle traversals and ionization density within cell nuclei. For a cellular target, increased particle traversals, along with increased ionization density, are key identifiers for increased damaging ability. Performing Monte Carlo simulations with the computer code FLUKA, this research will provide cellular dosimetry data and detail the track structure of the ions. As shown in radiobiology studies, increased ionizations within a cell nucleus generally lead to increased DNA breaks and increased free radical production, resulting in increased carcinogenesis and cell death.
The spatial distribution of dose surrounding ion tracks is compared for inter- and intracellular regions. A comparison can be made for many different ions based upon dose and particle fluence across those different regions to predict relative damaging ability. This information can be used to improve estimates for radiation quality and dose equivalent from the space radiation environment.

Cox, Bradley

2011-08-01T23:59:59.000Z

214

Basic physical and chemical information needed for development of Monte Carlo codes

It is important to view track structure analysis as an application of a branch of theoretical physics (i.e., statistical physics and physical kinetics in the language of the Landau school). Monte Carlo methods and transport equation methods represent two major approaches. In either approach, it is of paramount importance to use as input the cross section data that best represent the elementary microscopic processes. Transport analysis based on unrealistic input data must be viewed with caution, because results can be misleading. Work toward establishing the cross section data, which demands a wide scope of knowledge and expertise, is being carried out through extensive international collaborations. In track structure analysis for radiation biology, the need for cross sections for the interactions of electrons with DNA and neighboring protein molecules seems to be especially urgent. Finally, it is important to interpret results of Monte Carlo calculations fully and adequately. To this end, workers should document input data as thoroughly as possible and report their results in detail in many ways. Workers in analytic transport theory are then likely to contribute to the interpretation of the results.

Inokuti, M.

1993-08-01T23:59:59.000Z

215

Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class supercomputer.

Ibrahim, Ahmad M [ORNL; Wilson, P. [University of Wisconsin; Sawan, M. [University of Wisconsin; Mosher, Scott W [ORNL; Peplow, Douglas E. [ORNL; Grove, Robert E [ORNL

2013-01-01T23:59:59.000Z
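The core of the macromaterial idea named in this record is a volume-fraction-weighted mix of the cross sections of the materials that overlap a deterministic mesh element. A minimal sketch; the production implementations are more elaborate, and the function name is our own.

```python
def macromaterial_xs(cross_sections, volume_fractions):
    """Volume-fraction-weighted homogenized cross section for a mesh
    element that straddles several materials."""
    assert abs(sum(volume_fractions) - 1.0) < 1e-12
    return sum(x * v for x, v in zip(cross_sections, volume_fractions))
```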

216

Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations

Science Conference Proceedings (OSTI)

The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.

Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL

2013-01-01T23:59:59.000Z
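The eigenvalue sensitivity coefficient this record is concerned with, S = (dk/k)/(dsigma/sigma), can be checked by brute force with two perturbed eigenvalue calculations. A minimal central-difference sketch; adjoint-based estimators such as CLUTCH or IFP obtain the same quantity from a single run, and the names here are our own.

```python
def eigenvalue_sensitivity(k_plus, k_minus, k_ref, rel_pert):
    """Central-difference eigenvalue sensitivity coefficient
    S = (dk/k) / (dsigma/sigma), from eigenvalues computed with a
    cross section perturbed by +/- rel_pert (relative)."""
    return (k_plus - k_minus) / (2.0 * rel_pert * k_ref)
```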

217

Quantum Monte Carlo algorithms for electronic structure at the petascale; the endstation project.

Over the past two decades, continuum quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles. By solving the Schrodinger equation through a stochastic projection, it achieves the greatest accuracy and reliability of the methods available for physical systems containing more than a few quantum particles. QMC enjoys more favorable scaling than quantum chemical methods, with a computational effort that grows with the second or third power of system size. This accuracy and scalability have enabled scientific discovery across a broad spectrum of disciplines. The current methods perform very efficiently at the terascale. The quantum Monte Carlo Endstation project is a collaborative effort among researchers in the field to develop a new generation of algorithms, and their efficient implementations, that will take advantage of the upcoming petaflop architectures. Some aspects of these developments are discussed here. These tools will expand the accuracy, efficiency, and range of applicability of QMC and enable us to tackle challenges which are currently out of reach. The methods will be applied to several important problems including electronic and structural properties of water, transition metal oxides, nanosystems, and ultracold atoms.

Kim, J; Ceperley, D M; Purwanto, W; Walter, E J; Krakauer, H; Zhang, S W; Kent, P.R. C; Hennig, R G; Umrigar, C; Bajdich, M; Kolorenc, J; Mitas, L

2008-10-01T23:59:59.000Z

218

A Deterministic-Monte Carlo Hybrid Method for Time-Dependent Neutron Transport Problems

Science Conference Proceedings (OSTI)

A new deterministic-Monte Carlo hybrid solution technique is derived for the time-dependent transport equation. This new approach is based on dividing the time domain into a number of coarse intervals and expanding the transport solution in a series of polynomials within each interval. The solutions within each interval can be represented in terms of arbitrary source terms by using precomputed response functions. In the current work, the time-dependent response function computations are performed using the Monte Carlo method, while the global time-step march is performed deterministically. This work extends previous work by coupling the time-dependent expansions to space- and angle-dependent expansions to fully characterize the 1D transport response/solution. More generally, this approach represents an incremental extension of the steady-state coarse-mesh transport method that is based on global-local decompositions of large neutron transport problems. A homogeneous slab problem is discussed as an example of the new developments.

Justin Pounders; Farzad Rahnema

2001-10-01T23:59:59.000Z
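The global march this record describes, precomputed response functions applied deterministically interval by interval, has a simple linear-response skeleton. A minimal sketch; here R and S are stand-in matrices for the Monte Carlo-computed response functions, and the state vector is an invented discretization.

```python
import numpy as np

def response_march(R, S, sources, psi0):
    """Coarse-interval time march with precomputed response matrices:
    the end-of-interval state is a linear response to the incoming
    state and the interval source, psi_{k+1} = R psi_k + S q_k."""
    psi = np.asarray(psi0, float)
    history = [psi]
    for q in sources:
        psi = R @ psi + S @ np.asarray(q, float)
        history.append(psi)
    return history
```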

219

Science Conference Proceedings (OSTI)

Standard modeling approaches can produce the most likely values of the formation constants of metal-ligand complexes if a particular set of species containing the metal ion is known or assumed to exist in solution equilibrium with complexing ligands. Identifying the most likely set of species when more than one set is plausible is a more difficult problem to address quantitatively. A Monte Carlo method of data analysis is described that measures the relative abilities of different speciation models to fit optical spectra of open-shell actinide ions. The best model(s) can be identified from among a larger group of models initially judged to be plausible. The method is demonstrated by analyzing the absorption spectra of aqueous Pu(IV) titrated with nitrate ion at constant 2 molal ionic strength in aqueous perchloric acid. The best speciation model supported by the data is shown to include three Pu(IV) species with nitrate coordination numbers 0, 1, and 2. Formation constants are {beta}{sub 1}=3.2{+-}0.5 and {beta}{sub 2}=11.2{+-}1.2, where the uncertainties are 95% confidence limits estimated by propagating raw data uncertainties using Monte Carlo methods. Principal component analysis independently indicates three Pu(IV) complexes in equilibrium. (c) 2000 Society for Applied Spectroscopy.

Berg, John M. [Nuclear Materials Technology Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Veirs, D. Kirk [Nuclear Materials Technology Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Vaughn, Randolph B. [Nuclear Materials Technology Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Cisneros, Michael R. [Nuclear Materials Technology Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Smith, Coleman A. [Nuclear Materials Technology Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)

2000-06-01T23:59:59.000Z
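The uncertainty-propagation step this record uses for its 95% confidence limits, perturbing the raw data and refitting many times, can be sketched generically. This is an illustrative resampling sketch, not the authors' analysis; `fit` stands for any data-to-parameter routine, and the Gaussian noise model is an assumption.

```python
import numpy as np

def mc_confidence_interval(fit, data, sigma, n_trials, rng):
    """Monte Carlo propagation of raw-data uncertainties: perturb the
    data with Gaussian noise of width `sigma`, refit the parameter,
    and take the 2.5th/97.5th percentiles of the refit values as a
    95% confidence interval."""
    vals = [fit(data + rng.normal(0.0, sigma, data.shape))
            for _ in range(n_trials)]
    return np.percentile(vals, [2.5, 97.5])
```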

220

Configuration-interaction Monte Carlo method and its application to the trapped unitary Fermi gas

We develop a quantum Monte Carlo method to estimate the ground-state energy of a fermionic many-particle system in the configuration-interaction shell model approach. The fermionic sign problem is circumvented by using a guiding wave function in Fock space. The method provides an upper bound on the ground-state energy whose tightness depends on the choice of the guiding wave function. We argue that the antisymmetric geminal product class of wave functions is a good choice for guiding wave functions. We demonstrate our method for the trapped two-species fermionic cold atom system in the unitary regime of infinite scattering length using the particle-number projected Hartree-Fock-Bogoliubov wave function as the guiding wave function. We estimate the ground-state energy and energy-staggering pairing gap as a function of the number of particles. Our results compare favorably with exact numerical diagonalization results and with previous coordinate-space Monte Carlo calculations.

Mukherjee, Abhishek

2013-01-01T23:59:59.000Z

221

Configuration-interaction Monte Carlo method and its application to the trapped unitary Fermi gas

We develop a quantum Monte Carlo method to estimate the ground-state energy of a fermionic many-particle system in the configuration-interaction shell model approach. The fermionic sign problem is circumvented by using a guiding wave function in Fock space. The method provides an upper bound on the ground-state energy whose tightness depends on the choice of the guiding wave function. We argue that the antisymmetric geminal product class of wave functions is a good choice for guiding wave functions. We demonstrate our method for the trapped two-species fermionic cold atom system in the unitary regime of infinite scattering length using the particle-number projected Hartree-Fock-Bogoliubov wave function as the guiding wave function. We estimate the ground-state energy and energy-staggering pairing gap as a function of the number of particles. Our results compare favorably with exact numerical diagonalization results and with previous coordinate-space Monte Carlo calculations.

Abhishek Mukherjee; Y. Alhassid

2013-04-05T23:59:59.000Z

222

Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII

Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.

McKinney, Gregg W [Los Alamos National Laboratory

2012-07-17T23:59:59.000Z

223

Surface Structures of Cubo-octahedral Pt-Mo Catalyst Nanoparticles from Monte Carlo Simulations

The surface structures of cubo-octahedral Pt-Mo nanoparticles have been investigated using the Monte Carlo method and modified embedded atom method potentials that we developed for Pt-Mo alloys. The cubo-octahedral Pt-Mo nanoparticles are constructed with disordered fcc configurations, with sizes from 2.5 to 5.0 nm, and with Pt concentrations from 60 to 90 at. percent. The equilibrium Pt-Mo nanoparticle configurations were generated through Monte Carlo simulations allowing both atomic displacements and element exchanges at 600 K. We predict that the Pt atoms weakly segregate to the surfaces of such nanoparticles. The Pt concentrations in the surface are calculated to be 5 to 14 at. percent higher than the Pt concentrations of the nanoparticles. Moreover, the Pt atoms preferentially segregate to the facet sites of the surface, while the Pt and Mo atoms tend to alternate along the edges and vertices of these nanoparticles. We found that decreasing the size or increasing the Pt concentration leads to higher Pt concentrations but fewer Pt-Mo pairs in the Pt-Mo nanoparticle surfaces.

Wang, Guofeng; Van Hove, M.A.; Ross, P.N.; Baskes, M.I.

2005-03-31T23:59:59.000Z

224

An Evaluation of Monte Carlo Simulations of Neutron Multiplicity Measurements of Plutonium Metal

Science Conference Proceedings (OSTI)

In January 2009, Sandia National Laboratories conducted neutron multiplicity measurements of a polyethylene-reflected plutonium metal sphere. Over the past 3 years, those experiments have been collaboratively analyzed using Monte Carlo simulations conducted by University of Michigan (UM), Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and North Carolina State University (NCSU). Monte Carlo simulations of the experiments consistently overpredict the mean and variance of the measured neutron multiplicity distribution. This paper presents a sensitivity study conducted to evaluate the potential sources of the observed errors. MCNPX-PoliMi simulations of plutonium neutron multiplicity measurements exhibited systematic over-prediction of the neutron multiplicity distribution. The over-prediction tended to increase with increasing multiplication. MCNPX-PoliMi had previously been validated against only very low multiplication benchmarks. We conducted sensitivity studies to try to identify the cause(s) of the simulation errors; we eliminated the potential causes we identified, except for Pu-239 ν̄. A very small change (-1.1%) in the Pu-239 ν̄ dramatically improved the accuracy of the MCNPX-PoliMi simulation for all 6 measurements. This observation is consistent with the trend observed in the bias exhibited by the MCNPX-PoliMi simulations: a very small error in ν̄ is 'magnified' by increasing multiplication. We applied a scalar adjustment to Pu-239 ν̄ (independent of neutron energy); an adjustment that depends on energy is probably more appropriate.

Mattingly, John [North Carolina State University]; Miller, Eric [University of Michigan]; Solomon, Clell J. Jr. [Los Alamos National Laboratory]; Dennis, Ben [University of Michigan]; Meldrum, Amy [University of Michigan]; Clarke, Shaun [University of Michigan]; Pozzi, Sara [University of Michigan]

2012-06-21T23:59:59.000Z

225

Massively parallel Monte Carlo for many-particle simulations on GPUs

Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
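
The serial high-precision runs used for verification follow the textbook hard-disk Metropolis scheme, in which detailed balance holds because a trial displacement is accepted only if it creates no overlap. A minimal sketch of that baseline (the lattice start, box size, and step size are illustrative assumptions, not the paper's settings):

```python
import math
import random

def metropolis_hard_disks(n_side=6, spacing=1.4, steps=2000, delta=0.1, seed=1):
    """Serial Metropolis sampling for hard disks in a periodic square box.

    A trial displacement is accepted only if the moved disk overlaps no other
    disk; this acceptance rule trivially satisfies detailed balance, the
    property the paper's massively parallel scheme is designed to preserve.
    """
    random.seed(seed)
    sigma = 1.0                      # disk diameter
    n = n_side * n_side
    box = n_side * spacing           # spacing > sigma gives an overlap-free start
    disks = [((i + 0.5) * spacing, (j + 0.5) * spacing)
             for i in range(n_side) for j in range(n_side)]

    def overlaps(k, x, y):
        for m, (xm, ym) in enumerate(disks):
            if m == k:
                continue
            dx = (x - xm + box / 2) % box - box / 2   # minimum-image distance
            dy = (y - ym + box / 2) % box - box / 2
            if dx * dx + dy * dy < sigma * sigma:
                return True
        return False

    accepted = 0
    for _ in range(steps):
        k = random.randrange(n)
        x = (disks[k][0] + random.uniform(-delta, delta)) % box
        y = (disks[k][1] + random.uniform(-delta, delta)) % box
        if not overlaps(k, x, y):    # hard-core interaction: reject any overlap
            disks[k] = (x, y)
            accepted += 1
    return disks, box, accepted / steps
```

Every accepted configuration is overlap-free by construction, which is part of why the hard-disk equation of state is such a sensitive probe of small violations of the balance conditions.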

Joshua A. Anderson; Eric Jankowski; Thomas L. Grubb; Michael Engel; Sharon C. Glotzer

2012-11-07T23:59:59.000Z

226

Exploiting symmetries for exponential error reduction in path integral Monte Carlo

The path integral of a quantum system with an exact symmetry can be written as a sum of functional integrals each giving the contribution from quantum states with definite symmetry properties. We propose a strategy to compute each of them, normalized to the one with vacuum quantum numbers, by a Monte Carlo procedure whose cost increases power-like with the time extent of the lattice. This is achieved thanks to a multi-level integration scheme, inspired by the transfer matrix formalism, which exploits the symmetry and the locality in time of the underlying statistical system. As a result the cost of computing the lowest energy level in a given channel, its multiplicity and its matrix elements is exponentially reduced with respect to the standard path-integral Monte Carlo. We test the strategy with a one-dimensional harmonic oscillator, by computing the ratio of the parity odd over the parity even functional integrals and the two-point correlation function. The cost of the simulations scales as expected. In par...

Della Morte, Michele

2009-01-01T23:59:59.000Z

227

Electron energy losses near pulsar polar caps: a Monte Carlo approach

We use a Monte Carlo approach to study the energetics of electrons accelerated in a pulsar polar gap. As energy-loss mechanisms we consider magnetic Compton scattering of thermal X-ray photons and curvature radiation. The results are compared with previous calculations which assumed that changes of electron energy occurred smoothly according to approximations for the average energy loss rate due to the Compton scattering. We confirm a general dependence of the efficiency of electron energy losses due to the inverse Compton mechanism on the temperature and size of a thermal polar cap and on the pulsar magnetic field. However, we show that trajectories of electrons in energy-altitude space as calculated in the smooth way do not always coincide with averaged Monte Carlo behaviour. In particular, for pulsars with high magnetic field strength ($B_{pc} > 3\times 10^{12}$ G) and low thermal polar cap temperatures ($T < 5\times 10^6$ K) final electron Lorentz factors computed with the two methods may differ by a few orders of magnitude. We discuss consequences for particular objects with identified thermal X-ray spectral components like Geminga, Vela, and PSR B1055-52.

J. Dyks; B. Rudak

2000-03-07T23:59:59.000Z

228

Science Conference Proceedings (OSTI)

The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic Uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)

Bianchini, G.; Burgio, N.; Carta, M. [ENEA C.R. CASACCIA, via Anguillarese, 301, 00123 S. Maria di Galeria Roma (Italy); Peluso, V. [ENEA C.R. BOLOGNA, Via Martiri di Monte Sole, 4, 40129 Bologna (Italy); Fabrizio, V.; Ricci, L. [Univ. of Rome La Sapienza, C/o ENEA C.R. CASACCIA, via Anguillarese, 301, 00123 S. Maria di Galeria Roma (Italy)

2012-07-01T23:59:59.000Z

229

RunMC is an object-oriented framework aimed at generating and analysing high-energy collisions of elementary particles using Monte Carlo simulations. This package, based on C++, the main programming language adopted by CERN for the LHC experiments, provides a common interface to different Monte Carlo models using modern physics libraries. Physics calculations (projects) can easily be loaded and saved as external modules. This simplifies the development of complicated calculations for high-energy physics in large collaborations. This desktop program is open-source licensed and is available on the LINUX and Windows/Cygwin platforms.

S. Chekanov

2004-11-05T23:59:59.000Z

230

Improving computational efficiency of Monte-Carlo simulations with variance reduction

CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effect...
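
The weight-window game the abstract refers to can be sketched generically; the split and roulette rules below are the common textbook form, not MCNP's exact implementation, and the window bounds are illustrative:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random.random):
    """Weight-window game for variance reduction (generic textbook form).

    Returns the list of weights of the particle(s) that continue.  A weight
    above the window is split into several lower-weight copies; a weight
    below the window plays Russian roulette.  Total weight is preserved
    exactly on splitting and in expectation on roulette.
    """
    w_survive = (w_low + w_high) / 2.0       # survival weight inside the window
    if weight > w_high:
        # split into n copies of roughly equal weight
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_survive
        if rng() < weight / w_survive:
            return [w_survive]
        return []                             # particle killed
    return [weight]                           # inside the window: unchanged
```

A particle arriving with weight far above `w_high` spawns many split copies that must all be tracked to completion — exactly the 'long history' that stalls one parallel worker while the others sit idle.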

Turner, A

2013-01-01T23:59:59.000Z

231

Dynamic Monte Carlo simulation of coupled transport through a narrow multiply-occupied pore

Dynamic Monte Carlo simulations are used to study coupled transport (co-transport) through sub-nanometer-diameter pores. In this classic Hodgkin-Keynes mechanism, an ion species uses the large flux of an abundant ion species to move against its concentration gradient. The efficiency of co-transport is examined for various pore parameters so that synthetic nanopores can be engineered to maximize this effect. In general, the pore must be narrow enough that ions cannot pass each other and the charge of the pore large enough to attract many ions so that they exchange momentum. Co-transport efficiency increases as pore length increases, but even very short pores exhibit co-transport, in contradiction to the usual perception that long pores are necessary. The parameter ranges where co-transport occurs are consistent with current and near-future synthetic nanopore geometry parameters, suggesting that co-transport of ions may be a new application of nanopores.

Dezső Boda; Éva Csányi; Dirk Gillespie; Tamás Kristóf

2013-10-08T23:59:59.000Z

232

Coupled coarse graining and Markov Chain Monte Carlo for lattice systems

We propose an efficient Markov Chain Monte Carlo method for sampling equilibrium distributions for stochastic lattice models, capable of handling correctly long and short-range particle interactions. The proposed method is a Metropolis-type algorithm with the proposal probability transition matrix based on the coarse-grained approximating measures introduced in a series of works by M. Katsoulakis, A. Majda, D. Vlachos, P. Plechac, L. Rey-Bellet, and D. Tsagkarogiannis. We prove that the proposed algorithm reduces the computational cost due to energy differences and has mixing properties comparable to those of the classical microscopic Metropolis algorithm, controlled by the level of coarsening and the reconstruction procedure. The properties and effectiveness of the algorithm are demonstrated with an exactly solvable example of a one-dimensional Ising-type model, comparing the efficiency of the single spin-flip Metropolis dynamics and the proposed coupled Metropolis algorithm.
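
The single spin-flip Metropolis baseline used in the one-dimensional Ising comparison can be sketched as follows (coupling J = 1, zero field, and all run parameters are illustrative choices, not the paper's):

```python
import math
import random

def ising_1d_metropolis(n=64, beta=0.5, sweeps=200, seed=2):
    """Single spin-flip Metropolis dynamics for the periodic 1D Ising chain,
    the microscopic algorithm against which coarse-grained proposals are
    benchmarked.  Returns the mean energy per spin over the second half of
    the run (the first half is discarded as burn-in)."""
    random.seed(seed)
    spins = [random.choice((-1, 1)) for _ in range(n)]
    energies = []
    for sweep in range(sweeps):
        for _ in range(n):
            i = random.randrange(n)
            # energy change from flipping spin i (nearest-neighbour coupling)
            dE = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                spins[i] = -spins[i]
        if sweep >= sweeps // 2:
            e = -sum(spins[i] * spins[(i + 1) % n] for i in range(n)) / n
            energies.append(e)
    return sum(energies) / len(energies)
```

For the 1D chain the exact large-N mean energy per spin is -tanh(β), which makes this model a convenient correctness check for any modified proposal scheme.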

Kalligiannaki, Evangelia; Plechac, Petr

2010-01-01T23:59:59.000Z

233

Study of CANDU thorium-based fuel cycles by deterministic and Monte Carlo methods

Science Conference Proceedings (OSTI)

In the framework of the Generation IV forum, there is a renewal of interest in self-sustainable thorium fuel cycles applied to various concepts such as Molten Salt Reactors [1, 2] or High Temperature Reactors [3, 4]. Precise evaluations of the U-233 production potential relying on existing reactors such as PWRs [5] or CANDUs [6] are hence necessary. As a consequence of its design (online refueling and D₂O moderator in a thermal spectrum), the CANDU reactor has moreover an excellent neutron economy and consequently a high fissile conversion ratio [7]. For these reasons, we try here, with a shorter term view, to re-evaluate the economic competitiveness of once-through thorium-based fuel cycles in CANDU [8]. Two simulation tools are used: the deterministic Canadian cell code DRAGON [9] and MURE [10], a C++ tool for reactor evolution calculations based on the Monte Carlo code MCNP [11]. (authors)

Nuttin, A.; Guillemin, P. [LPSC Grenoble ENSPG (France); Courau, T. [EDF R and D Clamart (France); Marleau, G. [Ecole Polytechnique de Montreal (Canada); Meplan, O. [LPSC Grenoble UJF (France); David, S.; Michel-Sendis, F.; Wilson, J. N. [IPN Orsay CNRS (France)

2006-07-01T23:59:59.000Z

234

State and parameter estimation using Monte Carlo evaluation of path integrals

Transferring information from observations of a dynamical system to estimate the fixed parameters and unobserved states of a system model can be formulated as the evaluation of a discrete time path integral in model state space. The observations serve as a guiding potential working with the dynamical rules of the model to direct system orbits in state space. The path integral representation permits direct numerical evaluation of the conditional mean path through the state space as well as conditional moments about this mean. Using a Monte Carlo method for selecting paths through state space, we show how these moments can be evaluated and demonstrate, in an instructive model system, the explicit role of the transfer of information from the observations. We address the question of how many observations are required to estimate the unobserved state variables, and we examine the assumptions of Gaussianity of the underlying conditional probability.

John C. Quinn; Henry D. I. Abarbanel

2009-12-08T23:59:59.000Z

235

Validation of the Monte Carlo Criticality Program KENO V. a for highly-enriched uranium systems

A series of calculations based on critical experiments have been performed using the KENO V.a Monte Carlo Criticality Program for the purpose of validating KENO V.a for use in evaluating Y-12 Plant criticality problems. The experiments were reflected and unreflected systems of single units and arrays containing highly enriched uranium metal or uranium compounds. Various geometrical shapes were used in the experiments. The SCALE control module CSAS25 with the 27-group ENDF/B-4 cross-section library was used to perform the calculations. Some of the experiments were also calculated using the 16-group Hansen-Roach Library. Results are presented in a series of tables and discussed. Results show that the criteria established for the safe application of the KENO IV program may also be used for KENO V.a results.

Knight, J.R.

1984-11-01T23:59:59.000Z

236

In this note we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same limit as Δt → ∞ as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method, basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method alleviates the nonphysical overheating that occurs in standard IMC when the time step is large and significantly diminishes the statistical noise in the solution.

McClarren, Ryan G [Los Alamos National Laboratory]; Urbatsch, Todd J [Los Alamos National Laboratory]

2008-01-01T23:59:59.000Z

237

Cation dopant distributions in nanostructures of transition-metal doped ZnO: Monte Carlo simulations

The path from trace doping to solid solution formation involves an intermediate regime in which the doping level is a few to several atomic percent. In this regime, dopant-dopant interactions, which are driven by the spatial arrangement of dopants, are critical factors in determining the resulting properties. Conventional wisdom counts on simple probabilistic methods for predicting dopant distributions. Here, we use Monte Carlo simulations to show that widely used, straightforward statistical models, such as that of Behringer [1], are accurate only in the limit of infinitesimally small surface-to-volume ratio. For epitaxial films and nanoparticles, where much of the current interest resides, dopant distributions depend strongly on the surface-to-volume ratio. We present empirical expressions that accurately predict dopant bonding configurations as a function of film or particle size, shape and dopant concentration for doped ZnO, a material of particular interest in semiconductor spintronics.
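
The kind of simple probabilistic baseline being tested can be reproduced with a few lines of random substitution. The simple-cubic lattice and all parameters below are illustrative simplifications (the real ZnO cation sublattice is not simple cubic):

```python
import random

def dopant_pair_fraction(L=16, x=0.05, trials=10, seed=4):
    """Fraction of dopants with at least one dopant among their z = 6
    nearest neighbours on a periodic simple-cubic lattice, with each site
    independently occupied by a dopant with probability x."""
    random.seed(seed)
    total, paired = 0, 0
    for _ in range(trials):
        occ = {}
        for i in range(L):
            for j in range(L):
                for k in range(L):
                    occ[(i, j, k)] = random.random() < x   # random substitution
        for (i, j, k), is_dopant in occ.items():
            if not is_dopant:
                continue
            total += 1
            nbrs = [((i + 1) % L, j, k), ((i - 1) % L, j, k),
                    (i, (j + 1) % L, k), (i, (j - 1) % L, k),
                    (i, j, (k + 1) % L), (i, j, (k - 1) % L)]
            if any(occ[n] for n in nbrs):
                paired += 1
    return paired / total
```

In a periodic (surface-free) box the measured fraction approaches the bulk binomial prediction 1 - (1 - x)**z; the paper's point is that real films and nanoparticles, with their large surface-to-volume ratios, deviate from exactly this kind of estimate.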

Droubay, Timothy; Kaspar, Tiffany C.; Kaspar, Bryce P.; Chambers, Scott A.

2009-02-01T23:59:59.000Z

238

Bayesian Inference for LISA Pathfinder using Markov Chain Monte Carlo Methods

We present a parameter estimation procedure based on a Bayesian framework by applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of a space based gravitational wave detector. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy but with one moving in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher Information matrix. The algorithm proposing jumps in the eigen-space of the Fisher Information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to...

Ferraioli, Luigi; Plagnol, Eric

2012-01-01T23:59:59.000Z

239

A Markov-Chain Monte-Carlo Based Method for Flaw Detection in Beams

A Bayesian inference methodology using a Markov Chain Monte Carlo (MCMC) sampling procedure is presented for estimating the parameters of computational structural models. This methodology combines prior information, measured data, and forward models to produce a posterior distribution for the system parameters of structural models that is most consistent with all available data. The MCMC procedure is based upon a Metropolis-Hastings algorithm that is shown to function effectively with noisy data, incomplete data sets, and mismatched computational nodes/measurement points. A series of numerical test cases based upon a cantilever beam is presented. The results demonstrate that the algorithm is able to estimate model parameters utilizing experimental data for the nodal displacements resulting from specified forces.
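
A minimal random-walk Metropolis-Hastings sketch in the same spirit, using a toy one-parameter model (displacement = force / k plus Gaussian noise) as a hypothetical stand-in for the paper's beam model; the prior, proposal width, and noise level are assumptions, not the paper's choices:

```python
import math
import random

def mh_posterior_mean(data, forces, sigma=0.05, steps=5000, seed=3):
    """Random-walk Metropolis-Hastings for a one-parameter structural model.

    Samples the posterior of a stiffness-like parameter k given measured
    displacements d_i = f_i / k + noise, with Gaussian likelihood and a
    flat prior on k > 0.  Returns the posterior mean of k from the second
    half of the chain (first half discarded as burn-in)."""
    random.seed(seed)

    def log_post(k):
        if k <= 0:
            return -math.inf          # prior support: k > 0
        return -sum((d - f / k) ** 2 for d, f in zip(data, forces)) / (2 * sigma ** 2)

    k = 1.0                           # initial state of the chain
    lp = log_post(k)
    samples = []
    for step in range(steps):
        k_new = k + random.gauss(0, 0.1)              # symmetric proposal
        lp_new = log_post(k_new)
        if math.log(random.random()) < lp_new - lp:   # Metropolis accept/reject
            k, lp = k_new, lp_new
        if step >= steps // 2:
            samples.append(k)
    return sum(samples) / len(samples)
```

Because the proposal is symmetric, the Hastings correction vanishes and the acceptance test reduces to the plain Metropolis ratio; the two-stage annealing and Fisher-matrix proposals described in the abstracts above are refinements of exactly this core loop.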

Glaser, R E; Lee, C L; Nitao, J J; Hickling, T L; Hanley, W G

2006-09-28T23:59:59.000Z

240

Monte-Carlo study of quasiparticle dispersion relation in monolayer graphene

The density of electronic one-particle states in monolayer graphene is studied by performing Hybrid Monte-Carlo simulations of the tight-binding model for electrons on the pi orbitals of the carbon atoms which make up the graphene lattice. The density of states is approximated as the derivative of the number of particles with respect to the chemical potential at sufficiently small temperature. Simulations are performed in the partially quenched approximation, in which virtual particles and holes have zero chemical potential. It is found that the Van Hove singularity becomes much sharper than in the free tight-binding model. Simulation results also suggest that the Fermi velocity increases with interaction strength up to the transition to the phase with spontaneously broken chiral symmetry.

P. V. Buividovich

2013-01-07T23:59:59.000Z


241

Hybrid Monte-Carlo simulation of interacting tight-binding model of graphene

In this work, results are presented of Hybrid-Monte-Carlo simulations of the tight-binding Hamiltonian of graphene, coupled to an instantaneous long-range two-body potential which is modeled by a Hubbard-Stratonovich auxiliary field. We present an investigation of the spontaneous breaking of the sublattice symmetry, which corresponds to a phase transition from a conducting to an insulating phase and which occurs when the effective fine-structure constant $\alpha$ of the system crosses above a certain threshold $\alpha_C$. Qualitative comparisons to earlier works on the subject (which used larger system sizes and higher statistics) are made and it is established that $\alpha_C$ is of a plausible magnitude in our simulations. Also, we discuss differences between simulations using compact and non-compact variants of the Hubbard field and present a quantitative comparison of distinct discretization schemes of the Euclidean time-like dimension in the Fermion operator.

Dominik Smith; Lorenz von Smekal

2013-11-05T23:59:59.000Z

242

Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state-of-the-art in solid modeling representation uses a boundary representation format in which geometry and topology are used to form three-dimensional boundaries of the solid. The geometry representation used in these systems is cubic B-spline curves and surfaces--a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation.
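
The target half-space representation admits a very compact point-membership test, which a converter can use to validate its output against the original boundary model. This sketch is generic and not the paper's algorithm:

```python
def make_halfspace(a, b, c, d):
    """Half-space {(x, y, z) : a*x + b*y + c*z + d <= 0}."""
    return lambda p: a * p[0] + b * p[1] + c * p[2] + d <= 0.0

def intersection(*halfspaces):
    """A geometry cell defined as the intersection of half-spaces, the
    representation many Monte Carlo codes use for their geometry units."""
    return lambda p: all(h(p) for h in halfspaces)

def union(*bodies):
    """A geometry unit defined as the union of simpler bodies."""
    return lambda p: any(b(p) for b in bodies)

# Unit cube built as the intersection of six axis-aligned half-spaces.
cube = intersection(
    make_halfspace(-1, 0, 0, 0),   # x >= 0
    make_halfspace(1, 0, 0, -1),   # x <= 1
    make_halfspace(0, -1, 0, 0),   # y >= 0
    make_halfspace(0, 1, 0, -1),   # y <= 1
    make_halfspace(0, 0, -1, 0),   # z >= 0
    make_halfspace(0, 0, 1, -1),   # z <= 1
)
```

Converting from a B-spline boundary representation is the hard part precisely because curved surfaces do not reduce to a finite set of such planar half-spaces without approximation by quadric or faceted surfaces.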

Davis JE, Eddy MJ, Sutton TM, Altomari TJ

2007-03-01T23:59:59.000Z

243

Monte Carlo simulation of ion trajectories in the modified PDX thermal charge exchange analyzer

An improved design for the present PDX thermal charge exchange analyzer (MACE) has been proposed by one of the authors, in which the five cylindrical electrostatic plates for mass separation are replaced by a single flat, electrostatic deflection plate. An existing Monte Carlo code that simulated the passage of ions through the MACE analyzer was modified to examine the feasibility of this change. The resulting calculations were used to optimize detector positions and collimation requirements. The first analyzer to be placed on PDX will be of the old design, similar to the present PLT analyzer. However, if the design reported here is successful on the test stand, the future PDX analyzers will all be of the new, single electrostatic plate variety. A further advantage will be the ability to install as many as ten detectors instead of the current five, thus providing twice as many energy channels for each shot. Also, both mass species (H, D) can be measured concurrently, if desired.

Kaita, R.; Davis, S.L.; Medley, S.S.

1978-12-01T23:59:59.000Z

244

Investigation of a V₁₅ magnetic molecular nanocluster by the Monte Carlo method

Exchange interactions in a V₁₅ magnetic molecular nanocluster are considered, and the process of magnetization reversal for various values of the set of exchange constants is analyzed by the Monte Carlo method. It is shown that the best agreement between the field dependence of susceptibility and experimental results is observed for the following set of exchange interaction constants in a V₁₅ magnetic molecular nanocluster: J = 500 K, J′ = 150 K, J″ = 225 K, J₁ = 50 K, and J₂ = 50 K. It is observed for the first time that, in a strong magnetic field, for each of the three transitions from low-spin to high-spin states, the heat capacity exhibits two closely spaced maxima.

Khizriev, K. Sh., E-mail: kamal71@mail.ru [Russian Academy of Sciences, Kh.I. Amirkhanov Institute of Physics, Dagestan Scientific Center (Russian Federation); Dzhamalutdinova, I. S.; Taaev, T. A. [Dagestan State University (Russian Federation)

2013-06-15T23:59:59.000Z

245

MaGe - a Geant4-based Monte Carlo framework for low-background experiments

A Monte Carlo framework, MaGe, has been developed based on the Geant4 simulation toolkit. Its purpose is to simulate physics processes in low-energy and low-background radiation detectors, specifically for the Majorana and Gerda $^{76}$Ge neutrinoless double-beta decay experiments. This jointly-developed tool is also used to verify the simulation of physics processes relevant to other low-background experiments in Geant4. The MaGe framework contains simulations of prototype experiments and test stands, and is easily extended to incorporate new geometries and configurations while still using the same verified physics processes, tunings, and code framework. This reduces duplication of efforts and improves the robustness of and confidence in the simulation output.

Yuen-Dat Chan; Jason A. Detwiler; Reyco Henning; Victor M. Gehman; Rob A. Johnson; David V. Jordan; Kareem Kazkaz; Markus Knapp; Kevin Kroninger; Daniel Lenz; Jing Liu; Xiang Liu; Michael G. Marino; Akbar Mokhtarani; Luciano Pandola; Alexis G. Schubert; Claudia Tomei

2008-02-06T23:59:59.000Z

246

MC++: A parallel, portable, Monte Carlo neutron transport code in C++

MC++ is an implicit multi-group Monte Carlo neutron transport code written in C++ and based on the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, SMPs, and clusters of UNIX workstations. MC++ is being developed to provide transport capabilities to the Accelerated Strategic Computing Initiative (ASCI). It is also intended to form the basis of the first transport physics framework (TPF), which is a C++ class library containing appropriate abstractions, objects, and methods for the particle transport problem. The transport problem is briefly described, as well as the current status and algorithms in MC++ for solving the transport equation. The alpha version of the POOMA class library is also discussed, along with the implementation of the transport solution algorithms using POOMA. Finally, a simple test problem is defined and performance and physics results from this problem are discussed on a variety of platforms.

Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A & M Univ., College Station, TX (United States)

1997-03-01T23:59:59.000Z

247

A Monte-Carlo method for ex-core neutron response

A Monte Carlo neutron transport kernel capability primarily for ex-core neutron response is described. The capability consists of the generation of a set of response kernels, which represent the neutron transport from the core to a specific ex-core volume. This is accomplished by tagging individual neutron histories from their initial source sites and tracking them throughout the problem geometry, tallying those that interact in the geometric regions of interest. These transport kernels can subsequently be combined with any number of core power distributions to determine detector response for a variety of reactor conditions. Thus, the transport kernels are analogous to an integrated adjoint response. Examples of pressure vessel response and ex-core neutron detector response are provided to illustrate the method.
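
Once generated, a response kernel reduces each detector-response evaluation to an inner product with the core power distribution, which is what makes reuse across many power shapes cheap. A sketch with illustrative values (the kernel entries and powers below are made up for demonstration):

```python
def detector_response(kernel, power):
    """Fold a precomputed transport kernel with a core power distribution.

    kernel[i] is the detector tally per unit source strength in core
    region i; power[i] is the source strength of region i.  Because the
    expensive Monte Carlo transport is baked into the kernel, evaluating
    a new power distribution is a single inner product.
    """
    assert len(kernel) == len(power)
    return sum(k * p for k, p in zip(kernel, power))

# Example: two core regions with different coupling to an ex-core detector.
kernel = [0.1, 0.02]        # tallies per unit source (illustrative)
power = [100.0, 50.0]       # region source strengths (illustrative)
response = detector_response(kernel, power)
```

This is the sense in which the kernel acts like an integrated adjoint: it weights each source region by its importance to the detector.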

Gamino, R.G.; Ward, J.T.; Hughes, J.C. [Lockheed Martin Corp., Schenectady, NY (United States)

1997-10-01T23:59:59.000Z

248

Frozen-orbital and downfolding calculations with auxiliary-field quantum Monte Carlo

We describe the implementation of the frozen-orbital and downfolding approximations in the auxiliary-field quantum Monte Carlo (AFQMC) method. These approaches can provide significant computational savings compared to fully correlating all the electrons. While the many-body wave function is never explicit in AFQMC, its random walkers are Slater determinants, whose orbitals may be expressed in terms of any one-particle orbital basis. It is therefore straightforward to partition the full N-particle Hilbert space into active and inactive parts to implement the frozen-orbital method. In the frozen-core approximation, for example, the core electrons can be eliminated in the correlated part of the calculations, greatly increasing the computational efficiency, especially for heavy atoms. Scalar relativistic effects are easily included using the Douglas-Kroll-Hess theory. Using this method, we obtain a way to effectively eliminate the error due to single-projector, norm-conserving pseudopotentials in AFQMC. We also i...

Purwanto, Wirawan; Krakauer, Henry

2013-01-01T23:59:59.000Z

249

The transition temperature of the spatially anisotropic three-dimensional classical XY model is calculated by means of Monte Carlo (MC) simulations. In the XY model the spins are not restricted to the XY plane, as in the plane rotator model. The results are compared with a self-consistent harmonic approximation (SCHA) calculation. In two dimensions the inclusion of the effect of vortices pushes the transition temperature down from T_KT/J = 1.08, given by the standard SCHA, to T_KT/J = 0.70, in good agreement with the MC estimate T_KT/J = 0.725. © 1996 The American Physical Society.

Costa, B.V.; Pereira, A.R.; Pires, A.S. [Departamento de Fisica, Universidade Federal de Minas Gerais, CP 702, Belo Horizonte, 30161970 Minas Gerais (Brazil)]

1996-08-01T23:59:59.000Z

250

Overview of Geometry Representation in Monte Carlo Codes

National Nuclear Security Administration (NNSA)

Overview of Geometry Representation in Monte Carlo Codes. Ronald P. Kensek; Brian C. Franke; Thomas W. Laub; Leonard J. Lorence; Matthew R. Martin (Sandia National Laboratories); Steve Warren (Kansas State University). Joint Russian-American Five-Laboratory Conference on Computational Mathematics/Physics, Vienna, Austria, June 19-23, 2005. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States National Nuclear Security Administration and the Department of Energy under contract DE-AC04-94AL85000. Slide content, problem setup (CG vs. CAD): engineering designs are not typically created in combinatorial geometry (CG) format; no general automatic translation from CAD to CG yet exists; problem setup is difficult.

251

Monte Carlo based dosimetry and treatment planning for neutron capture therapy of brain tumors

Science Conference Proceedings (OSTI)

Monte Carlo based dosimetry and computer-aided treatment planning for neutron capture therapy have been developed to provide the necessary link between physical dosimetric measurements performed on the MITR-II epithermal-neutron beams and the need of the radiation oncologist to synthesize large amounts of dosimetric data into a clinically meaningful treatment plan for each individual patient. Monte Carlo simulation has been employed to characterize the spatial dose distributions within a skull/brain model irradiated by an epithermal-neutron beam designed for neutron capture therapy applications. The geometry and elemental composition employed for the mathematical skull/brain model and the neutron and photon fluence-to-dose conversion formalism are presented. A treatment planning program, NCTPLAN, developed specifically for neutron capture therapy, is described. Examples are presented illustrating both one and two-dimensional dose distributions obtainable within the brain with an experimental epithermal-neutron beam, together with beam quality and treatment plan efficacy criteria which have been formulated for neutron capture therapy. The incorporation of three-dimensional computed tomographic image data into the treatment planning procedure is illustrated. The experimental epithermal-neutron beam has a maximum usable circular diameter of 20 cm, and with 30 ppm of B-10 in tumor and 3 ppm of B-10 in blood, it produces a beam-axis advantage depth of 7.4 cm, a beam-axis advantage ratio of 1.83, a global advantage ratio of 1.70, and an advantage depth RBE-dose rate to tumor of 20.6 RBE-cGy/min (cJ/kg-min). These characteristics make this beam well suited for clinical applications, enabling an RBE-dose of 2,000 RBE-cGy (cJ/kg) to be delivered to tumor at brain midline in six fractions with a treatment time of approximately 16 minutes per fraction.
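The fractionation figures quoted above can be checked with a line of arithmetic; all input values below are taken directly from the abstract.

```python
# Arithmetic check of the quoted treatment time per fraction.
total_rbe_dose = 2000.0    # RBE-cGy delivered to tumor at brain midline
n_fractions = 6
dose_rate = 20.6           # RBE-cGy/min, advantage-depth RBE-dose rate

dose_per_fraction = total_rbe_dose / n_fractions     # RBE-cGy
time_per_fraction = dose_per_fraction / dose_rate    # minutes
print(f"{time_per_fraction:.1f} min per fraction")   # 16.2 min, matching the quoted ~16 minutes
```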

Zamenhof, R.G.; Clement, S.D.; Harling, O.K.; Brenner, J.F.; Wazer, D.E.; Madoc-Jones, H.; Yanch, J.C. (Tufts-New England Medical Center, Boston, MA (USA))

1990-01-01T23:59:59.000Z

252

The present state of modeling radio-induced effects at the cellular level neglects the microscopic inhomogeneity that the non-aqueous contents introduce into the nucleus, approximating the entire cellular nucleus as a homogeneous medium of water. Charged particle track-structure calculations utilizing this principle of superposition thereby neglect approximately 30% of the molecular variation within the nucleus. To truly understand what happens when biological matter is irradiated, charged particle track-structure calculations need detailed knowledge of the secondary electron cascade, resulting from interactions with not only the primary biological component, water, but also the non-aqueous contents, down to very low energies. This paper presents developments for a novel approach, which to our knowledge has not been attempted before, to moving beyond the homogeneous water approximation. The purpose of our work is to develop a completely self-consistent computational method for predicting molecule-specific ionization, excitation, and scattering cross sections in the very low energy regime that can be applied in a condensed history Monte Carlo track-structure code. The present methodology begins with the calculation of a solution to the many-body Schrödinger equation and proceeds to use Monte Carlo methods to calculate the perturbations in the internal electron field to determine the aforementioned processes. Results are computed for molecular water in the form of linear energy loss, secondary electron energies, and ionization-to-excitation ratios and compared against the low energy predictions of the GEANT4-DNA physics package of the Geant4 simulation toolkit.

Madsen, Jonathan R

2013-08-01T23:59:59.000Z

253

Science Conference Proceedings (OSTI)

Testing for stochastic dominance among distributions is an important issue in the study of asset management, income inequality, and market efficiency. This paper conducts Monte Carlo simulations to examine the sizes and powers of several commonly used ... Keywords: C12, Correlated distributions, D31, G11, Grid points, Heteroskedasticity, Stochastic dominance
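The size-and-power methodology the abstract describes can be sketched in a few lines: simulate many samples under the null hypothesis, apply a nominal-5% test, and record the rejection frequency. A one-sample z-test on normal data stands in here for the stochastic-dominance tests studied in the paper.

```python
import math
import random
import statistics

# Empirical-size experiment: rejection frequency under the null hypothesis.
random.seed(5)
n, reps, z_crit = 50, 4000, 1.96

rejections = 0
for _ in range(reps):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]   # data drawn under the null
    z = statistics.fmean(x) / (statistics.stdev(x) / math.sqrt(n))
    if abs(z) > z_crit:
        rejections += 1

size = rejections / reps   # empirical size; should sit near the nominal 0.05
```

Repeating the loop with data drawn under an alternative hypothesis turns the same rejection frequency into an empirical power estimate.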

Hooi-Hooi Lean; Wing-Keung Wong; Xibin Zhang

2008-10-01T23:59:59.000Z

254

Science Conference Proceedings (OSTI)

The algorithms of estimation of the time series correlation functions in nuclear reactor calculations using the Monte Carlo method are described. Correlation functions are used for the estimation of biases, for calculations of variance taking into account the correlations between neutron generations, and for choosing skipped generations.
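A lag-k correlation estimate of the kind used to correct variance for generation-to-generation correlations, and to choose the number of skipped generations, can be sketched as follows; the AR(1) sequence is a synthetic stand-in for per-generation tallies and nothing here comes from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated "generation" sequence: AR(1) with coefficient rho.
n, rho = 5000, 0.6
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = rho * x[i - 1] + rng.normal()

def autocorr(series, k):
    """Sample autocorrelation of `series` at lag k."""
    centered = series - series.mean()
    return np.dot(centered[:-k], centered[k:]) / np.dot(centered, centered)

# For an AR(1) process the lag-k autocorrelation decays geometrically as rho**k.
lag1 = autocorr(x, 1)
lag2 = autocorr(x, 2)
```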

Kalugin, M. A.; Oleynik, D. S.; Sukhino-Khomenko, E. A., E-mail: sukhino-khomenko@adis.vver.kiae.ru [National Research Centre Kurchatov Institute (Russian Federation)

2012-12-15T23:59:59.000Z

255

Science Conference Proceedings (OSTI)

A statistical Bayesian framework is used to solve the inverse problem and develop the posterior distributions of parameters for a density-driven groundwater flow model. This Bayesian approach is implemented using a Markov Chain Monte Carlo (MCMC) sampling ... Keywords: Conditioning, Groundwater model calibration, Inverse problems, MCMC, Numerical modelling
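The MCMC machinery the abstract refers to reduces, at its core, to a Metropolis-Hastings loop; a minimal sketch follows, with a standard-normal target standing in for the real density-driven groundwater-model posterior, which we do not have.

```python
import math
import random

random.seed(1)

def log_post(theta):
    """Log posterior density up to an additive constant (toy target)."""
    return -0.5 * theta * theta

theta, step = 0.0, 1.0
samples = []
for _ in range(20000):
    prop = theta + random.gauss(0.0, step)          # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                                # accept; otherwise keep theta
    samples.append(theta)

burn = samples[2000:]                               # discard burn-in
mean = sum(burn) / len(burn)
var = sum((s - mean) ** 2 for s in burn) / len(burn)
```

For the toy target, the retained chain should recover mean 0 and variance 1; the real application replaces `log_post` with the groundwater model's likelihood-times-prior.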

Ahmed E. Hassan; Hesham M. Bekhit; Jenny B. Chapman

2009-06-01T23:59:59.000Z

256

A Monte Carlo based production cost simulation model is introduced in the paper, with which production cost for wind power can be established. Over the years, analytical methods [2]-[4] had been extensively utilized; here a Monte Carlo based production cost simulation model had been investigated and developed. The model simulates

McCalley, James D.

257

Science Conference Proceedings (OSTI)

Graphs of all neutron cross sections and photon production cross sections on the Recommended Monte Carlo Cross Section (RMCCS) library have been plotted along with local neutron heating numbers. Values for nu-bar, the average number of neutrons per fission, are also given.

Soran, P.D.; Seamon, R.E.

1980-05-01T23:59:59.000Z

258

A simplified transport problem is presented with continuous energy and neutrons moving only in the +X and -X directions. An exact analytical solution is given with a 1/E energy dependence of the space-, direction-, and energy-dependent neutron flux. The source function, which also behaves essentially as 1/E, has to be modified for a certain energy range if a maximum energy is introduced in the problem. At the cost of more complicated mathematics, the total and scattering cross sections and the anisotropy of scattering may vary with energy. This model can be implemented in a general purpose Monte Carlo code like MCNP5 without modification, but needs a specially prepared cross section library file. The model can be applied to test Monte Carlo procedures, like the generation of multi-group cross sections and scattering matrices, which can be calculated analytically from the continuous-energy cross section data. From the adjoint equation the optimum importance function can be derived, which can be used to devise a continuous-energy zero-variance Monte Carlo scheme. Key Words: transport problem, analytical solution, continuous energy, Monte Carlo
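Sampling the 1/E spectrum central to this benchmark is a one-liner once the CDF is inverted: on [Emin, Emax] the normalized density is p(E) = 1 / (E ln(Emax/Emin)), giving E = Emin * (Emax/Emin)**u for u uniform on (0, 1). The energy bounds below are illustrative, not values from the paper.

```python
import math
import random

random.seed(2)
e_min, e_max = 1e-5, 2e7   # eV, assumed energy range

def sample_energy():
    """Draw an energy from the 1/E density via CDF inversion."""
    u = random.random()
    return e_min * (e_max / e_min) ** u

# Under a 1/E density, ln(E) is uniform, so its sample mean should sit
# midway between ln(Emin) and ln(Emax).
n = 100000
mean_log = sum(math.log(sample_energy()) for _ in range(n)) / n
mid = 0.5 * (math.log(e_min) + math.log(e_max))
```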

J. Eduard Hoogenboom

2007-01-01T23:59:59.000Z

259

Science Conference Proceedings (OSTI)

We present a modular, collaborative, open-source architecture for rigid body modelling based upon small angle scattering data, named sas_rigid. It is designed to provide a fast and extensible scripting interface using the easy-to-learn Python programming ... Keywords: Hemocyanin, Monte Carlo with simulated annealing, Rigid body modelling, Small angle scattering (SAS)

Christian Meesters; Bruno Pairet; Anja Rabenhorst; Heinz Decker; Elmar Jaenicke

2010-06-01T23:59:59.000Z

260

Science Conference Proceedings (OSTI)

In this Monte Carlo algorithm for polarizable force fields, the fluctuating charges are treated as special degrees of freedom subject to a secondary low-temperature thermostat in close analogy to the extended Lagrangian formalism commonly used in molecular dynamics simulations of such systems. The algorithm is applied to Berne's SPC-FQ (simple point charge-fluctuating charge) model for water. The robustness of the algorithm with respect to the temperature of the secondary thermostat and to the fraction of fluctuating-charge moves is investigated. With the new algorithm, the cost of Monte Carlo simulations using fluctuating-charge force fields increases by less than an order of magnitude compared to simulations using the parent fixed-charge force fields. {copyright} {ital 1998 American Institute of Physics.}

Martin, M.G.; Chen, B.; Siepmann, J.I. [Department of Chemistry, University of Minnesota, 207 Pleasant St. SE, Minneapolis, Minnesota 55455-0431 (United States)

1998-03-01T23:59:59.000Z


261

A Monte Carlo method of evaluating heterogeneous effects in plate-fueled reactors

Few-group nuclear cross sections for small plate-fueled, light and heavy water test reactors are frequently generated with unit cell models that contain a homogeneous mixture of fuel, cladding, and water. The heterogeneous unit cells do not need to be represented explicitly for neutronics calculations when the plate and coolant channel thicknesses are small compared with the mean-free-path of neutrons. However, neutron and photon heating calculations were performed with heterogeneous fuel models to predict accurately the heat deposited in the fuel meat, cladding, and coolant. Heat deposited in the coolant channels and outside the fuel elements does not have a direct impact on the peak fuel meat temperature but must be included in the total coolant system heat balance. The results of a heterogeneous Monte Carlo calculation that estimates the heat loads in different fuel regions are presented, along with a demonstration that similar homogeneous fuel models can be used for many calculations. The calculations presented here were performed on models of the Advanced Neutron Source (ANS) and the Massachusetts Institute of Technology Reactor 2 (MITR-2). The ANS is a small, 362-MW (fission), plate-fueled, heavy water reactor designed to produce an intense steady-state source of neutrons.

Thayer, R.C.; Redmond, E.L. II; Ryskamp, J.M. (Idaho National Engineering Lab., Idaho Falls (United States))

1991-01-01T23:59:59.000Z

262

Monte Carlo analysis of a monolithic interconnected module with a back surface reflector

Recently, the photon Monte Carlo code, RACER-X, was modified to include wavelength-dependent absorption coefficients and indices of refraction. This work was done in an effort to increase the code's capabilities to be more applicable to a wider range of problems. These new features make RACER-X useful for analyzing devices like monolithic interconnected modules (MIMs), which have etched surface features and incorporate a back surface reflector (BSR) for spectral control. A series of calculations were performed on various MIM structures to determine the impact that surface features and component reflectivities have on spectral utilization. The traditional concern of cavity photonics is replaced with intra-cell photonics in the MIM design. Like the cavity photonic problems previously discussed, small changes in optical properties and/or geometry can lead to large changes in spectral utilization. The calculations show that seemingly innocuous surface features (e.g., trenches and grid lines) can significantly reduce the spectral utilization due to the non-normal incident photon flux. Photons that enter the device through a trench edge are refracted onto a trajectory where they will not escape. This leads to a reduction in the number of reflected below-bandgap photons that return to the radiator and reduces the spectral utilization. In addition, trenches expose a lateral conduction layer in this particular series of calculations, which increases the absorption of above-bandgap photons in inactive material.

Ballinger, C.T.; Charache, G.W. [Lockheed Martin Corp., Schenectady, NY (United States); Murray, C.S. [Bettis Atomic Power Lab., West Mifflin, PA (United States)

1998-10-01T23:59:59.000Z

263

ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

Science Conference Proceedings (OSTI)

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

2008-04-01T23:59:59.000Z

264

Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC): Fundamentals and Applications

Science Conference Proceedings (OSTI)

The fundamentals of the framework and the details of each component of the self-evolving atomistic kinetic Monte Carlo (SEAKMC) are presented. The strength of this new technique is the ability to simulate dynamic processes with atomistic fidelity that is comparable to molecular dynamics (MD) but on a much longer time scale. The observation that the dimer method preferentially finds the saddle point (SP) with the lowest energy is investigated and found to be true only for defects with high symmetry. In order to estimate the fidelity of dynamics and accuracy of the simulation time, a general criterion is proposed and applied to two representative problems. Applications of SEAKMC for investigating the diffusion of interstitials and vacancies in bcc iron are presented and compared directly with MD simulations, demonstrating that SEAKMC provides results that formerly could be obtained only through MD. The correlation factor for interstitial diffusion in the dumbbell configuration, which is extremely difficult to obtain using MD, is predicted using SEAKMC. The limitations of SEAKMC are also discussed. The paper presents a comprehensive picture of the SEAKMC method in both its unique predictive capabilities and technically important details.
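Underlying kinetic Monte Carlo schemes such as SEAKMC is the residence-time (n-fold way) selection step: given the saddle-point energies of the available events, pick an event with probability proportional to its Arrhenius rate and advance the clock by an exponentially distributed waiting time. The barriers, temperature, and prefactor below are illustrative values, not data from the paper.

```python
import math
import random

random.seed(3)
kB = 8.617e-5                  # Boltzmann constant, eV/K
T = 600.0                      # temperature, K (illustrative)
nu0 = 1e13                     # attempt frequency, 1/s (illustrative)
barriers = [0.30, 0.35, 0.62]  # hypothetical saddle-point energies, eV

# Arrhenius rate for each event.
rates = [nu0 * math.exp(-eb / (kB * T)) for eb in barriers]
total = sum(rates)

# Select one event with probability rate_i / total.
r = random.random() * total
acc, event = 0.0, 0
for i, k in enumerate(rates):
    acc += k
    if r <= acc:
        event = i
        break

# Exponentially distributed time increment for the chosen step, in seconds.
dt = -math.log(random.random()) / total
```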

Xu, Haixuan [ORNL; Osetskiy, Yury N [ORNL; Stoller, Roger E [ORNL

2012-01-01T23:59:59.000Z

265

A high-fidelity Monte Carlo evaluation of CANDU-6 safety parameters

Science Conference Proceedings (OSTI)

Important safety parameters such as the fuel temperature coefficient (FTC) and the power coefficient of reactivity (PCR) of the CANDU-6 (CANada Deuterium Uranium) reactor have been evaluated by using a modified MCNPX code. For accurate analysis of the parameters, the DBRC (Doppler Broadening Rejection Correction) scheme was implemented in MCNPX in order to account for the thermal motion of the heavy uranium nucleus in the neutron-U scattering reactions. In this work, a standard fuel lattice has been modeled and the fuel is depleted by using the MCNPX, and the FTC value is evaluated for several burnup points including the mid-burnup representing a near-equilibrium core. The Doppler effect has been evaluated by using several cross section libraries such as ENDF/B-VI, ENDF/B-VII, JEFF, JENDL. The PCR value is also evaluated at mid-burnup conditions to characterize safety features of the equilibrium CANDU-6 reactor. To improve the reliability of the Monte Carlo calculations, a very large number of neutron histories is considered in this work and the standard deviation of the k-inf values is only 0.5{approx}1 pcm. It has been found that the FTC is significantly enhanced by accounting for the Doppler broadening of the scattering resonances and the PCR is clearly improved. (authors)

Kim, Y.; Hartanto, D. [Korea Advanced Inst. of Science and Technology KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 305-701 (Korea, Republic of)

2012-07-01T23:59:59.000Z

266

Lattice Monte Carlo calculations for unitary fermions in a harmonic trap

Science Conference Proceedings (OSTI)

We present a lattice Monte Carlo approach developed for studying large numbers of strongly interacting nonrelativistic fermions and apply it to a dilute gas of unitary fermions confined to a harmonic trap. In place of importance sampling, our approach makes use of high statistics, an improved action, and recently proposed statistical techniques. We show how improvement of the lattice action can remove discretization and finite volume errors systematically. For N=3 unitary fermions in a box, our errors in the energy scale as the inverse lattice volume, and we reproduce a previous high-precision benchmark calculation to within our 0.3% uncertainty; as additional benchmarks we reproduce precision calculations of N=3,...,6 unitary fermions in a harmonic trap to within our {approx}1% uncertainty. We then use this action to determine the ground-state energies of up to 70 unpolarized fermions trapped in a harmonic potential on a lattice as large as 64{sup 3}x72. In contrast to variational calculations, we find evidence for persistent deviations from the thermodynamic limit for the range of N considered.

Endres, Michael G.; Kaplan, David B.; Lee, Jong-Wan; Nicholson, Amy N. [Physics Department, Columbia University, New York, New York 10027 (United States); Theoretical Research Division, RIKEN Nishina Center, Wako, Saitama 351-0198 (Japan); Institute for Nuclear Theory, University of Washington, Seattle, Washington 98195-1550 (United States)

2011-10-15T23:59:59.000Z

267

Bayesian Inference for LISA Pathfinder using Markov Chain Monte Carlo Methods

We present a parameter estimation procedure based on a Bayesian framework by applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of a space based gravitational wave detector. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy but with one moving in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher Information matrix. The algorithm proposing jumps in the eigen-space of the Fisher Information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to LISA Pathfinder data. For this experiment, we return parameter values that are all within {approx}1{sigma} of the injected values. When we analyse the accuracy of our parameter estimation in terms of the effect they have on the force-per-unit test mass noise estimate, we find that the induced errors are three orders of magnitude less than the expected experimental uncertainty in the power spectral density.
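The Fisher-eigenbasis proposal described above can be sketched as follows: jumps are taken along eigenvectors of the Fisher information matrix, with step sizes set by the inverse square roots of the eigenvalues (the local one-sigma scales). The 2x2 matrix below is an illustrative stand-in, not the LISA Pathfinder Fisher matrix.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed (illustrative) Fisher information matrix; symmetric positive definite.
fisher = np.array([[4.0, 1.0],
                   [1.0, 9.0]])
evals, evecs = np.linalg.eigh(fisher)    # symmetric, so eigh applies

def propose(theta, scale=1.0):
    """Jump along one random Fisher eigendirection, sized by 1/sqrt(eigenvalue)."""
    i = rng.integers(len(evals))
    step = rng.normal() * scale / np.sqrt(evals[i])
    return theta + step * evecs[:, i]

theta = np.zeros(2)
prop = propose(theta)
```

Stiff directions (large eigenvalues) thus receive small jumps and soft directions large ones, which is what raises the acceptance rate relative to naive coordinate-direction proposals.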

Luigi Ferraioli; Edward K. Porter; Eric Plagnol

2012-11-30T23:59:59.000Z

268

A BAYESIAN MONTE CARLO ANALYSIS OF THE M-{sigma} RELATION

We present an analysis of selection biases in the M{sub bh}-{sigma} relation using Monte Carlo simulations including the sphere of influence resolution selection bias and a selection bias in the velocity dispersion distribution. We find that the sphere of influence selection bias has a significant effect on the measured slope of the M{sub bh}-{sigma} relation, modeled as {beta}{sub intrinsic} = -4.69 + 2.22{beta}{sub measured}, where the measured slope is shallower than the model slope in the parameter range of {beta} > 4, with larger corrections for steeper model slopes. Therefore, when the sphere of influence is used as a criterion to exclude unreliable measurements, it also introduces a selection bias that needs to be modeled to restore the intrinsic slope of the relation. We find that the selection effect due to the velocity dispersion distribution of the sample, which might not follow the overall distribution of the population, is not important for slopes of {beta} {approx} 4-6 of a logarithmically linear M{sub bh}-{sigma} relation, which could impact some studies that measure low (e.g., {beta} < 4) slopes. Combining the selection biases in velocity dispersions and the sphere of influence cut, we find that the uncertainty of the slope is larger than the value without modeling these effects and estimate an intrinsic slope of {beta} = 5.28{sup +0.84}{sub -0.55}.
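The bias model quoted above is directly usable: given a measured slope, the modeled intrinsic slope follows from beta_intrinsic = -4.69 + 2.22 * beta_measured. The example input of 4.5 is ours, not a value from the paper.

```python
# Applying the abstract's selection-bias model to an illustrative measured slope.
def intrinsic_slope(beta_measured):
    return -4.69 + 2.22 * beta_measured

print(round(intrinsic_slope(4.5), 2))   # prints 5.3: steeper than measured, as expected for beta > 4
```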

Morabito, Leah K.; Dai Xinyu, E-mail: morabito@nhn.ou.edu, E-mail: dai@nhn.ou.edu [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, Norman, OK 73019 (United States)

2012-10-01T23:59:59.000Z

269

This paper describes the characterization of radiation doses to the hands of nuclear medicine technicians resulting from the handling of radiopharmaceuticals. Radiation monitoring using ring dosimeters indicates that finger dosimeters that are used to show compliance with applicable regulations may overestimate or underestimate radiation doses to the skin depending on the nature of the particular procedure and the radionuclide being handled. To better understand the parameters governing the absorbed dose distributions, a detailed model of the hands was created and used in Monte Carlo simulations of selected nuclear medicine procedures. Simulations of realistic configurations typical for workers handling radiopharmaceuticals were performed for a range of energies of the source photons. The lack of charged-particle equilibrium necessitated full photon-electron coupled transport calculations. The results show that the dose to different regions of the fingers can differ substantially from dosimeter readings when dosimeters are located at the base of the finger. We tried to identify consistent patterns that relate the actual dose to the dosimeter readings. These patterns depend on the specific work conditions and can be used to better assess the absorbed dose to different regions of the exposed skin.

Ilas, Dan [ORNL; Eckerman, Keith F [ORNL; Karagiannis, Harriet [ORNL

2009-01-01T23:59:59.000Z

270

A spherical Monte-Carlo model of aerosols: Validation and first applications to Mars and Titan

The atmospheres of Mars and Titan are loaded with aerosols that impact remote sensing observations of their surface. Here we present the algorithm and the first applications of a radiative transfer model in spherical geometry designed for planetary data analysis. We first describe a fast Monte-Carlo code that takes advantage of symmetries and geometric redundancies. We then apply this model to observations of the surface of Mars and Titan at the terminator as acquired by OMEGA/Mars Express and VIMS/Cassini. These observations are used to probe the vertical distribution of aerosols down to the surface. On Mars, we find the scale height of dust particles to vary between 6 km and 12 km depending on season. Temporal variations in the vertical size distribution of aerosols are also highlighted. On Titan, an aerosol scale height of 80 ± 10 km is inferred, and the total optical depth is found to decrease with wavelength as a power-law with an exponent of -2.0 ± 0.4 from a value of 2.3 ± 0.5 at 1.08 μm. On...
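The quoted power law for Titan's aerosol optical depth, tau(lambda) = 2.3 * (lambda / 1.08 μm)**-2.0, can be evaluated directly; this sketch uses the central values only and ignores the quoted uncertainties.

```python
# Evaluating the aerosol optical-depth power law at its central values.
def optical_depth(wavelength_um, tau0=2.3, lam0=1.08, exponent=-2.0):
    return tau0 * (wavelength_um / lam0) ** exponent

print(round(optical_depth(2.16), 3))   # doubling the wavelength quarters the optical depth
```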

Vincendon, Mathieu (doi:10.1016/j.icarus.2009.12.018)

2011-01-01T23:59:59.000Z

271

Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle.' Previous attempts at prescribing a maximum value of the time-step size {Delta}{sub t} that is sufficient to eliminate these violations have recommended a {Delta}{sub t} that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size {Delta}{sub x}. This explicitly demonstrates that the effect of coarsening {Delta}{sub x} is to reduce the limitation on {Delta}{sub t}, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent timestep restriction can impact IMC solution algorithms.

Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory

2010-12-15T23:59:59.000Z

272

Postimplant Dosimetry Using a Monte Carlo Dose Calculation Engine: A New Clinical Standard

Purpose: To use the Monte Carlo (MC) method as a dose calculation engine for postimplant dosimetry. To compare the results with clinically approved data for a sample of 28 patients. Two effects not taken into account by the clinical calculation, interseed attenuation and tissue composition, are being specifically investigated. Methods and Materials: An automated MC program was developed. The dose distributions were calculated for the target volume and organs at risk (OAR) for 28 patients. Additional MC techniques were developed to focus specifically on the interseed attenuation and tissue effects. Results: For the clinical target volume (CTV) D{sub 90} parameter, the mean difference between the clinical technique and the complete MC method is 10.7 Gy, with cases reaching up to 17 Gy. For all cases, the clinical technique overestimates the deposited dose in the CTV. This overestimation is mainly from a combination of two effects: the interseed attenuation (average, 6.8 Gy) and tissue composition (average, 4.1 Gy). The deposited dose in the OARs is also overestimated in the clinical calculation. Conclusions: The clinical technique systematically overestimates the deposited dose in the prostate and in the OARs. To reduce this systematic inaccuracy, the MC method should be considered in establishing a new standard for clinical postimplant dosimetry and dose-outcome studies in the near future.

Carrier, Jean-Francois [Departement de Radio-Oncologie, et Centre de Recherche du CHUM, Hopital Notre-Dame du CHUM, Montreal, Quebec (Canada) and Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de Universite Laval, CHUQ Pavillon Hotel-Dieu de Quebec, Quebec (Canada)]. E-mail: jean-francois.carrier.chum@ssss.gouv.qc.ca; D' Amours, Michel [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de Universite Laval, CHUQ Pavillon Hotel-Dieu de Quebec, Quebec (Canada); Verhaegen, Frank [Medical Physics Unit, McGill University, Montreal, Quebec (Canada); Reniers, Brigitte [Medical Physics Unit, McGill University, Montreal, Quebec (Canada); Martin, Andre-Guy [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de Universite Laval, CHUQ Pavillon Hotel-Dieu de Quebec, Quebec (Canada); Vigneault, Eric [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de Universite Laval, CHUQ Pavillon Hotel-Dieu de Quebec, Quebec (Canada); Beaulieu, Luc [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de Universite Laval, CHUQ Pavillon Hotel-Dieu de Quebec, Quebec (Canada)

2007-07-15T23:59:59.000Z

273

Purpose: A simulation of buildup factors for ordinary concrete, steel, lead, plate glass, lead glass, and gypsum wallboard in broad beam geometry for photon energies from 10 keV to 150 keV at 5 keV intervals is presented. Methods: The Monte Carlo N-particle radiation transport computer code has been used to determine the buildup factors for the studied shielding materials. Results: An example illustrating the use of the obtained buildup factor data in computing the broad beam transmission for tube potentials at 70, 100, 120, and 140 kVp is given. The half value layer, the tenth value layer, and the equilibrium tenth value layer are calculated from the broad beam transmission for these tube potentials. Conclusions: The obtained values, compared with those calculated from the published data, show the ability of these data to predict shielding transmission curves. Therefore, the buildup factor data can be combined with primary, scatter, and leakage x-ray spectra to provide a computationally based solution to broad beam transmission for barriers in shielding x-ray facilities.
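Extracting a half value layer from a broad beam transmission curve, as the abstract describes, amounts to a one-dimensional root solve. The attenuation coefficient and the simple linear buildup model below are illustrative placeholders, not the paper's Monte Carlo data.

```python
import math

mu = 0.5   # 1/cm, assumed narrow-beam attenuation coefficient

def transmission(x, alpha=0.2):
    """Broad-beam transmission: buildup factor times exponential attenuation."""
    buildup = 1.0 + alpha * mu * x     # toy linear buildup model
    return buildup * math.exp(-mu * x)

def hvl(target=0.5):
    """Bisection for the thickness where transmission drops to `target`."""
    lo, hi = 0.0, 100.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if transmission(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Thicker than the narrow-beam ln(2)/mu, because buildup adds scattered photons.
x_half = hvl()
```

The tenth value layer follows from the same solve with `target=0.1`.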

Kharrati, Hedi; Agrebi, Amel; Karoui, Mohamed Karim [Ecole Superieure des Sciences et Techniques de la Sante de Monastir, Avenue Avicenne, 5000 Monastir (Tunisia); Faculte des Sciences de Monastir, 5000 Monastir (Tunisia)

2012-10-15T23:59:59.000Z

274

MONTE CARLO SIMULATIONS OF THE PHOTOSPHERIC EMISSION IN GAMMA-RAY BURSTS

Science Conference Proceedings (OSTI)

We studied the decoupling of photons from ultra-relativistic spherically symmetric outflows expanding with constant velocity by means of Monte Carlo simulations. For outflows with finite widths we confirm the existence of two regimes: photon-thick and photon-thin, introduced recently by Ruffini et al. (RSV). The probability density function of the last scattering of photons is shown to be very different in these two cases. We also obtained spectra as well as light curves. In the photon-thick case, the time-integrated spectrum is much broader than the Planck function and its shape is well described by the fuzzy photosphere approximation introduced by RSV. In the photon-thin case, we confirm the crucial role of photon diffusion, hence the probability density of decoupling has a maximum near the diffusion radius well below the photosphere. The time-integrated spectrum of the photon-thin case has a Band shape that is produced when the outflow is optically thick and its peak is formed at the diffusion radius.

Begue, D.; Siutsou, I. A.; Vereshchagin, G. V. [University of Roma ''Sapienza'', I-00185, p.le A. Moro 5, Rome (Italy)

2013-04-20T23:59:59.000Z

275

Composition PDF/photon Monte Carlo modeling of moderately sooting turbulent jet flames

A comprehensive model for luminous turbulent flames is presented. The model features detailed chemistry, radiation and soot models and state-of-the-art closures for turbulence-chemistry interactions and turbulence-radiation interactions. A transported probability density function (PDF) method is used to capture the effects of turbulent fluctuations in composition and temperature. The PDF method is extended to include soot formation. Spectral gas and soot radiation is modeled using a (particle-based) photon Monte Carlo method coupled with the PDF method, thereby capturing both emission and absorption turbulence-radiation interactions. An important element of this work is that the gas-phase chemistry and soot models that have been thoroughly validated across a wide range of laminar flames are used in turbulent flame simulations without modification. Six turbulent jet flames are simulated with Reynolds numbers varying from 6700 to 15,000, two fuel types (pure ethylene, 90% methane-10% ethylene blend) and different oxygen concentrations in the oxidizer stream (from 21% O{sub 2} to 55% O{sub 2}). All simulations are carried out with a single set of physical and numerical parameters (model constants). Uniformly good agreement between measured and computed mean temperatures, mean soot volume fractions and (where available) radiative fluxes is found across all flames. This demonstrates that with the combination of a systematic approach and state-of-the-art physical models and numerical algorithms, it is possible to simulate a broad range of luminous turbulent flames with a single model. (author)

Mehta, R.S.; Haworth, D.C.; Modest, M.F. [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA 16802 (United States)

2010-05-15T23:59:59.000Z

276

Statistical Properties of Nuclei by the Shell Model Monte Carlo Method

We use quantum Monte Carlo methods in the framework of the interacting nuclear shell model to calculate the statistical properties of nuclei at finite temperature and/or excitation energies. With this approach we can carry out realistic calculations in much larger configuration spaces than are possible by conventional methods. A major application of the methods has been the microscopic calculation of nuclear partition functions and level densities, taking into account both correlations and shell effects. Our results for nuclei in the mass region A ~ 50 - 70 are in remarkably good agreement with experimental level densities without any adjustable parameters and are an improvement over empirical formulas. We have recently extended the shell model theory of level statistics to higher temperatures, including continuum effects. We have also constructed simple statistical models to explain the dependence of the microscopically calculated level densities on good quantum numbers such as parity. Thermal signatures of pairing correlations are identified through odd-even effects in the heat capacity.

Y. Alhassid

2006-04-26T23:59:59.000Z

277

Monte Carlo Simulation of the Conversion X-Rays from the Electron Beam of PFMA-3

PFMA-3, a dense Plasma Focus device, is being optimized as an X-ray generator. X-rays are obtained from the conversion of the electron beam emitted in the backward direction and driven to impinge on a 50 {mu}m brass foil. Monte Carlo simulations of the X-ray emission have been conducted with MCNPX. The electron spectrum had been determined experimentally and is used in the present work as input to the simulations. Dose to the brass foil has been determined both from simulations and from measurements with a thermographic camera, and the two results are found in excellent agreement, thus further validating the electron spectrum assumed as well as the simulation set-up. X-ray emission has been predicted both from bremsstrahlung and from characteristic lines. The spectrum has been found to comprise two components, of which the one at higher energy, 30-70 keV, is most useful for IORT applications. The results are necessary to estimate penetration in and dose to Standard Human Tissue.

Ceccolini, E.; Mostacci, D.; Sumini, M. [Montecuccolino Nuclear Engineering Laboratory, University of Bologna, via dei Colli 16, I-40136 Bologna (Italy); Rocchi, F. [Montecuccolino Nuclear Engineering Laboratory, University of Bologna, via dei Colli 16, I-40136 Bologna (Italy); UTFISSM-PRONOC, ENEA, via Martiri di Monte Sole 4, I-40129 Bologna (Italy); Tartari, A. [Department of Physics, University of Ferrara, Via Saragat 1, I-44122 Ferrara (Italy)

2011-12-13T23:59:59.000Z

278

A Monte Carlo Analysis of Gas Centrifuge Enrichment Plant Process Load Cell Data

Science Conference Proceedings (OSTI)

As uranium enrichment plants increase in number, capacity, and types of separative technology deployed (e.g., gas centrifuge, laser, etc.), more automated safeguards measures are needed to enable the IAEA to maintain safeguards effectiveness in a fiscally constrained environment. Monitoring load cell data can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards objective of confirming operations as declared (i.e., no undeclared activities), but care must be taken to fully protect the operator's proprietary and classified information related to operations. Staff at ORNL, LANL, JRC/ISPRA, and the University of Glasgow are investigating monitoring the process load cells at feed and withdrawal (F/W) stations to improve international safeguards at enrichment plants. A key question that must be resolved is: what is the necessary frequency of recording data from the process F/W stations? Several studies have analyzed data collected at a fixed frequency. This paper contributes to load cell process monitoring research by presenting an analysis of Monte Carlo simulations to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.
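
The sampling-frequency question lends itself to a toy simulation. The Python sketch below is an illustrative stand-in for the paper's analysis, not its model: the flow rate, refill schedule, and noise level are invented placeholders. The withdrawn total is estimated by summing only the decreases between retained samples, so a cylinder refill that falls inside a long sampling interval hides withdrawal and corrupts the balance.

```python
import random

def material_balance_error(sample_every, n_steps=5000, flow=0.2,
                           swap_period=800, refill=900.0, noise=0.01, seed=0):
    """Toy feed-station model: the cylinder mass drains at `flow` per step and
    is topped up by `refill` every `swap_period` steps.  The withdrawn total
    is estimated by summing only the decreases between retained samples, so a
    refill that falls inside a long sampling interval hides withdrawal."""
    rng = random.Random(seed)
    mass, true_withdrawn, readings = 1000.0, 0.0, []
    for t in range(n_steps):
        mass -= flow
        true_withdrawn += flow
        if t and t % swap_period == 0:
            mass += refill            # cylinder swap between measurements
        readings.append(mass + rng.gauss(0.0, noise))
    kept = readings[::sample_every]
    estimated = sum(max(a - b, 0.0) for a, b in zip(kept, kept[1:]))
    return abs(estimated - true_withdrawn)

dense_err = material_balance_error(sample_every=1)
sparse_err = material_balance_error(sample_every=500)
```

With per-step sampling the estimate recovers the withdrawn total closely, while coarse sampling loses the withdrawal that is netted against refills inside a sampling interval.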

Garner, James R [ORNL; Whitaker, J Michael [ORNL

2013-01-01T23:59:59.000Z

279

Electron energy and charge albedos - calorimetric measurement vs Monte Carlo theory

A new calorimetric method has been employed to obtain saturated electron energy albedos for Be, C, Al, Ti, Mo, Ta, U, and UO{sub 2} over the range of incident energies from 0.1 to 1.0 MeV. The technique was designed to permit the simultaneous measurement of saturated charge albedos. In the cases of C, Al, Ta, and U the measurements were extended down to about 0.025 MeV. The angle of incidence was varied from 0° (normal) to 75° in steps of 15°, with selected measurements at 82.5° in Be and C. In each case, state-of-the-art predictions were obtained from a Monte Carlo model. The generally good agreement between theory and experiment over this extensive parameter space represents a strong validation of both the theoretical model and the new experimental method. Nevertheless, certain discrepancies at low incident energies, especially in high-atomic-number materials, and at all energies in the case of the U energy albedos are not completely understood.

Lockwood, G.J.; Ruggles, L.E.; Miller, G.H.; Halbleib, J.A.

1981-11-01T23:59:59.000Z

280

The TSUNAMI computational sequences currently in the SCALE 5 code system provide an automated approach to performing sensitivity and uncertainty analysis for eigenvalue responses, using either one-dimensional discrete ordinates or three-dimensional Monte Carlo methods. This capability has recently been expanded to address eigenvalue-difference responses such as reactivity changes. This paper describes the methodology and presents results obtained for an example advanced CANDU reactor design. (authors)
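
Eigenvalue sensitivity/uncertainty analysis of this kind typically propagates nuclear-data covariances through a first-order "sandwich rule". Below is a minimal sketch of that propagation step; the sensitivity vector and covariance entries are invented for illustration, not SCALE output.

```python
def response_uncertainty(sensitivities, covariance):
    """First-order 'sandwich rule': relative variance of the response is
    S C S^T for a relative sensitivity vector S and a relative covariance
    matrix C of the underlying data."""
    var = 0.0
    for i, si in enumerate(sensitivities):
        for j, sj in enumerate(sensitivities):
            var += si * covariance[i][j] * sj
    return var ** 0.5

# Two-parameter example: 1% and 0.5% relative standard deviations, uncorrelated
S = [0.8, -0.3]
C = [[1.0e-4, 0.0],
     [0.0, 2.5e-5]]
u = response_uncertainty(S, C)  # relative uncertainty of the eigenvalue response
```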

Williams, M. L.; Gehin, J. C.; Clarno, K. T. [Oak Ridge National Laboratory, Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)

2006-07-01T23:59:59.000Z

281

Science Conference Proceedings (OSTI)

Calculations of thermodynamic properties of helium plasma by using the Reaction Ensemble Monte Carlo (REMC) method are presented. Non-ideal effects at high pressure are observed. Calculations, performed by using Exp-6 or multi-potential curves in the case of neutral-charge interactions, show that in the thermodynamic conditions considered no significant differences are observed. Results have been obtained by using a Graphics Processing Unit (GPU)-CUDA C version of REMC.

D'Angola, A.; Tuttafesta, M.; Guadagno, M.; Santangelo, P.; Laricchiuta, A.; Colonna, G.; Capitelli, M. [Scuola di Ingegneria SI, Universita della Basilicata, via dell'Ateneo Lucano, 10 - 85100 Potenza (Italy); Universita di Bari, via Orabona, 4 - 70126 Bari (Italy); CNR-IMIP Bari, via Amendola 122/D - 70126 Bari (Italy)]

2012-11-27T23:59:59.000Z

282

FPGA acceleration using high-level languages of a Monte-Carlo method for pricing complex options

Science Conference Proceedings (OSTI)

In this paper we present an FPGA implementation of a Monte-Carlo method for pricing Asian options using Impulse C and floating-point arithmetic. In an Altera Stratix-V FPGA, a 149x speedup factor was obtained against an OpenMP-based solution in a 4-core ... Keywords: Field programmable gate arrays, Financial data processing, Floating-point arithmetic, High level language synthesis, Parallel machines
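
For orientation, the kind of kernel being accelerated can be sketched in a few lines of Python: an arithmetic-average Asian call priced by plain Monte Carlo over geometric Brownian paths. Parameters and path counts below are illustrative only; the paper's Impulse C implementation is not reproduced here.

```python
import math
import random

def asian_call_mc(s0, strike, rate, vol, maturity, n_steps, n_paths, seed=42):
    """Price an arithmetic-average Asian call under geometric Brownian motion
    with plain Monte Carlo: simulate paths, average the payoff, discount."""
    rng = random.Random(seed)
    dt = maturity / n_steps
    drift = (rate - 0.5 * vol * vol) * dt
    sigma = vol * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, running = s0, 0.0
        for _ in range(n_steps):
            s *= math.exp(drift + sigma * rng.gauss(0.0, 1.0))
            running += s
        payoff_sum += max(running / n_steps - strike, 0.0)
    return math.exp(-rate * maturity) * payoff_sum / n_paths

price = asian_call_mc(100.0, 100.0, rate=0.05, vol=0.2, maturity=1.0,
                      n_steps=50, n_paths=2000)
```

The doubly nested path loop is what maps naturally onto deep FPGA pipelines.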

Diego Sanchez-Roman, Victor Moreno, Sergio Lopez-Buedo, Gustavo Sutter, Ivan Gonzalez, Francisco J. Gomez-Arribas, Javier Aracil

2013-03-01T23:59:59.000Z

283

We present an approach to the calculation of point-defect optical and thermal ionization energies based on highly accurate quantum Monte Carlo methods. The use of an inherently many-body theory that directly treats electron ...

Ertekin, Elif

284

Science Conference Proceedings (OSTI)

Purpose: A grid intensity-based dose algorithm to realize MLC irregular-inhomogeneous field modeling is presented for Monte Carlo clinical application in ARTS (Accurate Radiotherapy System). Methods: Linac modeling actually is a multi-parameter optimization process

2013-01-01T23:59:59.000Z

285

The choice of appropriate interaction models is among the major disadvantages of conventional methods such as molecular dynamics and Monte Carlo simulations. On the other hand, the so-called reverse Monte Carlo (RMC) method, based on experimental data, can be applied without any interatomic and/or intermolecular interactions. The RMC results are accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm, which introduces an energy penalty term into the acceptance criteria. This method is referred to as the hybrid reverse Monte Carlo (HRMC) method. The idea of this paper is to test the validity of a combined potential model of coulomb and Lennard-Jones in a fluoride glass system BaMnMF_{7} (M=Fe,V) using HRMC method. The results show a good agreement between experimental and calculated characteristics, as well as a meaningful improvement in partial pair distribution functions. We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or a similar system. We also suggest that HRMC could be useful as a tool for testing the interaction potential models, as well as for conventional applications.
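
The acceptance step that distinguishes HRMC from plain RMC can be sketched as follows. The cost weighting shown (a chi-square data term plus a Metropolis-style energy penalty) is an illustrative form, and the weights and inverse temperature are placeholders rather than values from the paper.

```python
import math
import random

def hrmc_accept(chi2_old, chi2_new, e_old, e_new, w_data, beta, rng):
    """HRMC acceptance: the usual RMC chi-square criterion augmented with a
    Metropolis-style energy penalty, so moves that improve the fit to the
    data but create unphysical local structure are suppressed."""
    cost_old = 0.5 * w_data * chi2_old + beta * e_old
    cost_new = 0.5 * w_data * chi2_new + beta * e_new
    if cost_new <= cost_old:
        return True
    return rng.random() < math.exp(cost_old - cost_new)

rng = random.Random(0)
# A move that improves the fit at equal energy is always accepted
accepted = hrmc_accept(10.0, 5.0, 1.0, 1.0, w_data=1.0, beta=1.0, rng=rng)
```

Setting `beta` to zero recovers plain RMC, which is where the artificial satellite peaks originate.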

S. M. Mesli; M. Habchi; M. Kotbi; H. Xu

2013-03-25T23:59:59.000Z

286

A reverse Monte Carlo (RMC) method is developed to obtain the energy loss function (ELF) and optical constants from a measured reflection electron energy-loss spectroscopy (REELS) spectrum by an iterative Monte Carlo (MC) simulation procedure. The method combines the simulated annealing method, i.e., a Markov chain Monte Carlo (MCMC) sampling of oscillator parameters, surface and bulk excitation weighting factors, and band gap energy, with a conventional MC simulation of electron interaction with solids, which acts as a single step of MCMC sampling in this RMC method. To examine the reliability of this method, we have verified that the output data of the dielectric function are essentially independent of the initial values of the trial parameters, which is a basic property of a MCMC method. The optical constants derived for SiO{sub 2} in the energy loss range of 8-90 eV are in good agreement with other available data, and relevant bulk ELFs are checked by oscillator strength-sum and perfect-screening-sum rules. Our results show that the dielectric function can be obtained by the RMC method even with a wide range of initial trial parameters. The RMC method is thus a general and effective method for determining the optical properties of solids from REELS measurements.
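
The outer loop of such an RMC fit is a simulated-annealing Markov chain over the model parameters. In the hedged sketch below, a toy Lorentzian "forward model" stands in for the full electron-transport simulation of the paper, and all step sizes, temperatures, and starting values are invented.

```python
import math
import random

def anneal(forward_model, measured, x0, step, t0=0.1, cooling=0.995,
           n_iter=3000, seed=7):
    """Simulated-annealing MCMC: propose Gaussian changes to the model
    parameters, score them by the misfit between simulated and measured
    spectra, and accept with a temperature-dependent Metropolis rule."""
    rng = random.Random(seed)

    def misfit(params):
        return sum((s - m) ** 2 for s, m in zip(forward_model(params), measured))

    x, cost, temp = list(x0), misfit(x0), t0
    for _ in range(n_iter):
        trial = [xi + rng.gauss(0.0, step) for xi in x]
        c = misfit(trial)
        # accept improvements always, uphill moves with Boltzmann probability
        if c < cost or rng.random() < math.exp((cost - c) / temp):
            x, cost = trial, c
        temp *= cooling
    return x, cost

# Toy check: recover the position/width of a single Lorentzian "oscillator"
energies = [0.5 * k for k in range(1, 60)]

def lorentzian(params):
    e0, gamma = params
    return [gamma / ((e - e0) ** 2 + gamma ** 2) for e in energies]

target = lorentzian([12.0, 2.0])          # stand-in for the measured spectrum
best, err = anneal(lorentzian, target, x0=[8.0, 4.0], step=0.3)
```

The insensitivity of the converged parameters to the starting point `x0` is the consistency check the paper uses to validate its MCMC sampling.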

Da, B.; Sun, Y.; Ding, Z. J. [Hefei National Laboratory for Physical Sciences at Microscale and Department of Physics, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026, People's Republic of China (China)]; Mao, S. F. [School of Nuclear Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026, People's Republic of China (China)]; Zhang, Z. M. [Centre of Physical Experiments, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026, People's Republic of China (China)]; Jin, H.; Yoshikawa, H.; Tanuma, S. [Advanced Surface Chemical Analysis Group, National Institute for Materials Science, 1-2-1 Sengen Tsukuba, Ibaraki 305-0047 (Japan)]

2013-06-07T23:59:59.000Z

287

Science Conference Proceedings (OSTI)

A series of lanthanide coordination polymers has been obtained through the hydrothermal reaction of N-(sulfoethyl) iminodiacetic acid (H{sub 3}SIDA) and Ln(NO{sub 3}){sub 3} (Ln=La, 1; Pr, 2; Nd, 3; Gd, 4). Crystal structure analysis shows that the lanthanide ions affect the coordination number, bond length, and dimensionality of compounds 1-4, and that their structural diversity can be attributed to the effect of lanthanide contraction. Furthermore, combining magnetic measurements with quantum Monte Carlo (QMC) studies shows that the coupling parameters between two adjacent Gd{sup 3+} ions for anti-anti and syn-anti carboxylate bridges are -1.0 × 10{sup -3} and -5.0 × 10{sup -3} cm{sup -1}, respectively, which reveals weak antiferromagnetic interaction in 4. - Graphical abstract: Four lanthanide coordination polymers with N-(sulfoethyl) iminodiacetic acid were obtained under hydrothermal conditions and reveal weak antiferromagnetic coupling between two Gd{sup 3+} ions by quantum Monte Carlo studies. Highlights: • Four lanthanide coordination polymers of the H{sub 3}SIDA ligand were obtained. • Lanthanide ions play an important role in their structural diversity. • Magnetic measurements show that compound 4 is antiferromagnetic. • Quantum Monte Carlo studies reveal the coupling parameters of two Gd{sup 3+} ions.

Zhuang Guilin, E-mail: glzhuang@zjut.edu.cn [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Chen Wulin [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Zheng Jun [Center of Modern Experimental Technology, Anhui University, Hefei 230039 (China); Yu Huiyou [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Wang Jianguo, E-mail: jgw@zjut.edu.cn [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China)

2012-08-15T23:59:59.000Z

288

Purpose: To establish an organ dose database for pediatric and adolescent reference individuals undergoing computed tomography (CT) examinations by using Monte Carlo simulation. The data will permit rapid estimates of organ and effective doses for patients of different age, gender, examination type, and CT scanner model. Methods: The Monte Carlo simulation model of a Siemens Sensation 16 CT scanner previously published was employed as a base CT scanner model. A set of absorbed doses for 33 organs/tissues normalized to the product of 100 mAs and CTDI{sub vol} (mGy/100 mAs mGy) was established by coupling the CT scanner model with age-dependent reference pediatric hybrid phantoms. A series of single axial scans from the top of head to the feet of the phantoms was performed at a slice thickness of 10 mm, and at tube potentials of 80, 100, and 120 kVp. Using the established CTDI{sub vol}- and 100 mAs-normalized dose matrix, organ doses for different pediatric phantoms undergoing head, chest, abdomen-pelvis, and chest-abdomen-pelvis (CAP) scans with the Siemens Sensation 16 scanner were estimated and analyzed. The results were then compared with the values obtained from three independent published methods: CT-Expo software, organ dose for abdominal CT scan derived empirically from patient abdominal circumference, and effective dose per dose-length product (DLP). Results: Organ and effective doses were calculated and normalized to 100 mAs and CTDI{sub vol} for different CT examinations. At the same technical setting, dose to the organs, which were entirely included in the CT beam coverage, were higher by from 40 to 80% for newborn phantoms compared to those of 15-year phantoms. An increase of tube potential from 80 to 120 kVp resulted in 2.5-2.9-fold greater brain dose for head scans. The results from this study were compared with three different published studies and/or techniques. 
First, organ doses were compared to those given by CT-Expo, which revealed dose differences up to several-fold when organs were partially included in the scan coverage. Second, selected organ doses from our calculations agreed to within 20% of values derived from empirical formulae based upon measured patient abdominal circumference. Third, the existing DLP-to-effective dose conversion coefficients tended to be smaller than values given in the present study for all examinations except head scans. Conclusions: A comprehensive organ/effective dose database was established to readily calculate doses for given patients undergoing different CT examinations. The comparisons of our results with the existing studies highlight that use of hybrid phantoms with realistic anatomy is important to improve the accuracy of CT organ dosimetry. The comprehensive pediatric dose data developed here are the first organ-specific pediatric CT scan database based on the realistic pediatric hybrid phantoms which are compliant with the reference data from the International Commission on Radiological Protection (ICRP). The organ dose database is being coupled with an adult organ dose database recently published as part of the development of a user-friendly computer program enabling rapid estimates of organ and effective doses for patients of any age, gender, examination type, and CT scanner model.
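
Given the normalization described (organ dose per 100 mAs per unit CTDI{sub vol}), a rapid dose estimate reduces to a single product. The coefficient in this sketch is a made-up placeholder, not a value from the database.

```python
def organ_dose(h, mas, ctdi_vol):
    """Database lookup reduced to arithmetic: organ dose (mGy) = h * (mAs/100)
    * CTDIvol, where h is the tabulated coefficient in mGy per 100 mAs per mGy
    of CTDIvol.  The value of h used below is a hypothetical placeholder."""
    return h * (mas / 100.0) * ctdi_vol

# e.g. a hypothetical coefficient of 0.9 at 200 mAs and CTDIvol = 10 mGy
dose_mgy = organ_dose(0.9, mas=200.0, ctdi_vol=10.0)
```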

Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel J.; Bolch, Wesley E. [Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institute of Health, Bethesda, Maryland 20852 (United States); Department of Nuclear Engineering, Kyung Hee University, Gyeonggi-do, 446906 (Korea, Republic of); J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611 (United States)

2012-04-15T23:59:59.000Z

289

Reaction mechanisms of ethanol decomposition on Rh(1 1 1) were elucidated by means of periodic density functional theory (DFT) calculations and kinetic Monte Carlo (KMC) simulations. We propose that the most probable reaction pathway is via CH{sub 3}CH{sub 2}O* on the basis of our mechanistic study: CH{sub 3}CH{sub 2}OH* {yields} CH{sub 3}CH{sub 2}O* {yields} CH{sub 2}CH{sub 2}O* {yields} CH{sub 2}CHO* {yields} CH{sub 2}CO* {yields} CHCO* {yields} CH* + CO* {yields} C* + CO*. In contrast, the contribution from the pathway via CH{sub 3}CHOH* is relatively small: CH{sub 3}CH{sub 2}OH* {yields} CH{sub 3}CHOH* {yields} CH{sub 3}CHO* {yields} CH{sub 3}CO* {yields} CH{sub 2}CO* {yields} CHCO* {yields} CH* + CO* {yields} C* + CO*. According to our calculations, one of the slow steps is the formation of the oxametallacycle CH{sub 2}CH{sub 2}O* species, which leads to the production of CHCO*, the precursor for C-C bond breaking. Finally, the decomposition of ethanol leads to the production of C and CO. Our calculations indicate that, for ethanol combustion on Rh, the major obstacle is not C-C bond cleavage but C contamination of Rh(1 1 1). The strong C-Rh interaction may deactivate the Rh catalyst. The formation of Rh alloys with Pt and Pd weakens the C-Rh interaction, easing the removal of C and, in accordance with the experimental findings, facilitating ethanol combustion.
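
The KMC part of such a study can be sketched as a Gillespie-style walk along the sequential pathway: at each step the waiting time is drawn from the current elementary rate, and one intermediate is consumed. The rates below are arbitrary placeholders, not the DFT-derived values of the paper.

```python
import math
import random

def kmc_chain(rates, t_end, seed=3):
    """Gillespie-style kinetic Monte Carlo for a linear chain of first-order
    steps (species 0 -> 1 -> ... -> n): draw an exponential waiting time from
    the current step's rate, then advance one intermediate."""
    rng = random.Random(seed)
    state, t = 0, 0.0
    while state < len(rates):
        dt = -math.log(1.0 - rng.random()) / rates[state]  # exponential draw
        if t + dt > t_end:
            break
        t += dt
        state += 1
    return state, t

# A fast first step followed by a slow, rate-limiting second step
final_state, elapsed = kmc_chain([100.0, 0.5, 50.0], t_end=10.0)
```

In such a chain, the overall conversion time is dominated by the smallest rate, which is how a slow step such as oxametallacycle formation controls the pathway.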

Liu, P.; Choi, Y.M.

2011-05-16T23:59:59.000Z

290

Science Conference Proceedings (OSTI)

Monte Carlo methods of coupled neutron/photon transport are being used in the design of filtered beams for Neutron Capture Therapy (NCT). This method of beam analysis provides segregation of each individual dose component, and thereby facilitates beam optimization. The Monte Carlo method is discussed in some detail in relation to NCT epithermal beam design. Ideal neutron beams (i.e., plane-wave monoenergetic neutron beams with no primary gamma-ray contamination) have been modeled both for comparison and to establish target conditions for a practical NCT epithermal beam design. Detailed models of the 5 MWt Massachusetts Institute of Technology Research Reactor (MITR-II) together with a polyethylene head phantom have been used to characterize approximately 100 beam filter and moderator configurations. Using the Monte Carlo methodology of beam design and benchmarking/calibrating our computations with measurements has resulted in an epithermal beam design which is useful for therapy of deep-seated brain tumors. This beam is predicted to be capable of delivering a dose of 2000 RBE-cGy (cJ/kg) to a therapeutic advantage depth of 5.7 cm in polyethylene assuming 30 micrograms/g 10B in tumor with a ten-to-one tumor-to-blood ratio, and a beam diameter of 18.4 cm. The advantage ratio (AR) is predicted to be 2.2 with a total irradiation time of approximately 80 minutes. Further optimization work on the MITR-II epithermal beams is expected to improve the available beams. 20 references.

Clement, S.D.; Choi, J.R.; Zamenhof, R.G.; Yanch, J.C.; Harling, O.K. (Massachusetts Institute of Technology, Cambridge (USA))

1990-01-01T23:59:59.000Z

291

A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System

Science Conference Proceedings (OSTI)

The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
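
The folding step described above is, at its core, a group-wise inner product of forward fluence and adjoint importance. A minimal sketch follows; the group fluences and importances are invented numbers for illustration, not MASH output.

```python
def fold_response(fluence, importance):
    """Folding step of forward/adjoint coupling: the detector response is the
    group-wise product of the coupling-surface fluence (from the forward
    discrete ordinates calculation) and the adjoint 'dose importance' (from
    the Monte Carlo shield calculation), summed over energy groups."""
    if len(fluence) != len(importance):
        raise ValueError("group structures must match")
    return sum(phi * imp for phi, imp in zip(fluence, importance))

# Three-group illustration with invented numbers
dose = fold_response([1.0e8, 5.0e7, 2.0e6], [3.0e-9, 1.2e-8, 4.0e-8])
```

Because the importance function is fixed for a given shield and detector, the fold can be repeated cheaply for many source positions and orientations.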

Johnson, J.O. [ed.]

1992-03-01T23:59:59.000Z

292

Science Conference Proceedings (OSTI)

This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 {sup 125}I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte-Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published {sup 125}I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGyh{sup -1} U{sup -1} ({+-}1.73%) and 0.965 cGyh{sup -1} U{sup -1} ({+-}1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within {+-}4%) than the MCNP4c2 results and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
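
For context, the TG-43 dose-rate equation these parameters feed into can be written out directly. The sketch uses the point-source approximation, taking the MCNP5 dose rate constant quoted in the abstract but otherwise invented radial dose and anisotropy values.

```python
def tg43_point_dose_rate(s_k, dose_rate_const, r, g_r, f_aniso, r0=1.0):
    """AAPM TG-43 dose-rate equation in the point-source approximation, where
    the geometry-function ratio reduces to (r0/r)**2:
        D(r) = S_K * Lambda * (r0/r)**2 * g(r) * F(r, theta)
    Units: S_K in U, Lambda in cGy/(h U), r in cm, result in cGy/h."""
    return s_k * dose_rate_const * (r0 / r) ** 2 * g_r * f_aniso

# S_K = 1 U, the abstract's MCNP5 Lambda, and invented g(r) and F values
rate = tg43_point_dose_rate(1.0, 0.965, r=2.0, g_r=0.60, f_aniso=0.95)
```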

Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)

2012-08-15T23:59:59.000Z

293

Science Conference Proceedings (OSTI)

A Monte Carlo model of electron thermalization in inorganic scintillators, which was developed and applied to CsI in a previous publication [Wang et al., J. Appl. Phys. 110, 064903 (2011)], is extended to another material of the alkali halide class, NaI, and to two materials from the alkaline-earth halide class, CaF2 and BaF2. This model includes electron scattering with both longitudinal optical (LO) and acoustic phonons as well as the effects of internal electric fields. For the four pure materials, a significant fraction of the electrons recombine with self-trapped holes and the thermalization distance distributions of the electrons that do not recombine peak between approximately 25 and 50 nm and extend up to a few hundreds of nanometers. The thermalization time distributions of CaF2, BaF2, NaI, and CsI extend to approximately 0.5, 1, 2, and 7 ps, respectively. The simulations show that the LO phonon energy is a key factor that affects the electron thermalization process. Indeed, the higher the LO phonon energy is, the shorter the thermalization time and distance are. The thermalization time and distance distributions show no dependence on the incident {gamma}-ray energy. The four materials also show different extents of electron-hole pair recombination due mostly to differences in their electron mean free paths (MFPs), LO phonon energies, initial densities of electron-hole pairs, and static dielectric constants. The effect of thallium doping is also investigated for CsI and NaI as these materials are often doped with activators. Comparison between CsI and NaI shows that both the larger size of Cs+ relative to Na+, i.e., the greater atomic density of NaI, and the longer electron mean free path in NaI compared to CsI contribute to an increased probability for electron trapping at Tl sites in NaI versus CsI.

Wang, Zhiguo; Xie, YuLong; Campbell, Luke W.; Gao, Fei; Kerisit, Sebastien N.

2012-07-01T23:59:59.000Z

294

A Monte Carlo model of electron thermalization in inorganic scintillators, which was developed and applied to CsI in a previous publication [Wang et al., J. Appl. Phys. 110, 064903 (2011)], is extended to another material of the alkali halide class, NaI, and to two materials from the alkaline-earth halide class, CaF{sub 2} and BaF{sub 2}. This model includes electron scattering with both longitudinal optical (LO) and acoustic phonons as well as the effects of internal electric fields. For the four pure materials, a significant fraction of the electrons recombine with self-trapped holes and the thermalization distance distributions of the electrons that do not recombine peak between approximately 25 and 50 nm and extend up to a few hundreds of nanometers. The thermalization time distributions of CaF{sub 2}, BaF{sub 2}, NaI, and CsI extend to approximately 0.5, 1, 2, and 7 ps, respectively. The simulations show that the LO phonon energy is a key factor that affects the electron thermalization process. Indeed, the higher the LO phonon energy is, the shorter the thermalization time and distance are. The thermalization time and distance distributions show no dependence on the incident {gamma}-ray energy. The four materials also show different extents of electron-hole pair recombination due mostly to differences in their electron mean free paths (MFPs), LO phonon energies, initial densities of electron-hole pairs, and static dielectric constants. The effect of thallium doping is also investigated for CsI and NaI as these materials are often doped with activators. Comparison between CsI and NaI shows that both the larger size of Cs{sup +} relative to Na{sup +}, i.e., the greater atomic density of NaI, and the longer electron mean free path in NaI compared to CsI contribute to an increased probability for electron trapping at Tl sites in NaI versus CsI.

Wang Zhiguo; Gao Fei; Kerisit, Sebastien [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Xie Yulong [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Campbell, Luke W. [National Security Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States)

2012-07-01T23:59:59.000Z

295

Creation of a GUI for Zori, a Quantum Monte Carlo program, using Rappture

In their research laboratories, academic institutions produce some of the most advanced software for scientific applications. However, this software is usually developed only for local application in the research laboratory or for method development. In spite of having the latest advances in the particular field of science, such software often lacks adequate documentation and therefore is difficult to use by anyone other than the code developers. As such codes become more complex, so typically do the input files and command statements necessary to operate them. Many programs offer the flexibility of performing calculations based on different methods that have their own set of variables and options to be specified. Moreover, situations can arise in which certain options are incompatible with each other. For this reason, users outside the development group can be unaware of how the program runs in detail. The opportunity can be lost to make the software readily available outside of the laboratory of origin. This is a long-standing problem in scientific programming. Rappture, Rapid Application Infrastructure [1], is a new GUI development kit that enables a developer to build an I/O interface for a specific application. This capability enables users to work only with the generated GUI and avoids the problem of the user needing to learn details of the code. Further, it reduces input errors by explicitly specifying the variables required. Zori, a quantum Monte Carlo (QMC) program, developed by the Lester group at the University of California, Berkeley [2], is one of the few free tools available for this field. Like many scientific computer packages, Zori suffers from the problems described above. Potential users outside the research group have acquired it, but some have found the code difficult to use. Furthermore, new members of the Lester group usually have to take considerable time learning all the options the code has to offer before they can use it successfully. 
In this paper we describe the use of the Rappture toolkit to generate a GUI, labeled Zopi (Zori Processing Interface), for the Zori computer code.

Olivares-Amaya, R.; Salomon Ferrer, R.; Lester Jr., W.A.; Amador-Bedolla, C.

2007-12-01T23:59:59.000Z

296

Monte Carlo simulation of the effect of miniphantom on in-air output ratio

Science Conference Proceedings (OSTI)

Purpose: The aim of the study was to quantify the effect of miniphantoms on in-air output ratio measurements, i.e., to determine correction factors for in-air output ratio. Methods: Monte Carlo (MC) simulations were performed to simulate in-air output ratio measurements by using miniphantoms made of various materials (PMMA, graphite, copper, brass, and lead) and with different longitudinal thicknesses or depths (2-30 g/cm{sup 2}) in photon beams of 6 and 15 MV, respectively, and with collimator settings ranging from 3x3 to 40x40 cm{sup 2}. EGSnrc and BEAMnrc (2007) software packages were used. Photon energy spectra corresponding to the collimator settings were obtained from BEAMnrc code simulations on a linear accelerator and were used to quantify the components of in-air output ratio correction factors, i.e., attenuation, mass energy absorption, and phantom scatter correction factors. In-air output ratio correction factors as functions of miniphantom material, miniphantom longitudinal thickness, and collimator setting were calculated and compared to a previous experimental study. Results: The in-air output ratio correction factors increase with collimator opening and miniphantom longitudinal thickness for all the materials and for both energies. At small longitudinal thicknesses, the in-air output ratio correction factors for PMMA and graphite are close to 1. The maximum magnitudes of the in-air output ratio correction factors occur at the largest collimator setting (40x40 cm{sup 2}) and the largest miniphantom longitudinal thickness (30 g/cm{sup 2}): 1.008{+-}0.001 for 6 MV and 1.012{+-}0.001 for 15 MV, respectively. The MC simulations of the in-air output ratio correction factor confirm the previous experimental study. Conclusions: The study has verified that a correction factor for in-air output ratio can be obtained as a product of attenuation correction factor, mass energy absorption correction factor, and phantom scatter correction factor. 
The correction factors obtained in the present study can be used in studies involving in-air output ratio measurements using miniphantoms.
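The verified factorization, a correction factor expressed as the product of attenuation, mass energy absorption, and phantom scatter factors, can be sketched with a toy calculation. The component values below are hypothetical, not taken from the study.

```python
# Toy illustration of the verified factorization: the in-air output ratio
# correction factor as a product of attenuation, mass energy absorption,
# and phantom scatter correction factors. Component values are hypothetical.

def output_ratio_correction(c_att, c_men, c_scat):
    """Multiply the three component correction factors."""
    return c_att * c_men * c_scat

# Hypothetical components for a large field and a thick miniphantom:
print(round(output_ratio_correction(1.005, 1.002, 1.001), 4))  # -> 1.008
```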

Li Jun; Zhu, Timothy C. [Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania 19107 (United States); Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

2010-10-15T23:59:59.000Z

297

A novel approach in electron beam radiation therapy of lips carcinoma: A Monte Carlo study

Purpose: Squamous cell carcinoma (SCC) is commonly treated by electron beam radiotherapy (EBRT) followed by a boost via brachytherapy. Considering the limitations associated with brachytherapy, in this study a novel boosting technique in EBRT of lip carcinoma using an internal shield as an internal dose enhancer tool (IDET) was evaluated. An IDET refers to a partially covered internal shield located behind the lip. It was intended to show that while the backscattered electrons are absorbed in the portion covered with a low-atomic-number material, they enhance the target dose in the uncovered area. Methods: Monte Carlo models of 6 and 8 MeV electron beams were developed using the BEAMnrc code and were validated against experimental measurements. Using the developed models, dose distributions in a lip phantom were calculated and the effect of an IDET on target dose enhancement was evaluated. Typical lip thicknesses of 1.5 and 2.0 cm were considered. A 5 x 5 cm{sup 2} sheet of lead covered by 0.5 cm of polystyrene was used as an internal shield, while a 4 x 4 cm{sup 2} uncovered area of the shield was used as the dose enhancer. Results: Using the IDET, the maximum dose enhancement as a percentage of dose at d{sub max} of the unshielded field was 157.6% and 136.1% for the 6 and 8 MeV beams, respectively. The best outcome was achieved for a lip thickness of 1.5 cm and a target thickness of less than 0.8 cm. For lateral dose coverage of the planning target volume, the 80% isodose curve at the lip-IDET interface showed a 1.2 cm expansion compared to the unshielded field. Conclusions: This study showed that a concomitant boost during EBRT of the lip is possible by modifying an internal shield into an IDET. This boosting method is especially applicable to cases in which brachytherapy faces limitations, such as small lip thicknesses and targets located at the buccal surface of the lip.

Shokrani, Parvaneh [Medical Physics and Medical Engineering Department, School of Medicine, Isfahan University of Medical Sciences, Isfahan 81746-73461 (Iran, Islamic Republic of); Baradaran-Ghahfarokhi, Milad [Medical Physics and Medical Engineering Department, School of Medicine, Isfahan University of Medical Sciences, Isfahan 81746-73461, Iran and Medical Radiation Engineering Department, Faculty of Advanced Sciences and Technologies, Isfahan University, Isfahan 81746-73441 (Iran, Islamic Republic of); Zadeh, Maryam Khorami [Medical Physics Department, School of Medicine, Ahwaz Jundishapour University of Medical Sciences, Ahwaz 15794-61357 (Iran, Islamic Republic of)

2013-04-15T23:59:59.000Z

298

Science Conference Proceedings (OSTI)

We present results on phonon quasidiffusion and Transition Edge Sensor (TES) studies in a large, 3-inch diameter, 1-inch thick [100] high-purity germanium crystal, cooled to 50 mK in the vacuum of a dilution refrigerator and exposed to 59.5 keV gamma-rays from an Am-241 calibration source. We compare calibration data with results from a Monte Carlo simulation which includes phonon quasidiffusion and the generation of phonons created by charge carriers as they are drifted across the detector by ionization readout channels. The phonon energy is then parsed into TES-based phonon readout channels and input into a TES simulator.

Leman, S.W.; McCarthy, K.A.; /MIT, MKI; Brink, P.L.; Cabrera, B.; Cherry, M.; /Stanford U., Phys. Dept.; Silva, E.Do Couto E; /SLAC; Figueroa-Feliciano, E.; /MIT, MKI; Kim, P.; /SLAC; Mirabolfathi, N.; /UC, Berkeley; Pyle, M.; /Stanford U., Phys. Dept.; Resch, R.; /SLAC; Sadoulet, B.; Serfass, B.; Sundqvist, K.M.; /UC, Berkeley; Tomada, A.; /Stanford U., Phys. Dept.; Young, B.A.; /Santa Clara U.

2012-06-05T23:59:59.000Z

299

The behavior of hydrogen isotopes implanted into tungsten containing vacancies was simulated using a Monte Carlo technique. The correlations between the distribution of implanted deuterium and the fluence, trap density, and trap distribution were evaluated. The present study yielded qualitatively understandable results throughout. To improve the precision of the model and obtain quantitatively reliable results, it is necessary to address the following subjects: (1) how to balance long-time irradiation processes with a rapid diffusion process, (2) how to prevent unrealistic accumulation of hydrogen, and (3) how to model the release of hydrogen forcibly loaded into a region where hydrogen already exists densely.
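The trap-limited transport described above can be sketched with a toy model: a 1-D random walk with invented trap sites and a detrapping probability. This is far simpler than the actual simulation; the lattice size, trap positions, and `p_detrap` value are all assumptions for illustration.

```python
import random

# Minimal sketch (not the authors' model): 1-D random walk of an implanted
# hydrogen atom on a lattice containing trap sites. A trapped atom escapes
# on a given step only with probability p_detrap, mimicking trap-limited
# diffusion. All parameters below are invented for illustration.

def walk_with_traps(n_sites=100, traps=frozenset({30, 60}), p_detrap=0.05,
                    n_steps=10_000, seed=1):
    rng = random.Random(seed)
    pos, trapped_steps = 0, 0
    for _ in range(n_steps):
        if pos in traps and rng.random() > p_detrap:
            trapped_steps += 1           # atom stays trapped this step
            continue
        # Unbiased hop, reflecting at the lattice boundaries:
        pos = min(max(pos + rng.choice((-1, 1)), 0), n_sites - 1)
    return pos, trapped_steps

final_pos, held = walk_with_traps()
print(final_pos, held)
```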

T. Oda; M. Shimada; K. Zhang; P. Calderoni; Y. Oya; M. Sokolov; R. Kolasinski

2011-11-01T23:59:59.000Z

300

We present a simple and powerful method for extrapolating finite-volume Monte Carlo data to infinite volume, based on finite-size-scaling theory. We discuss carefully its systematic and statistical errors, and we illustrate it using three examples: the two-dimensional three-state Potts antiferromagnet on the square lattice, and the two-dimensional {ital O}(3) and {ital O}({infinity}) {sigma} models. In favorable cases it is possible to obtain reliable extrapolations (errors of a few percent) even when the correlation length is 1000 times larger than the lattice.
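As a generic illustration of infinite-volume extrapolation, the sketch below fits a leading-order 1/L correction to synthetic finite-volume data. This is a simple ansatz chosen for illustration, not the finite-size-scaling extrapolation developed in the paper.

```python
# Toy illustration only: extrapolate finite-volume data O(L) to L -> infinity
# by fitting the ansatz O(L) = O_inf + a / L with least squares. This generic
# 1/L fit is NOT the finite-size-scaling method of the paper.

def extrapolate(Ls, Os):
    # Least-squares fit of O = O_inf + a * (1/L) via the normal equations.
    xs = [1.0 / L for L in Ls]
    n = len(xs)
    sx, sy = sum(xs), sum(Os)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, Os))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    O_inf = (sy - a * sx) / n
    return O_inf, a

# Synthetic data generated from O(L) = 2.0 + 3.0 / L:
Ls = [8, 16, 32, 64]
Os = [2.0 + 3.0 / L for L in Ls]
O_inf, a = extrapolate(Ls, Os)
print(round(O_inf, 6), round(a, 6))  # -> 2.0 3.0
```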

Caracciolo, S. [Dipartimento di Fisica, Universita di Lecce and INFN-Sezione di Lecce, I-73100 Lecce (Italy)] [Dipartimento di Fisica, Universita di Lecce and INFN-Sezione di Lecce, I-73100 Lecce (Italy); Edwards, R.G. [Supercomputer Computations Research Institute, Florida State University, Tallahassee, Florida 32306 (United States)] [Supercomputer Computations Research Institute, Florida State University, Tallahassee, Florida 32306 (United States); Ferreira, S.J. [Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Caixa Postal 702, Belo Horizonte, Minas Gerais 30161 (Brazil)] [Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Caixa Postal 702, Belo Horizonte, Minas Gerais 30161 (Brazil); Pelissetto, A.; Sokal, A.D. [Department of Physics, New York University, 4 Washington Place, New York, New York 10003 (United States)] [Department of Physics, New York University, 4 Washington Place, New York, New York 10003 (United States)

1995-04-10T23:59:59.000Z


301

For the first time, we report a unified microscopic-macroscopic Monte Carlo simulation of gas-grain chemistry in cold interstellar clouds in which both the gas-phase and the grain-surface chemistry are simulated by a stochastic technique. The surface chemistry is simulated with a microscopic Monte Carlo method in which the chemistry occurs on an initially flat surface. The surface chemical network consists of 29 reactions initiated by the accreting species H, O, C, and CO. Four different models are run with diverse but homogeneous physical conditions including temperature, gas density, and diffusion-barrier-to-desorption energy ratio. As time increases, icy interstellar mantles begin to grow. Our approach allows us to determine the morphology of the ice, layer by layer, as a function of time, and to ascertain the environment or environments for individual molecules. Our calculated abundances can be compared with observations of ices and gas-phase species, as well as the results of other models.
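The stochastic simulation technique used for the gas phase can be illustrated with a minimal Gillespie-style sketch on a hypothetical two-process network, accretion of H at a constant rate competing with H + H recombination. The rates and the network are invented for illustration and are far simpler than the paper's 29-reaction surface chemistry.

```python
import random

# Minimal Gillespie stochastic-simulation sketch on an invented toy network
# (NOT the paper's 29-reaction surface chemistry): H accretes at constant
# rate k_acc while pairs of H atoms recombine to H2 at rate k_rec.

def gillespie(n_H=0, k_acc=1.0, k_rec=0.01, t_end=50.0, seed=2):
    rng = random.Random(seed)
    t, n_H2 = 0.0, 0
    while t < t_end:
        a_acc = k_acc                      # propensity of H accretion
        a_rec = k_rec * n_H * (n_H - 1)    # propensity of H + H -> H2
        a_tot = a_acc + a_rec
        t += rng.expovariate(a_tot)        # exponential wait to next event
        if rng.random() * a_tot < a_acc:
            n_H += 1                       # accrete one H atom
        else:
            n_H -= 2                       # two H atoms form one H2
            n_H2 += 1
    return n_H, n_H2

print(gillespie())
```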

Chang Qiang [Department of Chemistry, University of Virginia, Charlottesville, VA 22904 (United States); Herbst, Eric [Also at Departments of Astronomy and Physics, University of Virginia, Charlottesville, VA 22904, USA. (United States)

2012-11-10T23:59:59.000Z

302

Science Conference Proceedings (OSTI)

Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the results as the simulation is running.

O'Brien, M J; Procassini, R J; Joy, K I

2009-03-09T23:59:59.000Z

303

This review discusses detector physics and Monte Carlo techniques for cryogenic radiation detectors that utilize combined phonon and ionization readout. A general review of cryogenic phonon and charge transport is provided along with specific details of the Cryogenic Dark Matter Search detector instrumentation. In particular, this review covers quasidiffusive phonon transport, which includes phonon focusing, anharmonic decay, and isotope scattering. The interaction of phonons at the detector surface is discussed along with the downconversion of phonons in superconducting films. The charge transport physics includes a mass tensor which results from the crystal band structure and is modeled with a Herring-Vogt transformation. Charge scattering processes involve the creation of Neganov-Luke phonons. Transition-edge-sensor (TES) simulations include a full electric circuit description and all thermal processes, including Joule heating, cooling to the substrate, and thermal diffusion within the TES, the latter of which is necessary to model normal-superconducting phase separation. Relevant numerical constants are provided for these physical processes in germanium, silicon, aluminum, and tungsten. Random number sampling methods, including inverse cumulative distribution function (CDF) and rejection techniques, are reviewed. To improve the efficiency of charge transport modeling, an additional second-order inverse CDF method is developed here along with an efficient barycentric-coordinate sampling method of electric fields. Results are provided in a manner that is convenient for use in Monte Carlo, and references are provided for validation of these models.
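The two random-number sampling methods the review names, inverse CDF and rejection, can be sketched for the exponential distribution. This is a minimal generic illustration, not code from the review; the truncation bound in the rejection sampler is an assumption.

```python
import math, random

# Sketch of the two sampling methods named in the review, applied to the
# exponential density p(x) = exp(-x) on [0, inf).

def inverse_cdf_sample(rng):
    # Inverse CDF: solve u = 1 - exp(-x) for x.
    return -math.log(1.0 - rng.random())

def rejection_sample(rng, x_max=10.0):
    # Rejection: propose uniformly on [0, x_max] and accept with
    # probability exp(-x); x_max truncates a negligible tail (~5e-5).
    while True:
        x = rng.random() * x_max
        if rng.random() <= math.exp(-x):
            return x

rng = random.Random(3)
n = 50_000
mean_inv = sum(inverse_cdf_sample(rng) for _ in range(n)) / n
mean_rej = sum(rejection_sample(rng) for _ in range(n)) / n
print(round(mean_inv, 2), round(mean_rej, 2))  # both near E[x] = 1
```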

S. W. Leman

2011-09-06T23:59:59.000Z

304

Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with system size and their efficient parallelization, particularly suited to modern high-performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial both for the accurate description of molecular properties and for the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely: the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice-regularized diffusion Monte Carlo calculations are also reported. Throu...
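The variational Monte Carlo technique itself can be sketched on the textbook hydrogen-atom problem with the trial wavefunction psi = exp(-alpha r). This is a standard illustration, not the wave functions or the water-molecule calculations of the paper; for this trial function the local energy is E_L = -alpha^2/2 + (alpha - 1)/r, so alpha = 1 recovers the exact ground-state energy of -0.5 hartree with zero variance.

```python
import math, random

# Minimal variational Monte Carlo sketch (textbook hydrogen atom, not the
# paper's ansatz): Metropolis sampling of |psi|^2 for psi = exp(-alpha*r),
# averaging the local energy E_L = -alpha**2/2 + (alpha - 1)/r.

def vmc_energy(alpha, n_steps=20_000, step=0.5, seed=4):
    rng = random.Random(seed)
    pos = [1.0, 0.0, 0.0]
    r = 1.0
    e_sum, n_kept = 0.0, 0
    for i in range(n_steps):
        trial = [c + step * (rng.random() - 0.5) for c in pos]
        r_new = math.sqrt(sum(c * c for c in trial))
        # Metropolis acceptance on |psi|^2 = exp(-2*alpha*r):
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r)):
            pos, r = trial, r_new
        if i >= n_steps // 10:          # discard burn-in, then accumulate
            e_sum += -0.5 * alpha ** 2 + (alpha - 1.0) / r
            n_kept += 1
    return e_sum / n_kept

print(round(vmc_energy(1.0), 3))  # -> -0.5 (exact for alpha = 1)
```

The zero-variance property at alpha = 1 makes this a convenient self-check: the estimator returns exactly -0.5 independent of the random walk.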

Zen, Andrea; Sorella, Sandro; Guidoni, Leonardo

2013-01-01T23:59:59.000Z

305

Science Conference Proceedings (OSTI)

Purpose: The objective of this work is to assess the sensitivity of Monte Carlo (MC) dose calculations to uncertainties in human tissue composition for a range of low photon energy brachytherapy sources: {sup 125}I, {sup 103}Pd, {sup 131}Cs, and an electronic brachytherapy source (EBS). The low energy photons emitted by these sources make the dosimetry sensitive to variations in tissue atomic number due to the dominance of the photoelectric effect. This work reports dose to a small mass of water in medium D{sub w,m}, as opposed to dose to a small mass of medium in medium D{sub m,m}. Methods: Mean adipose, mammary gland, and breast tissues (as a uniform mixture of the aforementioned tissues) are investigated, as well as compositions corresponding to one standard deviation from the mean. Mean prostate compositions from three different literature sources are also investigated. Three sets of MC simulations are performed with the GEANT4 code: (1) Dose calculations for idealized TG-43-like spherical geometries using point sources. Radial dose profiles obtained in different media are compared to assess the influence of compositional uncertainties. (2) Dose calculations for four clinical prostate LDR brachytherapy permanent seed implants using {sup 125}I seeds (Model 2301, Best Medical, Springfield, VA). The effect of varying the prostate composition in the planning target volume (PTV) is investigated by comparing PTV D{sub 90} values. (3) Dose calculations for four clinical breast LDR brachytherapy permanent seed implants using {sup 103}Pd seeds (Model 2335, Best Medical). The effects of varying the adipose/gland ratio in the PTV and of varying the elemental composition of adipose and gland within one standard deviation of the assumed mean composition are investigated by comparing PTV D{sub 90} values. For (2) and (3), the influence of using the mass density from CT scans instead of unit mass density is also assessed.
Results: Results from simulation (1) show that variations in the mean compositions of tissues affect low energy brachytherapy dosimetry. Dose differences between the mean composition and one standard deviation from the mean are observed to increase with distance from the source. It is established that the {sup 125}I and {sup 131}Cs sources are the least sensitive to variations in elemental composition, while {sup 103}Pd is the most sensitive. The EBS falls in between and exhibits complex behavior due to significant spectral hardening. Results from simulation (2) show that two prostate compositions are dosimetrically equivalent to water, while the third shows D{sub 90} differences of up to 4%. Results from simulation (3) show that breast is more sensitive than prostate, with dose variations of up to 30% from water for 70% adipose/30% gland breast. The variability of the breast composition adds a {+-}10% dose variation. Conclusions: Low energy brachytherapy dose distributions in tissue differ from water and are influenced by density, mean tissue composition, and patient-to-patient composition variations. The results support the use of a dose calculation algorithm accounting for heterogeneities, such as MC. Since this work shows that variations in mean tissue compositions affect MC dosimetry and result in increased dose uncertainties, the authors conclude that imaging tools providing more accurate estimates of elemental compositions, such as dual energy CT, would be beneficial.

Landry, Guillaume; Reniers, Brigitte; Murrer, Lars; Lutgens, Ludy; Bloemen-Van Gurp, Esther; Pignol, Jean-Philippe; Keller, Brian; Beaulieu, Luc; Verhaegen, Frank [Department of Radiation Oncology (MAASTRO), GROW-School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario M4N 3M5 (Canada); Departement de Radio-Oncologie et Centre de Recherche en Cancerologie, de l'Universite Laval, CHUQ, Pavillon L'Hotel-Dieu de Quebec, Quebec G1R 2J6 (Canada) and Departement de Physique, de Genie Physique et d'Optique, Universite Laval, Quebec G1K 7P4 (Canada); Department of Radiation Oncology (MAASTRO), GROW-School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands) and Medical Physics Unit, McGill University, Montreal General Hospital, Montreal, Quebec H3G 1A4 (Canada)

2010-10-15T23:59:59.000Z

306

Photon energy-modulated radiotherapy: Monte Carlo simulation and treatment planning study

Purpose: To demonstrate the feasibility of photon energy-modulated radiotherapy during beam-on time. Methods: A cylindrical device made of aluminum was conceptually proposed as an energy modulator. The frame of the device was connected with 20 tubes through which mercury could be injected or drained to adjust the thickness of mercury along the beam axis. In Monte Carlo (MC) simulations, the flattening filter of a 6 or 10 MV linac was replaced with the device. The thickness of mercury inside the device varied from 0 to 40 mm at field sizes of 5 x 5 cm{sup 2} (FS5), 10 x 10 cm{sup 2} (FS10), and 20 x 20 cm{sup 2} (FS20). At least 5 billion histories were followed for each simulation to create phase space files at 100 cm source-to-surface distance (SSD). In-water beam data were acquired by additional MC simulations using the above phase space files. A treatment planning system (TPS) was commissioned to generate a virtual machine using the MC-generated beam data. Intensity-modulated radiation therapy (IMRT) plans for six clinical cases were generated using conventional 6 MV, 6 MV flattening-filter-free, and energy-modulated photon beams of the virtual machine. Results: As the thickness of mercury increased, the percentage depth doses (PDD) of the modulated 6 and 10 MV beams beyond the depth of dose maximum increased continuously. The PDD increase at depths of 10 and 20 cm for modulated 6 MV was 4.8% and 5.2% at FS5, 3.9% and 5.0% at FS10, and 3.2%-4.9% at FS20 as the thickness of mercury increased from 0 to 20 mm. The corresponding increases for modulated 10 MV were 4.5% and 5.0% at FS5, 3.8% and 4.7% at FS10, and 4.1% and 4.8% at FS20 as the thickness of mercury increased from 0 to 25 mm. The outputs of modulated 6 MV with 20 mm mercury and of modulated 10 MV with 25 mm mercury were reduced to 30% and 56% of the conventional linac output, respectively.
The energy-modulated IMRT plans delivered lower integral doses than the 6 MV IMRT or 6 MV flattening-filter-free plans for tumors located in the periphery, while maintaining similar quality of target coverage, homogeneity, and conformity. Conclusions: The MC study of the designed energy modulator demonstrated the feasibility of energy-modulated photon beams available during beam-on time. The planning study showed an advantage of energy- and intensity-modulated radiotherapy in terms of integral dose without sacrificing IMRT plan quality.

Park, Jong Min; Kim, Jung-in; Heon Choi, Chang; Chie, Eui Kyu; Kim, Il Han; Ye, Sung-Joon [Interdiciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, 110-744, Korea and Department of Radiation Oncology, Seoul National University Hospital, Seoul, 110-744 (Korea, Republic of); Interdiciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, 110-744 (Korea, Republic of); Department of Radiation Oncology, Seoul National University Hospital, Seoul, 110-744 (Korea, Republic of); Interdiciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, 110-744 (Korea, Republic of) and Department of Radiation Oncology, Seoul National University Hospital, Seoul, 110-744 (Korea, Republic of); Interdiciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, 110-744 (Korea, Republic of); Department of Radiation Oncology, Seoul National University Hospital, Seoul, 110-744 (Korea, Republic of) and Department of Intelligent Convergence Systems, Seoul National University, Seoul, 151-742 (Korea, Republic of)

2012-03-15T23:59:59.000Z

307

Science Conference Proceedings (OSTI)

Purpose: To demonstrate potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, {Delta}D, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 {sup 125}I seeds. The breast case consisted of 87 Model-200 {sup 103}Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D{sub 90}, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 x 1 x 1 mm{sup 3} dose grid, efficiency gains were realized in all structures with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. 
Additionally, it was shown that efficiency losses were confined to low-dose regions, while the largest gains were located where little difference exists between the homogeneous and heterogeneous doses. On an AMD 1090T processor, computing times of 38 and 21 sec were required to achieve an average statistical uncertainty of 2% within the prostate (1 x 1 x 1 mm{sup 3}) and breast (0.67 x 0.67 x 0.8 mm{sup 3}) CTVs, respectively. Conclusions: CMC supports an additional 38-60 fold improvement in average efficiency relative to conventional uncorrelated MC techniques, although some voxels experience no gain or even efficiency losses. However, for the two investigated case studies, the maximum variance within clinically significant structures was always reduced (on average by a factor of 6) in the therapeutic dose range. CMC takes only seconds to produce an accurate, high-resolution, low-uncertainty dose distribution for the low-energy PSB implants investigated in this study.
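The core idea of correlated sampling, tallying the difference between highly correlated histories rather than differencing two independent runs, can be shown with a toy pair of integrands. This is an illustration of the variance-reduction principle only, not the PTRAN implementation; the two exponential integrands are invented stand-ins for the homogeneous and heterogeneous geometries.

```python
import math, random

# Toy sketch of correlated sampling: estimate the *difference* between two
# similar integrals on [0, 1] by reusing the same random samples for both
# integrands ("correlated histories") versus sampling them independently.
# The integrands are invented stand-ins, not dose kernels.

def f_hom(x):
    return math.exp(-x)            # "homogeneous" toy integrand

def f_het(x):
    return math.exp(-1.02 * x)     # slightly perturbed "heterogeneous" one

def diff_correlated(n, rng):
    return sum(f_het(x) - f_hom(x) for x in (rng.random() for _ in range(n))) / n

def diff_uncorrelated(n, rng):
    a = sum(f_het(rng.random()) for _ in range(n)) / n
    b = sum(f_hom(rng.random()) for _ in range(n)) / n
    return a - b

def spread(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

rng = random.Random(5)
corr = [diff_correlated(1000, rng) for _ in range(200)]
unco = [diff_uncorrelated(1000, rng) for _ in range(200)]
print(spread(corr) < spread(unco))  # -> True: shared samples cut the variance
```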

Sampson, Andrew; Le Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

2012-02-15T23:59:59.000Z

308

Numerical Issues of Monte Carlo PDF for Large Eddy Simulations of Turbulent Flames

... Carlo PDF Methods for Turbulent Diffusion Flames," Combust. Flame 124:519-534 (2001). Muradoglu, M., Jenny, P., Pope, "Methane-Air Nonpremixed Jet Flames," Combustion Science and ...

Bisetti, Fabrizio; Chen, J Y

2005-01-01T23:59:59.000Z

309

A Monte-Carlo Approach for Full-Ahead Stochastic DAG Scheduling Department of Computer Science

... Monte-Carlo methods; I. INTRODUCTION: As heterogeneous distributed computing systems (e.g., clusters, Grids, Clouds, etc.) ... modelled by DAGs [1]. In a DAG, nodes denote tasks and edges represent data transmission among tasks. Given a set of resources, a schedule for a DAG is an assignment which specifies the mapping of tasks ...
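A Monte-Carlo treatment of a stochastic DAG schedule can be sketched as follows: with random task durations, the makespan is itself a random variable, estimated by repeated sampling over a topological order. The four-task DAG and the uniform duration bounds below are invented for illustration, not the paper's benchmark or algorithm.

```python
import random

# Toy Monte-Carlo makespan estimation for a stochastic DAG (invented
# example, not the paper's method). preds maps each task to its
# predecessors; duration gives (lo, hi) bounds of a uniform distribution.
preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
duration = {"a": (1, 2), "b": (2, 4), "c": (1, 5), "d": (1, 1)}
topo_order = ["a", "b", "c", "d"]   # precomputed topological order

def sample_makespan(rng):
    finish = {}
    for task in topo_order:
        # A task starts once all predecessors have finished:
        start = max((finish[p] for p in preds[task]), default=0.0)
        lo, hi = duration[task]
        finish[task] = start + rng.uniform(lo, hi)
    return max(finish.values())

rng = random.Random(6)
samples = [sample_makespan(rng) for _ in range(10_000)]
print(round(sum(samples) / len(samples), 1))  # mean makespan estimate
```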

Sakellariou, Rizos

310

NLE Websites -- All DOE Office Websites (Extended Search)

NUCLEAR DATA AND MEASUREMENT SERIES, ANL/NDM-166: A Unified Monte Carlo Approach to Fast Neutron Cross Section Data Evaluation. Donald L. Smith, January 2008. Nuclear Engineering Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, U.S.A. Argonne is a U.S. Department of Energy laboratory managed by UChicago Argonne, LLC under contract DE-AC02-06CH11357; for information see http://www.anl.gov. This report is available, at no cost, at http://www.osti.gov/bridge.

311

Monte-Carlo and Variational Calculations of the Magnetic Phase Diagram of CuFeO2

Monte-Carlo and variational calculations are used to revise the phase diagram of the magnetically frustrated material CuFeO2. For fields 50 T < H < 65 T, a new spin-flop phase is predicted between a canted three-sublattice phase and the conventional conical spin-flop phase. This phase has wavevector Q (0.8, 0.43) and is commensurate in the x direction but incommensurate in the y direction. A canted five-sublattice phase is predicted between the multiferroic phase and either a collinear five-sublattice phase for pure CuFeO2 or a canted three-sublattice phase for Al- or Ga-doped CuFeO2.

Fishman, Randy Scott [ORNL; Brown, Gregory [Florida State University; Haraldsen, Jason T [ORNL; Haraldsen, Jason T. [Los Alamos National Laboratory (LANL)

2012-01-01T23:59:59.000Z

312

We introduce a new Markov-chain Monte Carlo (MCMC) approach designed for efficient sampling of highly correlated and multimodal posteriors. Parallel tempering, though effective, is a costly technique for sampling such posteriors. Our approach minimizes the use of parallel tempering, employing it only for a short time to tune a new jump proposal. For complex posteriors we find efficiency improvements of up to a factor of ~13. The estimation of parameters of gravitational-wave signals measured by ground-based detectors is currently done through Bayesian inference, with MCMC one of the leading sampling methods. Posteriors for these signals are typically multimodal with strong nonlinear correlations, making sampling difficult. As we enter the advanced-detector era, improved sensitivities and wider bandwidths will drastically increase the computational cost of analyses, demanding more efficient search algorithms to meet these challenges.
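The role of parallel tempering in escaping well-separated modes can be sketched on a toy 1-D bimodal posterior. This is an illustration of the general technique only, not the gravitational-wave pipeline, and, unlike the paper's approach, tempering runs for the whole chain here; the target, temperatures, and step size are all assumptions.

```python
import math, random

# Toy parallel-tempering sketch on a 1-D bimodal posterior (illustration of
# the general technique, not the paper's tuned-jump-proposal scheme).

def log_post(x):
    # Log of a mixture of unit-width Gaussians at -5 and +5 (log-sum-exp
    # form avoids underflow far from both modes).
    a = -0.5 * (x - 5.0) ** 2
    b = -0.5 * (x + 5.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def sample(n_steps=20_000, beta_hot=0.05, step=1.0, seed=7):
    rng = random.Random(seed)
    state = {1.0: -5.0, beta_hot: 5.0}   # position of each tempered chain
    trace = []
    for _ in range(n_steps):
        for beta in state:               # Metropolis update for each chain
            prop = state[beta] + step * rng.gauss(0.0, 1.0)
            if math.log(rng.random()) < beta * (log_post(prop) - log_post(state[beta])):
                state[beta] = prop
        # Propose exchanging the cold and hot states:
        log_acc = (1.0 - beta_hot) * (log_post(state[beta_hot]) - log_post(state[1.0]))
        if math.log(rng.random()) < log_acc:
            state[1.0], state[beta_hot] = state[beta_hot], state[1.0]
        trace.append(state[1.0])
    return trace

trace = sample()
print(any(t < -2 for t in trace) and any(t > 2 for t in trace))  # -> True
```

The swap move is what lets the cold chain hop between modes: a plain random-walk Metropolis chain at beta = 1 would almost never cross the deep probability valley at x = 0.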

Farr, Benjamin; Luijten, Erik

2013-01-01T23:59:59.000Z

313

Parameters are studied for a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder reactor (MSBR) core, and a booster reactor analogous to the core of the BN-350 liquid-metal-cooled fast breeder reactor (LMFBR). It is shown by means of Monte-Carlo modeling that the reactor under study provides safe operation modes (k_{eff}=0.94-0.98), is capable of transmuting radioactive nuclear waste effectively, and reduces by an order of magnitude the requirements on the accelerator beam current. Calculations show that the maximal neutron flux in the thermal zone is 10^{14} cm^{-2}\cdot s^{-1} and in the fast booster zone is 5.12\cdot 10^{15} cm^{-2}\cdot s^{-1} at k_{eff}=0.98 and proton beam current I=2.1 mA.

Bznuni, S A; Zhamkochyan, V M; Polanski, A; Sosnin, A N; Khudaverdyan, A H

2001-01-01T23:59:59.000Z

314

Science Conference Proceedings (OSTI)

Purpose: To commission Monte Carlo beam models for five Varian megavoltage photon beams (4, 6, 10, 15, and 18 MV). The goal is to closely match measured dose distributions in water for a wide range of field sizes (from 2x2 to 35x35 cm{sup 2}). The second objective is to reinvestigate the sensitivity of the calculated dose distributions to variations in the primary electron beam parameters. Methods: The GEPTS Monte Carlo code is used for photon beam simulations and dose calculations. The linear accelerator geometric models are based on (i) manufacturer specifications, (ii) corrections made by Chibani and Ma [''On the discrepancies between Monte Carlo dose calculations and measurements for the 18 MV Varian photon beam,'' Med. Phys. 34, 1206-1216 (2007)], and (iii) more recent drawings. Measurements were performed using pinpoint and Farmer ionization chambers, depending on the field size. Phase space calculations for small fields were performed with and without angle-based photon splitting. In addition to the three commonly used primary electron beam parameters (E{sub AV} is the mean energy, FWHM is the energy spectrum broadening, and R is the beam radius), the angular divergence ({theta}) of primary electrons is also considered. Results: The calculated and measured dose distributions agreed to within 1% local difference at any depth beyond 1 cm for different energies and for field sizes varying from 2x2 to 35x35 cm{sup 2}. In the penumbra regions, the distance to agreement is better than 0.5 mm, except for 15 MV (0.4-1 mm). The measured and calculated output factors agreed to within 1.2%. The 6, 10, and 18 MV beam models use {theta}=0 deg., while the 4 and 15 MV beam models require {theta}=0.5 deg. and 0.6 deg., respectively. 
The parameter sensitivity study shows that varying the beam parameters around the solution can lead to 5% differences with measurements for small (e.g., 2x2 cm{sup 2}) and large (e.g., 35x35 cm{sup 2}) fields, while a perfect agreement is maintained for the 10x10 cm{sup 2} field. The influence of R on the central-axis depth dose and the strong influence of {theta} on the lateral dose profiles are demonstrated. Conclusions: Dose distributions for very small and very large fields were proved to be more sensitive to variations in E{sub AV}, R, and {theta} in comparison with the 10x10 cm{sup 2} field. Monte Carlo beam models need to be validated for a wide range of field sizes including small field sizes (e.g., 2x2 cm{sup 2}).

Chibani, Omar; Moftah, Belal; Ma, C.-M. Charlie [Department of Biomedical Physics, King Faisal Specialist Hospital and Research Center, Riyadh 11211 (Saudi Arabia) and Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States); Department of Biomedical Physics, King Faisal Specialist Hospital and Research Center, Riyadh 11211 (Saudi Arabia); Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States)

2011-01-15T23:59:59.000Z

315

There is a great need in the safeguards community to be able to nondestructively quantify the mass of plutonium of a spent nuclear fuel assembly. As part of the Next Generation of Safeguards Initiative, we are investigating several techniques, or detector systems, which, when integrated, will be capable of quantifying the plutonium mass of a spent fuel assembly without dismantling the assembly. This paper reports on the simulation of one of these techniques, the Passive Neutron Albedo Reactivity with Fission Chambers (PNAR-FC) system. The response of this system over a wide range of spent fuel assemblies with different burnup, initial enrichment, and cooling time characteristics is shown. A Monte Carlo method of using these modeled results to estimate the fissile content of a spent fuel assembly has been developed. A few numerical simulations of using this method are shown. Finally, additional developments still needed and being worked on are discussed.

Conlin, Jeremy Lloyd [Los Alamos National Laboratory; Tobin, Stephen J [Los Alamos National Laboratory

2010-10-13T23:59:59.000Z

316

Science Conference Proceedings (OSTI)

Two recent reports on Monte Carlo studies have examined the angular response of a multiple-rod neutron scintillator and the energy response of a moderated {sup 3}He neutron counter. This report extends those studies to provide calculations of the effective area and angular sensitivity of a polyethylene-moderated neutron detector that has multiple {sup 3}He tubes. The results (1) provide a more accurate and general determination of the sensor`s detection efficiency, (2) suggest new techniques for obtaining information about the source direction, and (3) allow evaluation of proposals to improve the high-energy detection efficiency by using the production of (n,2n) neutrons in high-density material added to the moderator.

Not Available

1995-01-01T23:59:59.000Z

317

Nuclear interactions of 160 MeV protons stopping in copper: A test of Monte Carlo nuclear models

Science Conference Proceedings (OSTI)

To estimate the influence of nuclear interactions on dose or biological effect, one uses Monte Carlo programs which include nuclear models. We introduce an experimental method to check these models at proton therapy energies. We have measured the distribution of charge deposited by 160 MeV protons stopping in a stack of insulated copper plates. A buildup region ahead of the main peak contains approximately 20% of the total charge and is entirely due to charged secondaries from inelastic nuclear interactions. The acceptance for charged secondaries is 100%. Therefore the data are a good benchmark for nuclear models. We have simulated the stack using GEANT with two nuclear models. FLUKA agrees fairly well with the measurement but GHEISHA

Bernard Gottschalk; Rachel Platais; Harald Paganetti

1999-01-01T23:59:59.000Z

318

Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

Science Conference Proceedings (OSTI)

Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore core models of Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of obliquely inserted control rod on neutron flux in order to validate the RELAP5-3D{sup C}/NESTLE three dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 FSAR of Atucha-2. (authors)

Pecchia, M.; D'Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy); Mazzantini, O. [Nucleo-electrica Argentina Societad Anonima NA-SA, Buenos Aires (Argentina)

2012-07-01T23:59:59.000Z

319

NLE Websites -- All DOE Office Websites (Extended Search)

Application of Distribution Transformer Thermal Life Models to Electrified Vehicle Charging Loads Using Monte-Carlo Method. Preprint. Michael Kuss, Tony Markel, and William Kramer. Presented at the 25th World Battery, Hybrid and Fuel Cell Electric Vehicle Symposium & Exhibition, Shenzhen, China, November 5-9, 2010. Conference Paper NREL/CP-5400-48827, January 2011. NOTICE: The submitted manuscript has been offered by an employee of the Alliance for Sustainable Energy, LLC (Alliance), a contractor of the US Government under Contract No. DE-AC36-08GO28308. Accordingly, the US Government and Alliance retain a nonexclusive royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for US Government purposes.

320

Science Conference Proceedings (OSTI)

Purpose: External beam radiotherapy is the only conservative curative approach for Stage I non-Hodgkin lymphomas of the conjunctiva. The target volume is geometrically complex because it includes the eyeball and lid conjunctiva. Furthermore, the target volume is adjacent to radiosensitive structures, including the lens, lacrimal glands, cornea, retina, and papilla. The radiotherapy planning and optimization requires accurate calculation of the dose in these anatomical structures that are much smaller than the structures traditionally considered in radiotherapy. Neither conventional treatment planning systems nor dosimetric measurements can reliably determine the dose distribution in these small irradiated volumes. Methods and Materials: The Monte Carlo simulations of a Varian Clinac 2100 C/D and human eye were performed using the PENELOPE and PENEASYLINAC codes. Dose distributions and dose volume histograms were calculated for the bulbar conjunctiva, cornea, lens, retina, papilla, lacrimal gland, and anterior and posterior hemispheres. Results: The simulated results allow choosing the most adequate treatment setup configuration, which is an electron beam energy of 6 MeV with additional bolus and collimation by a cerrobend block with a central cylindrical hole of 3.0 cm diameter and central cylindrical rod of 1.0 cm diameter. Conclusions: Monte Carlo simulation is a useful method to calculate the minute dose distribution in ocular tissue and to optimize the electron irradiation technique in highly critical structures. Using a voxelized eye phantom based on patient computed tomography images, the dose distribution can be estimated with a standard statistical uncertainty of less than 2.4% in 3 min using a computing cluster with 30 cores, which makes this planning technique clinically relevant.

Brualla, Lorenzo, E-mail: lorenzo.brualla@uni-due.de [NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Essen (Germany); Zaragoza, Francisco J.; Sempau, Josep [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Barcelona (Spain); Wittig, Andrea [Department of Radiation Oncology, University Hospital Giessen and Marburg, Philipps-University Marburg, Marburg (Germany); Sauerwein, Wolfgang [NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Essen (Germany)

2012-07-15T23:59:59.000Z


321

Purpose: A linac delivering intensity-modulated radiotherapy (IMRT) can benefit from a flattening filter free (FFF) design which offers higher dose rates and reduced accelerator head scatter than for conventional (flattened) delivery. This reduction in scatter simplifies beam modeling, and combining a Monte Carlo dose engine with a FFF accelerator could potentially increase dose calculation accuracy. The objective of this work was to model a FFF machine using an adapted version of a previously published virtual source model (VSM) for Monte Carlo calculations and to verify its accuracy. Methods: An Elekta Synergy linear accelerator operating at 6 MV has been modified to enable irradiation both with and without the flattening filter (FF). The VSM has been incorporated into a commercially available treatment planning system (Monaco Trade-Mark-Sign v 3.1) as VSM 1.6. Dosimetric data were measured to commission the treatment planning system (TPS) and the VSM adapted to account for the lack of angular differential absorption and general beam hardening. The model was then tested using standard water phantom measurements and also by creating IMRT plans for a range of clinical cases. Results: The results show that the VSM implementation handles the FFF beams very well, with an uncertainty between measurement and calculation of <1% which is comparable to conventional flattened beams. All IMRT beams passed standard quality assurance tests with >95% of all points passing gamma analysis ({gamma} < 1) using a 3%/3 mm tolerance. Conclusions: The virtual source model for flattened beams was successfully adapted to a flattening filter free beam production. Water phantom and patient specific QA measurements show excellent results, and comparisons of IMRT plans generated in conventional and FFF mode are underway to assess dosimetric uncertainties and possible improvements in dose calculation and delivery.

Cashmore, Jason; Golubev, Sergey; Dumont, Jose Luis; Sikora, Marcin; Alber, Markus; Ramtohul, Mark [Hall-Edwards Radiotherapy Research Group, University Hospital Birmingham NHS Foundation Trust, United Kingdom, B15 2TH (United Kingdom); Elekta CMS Software, St. Louis, Missouri 63043 (United States); Department of Oncology and Medical Physics, Haukeland University Hospital, Bergen 5021 (Norway); Section for Biomedical Physics, University Hospital for Radiation Oncology, Hoppe-Seyler-Str 3, 72076, Tuebingen (Germany); Hall-Edwards Radiotherapy Research Group, University Hospital Birmingham NHS Foundation Trust, United Kingdom, B15 2TH (United Kingdom)

2012-06-15T23:59:59.000Z

322

Purpose: The formalism recommended by Task Group 60 (TG-60) of the American Association of Physicists in Medicine (AAPM) is applicable for {beta} sources. Radioactive biocompatible and biodegradable {sup 153}Sm glass seed without encapsulation is a {beta}{sup -} emitter radionuclide with a short half-life and delivers a high dose rate to the tumor in the millimeter range. This study presents the results of Monte Carlo calculations of the dosimetric parameters for the {sup 153}Sm brachytherapy source. Methods: Version 5 of the (MCNP) Monte Carlo radiation transport code was used to calculate two-dimensional dose distributions around the source. The dosimetric parameters of AAPM TG-60 recommendations including the reference dose rate, the radial dose function, the anisotropy function, and the one-dimensional anisotropy function were obtained. Results: The dose rate value at the reference point was estimated to be 9.21{+-}0.6 cGy h{sup -1} {mu}Ci{sup -1}. Due to the low energy beta emitted from {sup 153}Sm sources, the dose fall-off profile is sharper than the other beta emitter sources. The calculated dosimetric parameters in this study are compared to several beta and photon emitting seeds. Conclusions: The results show the advantage of the {sup 153}Sm source in comparison with the other sources because of the rapid dose fall-off of beta ray and high dose rate at the short distances of the seed. The results would be helpful in the development of the radioactive implants using {sup 153}Sm seeds for the brachytherapy treatment.

Sadeghi, Mahdi; Taghdiri, Fatemeh; Hamed Hosseini, S.; Tenreiro, Claudio [Agricultural, Medical and Industrial School, P.O. Box 31485-498, Karaj (Iran, Islamic Republic of); Engineering Faculty, Research and Science Campus, Islamic Azad University, Tehran (Iran, Islamic Republic of); Department of Energy Science, SungKyunKwan University, 300 Cheoncheon-dong, Suwon (Korea, Republic of)

2010-10-15T23:59:59.000Z

323

Lower and upper bounds for the absolute free energy by the hypothetical scanning Monte Carlo method. The hypothetical scanning (HS) method is a general approach for calculating the absolute entropy S and free energy F through the analysis of a single configuration. © 2004 American Institute of Physics.

Meirovitch, Hagai

324

PHYSICAL REVIEW C 83, 064612 (2011). Advanced Monte Carlo modeling of prompt fission neutrons. Proceedings of the 6th All Union Conference on Neutron Physics, Kiev, 2-6 October 1983, p. 285; EXFOR entry 40871; A. F. Semenov and B. I. Starostov.

Danon, Yaron

325

Science Conference Proceedings (OSTI)

Purpose: To investigate the use of various breast tissue segmentation models in Monte Carlo dose calculations for low-energy brachytherapy. Methods: The EGSnrc user-code BrachyDose is used to perform Monte Carlo simulations of a breast brachytherapy treatment using TheraSeed Pd-103 seeds with various breast tissue segmentation models. Models used include a phantom where voxels are randomly assigned to be gland or adipose (randomly segmented), a phantom where a single tissue of averaged gland and adipose is present (averaged tissue), and a realistically segmented phantom created from previously published numerical phantoms. Radiation transport in averaged tissue while scoring in gland along with other combinations is investigated. The inclusion of calcifications in the breast is also studied in averaged tissue and randomly segmented phantoms. Results: In randomly segmented and averaged tissue phantoms, the photon energy fluence is approximately the same; however, differences occur in the dose volume histograms (DVHs) as a result of scoring in the different tissues (gland and adipose versus averaged tissue), whose mass energy absorption coefficients differ by 30%. A realistically segmented phantom is shown to significantly change the photon energy fluence compared to that in averaged tissue or randomly segmented phantoms. Despite this, resulting DVHs for the entire treatment volume agree reasonably because fluence differences are compensated by dose scoring differences. DVHs for the dose to only the gland voxels in a realistically segmented phantom do not agree with those for dose to gland in an averaged tissue phantom. Calcifications affect photon energy fluence to such a degree that the differences in fluence are not compensated for (as they are in the no calcification case) by dose scoring in averaged tissue phantoms. 
Conclusions: For low-energy brachytherapy, if photon transport and dose scoring both occur in an averaged tissue, the resulting DVH for the entire treatment volume is reasonably accurate because inaccuracies in photon energy fluence are compensated for by inaccuracies in localized dose scoring. If dose to fibroglandular tissue in the breast is of interest, then the inaccurate photon energy fluence calculated in an averaged tissue phantom will result in inaccurate DVHs and average doses for those tissues. Including calcifications necessitates the use of proper tissue segmentation.

Sutherland, J. G. H.; Thomson, R. M.; Rogers, D. W. O. [Carleton Laboratory for Radiotherapy Physics, Department of Physics, Carleton University, Ottawa K1S 5B6 (Canada)

2011-08-15T23:59:59.000Z
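The dose-volume histogram (DVH) comparisons in the Sutherland et al. abstract above reduce to a simple cumulative count over per-voxel doses. A minimal sketch, using synthetic doses rather than actual BrachyDose output (the function name and bin width are illustrative assumptions):

```python
import numpy as np

def cumulative_dvh(doses, bin_width=0.5):
    """Cumulative DVH: fraction of voxels receiving at least each dose level."""
    edges = np.arange(0.0, doses.max() + bin_width, bin_width)
    # for each dose level, count the fraction of voxels with dose >= level
    volume_fraction = np.array([(doses >= d).mean() for d in edges])
    return edges, volume_fraction

# hypothetical per-voxel doses (Gy), standing in for a segmented phantom
rng = np.random.default_rng(0)
all_doses = rng.gamma(shape=4.0, scale=10.0, size=10_000)
levels, vf = cumulative_dvh(all_doses)
```

Comparing `vf` computed over gland-only voxels against the whole volume is the kind of tissue-dependent comparison the abstract describes.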

326

Science Conference Proceedings (OSTI)

The purpose of this study was to develop an efficient method to determine the optimal intensity distribution of the pretarget electron beam in a Monte Carlo (MC) accelerator model able to most accurately reproduce a set of measured photon field profiles for a given accelerator geometry and nominal photon beam energy. The method has the ability to reduce the number of simulations required to commission a MC accelerator model and has achieved better agreement with measurement than other methods described in literature. The method begins from a cylindrically symmetric pretarget electron beam (radius of 0.5 cm) of uniform intensity. This beam is subdivided into annular regions of fluence for which each region is individually transported through the accelerator head and into a water phantom. A simulated annealing search is then performed to determine the optimal combination of weights of the annular fluences that provide a best match between the measured dose distributions and the weighted sum of annular dose distributions for particular pretarget electron energy. When restricted to Gaussian intensity distributions, the optimization determined an optimal FWHM=1.34 mm for 18.0 MeV electrons, with a RMSE=0.49% on 40x40 cm{sup 2} lateral profiles. When allowed to deviate from Gaussian intensities a further reduction in RMSE was achieved. For our Clinac 21 EX accelerator MC model (based on the 1996 Varian Oncology Systems, Monte Carlo Project package), the optimal unrestricted intensity distribution was found to be a Gaussian-like solution (18.0 MeV, FWHM=1.10 mm, 40x40 cm{sup 2} profile, and RMSE=0.15%) with the presence of an extra focal halo contribution on the order of 10% of the maximum Gaussian intensity. Using the optimally derived intensity, 10x10 and 4x4 cm{sup 2} profiles were found to be in agreement with measurement with a maximum RMSE=0.49%. 
The optimized Gaussian and unrestricted values of the electron beam FWHM were both within the range of those inferred by focal spot image measurements performed by Jaffray et al.[''X-ray sources of medical linear accelerators: Focal and extra-focal radiation,'' Med. Phys. 20, 1417-1427 (1993)]. The inference of an extra focal pretarget electron component may be an indicator of a deficiency in the MC model and needs further investigation.

Bush, Karl; Zavgorodni, Sergei; Beckham, Wayne [Department of Physics and Astronomy, University of Victoria, P. O. Box 3055 STN CSC, Victoria, British Columbia V8W 3P6 (Canada); Department of Medical Physics, British Columbia Cancer Agency-Vancouver Island Center, Victoria, British Columbia V8R 6V5 (Canada) and Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8R 6V5 (Canada)

2009-06-15T23:59:59.000Z
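The annular-fluence optimization described above can be sketched as a generic simulated annealing over normalized weights that minimizes the RMSE between a weighted sum of per-annulus dose profiles and a measured profile. This is a minimal illustration with synthetic profiles, not the actual GEPTS or Clinac data; all names and parameter values are assumptions:

```python
import numpy as np

def rmse(weights, annular_profiles, measured):
    model = weights @ annular_profiles  # weighted sum of per-annulus dose profiles
    return np.sqrt(np.mean((model - measured) ** 2))

def anneal_weights(annular_profiles, measured, steps=20_000, t0=0.1, seed=1):
    rng = np.random.default_rng(seed)
    n = annular_profiles.shape[0]
    w = np.full(n, 1.0 / n)  # start from a uniform intensity distribution
    cost = rmse(w, annular_profiles, measured)
    best, best_cost = w.copy(), cost
    for k in range(steps):
        t = t0 * (1.0 - k / steps)  # linear cooling schedule
        trial = np.clip(w + rng.normal(0.0, 0.02, n), 0.0, None)
        trial /= trial.sum()  # keep the fluence weights normalized
        c = rmse(trial, annular_profiles, measured)
        # accept improvements always, worse moves with Boltzmann probability
        if c < cost or rng.random() < np.exp(-(c - cost) / max(t, 1e-12)):
            w, cost = trial, c
            if c < best_cost:
                best, best_cost = trial.copy(), c
    return best, best_cost

# synthetic test: recover weights that generated a "measured" profile
rng = np.random.default_rng(7)
profiles = rng.random((4, 60))  # hypothetical per-annulus dose profiles
measured = np.array([0.5, 0.2, 0.2, 0.1]) @ profiles
best, best_cost = anneal_weights(profiles, measured)
```

The paper's restriction to Gaussian intensities corresponds to constraining the weight vector to a one-parameter family; the unrestricted search above is what allows the extra-focal halo component to emerge.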

327

Science Conference Proceedings (OSTI)

The purpose of this study is to validate a Monte Carlo based depletion methodology by comparing calculated post-irradiation uranium isotopic compositions in the fuel elements of the High Flux Isotope Reactor (HFIR) core to values measured using uranium mass-spectrographic analysis. Three fuel plates were analyzed: two from the outer fuel element (OFE) and one from the inner fuel element (IFE). Fuel plates O-111-8, O-350-1, and I-417-24 from outer fuel elements 5-O and 21-O and inner fuel element 49-I, respectively, were selected for examination. Fuel elements 5-O, 21-O, and 49-I were loaded into HFIR during cycles 4, 16, and 35, respectively (mid to late 1960s). Approximately one year after each of these elements was irradiated, they were transferred to the High Radiation Level Examination Laboratory (HRLEL), where samples from these fuel plates were sectioned and examined via uranium mass-spectrographic analysis. The isotopic composition of each of the samples was used to determine the atomic percent of the uranium isotopes. A Monte Carlo based depletion computer program, ALEPH, which couples the MCNP and ORIGEN codes, was utilized to calculate the nuclide inventory at the end-of-cycle (EOC). A current ALEPH/MCNP input for HFIR fuel cycle 400 was modified to replicate cycles 4, 16, and 35. The control element withdrawal curves and flux trap loadings were revised, as well as the radial zone boundaries and nuclide concentrations in the MCNP model. The calculated EOC uranium isotopic compositions for the analyzed plates were found to be in good agreement with measurements, which reveals that ALEPH/MCNP can accurately calculate burn-up dependent uranium isotopic concentrations for the HFIR core. The spatial power distribution in HFIR changes significantly as irradiation time increases due to control element movement.
Accurate calculation of the end-of-life uranium isotopic inventory is a good indicator that the power distribution variation as a function of space and time is accurately calculated, i.e. an integral check. Hence, the time dependent heat generation source terms needed for reactor core thermal hydraulic analysis, if derived from this methodology, have been shown to be accurate for highly enriched uranium (HEU) fuel.

Chandler, David [ORNL; Maldonado, G Ivan [ORNL; Primm, Trent [ORNL

2010-01-01T23:59:59.000Z

328

Science Conference Proceedings (OSTI)

In this paper, Monte Carlo optimization and nuclear data evaluation are combined to produce optimal adjusted nuclear data files. The methodology is based on the so-called 'Total Monte Carlo' and the TALYS system. Not only a single nuclear data file is produced for a given isotope, but virtually an infinite number, defining probability distributions for each nuclear quantity. Then each of these random nuclear data libraries is used in a series of benchmark calculations. With a goodness-of-fit estimator, best {sup 239}Pu, {sup 56}Fe, {sup 28}Si and {sup 95}Mo evaluations for that benchmark set can be selected. A few thousands of random files are used and each of them is tested with a large number of fast, thermal and intermediate energy criticality benchmarks. From this, the best performing random file is chosen and proposed as the optimum choice among the studied random set. (authors)

Rochman, D.; Koning, A. J. [Nuclear Research and Consultancy Group, Petten (Netherlands)

2012-07-01T23:59:59.000Z
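The "Total Monte Carlo" selection step described above amounts to scoring each random nuclear data file against a benchmark set with a goodness-of-fit estimator and keeping the minimizer. A minimal sketch with synthetic k_eff values and hypothetical function names (the chi-square form is one common choice of estimator, not necessarily the one used by Rochman and Koning):

```python
import numpy as np

def goodness_of_fit(calculated_keff, measured_keff, sigma):
    """Chi-square-like estimator summed over a set of criticality benchmarks."""
    return float(np.sum(((calculated_keff - measured_keff) / sigma) ** 2))

def select_best_library(results, measured_keff, sigma):
    """results: (n_random_files, n_benchmarks) array of calculated k_eff values.
    Returns the index and score of the best-performing random data file."""
    scores = [goodness_of_fit(row, measured_keff, sigma) for row in results]
    return int(np.argmin(scores)), min(scores)

# synthetic example: three random files scored against three benchmarks
measured = np.array([1.000, 0.998, 1.002])
sigma = np.array([0.002, 0.002, 0.002])
results = np.array([[1.004, 0.990, 1.010],
                    [1.001, 0.999, 1.001],
                    [0.995, 1.005, 0.996]])
idx, score = select_best_library(results, measured, sigma)
```

In the actual methodology each row would correspond to one TALYS-generated random library run through thousands of fast, thermal, and intermediate-energy benchmarks.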

329

Science Conference Proceedings (OSTI)

ITS is a powerful and user-friendly software package permitting state of the art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

2004-06-01T23:59:59.000Z

330

Science Conference Proceedings (OSTI)

A Monte Carlo simulation of the fission fragment deexcitation process was developed in order to analyze and predict postfission-related nuclear data which are of crucial importance for basic and applied nuclear physics. The basic ideas of such a simulation were already developed in the past. In the present work, a refined model is proposed in order to make a reliable description of the distributions related to fission fragments as well as to prompt neutron and {gamma} energies and multiplicities. This refined model is mainly based on a mass-dependent temperature ratio law used for the initial excitation energy partition of the fission fragments and a spin-dependent excitation energy limit for neutron emission. These phenomenological improvements allow us to reproduce with a good agreement the {sup 252}Cf(sf) experimental data on prompt fission neutron multiplicity {nu}(A), {nu}(TKE), the neutron multiplicity distribution P({nu}), as well as their energy spectra N(E), and lastly the energy release in fission.

Litaize, O.; Serot, O. [CEA Cadarache, F-13108 Saint Paul lez Durance (France)

2010-11-15T23:59:59.000Z

331

Science Conference Proceedings (OSTI)

A detailed description of a compact Monte Carlo simulation code ''G3sim'' for studying the performance of a plastic scintillator detector with wavelength shifter (WLS) fiber readout is presented. G3sim was developed for optimizing the design of new scintillator detectors used in the GRAPES-3 extensive air shower experiment. Propagation of the blue photons produced by the passage of relativistic charged particles in the scintillator is treated by incorporating the absorption, total internal, and diffuse reflections. Capture of blue photons by the WLS fibers and subsequent re-emission of longer wavelength green photons is appropriately treated. The trapping and propagation of green photons inside the WLS fiber is treated using the laws of optics for meridional and skew rays. Propagation time of each photon is taken into account for the generation of the electrical signal at the photomultiplier. A comparison of the results from G3sim with the performance of a prototype scintillator detector showed an excellent agreement between the simulated and measured properties. The simulation results can be parametrized in terms of exponential functions providing a deeper insight into the functioning of these versatile detectors. G3sim can be used to aid the design and optimize the performance of scintillator detectors prior to actual fabrication that may result in a considerable saving of time, labor, and money spent.

Mohanty, P. K.; Dugad, S. R.; Gupta, S. K. [Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005 (India)

2012-04-15T23:59:59.000Z
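Photon tracking with absorption, as treated in G3sim, rests on sampling free paths from an exponential (Beer-Lambert) attenuation law. A minimal sketch; the attenuation length and function name are illustrative assumptions, not G3sim parameters:

```python
import math
import random

def sample_path_length(attenuation_length_cm, rng):
    """Sample a photon free path from the exponential attenuation law:
    p(x) = (1/L) * exp(-x/L), via inverse-transform sampling."""
    return -attenuation_length_cm * math.log(1.0 - rng.random())

# estimate the mean free path from 20,000 sampled photon steps
rng = random.Random(42)
paths = [sample_path_length(300.0, rng) for _ in range(20_000)]
mean_path = sum(paths) / len(paths)
```

In a full tracking loop each sampled path would be compared against the distance to the next boundary to decide between absorption, reflection, and capture by a WLS fiber.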

332

The new approach outlined in Paper I (Spurzem & Giersz 1996) to follow the individual formation and evolution of binaries in an evolving, equal point-mass star cluster is extended for the self-consistent treatment of relaxation and close three- and four-body encounters for many binaries (typically a few percent of the initial number of stars in the cluster). The distribution of single stars is treated as a conducting gas sphere with a standard anisotropic gaseous model. A Monte Carlo technique is used to model the motion of binaries, their formation and subsequent hardening by close encounters, and their relaxation (dynamical friction) with single stars and other binaries. The results are a further approach towards a realistic model of globular clusters with primordial binaries without using special hardware. We present, as our main result, the self-consistent evolution of a cluster consisting of 300,000 equal point-mass stars, plus 30,000 equal-mass binaries over several hundred half-mass relaxation tim...

Giersz, M

1999-01-01T23:59:59.000Z

333

A Monte Carlo model has been developed for interrogation of fissionable material embedded in thick cargos when high-energy {beta}-delayed {gamma} rays are detected following neutron-induced fission. The model includes the principal structural components of the laboratory, the neutron source and collimator assembly in which it resides, the assembly that represents cargo of given characteristics, a target of highly-enriched uranium (HEU) and large external plastic scintillators for photon detection. The ability of this model to reproduce experimental measurements was tested by comparing simulations with measurements of the number of induced fissions and the number of detected photons when the HEU target was irradiated with 14.25-MeV neutrons in the absence of any cargo and while embedded in assemblies of plywood and iron pipes. The simulations agreed with experimental measurements within a factor of about 2 for irradiation of the bare target and when the areal density of intervening cargo was 33 g cm{sup -2} (wood) and 61 g cm{sup -2} (steel pipes). This suggests that the model can permit exploration of a large range in parameter space with reasonable fidelity.

Prussin, S; Descalle, M; Hall, J; Pruet, J; Slaughter, D; Accatino, M; Alford, O; Asztalos, S; Bernstein, A; Church, J; Gosnell, T; Loshak, A; Madden, N; Manatt, D; Mauger, G; Meyer, A; Moore, T; Norman, E; Pohl, B; Petersen, D; Rusnak, B; Sundsmo, T; Tembrook, W; Walling, R

2006-06-08T23:59:59.000Z

334

VBFNLO: A parton level Monte Carlo for processes with electroweak bosons -- Manual for Version 2.6.0

Vbfnlo is a flexible parton level Monte Carlo program for the simulation of vector boson fusion (VBF), double and triple vector boson production (plus jet) in hadronic collisions at next-to-leading order (NLO) in the strong coupling constant, as well as Higgs boson plus two jet production via gluon fusion at the one-loop level. In the new release, Version 2.6.0, several new processes have been added at NLO QCD: diboson production (W\gamma, WZ, ZZ, Z\gamma and \gamma\gamma), same-sign W pair production via vector boson fusion and the triboson plus jet process W\gamma\gamma j. In addition, gluon induced diboson production has been implemented at the leading order (one-loop) level. The diboson processes WW, WZ and W\gamma can be run with anomalous gauge boson couplings, and anomalous couplings between a Higgs and a pair of gauge bosons are included in WW, ZZ, Z\gamma and \gamma\gamma diboson production. The code has also been extended to include anomalous couplings for single vector boson production via VBF, and a spin-2 model has been implemented for diboson pair production via vector boson fusion.

K. Arnold; J. Bellm; G. Bozzi; M. Brieg; F. Campanario; C. Englert; B. Feigl; J. Frank; T. Figy; F. Geyer; C. Hackstein; V. Hankele; B. Jager; M. Kerner; M. Kubocz; C. Oleari; S. Palmer; S. Platzer; M. Rauch; H. Rzehak; F. Schissler; O. Schlimpert; M. Spannowsky; M. Worek; D. Zeppenfeld

2011-07-20T23:59:59.000Z

335

(abridged) We present a new time-dependent multi-zone radiative transfer code and its application to study the SSC emission of Mrk 421. The code couples Fokker-Planck and Monte Carlo methods, in a 2D geometry. For the first time all the light travel time effects (LCTE) are fully considered, along with a proper treatment of Compton cooling, which depends on them. We study a set of simple scenarios where the variability is produced by injection of relativistic electrons as a `shock front' crosses the emission region. We consider emission from two components, with the second one either being pre-existing and co-spatial and participating in the evolution of the active region, or spatially separated and independent, only diluting the observed variability. Temporal and spectral results of the simulation are compared to the multiwavelength observations of Mrk 421 in March 2001. We find parameters that can adequately fit the observed SEDs and multiwavelength light curves and correlations. There remain however a few o...

Chen, Xuhui; Liang, Edison; Boettcher, Markus

2011-01-01T23:59:59.000Z

336

Morel (1981) has developed multigroup Legendre cross sections suitable for input to standard discrete ordinates transport codes for performing charged-particle Fokker-Planck calculations in one-dimensional slab and spherical geometries. Since the Monte Carlo neutron transport code, MORSE, uses the same multigroup cross section data that discrete ordinates codes use, it was natural to consider whether Fokker-Planck calculations could be performed with MORSE. In order to extend the unique three-dimensional forward or adjoint capability of MORSE to Fokker-Planck calculations, the MORSE code was modified to correctly treat the delta-function scattering of the energy operator, and a new set of physically acceptable cross sections was derived to model the angular operator. Morel (1979) has also developed multigroup Legendre cross sections suitable for input to standard discrete ordinates codes for performing electron Boltzmann calculations. These electron cross sections may be treated in MORSE with the same methods developed to treat the Fokker-Planck cross sections. The large magnitude of the elastic scattering cross section, however, severely increases the computation or run time. It is well-known that approximate elastic cross sections are easily obtained by applying the extended transport (or delta function) correction to the Legendre coefficients of the exact cross section. An exact method for performing the extended transport cross section correction produces cross sections which are physically acceptable. Sample calculations using electron cross sections have demonstrated this new technique to be very effective in decreasing the large magnitude of the cross sections.

Sloan, D.P.

1983-05-01T23:59:59.000Z

337

We formulate a model of N_f=4 flavors of relativistic fermion in 2+1d in the presence of a chemical potential mu coupled to two flavor doublets with opposite sign, akin to an isospin chemical potential in QCD. This is argued to be an effective theory for low energy electronic excitations in bilayer graphene, in which an applied voltage between the layers ensures equal populations of particles on one layer and holes on the other. The model is then reformulated on a spacetime lattice using staggered fermions, and in the absence of a sign problem, simulated using an orthodox hybrid Monte Carlo algorithm. With the coupling strength chosen to be close to a quantum critical point believed to exist for N_f

Wes Armour; Simon Hands; Costas Strouthos

2013-02-01T23:59:59.000Z

338

Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis will review distribution transformer load models developed in the IEC 60076 standard, and apply the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy, thus increasing costs to utilities and consumers. A Monte-Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining expected transformer aging rate will be developed, based on the energy needs of plug-in vehicles loading a residential transformer.

Kuss, M.; Markel, T.; Kramer, W.

2011-01-01T23:59:59.000Z
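The Monte-Carlo sweep described in the abstract above can be sketched as follows. This is an illustrative sketch only: the uniform plug-in times, charge durations, base household load, and parameter grids are assumptions, and the simple peak-loading metric stands in for the IEC 60076 thermal aging model.

```python
import random

def simulate_day(n_vehicles, kva_rating, charge_kw, base_load_kw=2.0, rng=random):
    """One load scenario: each vehicle plugs in at a random evening hour
    and charges for a random duration; returns the peak per-unit loading
    on the transformer. Illustrative only -- not the IEC 60076 model."""
    load = [base_load_kw] * 24              # aggregate household load by hour, kW
    for _ in range(n_vehicles):
        start = rng.randint(18, 23)         # plug-in hour (assumed evening window)
        duration = rng.randint(2, 6)        # charge duration in hours (assumed)
        for h in range(start, start + duration):
            load[h % 24] += charge_kw
    return max(load) / kva_rating

def sweep(n_scenarios=100):
    """Sweep vehicles per transformer, transformer size (kVA), and charge
    rate (kW), evaluating n_scenarios random scenarios per combination,
    mirroring the study's Monte-Carlo design. Grid values are invented."""
    results = {}
    for n_veh in (1, 2, 4):
        for kva in (25, 50):
            for rate_kw in (1.4, 3.3, 6.6):
                peaks = [simulate_day(n_veh, kva, rate_kw)
                         for _ in range(n_scenarios)]
                results[(n_veh, kva, rate_kw)] = sum(peaks) / len(peaks)
    return results
```

The mean peak loading per parameter combination is the kind of quantity one would then feed into a transformer aging model.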

339

The purpose of this paper is to quantify uncertainties of fuel pin cell or fuel assembly (FA) homogenized few group diffusion theory constants generated from the B1 theory-augmented Monte Carlo (MC) method. A mathematical formulation of the first kind is presented to quantify uncertainties of the few group constants in terms of the two major sources of uncertainty in the MC method: statistical uncertainty, and uncertainty in the nuclear cross section and nuclide number density input data. The formulation is incorporated into the Seoul National Univ. MC code McCARD. It is then used to compute the uncertainties of the burnup-dependent homogenized two group constants of a low-enriched UO2 fuel pin cell and a PWR FA on the condition that nuclear cross section input data of U-235 and U-238 from the JENDL-3.3 library and nuclide number densities from the solution to fuel depletion equations have uncertainties. The contribution of the MC input data uncertainties to the uncertainties of the two group constants of the two fuel systems is separated from that of the statistical uncertainties. The utilities of uncertainty quantifications are then discussed from the standpoints of safety analysis of existing power reactors, development of new fuel or reactor system design, and improvement of covariance files of the evaluated nuclear data libraries. (authors)

Park, H. J. [Korea Atomic Energy Research Inst., Daedeokdaero 989-111, Yuseong-gu, Daejeon (Korea, Republic of); Shim, H. J.; Joo, H. G.; Kim, C. H. [Dept. of Nuclear Engineering, Seoul National Univ., 1 Gwanak-ro, Gwanak-gu, Seoul (Korea, Republic of)

2012-07-01T23:59:59.000Z

340

Quantum Monte Carlo methods are accurate and promising many body techniques for electronic structure calculations which in recent years have attracted growing interest thanks to their favorable scaling with the system size and their efficient parallelization, particularly suited for the modern high performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capabilities of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely: the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new strategy for the definition of the atomic orbitals involved in the Jastrow - Antisymmetrised Geminal power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets.

Andrea Zen; Ye Luo; Sandro Sorella; Leonardo Guidoni

2013-09-02T23:59:59.000Z

341

Science Conference Proceedings (OSTI)

This study refines risk analysis procedures for trichloroethylene (TCE) using a physiologically based pharmacokinetic (PBPK) model in conjunction with the Monte Carlo method. The Monte Carlo method is used to generate random sets of model parameters, based on the mean, variance, and distribution types. The procedure generates a range of exposure values for a human excess lifetime cancer risk of 1x10^-6, based on the upper and lower bounds and the mean of a 95% confidence interval. Risk ranges were produced for both ingestion and inhalation exposures. Results are presented in a graphical format to reduce reliance on qualitative discussions of uncertainty. A sensitivity analysis of the model was also performed. This method produced acceptable TCE exposures, for the total amount of TCE metabolized, greater than the Environmental Protection Agency's (EPA) by a factor of 23 for inhalation and a factor of 1.6 for ingestion. Sensitive parameters identified were the elimination rate constant, alveolar ventilation rate, and cardiac output. This procedure quantifies the uncertainty related to natural variations in parameter values. Its incorporation into risk assessment could be used to promulgate, and better present, more realistic standards. Keywords: Risk analysis, physiologically based pharmacokinetics, PBPK, trichloroethylene, Monte Carlo method.

Cronin, W.J.; Oswald, E.J.

1993-09-01T23:59:59.000Z
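The Monte Carlo step described above (random parameter sets drawn from stated means, variances, and distribution types, propagated to a range of outputs with a 95% confidence interval) can be sketched as follows. The parameter names, distributions, and the stand-in dose-metric function are all assumptions for illustration; the actual study propagates the samples through a full PBPK model.

```python
import random
import statistics

def sample_params(rng=random):
    """One random parameter set drawn from mean/variance/distribution-type
    specifications, as in the paper's Monte Carlo step. The names and
    numbers here are illustrative assumptions, not the study's values."""
    return {
        "k_elim": rng.lognormvariate(-1.0, 0.3),  # elimination rate constant, 1/h
        "q_alv": rng.gauss(5.0, 0.8),             # alveolar ventilation, L/min
        "q_card": rng.gauss(5.8, 0.9),            # cardiac output, L/min
    }

def dose_metric(p):
    """Stand-in for the PBPK model output (total TCE metabolized); the
    real study integrates a physiological compartment model instead."""
    return p["q_alv"] * p["q_card"] / (1.0 + 10.0 * p["k_elim"])

def risk_range(n=10000, conf=0.95):
    """Propagate the sampled parameters and report the lower bound, mean,
    and upper bound of the conf-level interval of the output."""
    draws = sorted(dose_metric(sample_params()) for _ in range(n))
    lo = draws[int(n * (1.0 - conf) / 2.0)]
    hi = draws[int(n * (1.0 + conf) / 2.0) - 1]
    return lo, statistics.mean(draws), hi
```

The resulting (lower bound, mean, upper bound) triple is the kind of risk range the study reports for each exposure route.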

342

Stochastic Event-Driven Molecular Dynamics

Science Conference Proceedings (OSTI)

A novel Stochastic Event-Driven Molecular Dynamics (SEDMD) algorithm is developed for the simulation of polymer chains suspended in a solvent. SEDMD combines event-driven molecular dynamics (EDMD) with the Direct Simulation Monte Carlo (DSMC) method. ... Keywords: Complex flow, DSMC, Event-driven molecular dynamics, Polymer suspension

Aleksandar Donev; Alejandro L. Garcia; Berni J. Alder

2008-02-01T23:59:59.000Z

343

Asynchronous Event-Driven Particle Algorithms

Science Conference Proceedings (OSTI)

We present in a unifying way the main components of three examples of asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel event-driven algorithm for Direct Simulation Monte Carlo (DSMC). Finally, we describe how to combine MD with DSMC in an event-driven framework, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

Donev, A

2007-02-28T23:59:59.000Z

344

Science Conference Proceedings (OSTI)

Purpose: Compare dose distributions for pediatric patients with ependymoma calculated using a Monte Carlo (MC) system and a clinical treatment planning system (TPS). Methods: Plans from ten pediatric patients with ependymoma treated using double scatter proton therapy were exported from the TPS and calculated in our MC system. A field by field comparison of the distal edge (80% and 20%), distal fall off (80% to 20%), field width (50% to 50%), and penumbra (80% to 20%) were examined. In addition, the target dose for the full plan was compared. Results: For the 32 fields from the 10 patients, the average differences of the distal edge at 80% and 20% on central axis between MC and TPS are -1.9 ± 1.7 mm (p out in bone or an air cavity, the 80% difference was -0.9 ± 1.7 mm (p = 0.09). The negative value indicates that MC was on average shallower than TPS. The average difference of the 63 field widths of the 10 patients is -0.7 ± 1.0 mm (p < 0.001), negative indicating on average the MC had a smaller field width. On average, the difference in the penumbra was 2.3 ± 2.1 mm (p < 0.001). The average of the mean clinical target volume dose differences is -1.8% (p = 0.001), negative indicating a lower dose for MC. Conclusions: Overall, the MC system and TPS gave similar results for field width, the 20% distal edge, and the target coverage. For the 80% distal edge and lateral penumbra, there was slight disagreement; however, the difference was less than 2 mm and occurred primarily in highly heterogeneous areas. These differences highlight that the TPS dose calculation cannot be automatically regarded as correct.

Jia Yingcui; Beltran, Chris; Indelicato, Daniel J.; Flampouri, Stella; Li, Zuofeng; Merchant, Thomas E. [St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, Tennessee 38120 (United States); Mayo Clinic, 200 First St SW, Rochester, Minnesota 55905 (United States); University of Florida Proton Therapy Institute, 2015 North Jefferson St, Jacksonville, Florida 32206 (United States); St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, Tennessee 38120 (United States)

2012-08-15T23:59:59.000Z

345

Science Conference Proceedings (OSTI)

The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. The method of porting a regular transport code to GPU is usually very straightforward due to the 'embarrassingly parallel' nature of MC code. However, the situation becomes different for eigenvalue calculation in that it will be performed on a generation-by-generation basis and the thread coordination should be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculation under simple geometries on a multi-GPU system. The specifics of algorithm design, including thread organization and memory management, were described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of ~2 on a dual GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)

Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)

2012-07-01T23:59:59.000Z
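The generation-by-generation structure described above (each batch of neutron histories yields the fission source for the next) can be illustrated with a minimal CPU sketch. The collision probabilities, nu, and batch sizes below are invented for illustration and bear no relation to the paper's cross sections or geometry.

```python
import random

def keff_mc(n_per_gen=2000, inactive=10, active=30,
            p_fission=0.3, p_capture=0.4, nu=2.5):
    """Toy generation-by-generation analog Monte Carlo k-eigenvalue
    estimate for an infinite homogeneous medium. Each collision is
    fission, capture, or scatter; k for a generation is the ratio of
    fission births to source neutrons, and the source bank is
    renormalized to a fixed size each generation -- the same
    generation-based structure the GPU code parallelizes.
    Analytically, k = nu * p_fission / (p_fission + p_capture)."""
    k_sum = 0.0
    for gen in range(inactive + active):
        births = 0
        for _ in range(n_per_gen):          # follow each source neutron
            while True:
                u = random.random()
                if u < p_fission:           # fission: bank secondaries
                    births += int(nu + random.random())  # 2 or 3, mean 2.5
                    break
                if u < p_fission + p_capture:
                    break                   # capture: history ends
                # otherwise scatter and continue the random walk
        if gen >= inactive:                 # skip unconverged generations
            k_sum += births / n_per_gen
    return k_sum / active
```

On a GPU, the inner loop over source neutrons is what each thread executes independently, while the generation boundary is the synchronization point the paper's thread coordination must handle.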

346

Purpose: Brachytherapy planning software relies on the Task Group report 43 dosimetry formalism. This formalism, based on a water approximation, neglects various heterogeneous materials present during treatment. Various studies have suggested that these heterogeneities should be taken into account to improve the treatment quality. The present study sought to demonstrate the feasibility of incorporating Monte Carlo (MC) dosimetry within an inverse planning algorithm to improve the dose conformity and increase the treatment quality. Methods and Materials: The method was based on precalculated dose kernels in full patient geometries, representing the dose distribution of a brachytherapy source at a single dwell position using MC simulations and the Geant4 toolkit. These dose kernels are used by the inverse planning by simulated annealing tool to produce a fast MC-based plan. A test was performed for an interstitial brachytherapy breast treatment using two different high-dose-rate brachytherapy sources: the microSelectron iridium-192 source and the electronic brachytherapy source Axxent operating at 50 kVp. Results: A research version of the inverse planning by simulated annealing algorithm was combined with MC to provide a method to fully account for the heterogeneities in dose optimization, using the MC method. The effect of the water approximation was found to depend on photon energy, with greater dose attenuation for the lower energies of the Axxent source compared with iridium-192. For the latter, an underdosage of 5.1% for the dose received by 90% of the clinical target volume was found. Conclusion: A new method to optimize afterloading brachytherapy plans that uses MC dosimetric information was developed. Including computed tomography-based information in MC dosimetry in the inverse planning process was shown to take into account the full range of scatter and heterogeneity conditions. 
This led to significant dose differences compared with the Task Group report 43 approach for the Axxent source.

D'Amours, Michel [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de l'Universite Laval, Hotel-Dieu de Quebec, Quebec, QC (Canada); Department of Physics, Physics Engineering, and Optics, Universite Laval, Quebec, QC (Canada); Pouliot, Jean [Department of Radiation Oncology, University of California, San Francisco, School of Medicine, San Francisco, CA (United States); Dagnault, Anne [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de l'Universite Laval, Hotel-Dieu de Quebec, Quebec, QC (Canada); Verhaegen, Frank [Department of Radiation Oncology, Maastro Clinic, GROW Research Institute, Maastricht University Medical Centre, Maastricht (Netherlands); Department of Oncology, McGill University, Montreal, QC (Canada); Beaulieu, Luc, E-mail: beaulieu@phy.ulaval.ca [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de l'Universite Laval, Hotel-Dieu de Quebec, Quebec, QC (Canada); Department of Physics, Physics Engineering, and Optics, Universite Laval, Quebec, QC (Canada)

2011-12-01T23:59:59.000Z

347

AIM: We have recently developed a microscopic Monte Carlo approach to study surface chemistry on interstellar grains and the morphology of ice mantles. The method is designed to eliminate the problems inherent in the rate-equation formalism to surface chemistry. Here we report the first use of this method in a chemical model of cold interstellar cloud cores that includes both gas-phase and surface chemistry. The surface chemical network consists of a small number of diffusive reactions that can produce molecular oxygen, water, carbon dioxide, formaldehyde, methanol and assorted radicals. METHOD: The simulation is started by running a gas-phase model including accretion onto grains but no surface chemistry or evaporation. The starting surface consists of either flat or rough olivine. We introduce the surface chemistry of the three species H, O and CO in an iterative manner using our stochastic technique. Under the conditions of the simulation, only atomic hydrogen can evaporate to a significant extent. Although it has little effect on other gas-phase species, the evaporation of atomic hydrogen changes its gas-phase abundance, which in turn changes the flux of atomic hydrogen onto grains. The effect on the surface chemistry is treated until convergence occurs. We neglect all non-thermal desorptive processes. RESULTS: We determine the mantle abundances of assorted molecules as a function of time through 2x10^5 yr. Our method also allows determination of the abundance of each molecule in specific monolayers. The mantle results can be compared with observations of water, carbon dioxide, carbon monoxide, and methanol ices in the sources W33A and Elias 16. Other than a slight underproduction of mantle CO, our results are in very good agreement with observations.

Q. Chang; H. M. Cuppen; E. Herbst

2007-04-20T23:59:59.000Z

348

Recent upgrades of the MCNPX Monte Carlo code include transport of heavy ions. We employed the new code to simulate the energy and dose distributions produced by carbon beams in a rabbit's head, in and around a brain tumor. The work was within our experimental technique of interlaced carbon microbeams, which uses two 90 deg. arrays of parallel, thin planes of carbon beams (microbeams) interlacing to produce a solid beam at the target. A similar version of the method was earlier developed with synchrotron-generated x-ray microbeams. We first simulated the Bragg peak in high density polyethylene and other materials, where we could compare the calculated carbon energy deposition to the measured data produced at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL). The results showed that the new MCNPX code gives a reasonable account of the carbon beam's dose up to ~200 MeV/nucleon beam energy. At higher energies, which were not relevant to our project, the model failed to reproduce the growing nuclear breakup tail beyond the Bragg peak. In our model calculations we determined the dose distribution along the beam path, including the angular straggling of the microbeams, and used the data for determining the optimal values of beam spacing in the array for producing adequate beam interlacing at the target. We also determined, for the purpose of Bragg-peak spreading at the target, the relative beam intensities of the consecutive exposures with stepwise lower beam energies, and simulated the resulting dose distribution in the spread-out Bragg peak. The details of the simulation methods used and the results obtained are presented.

Dioszegi, I. [Nonproliferation and National Security Department, Brookhaven National Laboratory, Upton, New York 11973 (United States); Rusek, A.; Chiang, I. H. [NASA Space Radiation Laboratory, Brookhaven National Laboratory, Upton, NY 11973 (United States); Dane, B. R. [Medical School, State University of New York at Stony Brook, Stony Brook, NY 11794 (United States); Meek, A. G. [Department of Radiation Oncology, State University of New York at Stony Brook, Stony Brook, NY 11794 (United States); Dilmanian, F. A. [Department of Radiation Oncology, State University of New York at Stony Brook, Stony Brook, NY 11794 (United States); Medical Department, Brookhaven National Laboratory, Upton, NY 11973 (United States)

2011-06-01T23:59:59.000Z

349

Merging galaxy clusters have become one of the most important probes of dark matter, providing evidence for dark matter over modified gravity and even constraints on the dark matter self-interaction cross-section. To properly constrain the dark matter cross-section it is necessary to understand the dynamics of the merger, as the inferred cross-section is a function of both the velocity of the collision and the observed time since collision. While the best understanding of merging system dynamics comes from N-body simulations, these are computationally intensive and often explore only a limited volume of the merger phase space allowed by observed parameter uncertainty. Simple analytic models exist but the assumptions of these methods invalidate their results near the collision time, and error propagation of the highly correlated merger parameters is infeasible. To address these weaknesses I develop a Monte Carlo method to discern the properties of dissociative mergers and propagate the uncertainty of the measured cluster parameters in an accurate and Bayesian manner. I introduce this method, verify it against an existing hydrodynamic N-body simulation, and apply it to two known dissociative mergers: 1ES 0657-558 (Bullet Cluster) and DLSCL J0916.2+2951 (Musket Ball Cluster). I find that this method surpasses existing analytic models, providing accurate (10% level) dynamic parameter and uncertainty estimates throughout the merger history. This, coupled with minimal required a priori information (subcluster mass, redshift, and projected separation) and relatively fast computation (~6 CPU hours), makes this method ideal for large samples of dissociative merging clusters.

Dawson, William A., E-mail: wadawson@ucdavis.edu [Physics Department, University of California, Davis, One Shields Avenue, Davis, CA 95616 (United States)

2013-08-01T23:59:59.000Z

350

Science Conference Proceedings (OSTI)

This study investigates the performance of the YALINA Booster subcritical assembly, located in Belarus, during operation with high (90%), medium (36%), and low (21%) enriched uranium fuels in the assembly's fast zone. The YALINA Booster is a zero-power, subcritical assembly driven by a conventional neutron generator. It was constructed for the purpose of investigating the static and dynamic neutronics properties of accelerator driven subcritical systems, and to serve as a fast neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor-actinides. The first part of this study analyzes the assembly's performance with several fuel types. The MCNPX and MONK Monte Carlo codes were used to determine effective and source neutron multiplication factors, effective delayed neutron fraction, prompt neutron lifetime, neutron flux profiles and spectra, and neutron reaction rates produced from the use of three neutron sources: californium, deuterium-deuterium, and deuterium-tritium. In the latter two cases, the external neutron source operates in pulsed mode. The results discussed in the first part of this report show that the use of low enriched fuel in the fast zone of the assembly diminishes neutron multiplication. Therefore, the discussion in the second part of the report focuses on finding alternative fuel loading configurations that enhance neutron multiplication while using low enriched uranium fuel. It was found that arranging the interface absorber between the fast and the thermal zones in a circular rather than a square array is an effective method of operating the YALINA Booster subcritical assembly without downgrading neutron multiplication relative to the original value obtained with the use of the high enriched uranium fuels in the fast zone.

Talamo, A.; Gohar, Y. (Nuclear Engineering Division)

2011-05-12T23:59:59.000Z

351

Asynchronous Event-Driven Particle Algorithms

Science Conference Proceedings (OSTI)

We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

Donev, A

2007-08-30T23:59:59.000Z

352

Purpose: Radiation-dose awareness and optimization in CT can greatly benefit from a dose-reporting system that provides dose and risk estimates specific to each patient and each CT examination. As the first step toward patient-specific dose and risk estimation, this article aimed to develop a method for accurately assessing radiation dose from CT examinations. Methods: A Monte Carlo program was developed to model a CT system (LightSpeed VCT, GE Healthcare). The geometry of the system, the energy spectra of the x-ray source, the three-dimensional geometry of the bowtie filters, and the trajectories of source motions during axial and helical scans were explicitly modeled. To validate the accuracy of the program, a cylindrical phantom was built to enable dose measurements at seven different radial distances from its central axis. Simulated radial dose distributions in the cylindrical phantom were validated against ion chamber measurements for single axial scans at all combinations of tube potential and bowtie filter settings. The accuracy of the program was further validated using two anthropomorphic phantoms (a pediatric one-year-old phantom and an adult female phantom). Computer models of the two phantoms were created based on their CT data and were voxelized for input into the Monte Carlo program. Simulated dose at various organ locations was compared against measurements made with thermoluminescent dosimetry chips for both single axial and helical scans. Results: For the cylindrical phantom, simulations differed from measurements by -4.8% to 2.2%. For the two anthropomorphic phantoms, the discrepancies between simulations and measurements ranged between (-8.1%, 8.1%) and (-17.2%, 13.0%) for the single axial scans and the helical scans, respectively. Conclusions: The authors developed an accurate Monte Carlo program for assessing radiation dose from CT examinations. 
When combined with computer models of actual patients, the program can provide accurate dose estimates for specific patients.

Li Xiang; Samei, Ehsan; Segars, W. Paul; Sturgeon, Gregory M.; Colsher, James G.; Toncheva, Greta; Yoshizumi, Terry T.; Frush, Donald P. [Medical Physics Graduate Program, Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Medical Physics Graduate Program, Department of Physics, and Department of Biomedical Engineering, Duke University Medical Center, Durham, North Carolina 27705 (United States); Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Medical Physics Graduate Program, Duke University Medical Center, Durham, North Carolina 27705 (United States); Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States) and Department of Biomedical Engineering, University of North Carolina, Chapel Hill, North Carolina 27599 (United States); Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Duke Radiation Dosimetry Laboratory, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Duke Radiation Dosimetry Laboratory, Department of Radiology, Medical Physics Graduate Program, Duke University Medical Center, Durham, North Carolina 27705 (United States); Division of Pediatric Radiology, Department of Radiology, Medical Physics Graduate Program, Duke University Medical Center, Durham, North Carolina 27710 (United States)

2011-01-15T23:59:59.000Z

353

We present herein a theoretical study of correlations between spectral indexes of X-ray emergent spectra and mass accretion rate (ṁ) in black hole (BH) sources, which provide a definitive signature for BHs. It has been firmly established, using the Rossi X-ray Timing Explorer (RXTE) in numerous BH observations during hard-soft state spectral evolution, that the photon index of X-ray spectra increases when ṁ increases and, moreover, the index saturates at high values of ṁ. In this paper, we present theoretical arguments that the observationally established index saturation effect versus mass accretion rate is a signature of the bulk (converging) flow onto the BH. Also, we demonstrate that the index saturation value depends on the plasma temperature of the converging flow. We self-consistently calculate the Compton cloud (CC) plasma temperature as a function of mass accretion rate using the energy balance between energy dissipation and Compton cooling. We explain the observable phenomenon, the index-ṁ correlation, using a Monte Carlo simulation of radiative processes in the innermost part (CC) of a BH source, and we account for the Comptonization processes in the presence of thermal and bulk motions, as basic types of plasma motion. We show that, when ṁ increases, BH sources evolve to high and very soft states (HSS and VSS, respectively), in which the strong blackbody (BB)-like and steep power-law components are formed in the resulting X-ray spectrum. The simultaneous detection of these two components strongly depends on the sensitivity of high-energy instruments, given that the relative contribution of the hard power-law tail in the resulting VSS spectrum can be very low, which is why, to date, RXTE observations of the VSS X-ray spectrum have been characterized by the presence of the strong BB-like component only.
We also predict specific patterns for the evolution of the high-energy e-fold (cutoff) energy E_fold with ṁ for the thermal and dynamical (bulk) Comptonization cases. In the former case, E_fold monotonically decreases with ṁ; in the latter case, the E_fold decrease is followed by an increase at high values of ṁ. The observational evolution of E_fold versus ṁ can be another test for the presence of a converging flow effect in the formation of the resulting spectra in the close vicinity of BHs.

Laurent, Philippe [CEA/DSM/IRFU/APC, CEA Saclay, 91191 Gif-sur-Yvette (France); Titarchuk, Lev, E-mail: plaurent@cea.fr, E-mail: titarchuk@fe.infn.f, E-mail: lev@milkyway.gsfc.nasa.gov, E-mail: ltitarch@gmu.edu [Physics Department, University of Ferrara, Via Saragat 1, 44100 Ferrara (Italy)

2011-01-20T23:59:59.000Z

354

Indium-Gallium Segregation in CuIn$_{x}$Ga$_{1-x}$Se$_2$: An ab initio based Monte Carlo Study

Thin-film solar cells with CuIn$_x$Ga$_{1-x}$Se$_2$ (CIGS) absorber are still far below their efficiency limit, although laboratory cells already reach 19.9%. One important aspect is the homogeneity of the alloy. Large-scale simulations combining Monte Carlo and density functional calculations show that two phases coexist in thermal equilibrium below room temperature. Only at higher temperatures does CIGS become an increasingly homogeneous alloy. A larger degree of inhomogeneity for Ga-rich CIGS persists over a wide temperature range, which may contribute to the low observed efficiency of Ga-rich CIGS solar cells.

Ludwig, Christian D R; Felser, Claudia; Schilling, Tanja; Windeln, Johannes; Kratzer, Peter

2010-01-01T23:59:59.000Z
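The segregation physics in the entry above can be illustrated with a toy lattice Monte Carlo model. This is a minimal sketch, not the ab initio parameterized model used in the paper: a square-lattice binary alloy with a single invented like-neighbor coupling J, sampled with composition-conserving Metropolis swaps.

```python
import numpy as np

def metropolis_alloy(L=20, x_in=0.5, J=0.05, T=0.02, sweeps=200, seed=0):
    # Spins +1 = In, -1 = Ga; E = -J * sum over nearest neighbors s_i s_j,
    # so J > 0 favors like neighbors and drives demixing at low T.
    rng = np.random.default_rng(seed)
    spins = np.where(rng.random((L, L)) < x_in, 1, -1)

    def field(i, j):  # sum of the four nearest-neighbor spins (periodic)
        return (spins[(i+1) % L, j] + spins[(i-1) % L, j]
                + spins[i, (j+1) % L] + spins[i, (j-1) % L])

    for _ in range(sweeps * L * L):
        i1, j1 = rng.integers(L, size=2)
        i2, j2 = rng.integers(L, size=2)
        if spins[i1, j1] == spins[i2, j2]:
            continue
        # Energy change for swapping two opposite spins
        # (the small correction when the sites are adjacent is ignored here).
        dE = 2 * J * (spins[i1, j1] * field(i1, j1)
                      + spins[i2, j2] * field(i2, j2))
        if dE < 0 or rng.random() < np.exp(-dE / T):
            spins[i1, j1], spins[i2, j2] = spins[i2, j2], spins[i1, j1]

    # Nearest-neighbor correlation: ~+1 when demixed, ~0 for a random alloy.
    return 0.5 * (np.mean(spins * np.roll(spins, 1, 0))
                  + np.mean(spins * np.roll(spins, 1, 1)))

low_T = metropolis_alloy(T=0.02)   # phase coexistence regime
high_T = metropolis_alloy(T=1.0)   # homogeneous alloy regime
print(low_T, high_T)
```

The low-temperature run develops like-neighbor domains while the high-temperature run stays nearly random, which is the qualitative trend the abstract describes.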

355

We have used Lomb-Scargle periodogram analysis and Monte Carlo significance tests to detect periodicities above the 3-sigma level in the Beta Cephei stars V400 Car, V401 Car, V403 Car and V405 Car. These methods produce six previously unreported periodicities in the expected frequency range of excited pulsations: one in V400 Car, three in V401 Car, one in V403 Car and one in V405 Car. One of these six frequencies is significant above the 4-sigma level. We provide statistical significances for all of the periodicities found in these four stars.

Engelbrecht, C A; Frank, B S

2009-01-01T23:59:59.000Z
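The detection method in the entry above, a Lomb-Scargle periodogram combined with Monte Carlo significance thresholds, can be sketched in a few lines. This is an illustrative reimplementation under simple assumptions (the classic unnormalized periodogram; significance estimated by shuffling the data), not the authors' pipeline, and the light curve is synthetic.

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classic Lomb-Scargle periodogram for unevenly sampled data."""
    y = y - y.mean()
    p = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        tau = np.arctan2(np.sum(np.sin(2*w*t)), np.sum(np.cos(2*w*t))) / (2*w)
        c, s = np.cos(w*(t - tau)), np.sin(w*(t - tau))
        p[i] = 0.5 * ((y @ c)**2 / (c @ c) + (y @ s)**2 / (s @ s))
    return p

def mc_threshold(t, y, freqs, n_trials=100, quantile=0.997, seed=1):
    """Monte Carlo significance level: shuffling the data destroys any
    coherent periodicity, so the highest peaks of shuffled periodograms
    set the noise threshold."""
    rng = np.random.default_rng(seed)
    peaks = [lomb_scargle(t, rng.permutation(y), freqs).max()
             for _ in range(n_trials)]
    return np.quantile(peaks, quantile)

# Toy light curve: one real oscillation plus noise, uneven sampling.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 20.0, 150))
y = np.sin(2 * np.pi * 1.3 * t) + 0.5 * rng.normal(size=150)
freqs = np.linspace(0.1, 3.0, 300)
power = lomb_scargle(t, y, freqs)
thresh = mc_threshold(t, y, freqs)
best = freqs[np.argmax(power)]
print(best, power.max() > thresh)
```

Peaks above the shuffled-data threshold are then reported as significant periodicities, as in the abstract.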

356

The F{sub N} basis function expansion solution to the Boltzmann transport equation in Cartesian geometry is summarized and evaluated for several heterogeneous slabs of interest. The resultant scalar and angular fluxes and the critical slab thickness (when applicable) are compared to Monte Carlo transport evaluations by MCNP. A correspondence is made between the one-group macroscopic cross sections used in the F{sub N} code and energy-independent synthetic MCNP microscopic cross sections. The F{sub N} method produces results comparable to MCNP and requires fewer computer resources, but is limited to specific problem types.

Singleterry, R.C. Jr. [Argonne National Lab., Idaho Falls, ID (United States); Jahshan, S. [SNJ Consulting, Idaho Falls, ID (United States)

1996-04-01T23:59:59.000Z

357

Asynchronous Event-Driven Particle Algorithms

Science Conference Proceedings (OSTI)

We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics, is well known. We also present a recently ... Keywords: Asynchronous, event-driven, kinetic Monte Carlo, molecular dynamics, particle systems

Aleksandar Donev

2009-04-01T23:59:59.000Z

358

Science Conference Proceedings (OSTI)

In a medical linear accelerator, the electron energy parameter plays an important role in producing the electron beam. The percentage depth dose of an electron beam depends not only on the value of the electron energy but also on its type. The aim of this work is to investigate the effect of the electron energy parameter on the percentage depth dose of the electron beam. The Monte Carlo method was chosen for this project because of its suitability for simulating random processes such as particle transport in matter. The DOSXYZnrc user code was used to simulate electron transport in a water phantom. Two aspects of the electron energy parameter were investigated using Monte Carlo simulations. In the first aspect, the electron energy value and its spectrum were varied. In the second aspect, the geometry of the electron source was considered: a parallel beam and a point source were chosen as the source geometries. Measurements of percentage depth dose were conducted with an ionization chamber for comparison with the simulations. The results of this work are presented not only in terms of the shape of the percentage depth dose curves from simulation and measurement but also in terms of other features of the curves. The comparison between simulation and measurement shows that the shape of the curve depends on the electron energy value and energy type. The electron energy value affected the depth of maximum dose.

Haryanto, Freddy [Department of Physics, Institut Teknologi Bandung (Indonesia)

2010-06-22T23:59:59.000Z

359

This report summarizes the results of three previous studies to evaluate and compare the effectiveness of sampling plans for steam generator tube inspections. An analytical evaluation and Monte Carlo simulation techniques were the methods used to evaluate sampling plan performance. To test the performance of candidate sampling plans under a variety of conditions, ranges of inspection system reliability were considered along with different distributions of tube degradation. Results from the eddy current reliability studies performed with the retired-from-service Surry 2A steam generator were utilized to guide the selection of appropriate probability of detection and flaw sizing models for use in the analysis. Different distributions of tube degradation were selected to span the range of conditions that might exist in operating steam generators. The principal means of evaluating sampling performance was to determine the effectiveness of the sampling plan for detecting and plugging defective tubes. A summary of key results from the eddy current reliability studies is presented. The analytical and Monte Carlo simulation analyses are discussed along with a synopsis of key results and conclusions.

Kurtz, R.J.; Heasler, P.G.; Baird, D.B. [Pacific Northwest Lab., Richland, WA (United States)

1994-02-01T23:59:59.000Z
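A minimal sketch of this kind of Monte Carlo sampling-plan evaluation, assuming a logistic probability-of-detection (POD) curve; the tube count, degraded fraction, sample fraction, and POD parameters below are purely illustrative placeholders, not the Surry 2A models used in the report.

```python
import numpy as np

def simulate_plan(n_tubes=3000, frac_degraded=0.02, sample_frac=0.2,
                  pod50=0.4, pod_slope=10.0, n_trials=500, seed=0):
    """Estimate the fraction of defective tubes caught by randomly sampling
    a fraction of tubes and inspecting them with an imperfect POD curve."""
    rng = np.random.default_rng(seed)
    caught = total = 0
    for _ in range(n_trials):
        bad = rng.random(n_tubes) < frac_degraded
        sizes = np.zeros(n_tubes)
        sizes[bad] = rng.uniform(0.2, 1.0, bad.sum())  # flaw depth, wall fraction
        sampled = rng.random(n_tubes) < sample_frac
        # Logistic POD: 50% detection at depth pod50, sharper with pod_slope.
        pod = 1.0 / (1.0 + np.exp(-pod_slope * (sizes - pod50)))
        detected = sampled & bad & (rng.random(n_tubes) < pod)
        caught += detected.sum()
        total += bad.sum()
    return caught / total

eff = simulate_plan()
print(eff)
```

Sweeping `sample_frac` or the degradation distribution then maps out sampling-plan effectiveness under different conditions, in the spirit of the study.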

360

Study of Horizontally Oriented Ice Crystals with CALIPSO Observations and Comparison with Monte ...

... oriented ice crystals (HOIC) occur frequently in both ice and mixed-phase clouds. When compared with the case for clouds consisting of randomly oriented ice crystals (ROIC), lidar measurements from clouds ...

Baum, Bryan A.


361

... of approaches in parallel high performance computing that can potentially address the need to accelerate Monte ... level of 2.57 petaFLOPS by harvesting the power of 14,336 CPUs and 7,168 GPUs. The high performance computing industry is moving toward a hybrid computer model, where GPUs and CPUs work together to perform ...

Danon, Yaron

362

We performed a detailed analysis and a Monte Carlo simulation of the neutron lifetime experiment [S. Arzumanov et al., Phys. Lett. B 483 (2000) 15] because of the strong disagreement, by 5.6 standard deviations, between the results of that experiment and our experiment [A. Serebrov et al., Phys. Lett. B 605 (2005) 72]. We found a few effects which were not taken into account in the experiment [S. Arzumanov et al., Phys. Lett. B 483 (2000) 15]. The possible correction is -5.5 s, with an uncertainty of 2.4 s arising from limited knowledge of the initial data. We assume that, after taking this correction into account, the neutron lifetime result of [S. Arzumanov et al., Phys. Lett. B 483 (2000) 15], 885.4 +/- 0.9stat +/- 0.4syst s, could be corrected to 879.9 +/- 0.9stat +/- 2.4syst s.

Fomin, A K

2010-01-01T23:59:59.000Z

363

Understanding materials degradation under intense irradiation is important for the development of next-generation nuclear power plants. Here we demonstrate that the defect microstructural evolution in molybdenum nanofoils irradiated in situ and observed in a transmission electron microscope can be reproduced with high fidelity using an object kinetic Monte Carlo (OKMC) simulation technique. The main characteristics of defect evolution predicted by OKMC, namely, defect density and size distribution as functions of foil thickness, ion fluence, and flux, are in excellent agreement with those obtained from the in situ experiments and from previous continuum-based cluster dynamics modeling. The combination of advanced in situ experiments and high performance computer simulation/modeling is a unique tool for validating physical assumptions and mechanisms regarding materials response to irradiation, and for achieving predictive power for materials stability and safety in nuclear facilities.

Xu Donghua; Wirth, Brian D. [Department of Nuclear Engineering, University of Tennessee, Knoxville, Tennessee 37996 (United States); Li Meimei [Division of Nuclear Engineering, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Kirk, Marquis A. [Division of Materials Science, Argonne National Laboratory, Argonne, Illinois 60439 (United States)

2012-09-03T23:59:59.000Z
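The rejection-free event loop at the heart of any object kinetic Monte Carlo code can be sketched as follows. This toy version uses a static, invented event catalogue (the hop and recombination rates are placeholders); a real OKMC code rebuilds the catalogue as the defect population evolves.

```python
import numpy as np

def kmc(rates, n_steps, seed=0):
    """Minimal rejection-free (residence-time / BKL) kinetic Monte Carlo loop.
    `rates` maps event names to rates (1/s); each step picks one event with
    probability proportional to its rate and advances the clock by an
    exponentially distributed waiting time."""
    rng = np.random.default_rng(seed)
    names = list(rates)
    r = np.array([rates[k] for k in names])
    cum = np.cumsum(r)
    total = cum[-1]
    t = 0.0
    counts = {k: 0 for k in names}
    for _ in range(n_steps):
        i = np.searchsorted(cum, rng.random() * total)  # select event
        counts[names[i]] += 1
        t += -np.log(rng.random()) / total              # advance physical time
    return t, counts

# Illustrative event catalogue: vacancy hop, interstitial hop, recombination.
t, counts = kmc({"v_hop": 1e3, "i_hop": 1e5, "recomb": 1e2}, 10000)
print(t, counts)
```

Fast events dominate the step count while the clock still advances by physically correct stochastic increments, which is what lets such simulations reach experimental fluences.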

364

Dose calculations for tiles exposed to the Hiroshima atomic bomb radiations were undertaken. A Monte Carlo code, ABOMB, was developed which considers the characteristics of atomic bomb gamma-ray fluences and geometrical configurations. ABOMB was applied to tile dose calculations for the available photon sources with definite fluences. Its validity was tested by comparing the depth-dose curves calculated for {sup 60}Co and {sup 252}Cf beams with the equivalent experimental data obtained in the laboratory. Selection of parameters, contribution of backscattering, and computing time also were considered. The present calculations are considered to be accurate with uncertainties less than +/- 10%, and may be useful for correcting or reinforcing atomic bomb gamma-ray doses, together with tile dose measurements by thermoluminescent (TL) dosimetry.

Uehara, S.; Hoshi, M.; Sawada, S.; Nagatomo, T.; Ichikawa, Y.

1988-03-01T23:59:59.000Z

365

We present Monte Carlo simulations for the focusing design of a novel cold-neutron triple-axis spectrometer to be installed at the end position of the cold guide NL-1 of the research reactor FRM-II in Munich, Germany. Our simulations are of general relevance for the design of triple-axis spectrometers at end positions of neutron guides. Using the McStas program code, we traced ray trajectories to compare parabolic and elliptic focusing concepts. In addition, the design of the monochromator was optimized with respect to crystal size and mosaic spread. The parabolic focusing concept is superior to the elliptic alternative in view of the neutron intensity distribution as a function of energy and divergence. In particular, the elliptical configuration leads to an inhomogeneous divergence distribution.

Komarek, A C; Braden, M

2011-01-01T23:59:59.000Z

366

A new hybrid experiment of the Tibet AS{\\gamma} collaboration has been running at Tibet, China, since August 2011. It consists of a low-threshold burst-detector-grid (YAC-II, Yangbajing Air shower Core array), the Tibet air-shower array (Tibet-III), and a large underground water Cherenkov muon detector (MD). In this paper, the capability of measuring the chemical components (proton, helium, and iron) with the (Tibet-III+YAC-II) array is investigated by means of an extensive Monte Carlo simulation in which the secondary particles are propagated through the (Tibet-III+YAC-II) array and an artificial neural network (ANN) method is applied for primary mass separation. Our simulation shows that the new installation is a powerful tool for studying the chemical composition and, in particular, for obtaining the primary energy spectrum of the major component at the knee.

The Tibet AS{\\gamma} Collaboration; M. Amenomori; X. J. Bi; D. Chen; W. Y. Chen; S. W. Cui; Danzengluobu; L. K. Ding; X. H. Ding; C. F. Feng; Zhaoyang Feng; Z. Y. Feng; Q. B. Gou; H. W. Guo; Y. Q. Guo; H. H. He; Z. T. He; K. Hibino; N. Hotta; Haibing Hu; H. B. Hu; J. Huang; W. J. Li; H. Y. Jia; L. Jiang; F. Kajino; K. Kasahara; Y. Katayose; C. Kato; K. Kawata; Labaciren; G. M. Le; A. F. Li; C. Liu; J. S. Liu; H. Lu; X. R. Meng; K. Mizutani; K. Munakata; H. Nanjo; M. Nishizawa; M. Ohnishi; I. Ohta; S. Ozawa; X. L. Qian; X. B. Qu; T. Saito; T. Y. Saito; M. Sakata; T. K. Sako; J. Shao; M. Shibata; A. Shiomi; T. Shirai; H. Sugimoto; M. Takita; Y. H. Tan; N. Tateyama; S. Torii; H. Tsuchiya; S. Udo; H. Wang; H. R. Wu; L. Xue; Y. Yamamoto; Z. Yang; S. Yasue; A. F. Yuan; T. Yuda; L. M. Zhai; H. M. Zhang; J. L. Zhang; X. Y. Zhang; Y. Zhang; Yi Zhang; Ying Zhang; Zhaxisangzhu; X. X. Zhou

2013-03-12T23:59:59.000Z


368

Science Conference Proceedings (OSTI)

Purpose: To validate the feasibility of developing a radiotherapy unit with kilovoltage X-rays through actual irradiation of live rabbit lungs, and to explore the practical issues anticipated in future clinical application to humans through Monte Carlo dose simulation. Methods and Materials: A converging stereotactic irradiation unit was developed, consisting of a modified diagnostic computed tomography (CT) scanner. A tiny cylindrical volume in 13 normal rabbit lungs was individually irradiated with single fractional absorbed doses of 15, 30, 45, and 60 Gy. Observational CT scanning of the whole lung was performed every 2 weeks for 30 weeks after irradiation. After 30 weeks, histopathologic specimens of the lungs were examined. Dose distribution was simulated using the Monte Carlo method, and dose-volume histograms were calculated according to the data. A trial estimation of the effect of respiratory movement on dose distribution was made. Results: A localized hypodense change and subsequent reticular opacity around the planning target volume (PTV) were observed in CT images of rabbit lungs. Dose-volume histograms of the PTVs and organs at risk showed a focused dose distribution to the target and sufficient dose lowering in the organs at risk. Our estimate of the dose distribution, taking respiratory movement into account, revealed dose reduction in the PTV. Conclusions: A converging stereotactic irradiation unit using kilovoltage X-rays was able to generate a focused radiobiologic reaction in rabbit lungs. Dose-volume histogram analysis and estimated sagittal dose distribution, considering respiratory movement, clarified the characteristics of the irradiation received from this type of unit.

Kawase, Takatsugu [Department of Radiology, Keio University School of Medicine, Tokyo (Japan); CREST, Japan Science and Technology Agency, Tokyo (Japan); Kunieda, Etsuo [Department of Radiology, Keio University School of Medicine, Tokyo (Japan); CREST, Japan Science and Technology Agency, Tokyo (Japan)], E-mail: kunieda-mi@umin.ac.jp; Deloar, Hossain M. [CREST, Japan Science and Technology Agency, Tokyo (Japan); Oncology Service, Medical Physics and Bioengineering Department, Christchurch Hospital, Christchurch (New Zealand); Tsunoo, Takanori [Department of Radiology, Keio University School of Medicine, Tokyo (Japan); CREST, Japan Science and Technology Agency, Tokyo (Japan); Seki, Satoshi [Department of Radiology, Keio University School of Medicine, Tokyo (Japan); Oku, Yohei [CREST, Japan Science and Technology Agency, Tokyo (Japan); Department of Radiology, Keio University School of Medicine, Tokyo (Japan); Saitoh, Hidetoshi [Division of Radiological Sciences, Faculty of Health Sciences, Tokyo Metropolitan University, Tokyo (Japan); CREST, Japan Science and Technology Agency, Tokyo (Japan); Saito, Kimiaki [Center for Promotion of Computational Science and Engineering, Japan Atomic Energy Agency, Ibaraki (Japan); CREST, Japan Science and Technology Agency, Tokyo (Japan); Ogawa, Eileen N. [Department of Anesthesiology, Keio University School of Medicine, Tokyo (Japan); Ishizaka, Akitoshi [Department of Medicine, Keio University School of Medicine, Tokyo (Japan); Kameyama, Kaori [Division of Diagnostic Pathology, Keio University School of Medicine, Tokyo (Japan); Kubo, Atsushi [Department of Radiology, Keio University School of Medicine, Tokyo (Japan)

2009-10-01T23:59:59.000Z

369

Joint International Conference on Supercomputing in Nuclear Applications and Monte Carlo 2013 (SNA...

...Cr alloys are investigated using Density Functional Theory (DFT) formalism, in the form of constrained non... temperature, represent the key unknown entities critical to the development of viable fusion reactor design...

370

A Moment-Preserving Single-Event Monte Carlo Model of Electron and Positron Energy-Loss Straggling.

Analog simulation of energy straggling of electrons and positrons is computationally impractical because of long-range Coulomb forces resulting in highly peaked cross sections about small…

Gonzales, Matthew

2013-01-01T23:59:59.000Z

371

Science Conference Proceedings (OSTI)

The free energy of solvation and dissociation of hydrogen chloride in water is calculated through a combined molecular simulation / quantum chemical approach at four temperatures between T = 300 and 450 K. The free energy is first decomposed into the sum of two components: the Gibbs free energy of transfer of molecular HCl from the vapor to the aqueous liquid phase and the standard-state free energy of acid dissociation of HCl in aqueous solution. The former quantity is calculated from Gibbs ensemble Monte Carlo simulations using either Kohn-Sham density functional theory or a molecular mechanics force field to determine the system's potential energy. The latter free energy contribution is computed using a continuum solvation model utilizing either experimental reference data or micro-solvated clusters. The predicted combined solvation and dissociation free energies agree very well with available experimental data. CJM was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.

McGrath, Matthew; Kuo, I-F W.; Ngouana, Brice F.; Ghogomu, Julius N.; Mundy, Christopher J.; Marenich, Aleksandr; Cramer, Christopher J.; Truhlar, Donald G.; Siepmann, Joern I.

2013-08-28T23:59:59.000Z

372

In modeling direct current (dc) discharges, such as dc magnetrons, a current-limiting device is often neglected. In this study, it is shown that an external circuit consisting of a voltage source and a resistor is indispensable for calculating the correct cathode current. Omitting the external circuit can cause the current to converge (if at all) to the wrong volt-ampere regime. The importance of this external circuit is studied by comparing the results with those of a model without a current-limiting device. For this purpose, a 2d3v particle-in-cell/Monte Carlo collisions model was applied to calculate discharge characteristics, such as cathode potential and current, particle fluxes and densities, and the potential distribution in the plasma. It is shown that the calculated cathode current is several orders of magnitude lower when an external circuit is omitted, leading to lower charged particle fluxes and densities, and a wider plasma sheath. It was also shown that only simulations with an external circuit can bring the cathode current into a certain plasma regime, which has its own typical properties. In this work, the normal and abnormal regimes were studied.

Bultinck, E.; Kolev, I.; Bogaerts, A. [Research Group PLASMANT, Department of Chemistry, University of Antwerp, Universiteitsplein 1, 2610 Antwerp (Belgium); Depla, D. [Department of Solid State Sciences, Ghent University, Krijgslaan 281 (S1), 9000 Ghent (Belgium)

2008-01-01T23:59:59.000Z

373

Science Conference Proceedings (OSTI)

A two-dimensional axisymmetric electromagnetic particle-in-cell code with Monte Carlo collisions has been developed for applied-field magnetoplasmadynamic thruster simulation. This theoretical approach establishes a particle acceleration model to investigate the microscopic and macroscopic characteristics of the particles. The new simulation code was used to study the physical processes associated with applied magnetic fields. In this paper (I), details of the computational procedure and results of predictions of local plasma and field properties are presented. The numerical model was applied to the configuration of a NASA Lewis Research Center 100-kW magnetoplasmadynamic thruster, which has well-documented experimental results. The applied magnetic field strength was varied from 0 to 0.12 T, and the effects on thrust were calculated as a basis for verification of the theoretical approach. With this confirmation, the changes in the distributions of ion density, velocity, and temperature throughout the acceleration region related to the applied magnetic fields were investigated. Using these results, the effects of the applied field on physical processes in the thruster discharge region could be represented in detail, and those results are reported.

Tang Haibin; Cheng Jiao; Liu Chang [School of Astronautics, Beijing University of Aeronautics and Astronautics, Beijing 100191 (China); York, Thomas M. [Ohio State University, Columbus, Ohio 43235 (United States)

2012-07-15T23:59:59.000Z

374

The Laser Interferometer Space Antenna (LISA) defines new demands on data analysis efforts in its all-sky gravitational wave survey, recording simultaneously thousands of galactic compact object binary foreground sources and tens to hundreds of background sources like binary black hole mergers and extreme-mass-ratio inspirals. We approach this problem with an adaptive and fully automatic Reversible Jump Markov Chain Monte Carlo sampler, able to sample from the joint posterior density function (as established by Bayes' theorem) for a given mixture of signals ''out of the box'', handling the total number of signals as an additional unknown parameter besides the unknown parameters of each individual source and the noise floor. We show, in examples from the LISA Mock Data Challenge implementing the full response of LISA in its TDI description, that this sampler is able to extract monochromatic double white dwarf signals out of colored instrumental noise and additional foreground and background noise successfully in a global fitting approach. We present two examples with a fixed number of signals (MCMC sampling) and one example with an unknown number of signals (RJ-MCMC), the latter further promoting the idea behind an experimental adaptation of the model indicator proposal densities in the main sampling stage. We note that the experienced runtimes and degeneracies in parameter extraction limit the shown examples to the extraction of a low but realistic number of signals.

Stroeer, Alexander; Veitch, John [School of Physics and Astronomy, University of Birmingham, Edgbaston, Birmingham B15 2TT (United Kingdom)

2009-09-15T23:59:59.000Z
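The fixed-dimension MCMC core underlying such a sampler can be illustrated with a toy monochromatic-signal extraction. This is a plain Metropolis sketch on synthetic data, not the RJ-MCMC sampler or the LISA TDI response; the amplitude, frequency, noise level, and prior bounds are invented for illustration.

```python
import numpy as np

def log_post(theta, t, d, sigma):
    # Gaussian log-likelihood for a monochromatic signal A*sin(2*pi*f*t),
    # with a flat prior inside hard bounds.
    A, f = theta
    if A <= 0.0 or not (1.0 < f < 1.5):
        return -np.inf
    r = d - A * np.sin(2 * np.pi * f * t)
    return -0.5 * np.sum(r * r) / sigma**2

def metropolis(t, d, sigma, n_iter=20000, step=(0.05, 0.01), seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([0.5, 1.2])            # starting guess for (A, f)
    lp = log_post(theta, t, d, sigma)
    chain = np.empty((n_iter, 2))
    for k in range(n_iter):
        prop = theta + rng.normal(0.0, step)
        lp_prop = log_post(prop, t, d, sigma)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[k] = theta
    return chain

# Synthetic data stream: one monochromatic signal buried in white noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0, 400)
d = 0.8 * np.sin(2 * np.pi * 1.23 * t) + rng.normal(0.0, 0.5, 400)
chain = metropolis(t, d, 0.5)
A_est, f_est = chain[5000:].mean(axis=0)  # discard burn-in, take posterior mean
print(A_est, f_est)
```

A reversible-jump sampler adds a second kind of move that proposes inserting or deleting a whole signal, so the number of signals itself becomes a sampled quantity.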

375

Science Conference Proceedings (OSTI)

An approach to estimating the uncertainty of initial data in calculations by the Monte Carlo method is considered. The relative geometrical position of parts of the analyzed system is assumed to be unknown. The influence of different approximations in the description of the geometrical shape of system objects is studied. The effect of unknown location and approximate shape description of solid radioactive waste in the container on the magnitude of dose fields is considered for photon transport problems.

Androsenko, P. A.; Kolganov, K. M., E-mail: smilodonam@yandex.ru; Mogulyan, V. G. [National Research Nuclear University MEPhI, Obninsk Institute for Nuclear Power Engineering (Russian Federation)

2012-12-15T23:59:59.000Z
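The basic idea, propagating an unknown geometrical configuration through a Monte Carlo dose estimate, can be sketched with a toy point-kernel model. The container size, detector position, and attenuation coefficient below are invented placeholders, not values from the study.

```python
import numpy as np

def point_kernel_dose(src, det, mu=0.02):
    # Toy point-kernel response: inverse-square law times exponential
    # attenuation with coefficient mu (1/cm); dose units are arbitrary.
    r = np.linalg.norm(det - src)
    return np.exp(-mu * r) / (4.0 * np.pi * r**2)

rng = np.random.default_rng(0)
det = np.array([100.0, 0.0, 0.0])   # detector 100 cm from the drum center

# Unknown waste location: sample source positions uniformly inside a
# 40 cm radius spherical container (rejection sampling from the cube).
pts = rng.uniform(-40.0, 40.0, size=(20000, 3))
pts = pts[np.linalg.norm(pts, axis=1) <= 40.0]

doses = np.array([point_kernel_dose(p, det) for p in pts])
rel_spread = doses.std() / doses.mean()   # dose uncertainty from geometry alone
print(rel_spread)
```

The sizable relative spread shows how an unknown source position inside the container translates directly into uncertainty of the calculated dose field.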

376

A Multivariate Training Technique with Event Reweighting

The performance of Artificial Neural Networks and Boosted Decision Trees using equal event weight and effective event weight training is compared in this paper. The comparison is performed in the context of the physics analysis of the ATLAS experiment at the Large Hadron Collider (LHC), which will explore the fundamental nature of matter and the basic forces that shape our universe. Based on our studies using ATLAS Monte Carlo samples of simulated data, event pattern recognition with effective event weight training has significantly better performance than that with equal event weight training.

Yang, Hai-Jun; Wilson, Alan; Zhao, Zhengguo; Zhou Bing

2008-01-01T23:59:59.000Z
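How per-event weights enter training can be sketched with a small gradient-descent classifier. This toy weighted logistic regression is not an ANN or BDT, and the weights are random placeholders rather than physics event weights; it only shows the mechanics of weighting each event's contribution to the loss, which is what effective-event-weight training does.

```python
import numpy as np

def train_logreg(X, y, w, lr=0.1, epochs=500):
    """Logistic regression trained with per-event weights by gradient
    descent. With w = 1 for all events this is equal-weight training;
    passing nonuniform event weights gives weighted training."""
    w = w / w.sum()                        # normalize total event weight
    theta = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ theta + b)))
        g = w * (p - y)                    # weighted gradient of the log-loss
        theta -= lr * (X.T @ g)
        b -= lr * g.sum()
    return theta, b

# Synthetic two-class sample with per-event weights.
rng = np.random.default_rng(0)
n = 2000
y = (rng.random(n) < 0.5).astype(float)
X = rng.normal(size=(n, 2)) + np.where(y[:, None] == 1, 1.0, -1.0)
weights = rng.uniform(0.5, 2.0, n)        # stand-in for MC event weights
theta, b = train_logreg(X, y, weights)
p = 1.0 / (1.0 + np.exp(-(X @ theta + b)))
acc = ((p > 0.5) == y).mean()
print(acc)
```

In the paper's setting the weights would be the Monte Carlo generator weights, so the classifier optimizes the physically meaningful (weighted) event yields rather than raw event counts.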

377

The kinetics for the selective hydrogenation of acetylene-ethylene mixtures over model Pd(111) and bimetallic Pd-Ag alloy surfaces were examined using first-principles based kinetic Monte Carlo (KMC) simulations to elucidate the effects of alloying as well as process conditions (temperature and hydrogen partial pressure). The mechanisms that control the selective and unselective routes, which included hydrogenation, dehydrogenation, and C-C bond breaking pathways, were analyzed using first-principles density functional theory (DFT) calculations. The results were used to construct an intrinsic kinetic database that was used in a variable time step kinetic Monte Carlo simulation to follow the kinetics and the molecular transformations in the selective hydrogenation of acetylene-ethylene feeds over Pd and Pd-Ag surfaces. The lateral interactions between coadsorbates that occur through-surface and through-space were estimated using DFT-parameterized bond order conservation and van der Waals interaction models, respectively. The simulation results show that the rate of acetylene hydrogenation as well as the ethylene selectivity increase with temperature over both the Pd(111) and the Pd-Ag/Pd(111) alloy surfaces. The selective hydrogenation of acetylene to ethylene proceeds via the formation of a vinyl intermediate. The unselective formation of ethane is the result of the over-hydrogenation of ethylene as well as over-hydrogenation of vinyl to form ethylidene. Ethylidene further hydrogenates to form ethane and dehydrogenates to form ethylidyne. While ethylidyne is not reactive, it can block adsorption sites, which limits the availability of hydrogen on the surface and thus acts to enhance the selectivity.
Alloying Ag into the Pd surface decreases the overall rate but increases the ethylene selectivity significantly by promoting the selective hydrogenation of vinyl to ethylene and concomitantly suppressing the unselective path involving the hydrogenation of vinyl to ethylidene and the dehydrogenation of ethylidene to ethylidyne. This is consistent with experimental results which suggest that only the predominant hydrogenation path, involving the sequential addition of hydrogen to form vinyl and ethylene, exists over the Pd-Ag alloys. Ag enhances the desorption of ethylene and hydrogen from the surface, thus limiting their ability to undergo subsequent reactions. The simulated apparent activation barriers were calculated to be 32-44 kJ/mol on Pd(111) and 26-31 kJ/mol on Pd-Ag/Pd(111), respectively. The reaction was found to be essentially first order in hydrogen over the Pd(111) and Pd-Ag/Pd(111) surfaces. The results reveal that increases in the hydrogen partial pressure increase the activity but decrease the ethylene selectivity over both the Pd(111) and Pd-Ag/Pd(111) surfaces. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.

Mei, Donghai; Neurock, Matthew; Smith, C Michael

2009-10-22T23:59:59.000Z

378

Science Conference Proceedings (OSTI)

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

2005-09-01T23:59:59.000Z

379

Purpose: The experimental determination of doses at proximal distances from radioactive sources is difficult because of the steepness of the dose gradient. The goal of this study was to determine the relative radial dose distribution for a low dose rate {sup 192}Ir wire source using electron paramagnetic resonance imaging (EPRI) and to compare the results to those obtained using Gafchromic EBT film dosimetry and Monte Carlo (MC) simulations. Methods: Lithium formate and ammonium formate were chosen as the EPR dosimetric materials and were used to form cylindrical phantoms. The dose distribution of the stable radiation-induced free radicals in the lithium formate and ammonium formate phantoms was assessed by EPRI. EBT films were also inserted inside in ammonium formate phantoms for comparison. MC simulation was performed using the MCNP4C2 software code. Results: The radical signal in irradiated ammonium formate is contained in a single narrow EPR line, with an EPR peak-to-peak linewidth narrower than that of lithium formate ({approx}0.64 and 1.4 mT, respectively). The spatial resolution of EPR images was enhanced by a factor of 2.3 using ammonium formate compared to lithium formate because its linewidth is about 0.75 mT narrower than that of lithium formate. The EPRI results were consistent to within 1% with those of Gafchromic EBT films and MC simulations at distances from 1.0 to 2.9 mm. The radial dose values obtained by EPRI were about 4% lower at distances from 2.9 to 4.0 mm than those determined by MC simulation and EBT film dosimetry. Conclusions: Ammonium formate is a suitable material under certain conditions for use in brachytherapy dosimetry using EPRI. In this study, the authors demonstrated that the EPRI technique allows the estimation of the relative radial dose distribution at short distances for a {sup 192}Ir wire source.

Kolbun, N.; Leveque, Ph.; Abboud, F.; Bol, A.; Vynckier, S.; Gallez, B. [Biomedical Magnetic Resonance Unit, Louvain Drug Research Institute, Universite catholique de Louvain, Avenue Mounier 73.40, B-1200 Brussels (Belgium); Molecular Imaging and Experimental Radiotherapy Unit, Institute of Experimental and Clinical Research, Universite catholique de Louvain, Avenue Hippocrate 55, B-1200 Brussels (Belgium); Biomedical Magnetic Resonance Unit, Louvain Drug Research Institute, Universite catholique de Louvain, Avenue Mounier 73.40, B-1200 Brussels (Belgium)

2010-10-15T23:59:59.000Z

380

In a paper by Titarchuk & Shrader, the general formulation and results for photon reprocessing (downscattering) that included recoil and Comptonization effects due to divergence of the flow were presented. Here we show the Monte Carlo (MC) simulated continuum and line spectra. We also provide an analytical description of the simulated continuum spectra using the diffusion approximation. We have simulated the propagation of monochromatic and continuum photons in a bulk outflow from a compact object. Electron scattering of the photons within the expanding flow leads to a decrease of their energy which is of first order in V/c (where V is the outflow velocity). The downscattering effect of first order in V/c in the diverging flow is explained by semi-analytical calculations and confirmed by MC simulations. We conclude that redshifted lines and downscattering bumps are intrinsic properties of powerful outflows for which the Thomson optical depth is greater than one. We fitted our model line profiles to the observations using four free parameters: \\beta=V/c, the optical depth of the wind \\tau, the wind temperature kT_e and the original line photon energy E_0. We show how the primary spectrum emitted close to the black hole is modified by reprocessing in the warm wind. In the framework of our wind model, the fluorescent iron K_alpha line is formed in the partly ionized wind as a result of illumination by central source continuum photons. The demonstrated application of our outflow model to the XMM observations of MCG 6-30-15, and to the ASCA observations of GRO J1655-40, points to a potentially powerful spectral diagnostic for probing the outflow-central object connection in Galactic and extragalactic BH sources.

Philippe Laurent; Lev Titarchuk

2006-11-06T23:59:59.000Z

381

The observed gas-phase molecular inventory of hot cores is believed to be significantly impacted by the products of chemistry in interstellar ices. In this study, we report the construction of a full macroscopic Monte Carlo model of both the gas-phase chemistry and the chemistry occurring in the icy mantles of interstellar grains. Our model treats icy grain mantles in a layer-by-layer manner, which incorporates laboratory data on ice desorption correctly. The ice treatment includes a distinction between a reactive ice surface and an inert bulk. The treatment also distinguishes between zeroth- and first-order desorption, and includes the entrapment of volatile species in more refractory ice mantles. We apply the model to the investigation of the chemistry in hot cores, in which a thick ice mantle built up during the previous cold phase of protostellar evolution undergoes surface reactions and is eventually evaporated. For the first time, the impact of a detailed multilayer approach to grain mantle formation on the warm-up chemistry is explored. The use of a multilayer ice structure has a mixed impact on the abundances of organic species formed during the warm-up phase. For example, the abundance of gaseous HCOOCH{sub 3} is lower in the multilayer model than in previous grain models that do not distinguish between layers (so-called two phase models). Other gaseous organic species formed in the warm-up phase are affected slightly. Finally, we find that the entrapment of volatile species in water ice can explain the two-jump behavior of H{sub 2}CO previously found in observations of protostars.
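The distinction drawn above between zeroth-order desorption (rate independent of coverage, multilayer regime) and first-order desorption (rate proportional to coverage) can be made concrete with a short sketch. This is a hedged illustration in Python, not the authors' model: the `desorb` function, its Arrhenius parameters, and the units are all hypothetical.

```python
import math

def desorb(n0, nu, E_bind, T, order, dt=1e-3, steps=5000):
    """Integrate dn/dt for thermal desorption with an Arrhenius rate.

    order=0: dn/dt = -nu*exp(-E/T)       (multilayer, coverage-independent)
    order=1: dn/dt = -nu*exp(-E/T) * n   (submonolayer, proportional to n)
    n in monolayers; E_bind and T in kelvin; nu in 1/s (all hypothetical).
    """
    k = nu * math.exp(-E_bind / T)
    n = n0
    for _ in range(steps):
        if order == 0:
            n -= k * dt          # linear depletion until the layer is gone
        else:
            n -= k * n * dt      # exponential decay of the remaining coverage
        n = max(n, 0.0)
    return n

# Zeroth-order ice loses material linearly; first-order decays exponentially.
n_zeroth = desorb(1.0, nu=1e3, E_bind=1000.0, T=100.0, order=0)
n_first = desorb(1.0, nu=1e3, E_bind=1000.0, T=100.0, order=1)
```

The layer-by-layer model in the abstract switches, in effect, between these two regimes as the surface is eroded.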

Vasyunin, A. I. [Department of Chemistry, The University of Virginia, Charlottesville, VA (United States)] [Department of Chemistry, The University of Virginia, Charlottesville, VA (United States); Herbst, Eric, E-mail: anton.vasyunin@gmail.com, E-mail: eh2ef@virginia.edu [Departments of Chemistry, Astronomy, and Physics, The University of Virginia, Charlottesville, VA (United States)] [Departments of Chemistry, Astronomy, and Physics, The University of Virginia, Charlottesville, VA (United States)

2013-01-10T23:59:59.000Z

382

Properties of the X-ray halo of the eclipsing X-ray pulsar 4U 1538-52 are derived from a 25 ksec observation by the Chandra X-Ray Observatory. Profiles of the halo, compiled in two energy ranges, 2 to 4 keV and 4 to 6 keV, and three time intervals before and after an eclipse immersion, exhibit a three-peak shape. The observed profiles are fitted by the profiles of a simulated halo generated by a Monte Carlo ray-tracing code operating on a model of three discrete clouds and a spectrum of the photons emitted by the source over a period of time extending from 270 ksec before the observation began till it ended. The distances of the two nearer dust clouds are fixed at the distances of the peaks of atomic hydrogen derived from the 21-cm spectrum in the direction of the X-ray source, namely at 1.30 and 2.56 kpc. A good fit is achieved with the source at a distance 4.5 kpc, the distance of the third cloud at 4.05 kpc, the total scattering optical depth of the three clouds equal to 0.159 at 3 keV, and the column density of hydrogen set to 4.6x10^22 cm^-2. With Av=6.5 mag for the binary companion star, QV Nor, the ratio of the scattering optical depth at 3 keV to the visual extinction is 0.0234 mag^-1.

George W. Clark

2004-04-16T23:59:59.000Z

383

The self-healing diffusion Monte Carlo algorithm (SHDMC) [Reboredo, Hood and Kent, Phys. Rev. B {\\bf 79}, 195117 (2009); Reboredo, {\\it ibid.} {\\bf 80}, 125110 (2009)] is extended to study the ground and excited states of magnetic and periodic systems. The method converges to exact eigenstates as the statistical data collected increase, provided the wave function is sufficiently flexible. It is shown that the wave functions of complex antisymmetric eigenstates can be written as the product of an antisymmetric real factor and a symmetric phase factor. The dimensionality of the nodal surface depends on whether the phase is a scalar function or not. A recursive optimization algorithm is derived from the time evolution of the mixed probability density, which is given by an ensemble of electronic configurations (walkers) with complex weight. This complex weight allows the amplitude of the fixed-node wave function to move away from the trial wave function phase. This approach is a generalization of both SHDMC and the fixed-phase approximation [Ortiz, Ceperley and Martin, Phys. Rev. Lett. {\\bf 71}, 2777 (1993)]. When used recursively it simultaneously improves the node and the phase. The algorithm is demonstrated to converge to nearly exact solutions of model systems with periodic boundary conditions or applied magnetic fields. The computational cost is proportional to the number of independent degrees of freedom of the phase. The method is applied to obtain low-energy excitations of Hamiltonians with a magnetic field or periodic boundary conditions, and is used to optimize wave functions with twisted boundary conditions, which are included in a many-body Bloch phase. Potential applications of this new method to periodic, magnetic, and complex Hamiltonians are discussed.
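As background to the abstract above, the basic diffusion Monte Carlo loop (diffuse, weight, branch) can be sketched in a few lines. This toy deliberately omits everything that makes SHDMC interesting (importance sampling, fixed nodes or phases, wave-function optimization); it only illustrates the walker-ensemble projection for a 1D harmonic oscillator, whose exact ground-state energy is 0.5 in natural units.

```python
import math
import random

def dmc_harmonic(n_walkers=400, steps=1500, dt=0.01, seed=2):
    """Bare-bones diffusion Monte Carlo for V(x) = x^2/2 (exact E0 = 0.5).

    Each step: diffuse walkers (kernel of exp(-T*dt)), weight each by
    exp(-V*dt), and resample to a fixed population size.  The growth
    estimator -log(<w>)/dt averages to the ground-state energy.
    """
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_walkers)]
    energies = []
    for _ in range(steps):
        xs = [x + rng.gauss(0.0, math.sqrt(dt)) for x in xs]   # diffusion move
        ws = [math.exp(-0.5 * x * x * dt) for x in xs]         # branching weights
        energies.append(-math.log(sum(ws) / n_walkers) / dt)   # growth estimator
        xs = rng.choices(xs, weights=ws, k=n_walkers)          # split/kill walkers
    # discard the first half as equilibration
    return sum(energies[steps // 2:]) / (steps - steps // 2)
```

With the seeded generator this converges near 0.5 up to time-step and statistical error; a real calculation would add a trial wave function and the node/phase machinery the abstract describes.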

Fernando Agustín Reboredo

2010-07-19T23:59:59.000Z

384

CCpi0 Event Reconstruction at MiniBooNE

We describe the development of a fitter to reconstruct {nu}{sub {mu}} induced Charged-Current single {pi}{sup 0} (CC{pi}{sup 0}) events in an oil Cerenkov detector (CH{sub 2}). These events are fit using a generic muon and two photon extended track hypothesis from a common event vertex. The development of ring finding and particle identification are described. Comparisons between data and Monte Carlo are presented for a few kinematic distributions.

Nelson, Robert H.; /Colorado U.

2009-09-01T23:59:59.000Z

385

NLE Websites -- All DOE Office Websites (Extended Search)

new events. STEM Education Events STEM Education Programs Teachers (K-12) Students (K-12) Higher Education Regional Education Partners LANL STEM Education Summit Resources...

386

Riemannian Manifold Hamiltonian Monte Carlo

Girolami, M.; Calderhead, B.; Chin, S. DCS Technical Report Series, 35 pp., Dept. of Computing Science, University of Glasgow

Girolami, M.

387

Monte Carlo Simulation of Solidification

Science Conference Proceedings (OSTI)

f(θ) = (2 + cos θ)(1 - cos θ)^2 / 4. (3) In the simulation, it is assumed that the nucleation in a cell (i, j) would not take place until the accumulation of nucleation (Ni) ...

388

NLE Websites -- All DOE Office Websites (Extended Search)

Events All upcoming events are listed below. | View full calendar Add EETD Calendar to Google Calendar Fri, Sep 6, 2013 - 12:00pm - 1:00pm Wireless Data Collection and Actuation...

389

Configurational-bias Monte Carlo simulations in the Gibbs ensemble using the TraPPE force field were carried out to predict the pressure-composition diagrams for the binary mixture of ethanol and 1,1,1,2,3,3,3-heptafluoropropane at 283.17 and 343.13 K. A new approach is introduced that allows predictions at one temperature to be scaled using the differences between experimental and simulated Gibbs free energies of transfer obtained at another temperature. A detailed analysis of the molecular structure and hydrogen bonding for this fluid mixture is provided.
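The scaling idea described above rests on the relation between the Gibbs free energy of transfer and the ratio of a species' number densities in the two coexisting phases, ΔG = -RT ln(ρ_b/ρ_a). A minimal sketch, with hypothetical function names and made-up numbers (not the paper's data):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def delta_g_transfer(rho_a, rho_b, T):
    """Gibbs free energy of transfer a -> b from the ratio of the
    species' number densities in the two coexisting phases."""
    return -R * T * math.log(rho_b / rho_a)

def scaled_prediction(rho_sim_a, rho_sim_b, dG_shift, T):
    """Correct a simulated partitioning with an experiment-minus-
    simulation free-energy offset (in spirit of the paper's approach;
    dG_shift would come from data at another temperature)."""
    dG = delta_g_transfer(rho_sim_a, rho_sim_b, T) + dG_shift
    return rho_sim_a * math.exp(-dG / (R * T))  # corrected density in phase b

dG_example = delta_g_transfer(1.0, 0.1, 300.0)   # about +5.7 kJ/mol
```

A zero offset reproduces the raw simulation result; a nonzero offset shifts the predicted partitioning by a Boltzmann factor.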

Rai, N; Rafferty, J L; Maiti, A; Siepmann, I

2007-02-28T23:59:59.000Z

390

Measurement of jet multiplicity in top pair events

The normalized differential $t\\bar t$ cross section in jet multiplicity bins is presented, measured in proton-proton collisions using 5.0~fb$^{-1}$ of data collected at $\\sqrt{s}$ = 7~\\TeV. The analysis is performed in the electron + jets and the muon + jets channels. The jet multiplicity distribution is sensitive to initial state radiation. A comparison of the data with different Monte Carlo generators is shown. After background subtraction, the data are in agreement with $t\\bar t$ signal Monte Carlo distributions. Additionally, the measurement of the top quark pair differential cross-section in the number of radiated additional hard partons in the muon + jets channel is presented. The measured fractions of events with $t\\bar t$ + 0, 1, and $\\geq$ 2 additional partons are in good agreement with different Monte Carlo predictions.

CMS Collaboration

2012-01-01T23:59:59.000Z

391

The electron energy spectra, not connected to β-decay, of 235U and 239Pu films irradiated by thermal neutrons, obtained by a Monte Carlo method, are presented in this work. The modelling was performed with the computer code MCNP4C (Monte Carlo Neutron Photon transport code system), which allows computer experiments on the joint transport of neutrons, photons and electrons. The experiment geometry and the irradiation parameters were the same as in [11] (only the thickness of the foil varied). As a result of the computer experiments, electron spectra were obtained for samples of 235U, 239Pu and uranium dioxide of 93% enrichment, represented as a set of films of 22 mm diameter and different thicknesses: 0.001 mm, 0.005 mm, 0.02 mm, 0.01 mm, 0.1 mm and 1.0 mm; and also for a uranium dioxide film of 93% enrichment (diameter 22 mm, thickness 0.01 mm) located between two protective 0.025 mm aluminium disks (the conditions of the experiment in [11]), for which the electron spectrum was recorded at the output surface of a protective disk. A comparative analysis of the experimental [11] and calculated β-spectra is carried out.

V. D. Rusova; V. N. Pavlovychb; V. A. Tarasova; S. V. Iaroshenkob; D. A. Litvinova

2004-07-05T23:59:59.000Z

392

The electron energy spectra, not connected to β-decay, of 235U and 239Pu films irradiated by thermal neutrons, obtained by a Monte Carlo method, are presented in this work. The modelling was performed with the computer code MCNP4C (Monte Carlo Neutron Photon transport code system), which allows computer experiments on the joint transport of neutrons, photons and electrons. The experiment geometry and the irradiation parameters were the same as in [11] (only the thickness of the foil varied). As a result of the computer experiments, electron spectra were obtained for samples of 235U, 239Pu and uranium dioxide of 93% enrichment, represented as a set of films of 22 mm diameter and different thicknesses: 0.001 mm, 0.005 mm, 0.02 mm, 0.01 mm, 0.1 mm and 1.0 mm; and also for a uranium dioxide film of 93% enrichment (diameter 22 mm, thickness 0.01 mm) located between two protective 0.025 mm aluminium disks (the conditions of the experiment in [11]) and the electron spectrum was fixed at...

Rusova, V D; Tarasova, V A; Iaroshenkob, S V; Litvinova, D A

2004-01-01T23:59:59.000Z

393

NLE Websites -- All DOE Office Websites (Extended Search)

Special Events Special Events Tornado and Severe Storm Seminar with Meteorologist Tom Skilling, Saturday, April 6, 2013. The 2013 Fermilab/WGN-TV Tornado and Severe Weather Seminar will take place Saturday, April 6, at noon and repeats in its entirety at 6 p.m. The program features numerous special guests, and runs about four hours. We hope you can join us! The programs are free of charge, require no tickets and feature seating on a first come, first served basis. This is the 32nd year we've presented our Fermilab tornado seminars, and we look forward to seeing you! More information Video: Upcoming Events This brief video gives an overview of all of our upcoming public events, including special events. A new video will be posted on the first of each month. Watch video Past exhibits in the atrium of Wilson Hall

394

NLE Websites -- All DOE Office Websites (Extended Search)

Events Events RSS feed RSS Feed Past Events Smart Solar Energy for the Smart Grid NOVEMBER 20, 2013 Solar photovoltaic (PV) installations traditionally are stand-alone systems without integrated computation. However, it is possible to use real-time processes to adaptively reconfigure solar PV installations while sensing and computing environmental factors. This talk will introduce new concepts that enable solar installations to adapt their performance to environmental conditions. PATTERN: Advantages of High Resolution Weather Radar Networks SEPTEMBER 30, 2013 The aim of this presentation is to identify advantages and disadvantages of a high resolution radar network as well as single radars operating in the X-Band frequency range. The presentation will include a description of

395

The introductory chapter of this monograph, which follows this Preface, provides an overview of radiotherapy and treatment planning. The main chapters that follow describe in detail three significant aspects of radiotherapy on which the author has focused her research efforts. Chapter 2 presents studies the author worked on at the German National Cancer Institute (DKFZ) in Heidelberg. These studies applied the Monte Carlo technique to investigate the feasibility of performing Intensity Modulated Radiotherapy (IMRT) by scanning with a narrow photon beam. This approach represents an alternative to techniques that generate beam modulation by absorption, such as MLC, individually-manufactured compensators, and special tomotherapy modulators. The technical realization of this concept required investigation of the influence of various design parameters on the final small photon beam. The photon beam to be scanned should have a diameter of approximately 5 mm at Source Surface Distance (SSD) distance, and the penumbr...

Wysocka-Rabin, A

2013-01-01T23:59:59.000Z

396

NLE Websites -- All DOE Office Websites (Extended Search)

Events Events All past events are listed below. | View full calendar Add EETD Calendar to Google Calendar Fri, Dec 20, 2013 - 2:00pm - 3:00pm An Open Architecture Platform for Demand Resources from AutoDR and MBCx: National Virtual Power Plant (Seminar) Speaker(s): Jung In Choi Location: 90-3122 Tue, Dec 10, 2013 - 12:00pm - 1:00pm On the Accuracy of Regulatory Cost Estimates (Seminar) Speaker(s): Richard Morgenstern Location: 90-3122 Wed, Dec 4, 2013 - 12:00pm - 1:00pm Modeling, Estimation, and Control in Energy Systems: Batteries & Demand Response (Seminar) Speaker(s): Scott Moura Location: 90-3122 Thu, Nov 21, 2013 - 2:30pm - 3:30pm SMUD 2013 PowerDirectÂ® AutoDR Pilot Program Overview (Seminar) Speaker(s): Harlan S. Coomes Location: 90-4133 Thu, Nov 21, 2013 - 12:00pm - 1:00pm

397

When the finite-difference time-domain (FDTD) method is applied to light scattering computations, the far fields can be obtained by either a volume integration method or a surface integration method. In the first study, we investigate the errors associated with the two near-to-far field transform methods. For a scatterer with a small refractive index, the surface approach is more accurate than its volume counterpart for computing the phase functions and extinction efficiencies; however, the volume integral approach is more accurate for computing other scattering matrix elements. If a large refractive index is involved, the results computed from the volume integration method become less accurate, whereas the surface method retains the same order of accuracy as in the situation of a small refractive index. In the second study, a fourth-order symplectic FDTD method is applied to the problem of light scattering by small particles. The total-field/scattered-field (TF/SF) technique is generalized for providing the incident wave source conditions in the symplectic FDTD (SFDTD) scheme. Numerical examples demonstrate that the fourth-order symplectic FDTD scheme substantially improves the precision of the near field calculation. The major shortcoming of the fourth-order SFDTD scheme is that it requires more computer CPU time than the conventional second-order FDTD scheme if the same grid size is used. The third study concerns multiple scattering theory. We develop a 3D Monte Carlo code for solving the vector radiative transfer equation, which is the equation governing the radiation field in a multiple scattering medium. The impulse-response relation for a plane-parallel scattering medium is studied using our 3D Monte Carlo code. For a collimated light beam source, the angular radiance distribution has a dark region as the detector moves away from the incident point. The dark region is gradually filled as multiple scattering increases.
We have also studied the effects of the finite size of clouds. Extending the finite size of clouds to infinite layers leads to underestimating the reflected radiance in the multiple scattering region, especially for scattering angles around 90 degrees. The results have important applications in the field of remote sensing.
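The multiple-scattering Monte Carlo described in the third study can be illustrated by its plane-parallel special case. Below is a toy photon random walk in a slab with isotropic scattering, a sketch of the general technique rather than the thesis's vector 3D code; the optical-depth and albedo values in the test are arbitrary.

```python
import math
import random

def slab_mc(tau_total, albedo, n_photons=20000, seed=7):
    """Crude Monte Carlo transport through a plane-parallel slab of
    optical thickness tau_total with isotropic scattering.

    Returns (reflected, transmitted, absorbed) fractions of photons
    injected normally at the top surface."""
    rng = random.Random(seed)
    counts = [0, 0, 0]
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0  # optical depth; direction cosine (downward = +1)
        while True:
            tau += mu * -math.log(1.0 - rng.random())  # free path ~ Exp(1)
            if tau < 0.0:
                counts[0] += 1; break                  # escaped the top: reflected
            if tau > tau_total:
                counts[1] += 1; break                  # escaped the bottom: transmitted
            if rng.random() > albedo:
                counts[2] += 1; break                  # absorbed at this collision
            mu = 2.0 * rng.random() - 1.0              # isotropic rescattering
    return [c / n_photons for c in counts]
```

Thicker slabs transmit less and reflect more, reproducing the qualitative behavior the plane-parallel impulse-response study relies on.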

Zhai, Pengwang

2006-08-01T23:59:59.000Z

398

A Study of the Di-Hadron Angular Correlation Function in Event by Event Ideal Hydrodynamics

The di-hadron angular correlation function is computed within boost-invariant ideal hydrodynamics for Au+Au collisions at $\\sqrt{s}_{NN}=200$ GeV using Monte Carlo Glauber fluctuating initial conditions. The correlation function is found to be related, on an event-by-event basis, to the initial-condition geometrical parameters $\\left\\{\\varepsilon_{2,n}, \\Phi_{2,n} \\right \\}$. Moreover, the fluctuation of the relative phase between trigger and associated particles, $\\Delta_n =\\Psi_n^t - \\Psi_n^a$, is found to affect the di-hadron angular correlation function when different intervals of transverse momentum are used to define the trigger and the associated hadrons.
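The geometrical parameters ε_n and Ψ_n referenced above are moments of the fluctuating initial transverse profile. A hedged sketch follows: the participant positions are stood in for by an anisotropic 2D Gaussian rather than an actual Glauber sampling, and the function name is invented; only the ε_n/Ψ_n definitions themselves are standard.

```python
import math
import random

def participant_plane(n_part=100, seed=3, n=2):
    """Compute the eccentricity eps_n and participant-plane angle Psi_n
    of a fluctuating initial condition:

        eps_n = |<r^2 exp(i n phi)>| / <r^2>,
        Psi_n = (atan2(<r^2 sin n*phi>, <r^2 cos n*phi>) + pi) / n,

    with averages over participant positions about their center of mass.
    Positions here are a Gaussian stand-in, not a Glauber sample."""
    rng = random.Random(seed)
    pts = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 0.7)) for _ in range(n_part)]
    cx = sum(x for x, _ in pts) / n_part
    cy = sum(y for _, y in pts) / n_part
    qx = qy = norm = 0.0
    for x, y in pts:
        dx, dy = x - cx, y - cy
        r2, phi = dx * dx + dy * dy, math.atan2(dy, dx)
        qx += r2 * math.cos(n * phi)
        qy += r2 * math.sin(n * phi)
        norm += r2
    eps = math.hypot(qx, qy) / norm
    psi = (math.atan2(qy, qx) + math.pi) / n
    return eps, psi
```

Event-by-event fluctuations of eps_n and Psi_n across many such samples are what feed the correlations discussed in the abstract.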

R. P. G. Andrade; J. Noronha

2013-05-14T23:59:59.000Z

399

Event group importance measures for top event frequency analyses

Three traditional importance measures, risk reduction, partial derivative, and variance reduction, have been extended to permit analyses of the relative importance of groups of underlying failure rates to the frequencies of resulting top events. The partial derivative importance measure was extended by assessing the contribution of a group of events to the gradient of the top event frequency. Given the moments of the distributions that characterize the uncertainties in the underlying failure rates, the expectation values of the top event frequency, its variance, and all of the new group importance measures can be quantified exactly for two familiar cases: (1) when all underlying failure rates are presumed independent, and (2) when pairs of failure rates based on common data are treated as being equal (totally correlated). In these cases, the new importance measures, which can also be applied to assess the importance of individual events, obviate the need for Monte Carlo sampling. The event group importance measures are illustrated using a small example problem and demonstrated by applications made as part of a major reactor facility risk assessment. These illustrations and applications indicate both the utility and the versatility of the event group importance measures.
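The group risk-reduction importance described above, i.e. the drop in top-event frequency when every rate in a group of basic events is set to zero, can be evaluated exactly for a toy cut-set model. The fault-tree structure below is hypothetical and chosen only for illustration.

```python
def top_event_frequency(lam):
    """Toy cut-set top-event frequency: two redundant pumps (events
    0 and 1) in an AND gate, plus an independent valve failure (2).
    Hypothetical structure, for illustration only."""
    return lam[0] * lam[1] + lam[2]

def risk_reduction(lam, group):
    """Risk-reduction importance of a group of basic events: the drop
    in top-event frequency when every rate in the group is set to 0."""
    reduced = list(lam)
    for i in group:
        reduced[i] = 0.0
    return top_event_frequency(lam) - top_event_frequency(reduced)

base = [1e-3, 2e-3, 5e-6]
# Removing the pump group {0, 1} kills the AND term (2e-6); removing
# the valve {2} removes its 5e-6 contribution directly.
```

Because the model is a simple polynomial in the rates, the group measures come out in closed form, mirroring the paper's point that Monte Carlo sampling is unnecessary in such cases.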

NONE

1995-07-31T23:59:59.000Z

400

Underlying event studies at ATLAS and CDF

Improving our understanding and modeling of the underlying event in the high energy collider environment is important for more precise measurements at the LHC. CDF Run II data for the underlying event associated with Drell-Yan lepton pair production and early ATLAS data measuring underlying event activity with respect to the leading transverse momentum track are presented. The data are compared with several QCD Monte Carlo models. It is seen that no current standard Monte Carlo tune adequately describes all the early ATLAS data and CDF data simultaneously. One of the goals of these analyses is to provide data that can be used to test and improve MC models for current and future physics studies at the LHC. The underlying event observables presented here are particularly important for constraining the energy evolution of multiple parton interaction models, since the plateau heights of the underlying event profiles are highly correlated with multiple parton interaction activity. The data at 7 TeV are crucial for MC tuning, since measurements are needed at at least two energies to constrain the energy evolution of MPI activity. PYTHIA tune A and tune AW do a good job of describing the CDF data on the underlying-event observables for leading jet and Drell-Yan events, respectively, although the agreement between predictions and data is not perfect. The leading-jet data show slightly more activity in the underlying event than PYTHIA Tune A, although they are very similar, which may indicate the universality of underlying event modeling. However, all pre-LHC MC models predict less activity in the transverse region (i.e. in the underlying event) than is actually observed in ATLAS leading track data, for both center-of-mass energies.
There is therefore no current standard MC tune which adequately describes all the early ATLAS data. However, using diffraction-limited minimum bias distributions and the plateau of the underlying event distributions presented here, ATLAS has developed a new PYTHIA tune AMBT1 (ATLAS Minimum Bias Tune 1) and a new HERWIG+JIMMY tune AUET1 (ATLAS Underlying Event Tune 1) which model the p{sub T} and charged multiplicity spectra significantly better than the pre-LHC tunes of those generators. It is critical to have sensible underlying event models containing our best physical knowledge and intuition, tuned to all relevant available data.

Kar, D.; /Dresden, Tech. U.

2011-01-01T23:59:59.000Z

401

Nanoparticles: Synthesis, Monte Carlo Simulation and Application...

NLE Websites -- All DOE Office Websites (Extended Search)

of Contact: Samuel Mao Venkat Srinivasan Nanoparticles are building blocks for nanotechnology and have been widely exploited for various applications. Numerous methods have...

402

A Distributed Application for Monte Carlo Simulations

Science Conference Proceedings (OSTI)

The present paper implements a cluster of workstations (COW) structure and an application that demonstrates the advantages and benefits of such a structure. The application is a message-passing-based parallel program, which simulates in parallel ...

Nicolae Tapus; Mihai Burcea; Vlad Staicu

2001-09-01T23:59:59.000Z

403

A Monte Carlo study of growth regressions

Table excerpts on varying the error-to-truth ratio (Tables 4, 9, and 11).

Hauk, William R.; Wacziarg, Romain

2009-01-01T23:59:59.000Z

404

Quantum Monte Carlo for Vibrating Molecules

High (large W) and low (small W) kinetic-energy matrix elements; a multi-state energy estimator; matrix elements of the highest-energy trial functions.

Brown, W.R.

2010-01-01T23:59:59.000Z

405

Search method for coincident events from LIGO and IceCube detectors

We present a coincidence search method for astronomical events using gravitational wave detectors in conjunction with other astronomical observations. We illustrate our method for the specific case of LIGO gravitational wave detector and the IceCube neutrino detector. Event triggers which appear in both detectors within a certain time window are selected as time coincident events. Then the spatial overlap of reconstructed event directions is evaluated by an unbinned maximum likelihood method. Our method was tested by Monte Carlo simulations using simulated LIGO and IceCube events. We estimated a typical false alarm rate of the analysis to be 1 event per 435 years. This would allow us to relax the event trigger thresholds of the individual detectors and improve the detection capability.
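The time-coincidence step above can be quantified by the accidental-coincidence rate of two independent Poisson trigger streams. The sketch below uses invented trigger rates and window, not the paper's values; the paper's 1 event per 435 years also folds in the spatial likelihood cut, which this simple rate estimate ignores.

```python
def accidental_rate(r1, r2, window):
    """Expected rate of accidental time coincidences between two
    independent Poisson trigger streams, for a coincidence window
    (total width, in seconds) short compared to both mean
    inter-event times."""
    return r1 * r2 * window

# Hypothetical example: 1 GW-detector trigger per hour, 10 neutrino
# events per day, and a 500 s total coincidence window.
r_acc = accidental_rate(1 / 3600.0, 10 / 86400.0, 500.0)   # per second
per_year = r_acc * 3.156e7                                 # per year
```

With these made-up rates, time coincidence alone leaves hundreds of accidentals per year, which is why the directional likelihood stage is needed to push the false alarm rate down.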

Aso, Yoichi; Finley, Chad; Dwyer, John; Kotake, Kei; Marka, Szabolcs

2007-01-01T23:59:59.000Z

406

Search method for coincident events from LIGO and IceCube detectors

We present a coincidence search method for astronomical events using gravitational wave detectors in conjunction with other astronomical observations. We illustrate our method for the specific case of the LIGO gravitational wave detector and the IceCube neutrino detector. LIGO trigger-events and IceCube events which occur within a given time window are selected as time-coincident events. Then the spatial overlap of the reconstructed event directions is evaluated using an unbinned maximum likelihood method. Our method was tested with Monte Carlo simulations based on realistic LIGO and IceCube event distributions. We estimated a typical false alarm rate for the analysis to be 1 event per 435 years. This is significantly smaller than the false alarm rates of the individual detectors.

Yoichi Aso; Zsuzsa Marka; Chad Finley; John Dwyer; Kei Kotake; Szabolcs Marka

2007-11-01T23:59:59.000Z

407

Event-by-event hydrodynamics and elliptic flow from fluctuating initial states

Science Conference Proceedings (OSTI)

We develop a framework for event-by-event ideal hydrodynamics to study the differential elliptic flow, which is measured at different centralities in Au + Au collisions at the Relativistic Heavy Ion Collider (RHIC). Fluctuating initial energy density profiles, which here are the event-by-event analogs of the wounded nucleon profiles, are created using a Monte Carlo Glauber model. Using the same event plane method for obtaining v{sub 2} as in the data analysis, we can reproduce both the measured centrality dependence and the p{sub T} shape of charged-particle elliptic flow up to p{sub T}{approx}2 GeV. We also consider the relation of elliptic flow to the initial-state eccentricity using different reference planes and discuss the correlation between the physical event plane and the initial participant plane. Our results demonstrate that event-by-event hydrodynamics with initial-state fluctuations must be accounted for before a meaningful lower limit for viscosity can be obtained from elliptic flow data.

Holopainen, H.; Eskola, K. J. [Department of Physics, Post Office Box 35, University of Jyvaeskylae, FIN-40014 Jyvaeskylae (Finland); Helsinki Institute of Physics, Post Office Box 64, University of Helsinki, FIN-00014 Helsinki (Finland); Niemi, H. [Frankfurt Institute for Advanced Studies, Ruth-Moufang-Strasse 1, D-60438 Frankfurt am Main (Germany)

2011-03-15T23:59:59.000Z

408

ITER Neutronics Modeling Using Hybrid Monte Carlo/Deterministic and CAD-Based Monte Carlo Methods

Science Conference Proceedings (OSTI)

Technical Paper / Special Issue on the 16th Biennial Topical Meeting of the Radiation Protection and Shielding Division / Radiation Transport and Protection

Ahmad M. Ibrahim; Scott W. Mosher; Thomas M. Evans; Douglas E. Peplow; Mohamed E. Sawan; Paul P. H. Wilson; John C. Wagner; Thad Heltemes

409

In this paper, a toy detector is designed to simulate the central detectors used in reactor neutrino experiments. Samples of neutrino events and of the three major backgrounds in the signal region are generated with a Monte Carlo simulation of the toy detector. Bayesian Neural Networks (BNN) are applied to separate neutrino events from backgrounds in reactor neutrino experiments. As a result, most neutrino events and uncorrelated background events in the signal region can be identified with the BNN, and a fraction of the fast-neutron and $^{8}$He/$^{9}$Li background events in the signal region can also be identified. The signal-to-noise ratio in the signal region is thereby enhanced. The neutrino discrimination increases with the neutrino rate in the training sample, whereas the background discriminations decrease as the background rates in the training sample decrease.

Ye Xu; Yixiong Meng; Weiwei Xu

2008-08-02T23:59:59.000Z

410

Science Conference Proceedings (OSTI)

The prediction of events is of substantial interest in many research areas. To evaluate the performance of prediction methods, the statistical validation of these methods is of utmost importance. Here, we compare an analytical validation method to numerical approaches that are based on Monte Carlo simulations. The comparison is performed in the field of the prediction of epileptic seizures. In contrast to the analytical validation method, we found that for numerical validation methods insufficient but realistic sample sizes can lead to invalid high rates of false positive conclusions. Hence we outline necessary preconditions for sound statistical tests on above chance predictions.
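The abstract's point, that Monte Carlo validation with insufficient sample sizes can mislead where an analytic test is exact, can be illustrated with a chance predictor whose alarms cover a fraction p of the recording. Both routes below compute the same binomial tail probability; the setup is a deliberate simplification of the seizure-prediction problem, not the authors' method.

```python
import math
import random

def p_value_analytic(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance that an unrelated
    predictor whose alarms cover a fraction p of the recording hits at
    least k of n seizures."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def p_value_mc(n, k, p, trials=200000, seed=5):
    """The same tail probability estimated by Monte Carlo.  With too
    few trials the estimate of a small p-value is unreliable -- the
    pitfall the abstract warns about."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if sum(rng.random() < p for _ in range(n)) >= k:
            hits += 1
    return hits / trials
```

For a small tail probability (say, 8 of 10 seizures "predicted" by 20% alarm coverage), the analytic value is about 7.8e-5; a Monte Carlo run needs on the order of 1/p trials before its estimate is even nonzero, which is the precondition the authors formalize.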

Feldwisch-Drentrup, Hinnerk [Bernstein Center Freiburg (BCF), University of Freiburg, Freiburg (Germany); Freiburg Center for Data Analysis and Modeling (FDM), University of Freiburg, Freiburg (Germany); Department of Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg (Germany); Freiburg Institute for Advanced Studies, University of Freiburg, Freiburg (Germany); Department of Physics, University of Freiburg, Freiburg (Germany); Schulze-Bonhage, Andreas [Bernstein Center Freiburg (BCF), University of Freiburg, Freiburg (Germany); Epilepsy Center, University Hospital of Freiburg, Freiburg (Germany); Timmer, Jens [Bernstein Center Freiburg (BCF), University of Freiburg, Freiburg (Germany); Freiburg Center for Data Analysis and Modeling (FDM), University of Freiburg, Freiburg (Germany); Freiburg Institute for Advanced Studies, University of Freiburg, Freiburg (Germany); Department of Physics, University of Freiburg, Freiburg (Germany); Department of Clinical and Experimental Medicine, Linkoeping University, Linkoeping (Sweden); Schelter, Bjoern [Freiburg Center for Data Analysis and Modeling (FDM), University of Freiburg, Freiburg (Germany); Department of Physics, University of Freiburg, Freiburg (Germany); Institute for Complex Systems and Mathematical Biology, SUPA, University of Aberdeen, Aberdeen (United Kingdom)

2011-06-15T23:59:59.000Z

411

NLE Websites -- All DOE Office Websites (Extended Search)

Monte Carlo Event Generators 1 1. MONTE CARLO EVENT GENERATORS Written January 2012 by P. Nason (INFN, Milan) and P.Z. Skands (CERN) General-purpose Monte Carlo (GPMC) generators...

412

New event-driven sampling techniques for network reliability estimation

Science Conference Proceedings (OSTI)

Exactly computing network reliability measures is an NP-hard problem. Therefore, Monte Carlo simulation has been frequently used by network designers to obtain accurate estimates. This paper focuses on simulation estimation of network reliability. Using ...
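
Crude Monte Carlo estimation of network reliability, which the paper's refined event-driven estimators improve upon, can be sketched as follows. The all-terminal reliability measure, independent edge failures, and all names are our illustrative assumptions:

```python
import random

def connected(n_nodes, up_edges):
    """True if the surviving edges connect all nodes (DFS from node 0)."""
    adj = {v: [] for v in range(n_nodes)}
    for u, v in up_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n_nodes

def mc_reliability(n_nodes, edges, p_up, n_samples=10000, seed=1):
    """Crude Monte Carlo estimate of all-terminal reliability:
    each edge survives independently with probability p_up."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_samples):
        up = [e for e in edges if rng.random() < p_up]
        ok += connected(n_nodes, up)
    return ok / n_samples
```

For highly reliable networks this crude estimator needs very many samples to observe any failure at all, which is exactly the variance problem that motivates the more sophisticated sampling techniques of the paper.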

Abdullah Konak; Alice E. Smith; Sadan Kulturel-Konak

2004-12-01T23:59:59.000Z

413

Measurements of charged particle distributions, sensitive to the underlying event, have been performed with the ATLAS detector at the LHC. The measurements are based on data collected using a minimum-bias trigger to select proton-proton collisions at center-of-mass energies of 900 GeV and 7 TeV. The "underlying event" is defined as those aspects of a hadronic interaction attributed not to the hard scattering process, but rather to the accompanying interactions of the rest of the proton. Three regions are defined in azimuthal angle with respect to the highest-pt charged particle in the event, such that the region transverse to the dominant momentum-flow is most sensitive to the underlying event. In each of these regions, distributions of the charged particle multiplicity, pt density, and average pt are measured. The data show a higher underlying event activity than that predicted by Monte Carlo models tuned to pre-LHC data.
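
The toward/transverse/away decomposition described above can be sketched in a few lines. The 60° and 120° boundaries below are the conventional choice in underlying-event analyses; the helper name is ours:

```python
import math

def ue_region(phi, phi_lead):
    """Classify a charged particle by |dphi| relative to the leading
    (highest-pt) particle: 'toward' (< 60 deg), 'transverse'
    (60-120 deg), or 'away' (> 120 deg)."""
    dphi = abs(phi - phi_lead) % (2.0 * math.pi)
    if dphi > math.pi:                 # fold into [0, pi]
        dphi = 2.0 * math.pi - dphi
    if dphi < math.pi / 3.0:
        return "toward"
    if dphi < 2.0 * math.pi / 3.0:
        return "transverse"
    return "away"
```

The transverse region is the most sensitive to the underlying event because the hard-scattering products populate mostly the toward and away regions.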

The ATLAS Collaboration

2010-12-03T23:59:59.000Z

414

Signals of New Physics in the Underlying Event

Science Conference Proceedings (OSTI)

LHC searches for new physics focus on combinations of hard physics objects. In this work we propose a qualitatively different soft signal for new physics at the LHC - the 'anomalous underlying event'. Every hard LHC event will be accompanied by a soft underlying event due to QCD and pile-up effects. Though it is often used for QCD and Monte Carlo studies, here we propose the incorporation of an underlying event analysis in some searches for new physics. An excess of anomalous underlying events may be a smoking-gun signal for particular new physics scenarios such as 'quirks' or 'hidden valleys' in which large amounts of energy may be emitted by a large multiplicity of soft particles. We discuss possible search strategies for such soft diffuse signals in the tracking system and calorimetry of the LHC experiments. We present a detailed study of the calorimetric signal in a concrete example, a simple quirk model motivated by folded supersymmetry. In these models the production and radiative decay of highly excited quirk bound states leads to an 'antenna pattern' of soft unclustered energy. Using a dedicated simulation of a toy detector and a 'CMB-like' multipole analysis we compare the signal to the expected backgrounds.

Harnik, Roni; /Stanford U., ITP /SLAC; Wizansky, Tommer; /SLAC; ,

2010-06-11T23:59:59.000Z

415

Dynamic Data-Driven Event Reconstruction for Atmospheric Releases

For atmospheric releases, event reconstruction answers the critical questions: How much material was released? When? Where? And what are the potential consequences? Inaccurate estimation of the source term can lead to gross errors, time delays during a crisis, and even fatalities. We are developing a capability that seamlessly integrates observational data streams with predictive models in order to provide the best possible estimates of unknown source term parameters, as well as optimal and timely situation analyses consistent with both models and data. Our approach utilizes Bayesian inference and stochastic sampling methods (Markov Chain and Sequential Monte Carlo) to reformulate the inverse problem into a solution based on efficient sampling of an ensemble of predictive simulations, guided by statistical comparisons with data.
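
The Markov Chain Monte Carlo ingredient mentioned above can be sketched with a toy one-parameter source-term problem. The forward model, Gaussian likelihood, flat prior, and all names below are illustrative assumptions; the actual capability couples full dispersion simulations to the sampler:

```python
import math
import random

def log_posterior(q, observations, model, sigma):
    """Gaussian measurement likelihood with a flat prior on q > 0
    (toy choices; real priors and error models would differ)."""
    if q <= 0.0:
        return -math.inf
    return -sum((obs - model(q, x)) ** 2
                for x, obs in observations) / (2.0 * sigma ** 2)

def metropolis(observations, model, sigma, n_steps=5000,
               step=0.5, q0=1.0, seed=0):
    """Random-walk Metropolis sampler over the source strength q."""
    rng = random.Random(seed)
    q, lp = q0, log_posterior(q0, observations, model, sigma)
    samples = []
    for _ in range(n_steps):
        q_new = q + rng.gauss(0.0, step)
        lp_new = log_posterior(q_new, observations, model, sigma)
        # accept with probability min(1, exp(lp_new - lp))
        if math.log(rng.random() + 1e-300) < lp_new - lp:
            q, lp = q_new, lp_new
        samples.append(q)
    return samples
```

The posterior samples directly quantify the uncertainty in the source term, which is what allows "optimal and timely situation analyses consistent with both models and data".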

Sugiyama, G; Kosovic, B; Hanley, W; Johannesson, G; Larsen, S; Loosmore, G; Lundquist, J; Mirin, A; Nitao, J; Serban, R; Dyer, K

2004-10-13T23:59:59.000Z

416

Stochastic Event-Driven Molecular Dynamics

Science Conference Proceedings (OSTI)

A novel Stochastic Event-Driven Molecular Dynamics (SEDMD) algorithm is developed for the simulation of polymer chains suspended in a solvent. SEDMD combines event-driven molecular dynamics (EDMD) with the Direct Simulation Monte Carlo (DSMC) method. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles through hard-core potentials. The algorithm uses EDMD for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in EDMD; rather, the momentum and energy exchange in the solvent is determined stochastically using DSMC. The coupling between the solvent and the solute is consistently represented at the particle level, retaining hydrodynamic interactions and thermodynamic fluctuations. However, unlike full MD simulations of both the solvent and the solute, in SEDMD the spatial structure of the solvent is ignored. The SEDMD algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall and subjected to uniform shear. SEDMD closely reproduces results obtained using traditional EDMD simulations with two orders of magnitude greater efficiency. The results question the existence of periodic (cycling) motion of the polymer chain.
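
The DSMC half of the algorithm replaces deterministic solvent-solvent collisions with stochastic ones. A minimal sketch of one such collision for two equal-mass particles follows (the equal-mass simplification and the function name are ours, not the paper's): rotating the relative velocity to a uniformly random direction conserves total momentum and kinetic energy exactly.

```python
import math
import random

def dsmc_collision(v1, v2, rng):
    """One stochastic DSMC collision for equal-mass particles: keep the
    center-of-mass velocity, rotate the relative velocity to a random
    direction on the unit sphere."""
    g = [a - b for a, b in zip(v1, v2)]           # relative velocity
    gmag = math.sqrt(sum(c * c for c in g))
    z = rng.uniform(-1.0, 1.0)                    # random unit vector
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    gnew = [gmag * r * math.cos(phi),
            gmag * r * math.sin(phi),
            gmag * z]
    vcm = [(a + b) / 2.0 for a, b in zip(v1, v2)]
    v1p = [c + gn / 2.0 for c, gn in zip(vcm, gnew)]
    v2p = [c - gn / 2.0 for c, gn in zip(vcm, gnew)]
    return v1p, v2p
```

Because only the direction of the relative velocity changes, the solvent's momentum and energy budgets are exact even though its spatial structure is ignored, which is the property the abstract emphasizes.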

Donev, Aleksandar [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, CA 94551-9900 (United States)], E-mail: aleks.donev@gmail.com; Garcia, Alejandro L. [Department of Physics, San Jose State University, San Jose, CA 95192 (United States); Alder, Berni J. [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, CA 94551-9900 (United States)

2008-02-01T23:59:59.000Z

417

UNIVERSIDADE FEDERAL DE MINAS GERAIS Avenida Antonio Carlos, 6627 - Pampulha

UNIVERSIDADE FEDERAL DE MINAS GERAIS, Avenida Antonio Carlos, 6627 - Pampulha, Belo Horizonte, Minas ... of 1949, when it was federalized. Today UFMG has two campuses in Belo Horizonte - Pampulha and Saúde - and one campus located in Montes Claros. It comprises 19 academic units, three units

Chaimowicz, Luiz

418

General-purpose event generators for LHC physics

We review the physics basis, main features and use of general-purpose Monte Carlo event generators for the simulation of proton-proton collisions at the Large Hadron Collider. Topics included are: the generation of hard-scattering matrix elements for processes of interest, at both leading and next-to-leading QCD perturbative order; their matching to approximate treatments of higher orders based on the showering approximation; the parton and dipole shower formulations; parton distribution functions for event generators; non-perturbative aspects such as soft QCD collisions, the underlying event and diffractive processes; the string and cluster models for hadron formation; the treatment of hadron and tau decays; the inclusion of QED radiation and beyond-Standard-Model processes. We describe the principal features of the Ariadne, Herwig++, Pythia 8 and Sherpa generators, together with the Rivet and Professor validation and tuning tools, and discuss the physics philosophy behind the proper use of these generators and tools. This review is aimed at phenomenologists wishing to understand better how parton-level predictions are translated into hadron-level events as well as experimentalists wanting a deeper insight into the tools available for signal and background simulation at the LHC.

Buckley, Andy; /Edinburgh U.; Butterworth, Jonathan; /University Coll. London; Gieseke, Stefan; /Karlsruhe U., ITP; Grellscheid, David; /Durham U., IPPP; Hoche, Stefan; /SLAC; Hoeth, Hendrik; Krauss, Frank; /Durham U., IPPP; Lonnblad, Leif; /Lund U., Dept. Theor. Phys. /CERN; Nurse, Emily; /University Coll. London; Richardson, Peter; /Durham U., IPPP; Schumann, Steffen; /Heidelberg U.; Seymour, Michael H.; /Manchester U.; Sjostrand, Torbjorn; /Lund U., Dept. Theor. Phys.; Skands, Peter; /CERN; Webber, Bryan; /Cambridge U.

2011-03-03T23:59:59.000Z

419

Minimum bias and underlying event studies at CDF

Soft, non-perturbative interactions are poorly understood from the theoretical point of view even though they form a large part of the hadronic cross section at the energies now available. We review the CDF studies of minimum bias and the underlying event in p-pbar collisions at 2 TeV. After proposing an operative definition of the 'underlying event', we present part of a systematic set of measurements carried out by the CDF Collaboration with the goal of providing data to test and improve the QCD models of hadron collisions. Different analysis strategies for the underlying event and possible event topologies are discussed. Part of the CDF minimum-bias results are also presented: in this sample, which represents the full inelastic cross section, we can simultaneously test our knowledge of all the components that combine to form hadronic interactions. Comparisons with Monte Carlo simulations are shown along with the data. These measurements will also contribute to more precise estimates of the soft QCD background to high-pT observables.

Moggi, Niccolo; /INFN, Bologna

2010-01-01T23:59:59.000Z

420

Lattice Monte Carlo Determination of Harrison Kinetics Regimes for ...

Science Conference Proceedings (OSTI)

Conference Tools for 2012 TMS Annual Meeting & Exhibition ... Cluster Expansion Methods - Progress and Outlook · Coarsening of Bicontinuous Two- Phase ... Fully Ab Initio Determination of Free Energies: Where Do We Stand? Generalized ...

421

Monte Carlo Simulations of the Magnetocaloric Effect and Exchange ...

Science Conference Proceedings (OSTI)

Bonded Magnetocaloric Powders for the Refrigeration Application · Coercivity ... Industrial Needs and Applications for Soft Magnetic Materials · Industrial ...

422

Monte Carlo Sampling-Based Methods for Stochastic Optimization

applications in areas such as energy planning, national security, supply chain management, health ... The product is sold at a given price r per unit during the selling season ... The first newsvendor instance has demand modeled as an exponential random variable ... Response surface analysis of two-stage stochastic.
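
The snippet's newsvendor instance (price r per unit, random demand) is the standard test problem for sample average approximation (SAA). A minimal SAA sketch under textbook assumptions (unit cost c, no salvage value; the function name is ours and the snippet does not specify the authors' formulation):

```python
import math

def saa_newsvendor(price_r, cost_c, demand_samples):
    """Sample average approximation for the classic newsvendor: the
    SAA-optimal order quantity is the (r - c)/r empirical quantile of
    the sampled demand (underage cost r - c, overage cost c)."""
    ratio = (price_r - cost_c) / price_r          # critical fractile
    xs = sorted(demand_samples)
    k = max(0, math.ceil(ratio * len(xs)) - 1)    # empirical quantile index
    return xs[k]
```

With exponentially distributed demand, as in the snippet, the SAA solution converges to the corresponding quantile of the exponential distribution as the Monte Carlo sample grows, which is the kind of convergence behavior such surveys analyze.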

423