While these samples are representative of the content of NLE, they are not comprehensive nor are they the most current set. We encourage you to perform a real-time search of NLE to obtain the most current and comprehensive results.

1

Variance Reduction Techniques for Implicit Monte Carlo Simulations

The Implicit Monte Carlo (IMC) method is widely used for simulating thermal radiative transfer and solving the radiation transport equation. During an IMC run a grid network is constructed and particles are sourced into the problem to simulate...

Landman, Jacob Taylor

2013-09-19T23:59:59.000Z

2

Stratified source-sampling techniques for Monte Carlo eigenvalue analysis.

In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "Eigenvalue of the World" problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely-coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result relative to conventional source-sampling methods. However, this gain in reliability is substantially smaller than that observed in the model-problem results.

Mohamed, A.

1998-07-10T23:59:59.000Z
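The stratification idea in the abstract above can be illustrated with a toy sketch (hypothetical integrand and parameters, not the ANL/VIM implementation): splitting the source domain into equal strata and forcing each stratum to receive its share of histories reduces the spread of the estimate.

```python
import random

def estimate_plain(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1)."""
    return sum(f(rng.random()) for _ in range(n)) / n

def estimate_stratified(f, n, strata, rng):
    """Stratified estimate: draw n // strata samples inside each of
    `strata` equal-width bins, so every part of the source region is
    guaranteed its fair share of histories."""
    per = n // strata
    width = 1.0 / strata
    means = []
    for k in range(strata):
        lo = k * width
        means.append(sum(f(lo + width * rng.random()) for _ in range(per)) / per)
    return sum(means) / strata  # equal-width strata: average of stratum means

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(42)
f = lambda x: x * x            # toy "source response"; exact integral is 1/3
plain = [estimate_plain(f, 400, rng) for _ in range(200)]
strat = [estimate_stratified(f, 400, 20, rng) for _ in range(200)]
plain_var, strat_var = variance(plain), variance(strat)
print(strat_var < plain_var)   # stratification shrinks the estimator spread
```

Both estimators are unbiased; stratification helps because within-stratum variation of a smooth integrand is much smaller than its variation over the whole domain.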

3

Monte Carlo techniques of simulation applied to a single item inventory system

MONTE CARLO TECHNIQUES OF SIMULATION APPLIED TO A SINGLE ITEM INVENTORY SYSTEM. A Thesis by WILLIAM MURRAY ALDRED, JR. Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, August 1965. Major Subject: Computer Science...

Aldred, William Murray

2012-06-07T23:59:59.000Z

4

This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.

Brown, F.B.; Sutton, T.M.

1996-02-01T23:59:59.000Z
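One of the fundamentals such course notes cover, random sampling, can be sketched with the standard inverse-transform trick for the distance to the next collision (a generic textbook illustration, not code from the RACER notes; `sigma_t` is a made-up value):

```python
import math
import random

def sample_free_path(sigma_t, rng):
    """Inverse-transform sampling of the distance to the next collision.
    The free path obeys p(s) = sigma_t * exp(-sigma_t * s); inverting
    its CDF gives s = -ln(1 - xi) / sigma_t for xi uniform on [0, 1)."""
    xi = rng.random()
    return -math.log(1.0 - xi) / sigma_t

rng = random.Random(0)
sigma_t = 2.0                        # made-up total cross section, 1/cm
paths = [sample_free_path(sigma_t, rng) for _ in range(100_000)]
mean_path = sum(paths) / len(paths)
print(mean_path)                     # should approach 1 / sigma_t = 0.5
```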

5

The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. 
The methodology was developed in three stages: analog methods which model each atom with true reaction probabilities (Section 2), non-analog methods which bias the probability distributions while adjusting atom weights to preserve a fair game (Section 3), and efficiency measures to provide local and global measures of the effectiveness of the non-analog methods (Section 4). Following this development, the MCise (Monte Carlo isotope simulation engine) software was used to explore the efficiency of different modeling techniques (Section 5).

Paul P.H. Wilson

2005-07-30T23:59:59.000Z
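The non-analog biasing described in Section 3 of the report can be sketched on a toy survival chain (hypothetical probabilities, not the MCise model): inflate the sampled survival probability, but scale each atom's weight by the ratio of true to biased probability so the expected score is preserved.

```python
import random

# Toy decay chain: an atom survives each of 10 irradiation steps with
# probability 0.5; we want the chance it survives all 10 (0.5**10).
STEPS, P_SURVIVE = 10, 0.5

def analog(n, rng):
    """Analog game: play every step with its true probability."""
    hits = sum(all(rng.random() < P_SURVIVE for _ in range(STEPS))
               for _ in range(n))
    return hits / n

def nonanalog(n, rng, p_biased=0.9):
    """Non-analog game: force survival more often, but multiply the
    atom's weight by (true p / biased p) at each step so the expected
    score is unchanged -- the 'fair game' condition."""
    total = 0.0
    for _ in range(n):
        weight, alive = 1.0, True
        for _ in range(STEPS):
            if rng.random() < p_biased:
                weight *= P_SURVIVE / p_biased
            else:
                alive = False
                break
        if alive:
            total += weight
    return total / n

rng = random.Random(7)
exact = P_SURVIVE ** STEPS
est_analog = analog(20_000, rng)
est_nonanalog = nonanalog(20_000, rng)
print(exact, est_analog, est_nonanalog)
```

With the same number of histories, the weighted game scores far more often and so estimates the rare survival probability with much lower variance.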

6

Interactive design of neutron beam collimators using the Monte Carlo technique in APL

Science Journals Connector (OSTI)

The scattering and absorption of a neutron beam in a system of collimation rings is simulated step by step under user's control. In the APL notation, the expression of a parallel Monte Carlo simulation scheme is straightforward. Coupled with a fullscreen ...

C. Bastian

1982-07-01T23:59:59.000Z

7

Radiative transfer in the earth's atmosphere-ocean system using Monte Carlo techniques

are described in the next chapter. The books by Morgan and Hammersley and Handscomb describe the theory and some methods of variance reduction for general applications. One item that is required of any Monte Carlo simulation is a supply of random numbers... be checked through modification of the model since the same sequence of random numbers may be generated repeatedly. Discussions on the properties of random numbers and their generation may be found in the books by Morgan and Hammersley and Handscomb...

Bradley, Paul Andrew

2012-06-07T23:59:59.000Z

8

Parallel Monte Carlo reactor neutronics

The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved.

Blomquist, R.N.; Brown, F.B.

1994-03-01T23:59:59.000Z

9

Science Journals Connector (OSTI)

In this work we present an efficient procedure to evaluate effective pair potentials compatible with “experimental” distribution functions using a Monte Carlo simulation scheme. Using computer simulation results for the pair distribution functions, we have applied the method to a Lennard-Jones fluid and to a model of liquid aluminum. In both cases the procedure was able to recover with high accuracy the actual interaction potential of the systems. Moreover, the procedure can easily incorporate additional information, for instance, thermodynamic properties, in order to improve the reliability of the results.

N. G. Almarza and E. Lomba

2003-07-03T23:59:59.000Z

10

Science Journals Connector (OSTI)

...Figure-1. Scaled DPKs in water calculated using MCNP5... Figure-2. Scaled DPKs in water calculated using MCNP5... from LHC to ICARUS and atmospheric showers (1997) Proceedings... dose point kernels in water generated by the Monte... database, random number generator and statistical error...

J. Wu; Y. L. Liu; S. J. Chang; M. M. Chao; S. Y. Tsai; D. E. Huang

2012-11-01T23:59:59.000Z

11

I review the status of the general-purpose Monte Carlo event generators for the LHC, with emphasis on areas of recent physics developments. There has been great progress, especially in multi-jet simulation, but I mention some question marks that have recently arisen.

Michael H. Seymour

2010-08-17T23:59:59.000Z

12

Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing of the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burns and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

Zimmerman, G.B.

1997-06-24T23:59:59.000Z

13

Advancements in parallel and cluster computing have made many complex Monte Carlo simulations possible in the past several years. Unfortunately, cluster computers are large, expensive, and still not fast enough to make the Monte Carlo technique...

Pasciak, Alexander Samuel

2007-04-25T23:59:59.000Z

14

DOE Science Showcase - Monte Carlo Methods | OSTI, US Dept of Energy,

Office of Scientific and Technical Information (OSTI)

Monte Carlo Methods. Monte Carlo calculation methods are algorithms for solving various kinds of computational problems by using (pseudo)random numbers. Developed in the 1940s during the Manhattan Project, the Monte Carlo method signified a radical change in how scientists solved problems. Learn about the ways these methods are used in DOE's research endeavors today in "Monte Carlo Methods" by Dr. William Watson, Physicist, OSTI staff. Effects of static particle dispersions on grain growth are studied using SPPARKS simulations (image credit: Sandia National Laboratory). Monte Carlo results in DOE databases: lab biophysicist invents improvement to Monte Carlo technique (LLNL News); Monte Carlo Benchmark software (ESTSC); Improved Monte Carlo Renormalization Group Method (DOE R&D).

15

A Monte Carlo algorithm for degenerate plasmas

Science Journals Connector (OSTI)

A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi-Dirac ... Keywords: Degenerate plasma, Monte Carlo

A. E. Turrell; M. Sherlock; S. J. Rose

2013-09-01T23:59:59.000Z

16

Monte Carlo Simulation of Isopentane Glass

Science Journals Connector (OSTI)

...research-article: Monte Carlo Simulation of Isopentane Glass. S. Yashonath, C. N. R. Rao. ...quenching the liquid, we have obtained the glass-transition temperature from the temperature... distribution functions suggest a structure of the glass primarily influenced by geometrical factors...

1985-01-01T23:59:59.000Z

17

The MC21 Monte Carlo Transport Code

MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.

Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H

2007-01-09T23:59:59.000Z

18

HILO: Quasi Diffusion Accelerated Monte Carlo on Hybrid Architectures

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

HILO: Quasi Diffusion Accelerated Monte Carlo on Hybrid Architectures. The Boltzmann transport equation...

19

THE BEGINNING of the MONTE CARLO METHOD

...For a whole host of reasons, he had become seriously interested in the thermonuclear... a preliminary computational model of a thermonuclear reaction for the ENIAC. He felt he could convince...

20

Monte Carlo, Colloids, and Public Health

Science Journals Connector (OSTI)

Here we see in a snapshot the importance of the Monte Carlo algorithm which is used in recent work to simulate data on women's mortality in a specific health-screening program.

Charles Day

2013-01-01T23:59:59.000Z


21

Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that more histories are better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes: the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.

Pevey, Ronald E.

2005-09-15T23:59:59.000Z
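The abstract's premise that more histories lower the standard deviation can be illustrated with a hedged toy model (a Gaussian stand-in for single-history scores, not an actual k-effective tally):

```python
import random
import statistics

def keff_batch(n_hist, rng, true_k=0.95, noise=0.05):
    """Hypothetical per-batch k-effective estimate: the mean of n_hist
    noisy single-history scores (a Gaussian stand-in for a real tally)."""
    return sum(rng.gauss(true_k, noise) for _ in range(n_hist)) / n_hist

rng = random.Random(1)
sd_small = statistics.stdev(keff_batch(100, rng) for _ in range(300))
sd_large = statistics.stdev(keff_batch(1600, rng) for _ in range(300))
ratio = sd_small / sd_large
print(ratio)  # 16x the histories should cut the spread by about 4x
```

The 1/sqrt(N) scaling is why driving the calculational standard deviation toward zero is expensive: each halving of the uncertainty costs four times the histories.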

22

Quantum Monte Carlo methods for nuclear physics

Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states and transition moments in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

J. Carlson; S. Gandolfi; F. Pederiva; Steven C. Pieper; R. Schiavilla; K. E. Schmidt; R. B. Wiringa

2014-12-09T23:59:59.000Z

23

Das Ising-Modell und Monte-Carlo-Simulation (The Ising Model and Monte Carlo Simulation)

The Ising Model and Monte Carlo Simulation. Aljoscha Rheinwalt, 14 January 2009. Supervising professor: Prof. M. Müller-Preußker. Outline: the Ising model (definition, applications); numerical analysis; statistical description; Monte...

24

ENVIRONMENTAL MODELING APPLICATIONS: MONTE CARLO SENSITIVITY SIMULATIONS

APPLICATIONS: MONTE CARLO SENSITIVITY SIMULATIONS TO THE PROBLEM OF AIR POLLUTION TRANSPORT. 1.1 The Danish Eulerian Model... of pollutants in a real-life scenario of air-pollution transport over Europe. First, the developed technique...

Dimov, Ivan

25

Monte Carlo event reconstruction implemented with artificial neural networks

I implemented event reconstruction of a Monte Carlo simulation using neural networks. The OLYMPUS Collaboration is using a Monte Carlo simulation of the OLYMPUS particle detector to evaluate systematics and reconstruct ...

Tolley, Emma Elizabeth

2011-01-01T23:59:59.000Z

26

Module 2: Monte Carlo Methods Prof. Mike Giles

...Bermudan options, optimal trading given transaction costs (MC Lecture 1)... Monte Carlo vs. finite differences: hard to get reliable figures, but my "guesstimate"... most heavily? ...and will it stay that way in the future?...

Giles, Mike

27

John von Neumann Institute for Computing Monte Carlo Protein Folding

John von Neumann Institute for Computing: Monte Carlo Protein Folding: Simulations of Met-Enkephalin with Solvent-Accessible Area (http://www.fz-juelich.de/nic-series/volume20)... difficulties in applying Monte Carlo methods to protein folding. The solvent-accessible area method, a popular...

Hsu, Hsiao-Ping

28

Overview of Bayesian sequential Monte Carlo methods for group and extended object tracking

Science Journals Connector (OSTI)

This work presents the current state-of-the-art in techniques for tracking a number of objects moving in a coordinated and interacting fashion. Groups are structured objects characterized with particular motion patterns. The group can be comprised of ... Keywords: Group and extended object tracking, Markov chain Monte Carlo methods, Metropolis Hastings, Nonlinear filtering, Reasoning over time, Sequential Monte Carlo methods

Lyudmila Mihaylova; Avishy Y. Carmi; François Septier; Amadou Gning; Sze Kim Pang; Simon Godsill

2014-02-01T23:59:59.000Z

29

Melting of Iron under Earth's Core Conditions from Diffusion Monte Carlo Free Energy Calculations

Melting of Iron under Earth's Core Conditions from Diffusion Monte Carlo Free Energy Calculations. Ester Sola and Dario Alfè, Thomas Young Centre@UCL and Department of Earth Sciences, UCL... Here we used quantum Monte Carlo techniques to compute the free energies of solid and liquid iron...

Alfè, Dario

30

Science Journals Connector (OSTI)

The aim of this study was to apply the Monte-Carlo techniques to develop a probabilistic risk assessment. The risk resulting from the occupational exposure during the remediation activities of a uranium tailings disposal, in an abandoned uranium mining ... Keywords: Monte Carlo simulation, occupational exposure, risk and dose assessment, uranium tailings disposal

M. L. Dinis; A. Fiúza

2010-08-01T23:59:59.000Z

31

Asymptotic Scaling and Monte Carlo Data

It is a generally known problem that the behaviour predicted from perturbation theory for asymptotically free theories like QCD, i.e. asymptotic scaling, has not been observed in Monte Carlo simulations when the series is expressed in terms of the bare coupling g_0. This discrepancy has been explained in the past by the poor convergence properties of the perturbative series in g_0. An alternative point of view, called Lattice-Distorted Perturbation Theory, proposes that lattice artifacts due to the finiteness of the lattice spacing, a, cause the disagreement between Monte Carlo data and perturbative scaling. Following this alternative scenario, we fit recent quenched data from different observables to fitting functions that include these cut-off effects, confirming that the lattice data are well reproduced by g_0-PT with the simple addition of terms O(a^n).

A. Trivini; C. R. Allton

2005-11-02T23:59:59.000Z

32

Overview of Monte Carlo radiation transport codes

The Radiation Safety Information Computational Center (RSICC) is the designated central repository of the United States Department of Energy (DOE) for nuclear software in radiation transport, safety, and shielding. Since the center was established in the early 1960s, there have been several Monte Carlo particle transport (MC) computer codes contributed by scientists from various countries. An overview of the neutron transport computer codes in the RSICC collection is presented.

Kirk, Bernadette Lugue [ORNL]

2010-01-01T23:59:59.000Z

33

New Monte Carlo Algorithm for Protein Folding

Science Journals Connector (OSTI)

We demonstrate that the recently introduced pruned-enriched Rosenbluth method leads to extremely efficient algorithms for the folding of simple model proteins. We test them on several models for lattice heteropolymers, and compare the results to published Monte Carlo studies. In all cases our algorithms are faster than previous ones, and in several cases we find new minimal energy states. In addition, our algorithms give estimates for the partition sum at finite temperatures.

Helge Frauenkron; Ugo Bastolla; Erwin Gerstner; Peter Grassberger; Walter Nadler

1998-04-06T23:59:59.000Z

34

Monte Carlo event generators at nonleading order

Science Journals Connector (OSTI)

A method to construct Monte Carlo event generators at arbitrarily nonleading order is explained for the case of a nongauge theory. A precise and correct treatment of parton kinematics is provided. Modifications of the conventional formalism are required: parton showering is not exactly the same as Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution, and the external line prescription for the hard scattering differs from the Lehmann-Symanzik-Zimmermann prescription. The prospects for extending the results to QCD are discussed.

John Collins

2002-05-06T23:59:59.000Z

35

A survey of Monte Carlo methods Jonathan Weare

A survey of Monte Carlo methods. Jonathan Weare, University of Chicago, April 5, 2011. ...problems are high dimensional so we need MC... how do [errors] in the initial conditions for an evolutionary PDE propagate? ...and many more...

Anisimov, Mikhail

36

Monte Carlo Particle Transport: Algorithm and Performance Overview

National Nuclear Security Administration (NNSA)

Monte Carlo Particle Transport: Algorithm and Performance Overview. N. A. Gentile, R. J. Procassini and H. A. Scott, Lawrence Livermore National Laboratory, Livermore, California, 94551. Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that

37

A Monte Carlo algorithm for degenerate plasmas

A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.

Turrell, A.E., E-mail: a.turrell09@imperial.ac.uk; Sherlock, M.; Rose, S.J.

2013-09-15T23:59:59.000Z
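The initialisation step described in this abstract, drawing particle energies from a Fermi-Dirac distribution, can be sketched with simple rejection sampling (illustrative `MU`, `KT`, and cutoff values; not the authors' algorithm):

```python
import math
import random

MU, KT, E_MAX = 1.0, 0.2, 3.0      # illustrative values, not from the paper

def fd_density(e):
    """Unnormalised Fermi-Dirac energy distribution for an ideal gas:
    density of states * occupation = sqrt(E) / (exp((E - mu)/kT) + 1)."""
    return math.sqrt(e) / (math.exp((e - MU) / KT) + 1.0)

# Crude numerical bound on the density, used as the rejection envelope.
F_MAX = max(fd_density(E_MAX * k / 1000.0) for k in range(1, 1001))

def sample_fd(rng):
    """Rejection sampling: propose E uniformly on (0, E_MAX) and accept
    with probability fd_density(E) / F_MAX (simple but unbiased)."""
    while True:
        e = E_MAX * rng.random()
        if rng.random() * F_MAX <= fd_density(e):
            return e

rng = random.Random(3)
energies = [sample_fd(rng) for _ in range(5_000)]
frac_below_mu = sum(e < MU for e in energies) / len(energies)
print(frac_below_mu)  # most sampled particles sit below the Fermi level
```

A uniform envelope is wasteful for strongly degenerate cases, but it keeps the sketch short; any envelope that dominates the density would do.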

38

Hybrid Monte Carlo and topological modes of full QCD

We investigate the performance of the hybrid Monte Carlo algorithm, the standard algorithm used for lattice QCD simulations involving fermions, in updating non-trivial global topological structures. We find that the hybrid Monte Carlo algorithm has serious problems decorrelating the global topological charge at the values of $\beta$ and $m$ currently simulated, where continuum physics should be approximately realized. This represents a warning which must be seriously considered when simulating full QCD by hybrid Monte Carlo.

B. Allés; G. Boyd; M. D'Elia; A. Di Giacomo; E. Vicari

1996-07-22T23:59:59.000Z

39

Monte Carlo Simulations of the Corrosion of Aluminoborosilicate...

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Monte Carlo Simulations of the Corrosion of Aluminoborosilicate Glasses. Abstract: Aluminum is one of the most common...

40

Posters Monte Carlo Simulation of Longwave Fluxes Through Broken...

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Monte Carlo Simulation of Longwave Fluxes Through Broken Scattering Cloud Fields. E. E. Takara and R. G. Ellingson, University of Maryland, College Park, Maryland. To...


41

Binding and Diffusion of Lithium in Graphite: Quantum Monte Carlo...

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Binding and Diffusion of Lithium in Graphite: Quantum Monte Carlo Benchmarks and Validation of van der Waals Density Functional Methods. P. Ganesh, Jeongnim Kim, Changwon...

42

THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics; particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory

2007-01-10T23:59:59.000Z

43

Calculating Pi Using the Monte Carlo Method

Science Journals Connector (OSTI)

During the summer of 2012 I had the opportunity to participate in a research experience for teachers at the center for sustainable energy at Notre Dame University (RET @ cSEND) working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation a reactor may produce 10^21 antineutrinos per second with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo.[1] Further investigation led me to the Monte Carlo method page of Wikipedia,[2] where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations[2] or purely mathematical.[3] It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.

Timothy Williamson

2013-01-01T23:59:59.000Z
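The rice-sprinkling experiment described above maps directly onto a few lines of code (a generic textbook sketch, not tied to the classroom activity's materials):

```python
import random

def estimate_pi(n, rng):
    """Scatter n random points in the unit square; the fraction landing
    inside the quarter circle of radius 1 approaches pi / 4."""
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(n))
    return 4.0 * inside / n

rng = random.Random(2024)
pi_est = estimate_pi(100_000, rng)
print(pi_est)  # converges toward 3.14159... as n grows
```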

44

Energy Monte Carlo (EMCEE) | Open Energy Information

Energy Monte Carlo (EMCEE) | Open Energy Information. Tool Summary: Name: EMCEE and Emc2. Agency/Company/Organization: United States Geological Survey. Sector: Energy. Focus Area: Non-renewable Energy. Topics: Resource assessment. Resource Type: Software/modeling tools. User Interface: Spreadsheet. Website: pubs.usgs.gov/pp/pp1713/26/. Country: United States. Cost: Free.

45

RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS

RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS. A. Kersch, W. Moroko, A. Schuster (Siemens). ... of Quasi-Monte Carlo to this problem. 1.1 Radiative Heat Transfer ... Reactors: In the manufacturing of ... the problems which can be solved by such a simulation is high accuracy modeling of the radiative heat transfer ...

46

Sequential Monte Carlo EM for multivariate probit models

Science Journals Connector (OSTI)

Multivariate probit models have the appealing feature of capturing some of the dependence structure between the components of multidimensional binary responses. The key for the dependence modelling is the covariance matrix of an underlying latent multivariate ... Keywords: Adaptive sequential Monte Carlo, Maximum likelihood, Monte Carlo EM, Multivariate probit

Giusi Moffa; Jack Kuipers

2014-04-01T23:59:59.000Z

47

CERN-TH.6275/91 Monte Carlo Event Generation

CERN-TH.6275/91: Monte Carlo Event Generation for LHC. T. Sjöstrand, CERN, Geneva. Abstract: The necessity of event generators for LHC physics studies is illustrated, and the Monte Carlo approach is outlined. A survey is presented of existing event generators, followed by a more detailed study ...

Sjöstrand, Torbjörn

48

Monte Carlo stratified source-sampling

In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "eigenvalue of the world" problem. Argonne presented a paper, at that session, in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. The original test-problem was treated by a special code designed specifically for that purpose. Recently ANL started work on a method for dealing with more realistic eigenvalue of the world configurations, and has been incorporating this method into VIM. The original method has been modified to take into account real-world statistical noise sources not included in the model problem. This paper constitutes a status report on work still in progress.

Blomquist, R.N.; Gelbard, E.M.

1997-09-01T23:59:59.000Z

49

National Nuclear Security Administration (NNSA)

Monte Carlo Simulation of Joint Transport of Neutrons and Photons. Zhitnik A.K., Artemeva E.V., Bakanov V.V., Donskoy E.N., Zalyalov A.N., Ivanov N.V., Ognev S.P., Ronzhin A.B., Roslov V.I., Semenova T.V. RFNC-VNIIEF, 607190, Sarov, Nizhni Novgorod region. The approaches used at VNIIEF to simulate transport of neutrons and photons in standard (with surface description of region interfaces) and grid geometries are considered in the paper.

50

GPU accelerated Monte Carlo simulations of lattice spin models

We consider Monte Carlo simulations of classical spin models of statistical mechanics using the massively parallel architecture provided by graphics processing units (GPUs). We discuss simulations of models with discrete and continuous variables, and using an array of algorithms ranging from single-spin flip Metropolis updates over cluster algorithms to multicanonical and Wang-Landau techniques to judge the scope and limitations of GPU accelerated computation in this field. For most simulations discussed, we find significant speed-ups by two to three orders of magnitude as compared to single-threaded CPU implementations.
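The single-spin-flip Metropolis update named in the abstract is a few lines of code; the GPU implementations discussed there parallelize updates of this kind, but a single-threaded reference sketch (all names here are my own) already shows the essential accept/reject step:

```python
import math
import random

def metropolis_sweep(spins, L, beta, rng):
    """One Metropolis sweep of an L x L Ising lattice with periodic
    boundaries: pick L*L random spins and flip each with probability
    min(1, exp(-beta * dE))."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb          # energy change if we flip (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

def energy_per_spin(spins, L):
    """Nearest-neighbour energy per spin, counting each bond once."""
    E = 0
    for i in range(L):
        for j in range(L):
            E -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
    return E / (L * L)

rng = random.Random(1)
L = 16
spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
for _ in range(200):
    metropolis_sweep(spins, L, beta=0.6, rng=rng)   # beta > beta_c ~ 0.4407
print(f"energy per spin = {energy_per_spin(spins, L):.2f}")
```

At beta = 0.6 the system is well below the critical temperature, so after a few hundred sweeps the energy per spin should approach its ordered value of -2.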

Martin Weigel; Taras Yavors'kii

2011-07-27T23:59:59.000Z

51

E-Print Network 3.0 - all-atom monte carlo Sample Search Results

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Source: Collection: Engineering ; Computer Technologies and Information Sciences 17 Das Ising-Modell und Monte-Carlo-Simulation Summary:...

52

E-Print Network 3.0 - accurate monte carlo Sample Search Results

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Tilburg University Collection: Computer Technologies and Information Sciences 59 Das Ising-Modell und Monte-Carlo-Simulation Summary:...

53

E-Print Network 3.0 - adjoint monte carlo Sample Search Results

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Oxford Collection: Engineering ; Computer Technologies and Information Sciences 37 Das Ising-Modell und Monte-Carlo-Simulation Summary:...

54

Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU)

... and S. Andersson-Engels, "Accelerated Monte Carlo models to ..." ... Accelerated rescaling of single Monte Carlo simulation runs ... reported on online, GPU-accelerated MC simulations. Along ...

Yang, Owen; Choi, Bernard

2013-01-01T23:59:59.000Z

55

Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

Urbatsch, T.J.

1995-11-01T23:59:59.000Z

56

Recent advances and future prospects for Monte Carlo

The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.

Brown, Forrest B [Los Alamos National Laboratory

2010-01-01T23:59:59.000Z

57

Monte Carlo methods for the nuclear shell model

We present novel Monte Carlo methods for treating the interacting shell model that allow exact calculations much larger than those heretofore possible. The two-body interaction is linearized by an auxiliary field; Monte Carlo evaluation of the resulting functional integral gives ground-state or thermal expectation values of few-body operators. The ``sign problem'' generic to quantum Monte Carlo calculations is absent in a number of cases. We discuss the favorable scaling of these methods with nucleon number and basis size and their suitability to parallel computation.

C. W. Johnson; S. E. Koonin; G. H. Lang; W. E. Ormand

1992-10-20T23:59:59.000Z

58

Markov-Chain Monte Carlo Methods for Simulations of Biomolecules

The computer revolution has been driven by a sustained increase of computational speed of approximately one order of magnitude (a factor of ten) every five years since about 1950. In natural sciences this has led to a continuous increase of the importance of computer simulations. Major enabling techniques are Markov Chain Monte Carlo (MCMC) and Molecular Dynamics (MD) simulations. This article deals with the MCMC approach. First basic simulation techniques, as well as methods for their statistical analysis are reviewed. Afterwards the focus is on generalized ensembles and biased updating, two advanced techniques, which are of relevance for simulations of biomolecules, or are expected to become relevant with that respect. In particular we consider the multicanonical ensemble and the replica exchange method (also known as parallel tempering or method of multiple Markov chains).
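The replica exchange (parallel tempering) method reviewed in this abstract can be sketched in a few lines on a toy target; the double-well potential, temperature ladder, and step size below are my own illustrative choices, not taken from the article:

```python
import math
import random

def parallel_tempering(betas, n_steps, step=0.5, seed=2):
    """Minimal replica-exchange (parallel tempering) sampler for a 1-D
    double-well potential U(x) = 8 * (x**2 - 1)**2.  Each sweep does one
    local Metropolis move per replica, then attempts swaps between
    neighbouring temperatures."""
    U = lambda x: 8.0 * (x * x - 1.0) ** 2
    rng = random.Random(seed)
    xs = [1.0] * len(betas)            # every replica starts in the right well
    cold_trace = []
    for _ in range(n_steps):
        for k, beta in enumerate(betas):
            prop = xs[k] + rng.uniform(-step, step)
            dU = U(prop) - U(xs[k])
            if dU <= 0 or rng.random() < math.exp(-beta * dU):
                xs[k] = prop
        for k in range(len(betas) - 1):
            # swap acceptance: exp((beta_k - beta_{k+1}) * (U_k - U_{k+1}))
            d = (betas[k] - betas[k + 1]) * (U(xs[k]) - U(xs[k + 1]))
            if d >= 0 or rng.random() < math.exp(d):
                xs[k], xs[k + 1] = xs[k + 1], xs[k]
        cold_trace.append(xs[0])
    return cold_trace

trace = parallel_tempering(betas=[4.0, 1.0, 0.25], n_steps=5000)
crossings = sum(1 for a, b in zip(trace, trace[1:]) if a * b < 0)
print("barrier crossings seen by the cold replica:", crossings)
```

A cold replica alone would almost never cross the barrier (the Boltzmann factor at beta = 4 is exp(-32)); swaps with the hot replicas are what carry it between wells, which is exactly the point of the method.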

Bernd A. Berg

2007-09-04T23:59:59.000Z

59

Sequential Monte Carlo Methods for Protein Folding

We describe a class of growth algorithms for finding low energy states of heteropolymers. These polymers form toy models for proteins, and the hope is that similar methods will ultimately be useful for finding native states of real proteins from heuristic or a priori determined force fields. These algorithms share with standard Markov chain Monte Carlo methods that they generate Gibbs-Boltzmann distributions, but they are not based on the strategy that this distribution is obtained as stationary state of a suitably constructed Markov chain. Rather, they are based on growing the polymer by successively adding individual particles, guiding the growth towards configurations with lower energies, and using "population control" to eliminate bad configurations and increase the number of "good ones". This is not done via a breadth-first implementation as in genetic algorithms, but depth-first via recursive backtracking. As seen from various benchmark tests, the resulting algorithms are extremely efficient for lattice models, and are still competitive with other methods for simple off-lattice models.
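The growth algorithms described here descend from Rosenbluth sampling; a toy sketch of the plain Rosenbluth growth step for lattice self-avoiding walks follows (the depth-first population control that distinguishes the abstract's methods is not shown; names are my own):

```python
import random

def rosenbluth_walk(n_steps, rng):
    """Grow one self-avoiding walk on the square lattice by Rosenbluth
    sampling: step uniformly among the empty neighbours and accumulate
    the product of the number of available choices as the walk's weight.
    A returned weight of 0 means the walk trapped itself."""
    pos = (0, 0)
    occupied = {pos}
    weight = 1.0
    for _ in range(n_steps):
        x, y = pos
        free = [p for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                if p not in occupied]
        if not free:
            return occupied, 0.0
        weight *= len(free)
        pos = rng.choice(free)
        occupied.add(pos)
    return occupied, weight

# The mean weight is an unbiased estimate of the number of self-avoiding
# walks of that length (c_5 = 284 on the square lattice).
rng = random.Random(7)
n_walks = 50_000
mean_w = sum(rosenbluth_walk(5, rng)[1] for _ in range(n_walks)) / n_walks
print(f"estimated c_5 = {mean_w:.1f}")
```

The guided, weighted growth in the abstract replaces the uniform choice among free neighbours with an energy-biased one and prunes or clones walks by their weight; the unbiasedness argument is the same as for this plain version.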

Peter Grassberger

2004-08-26T23:59:59.000Z

60

A Monte Carlo tool for multi-node reliability evaluation

... Area Reliability Program (NARP) is based on the random sampling of generator and transmission line status for each hour. Monte Carlo Approach for Estimating Contingency Statistics along with the Evaluation Subroutine (MACS-ES) advances the generation ...

Thalasila, Chander Pravin

2012-06-07T23:59:59.000Z

61

An Analysis Tool for Flight Dynamics Monte Carlo Simulations

and analysis work to understand vehicle operating limits and identify circumstances that lead to mission failure. A Monte Carlo simulation approach that varies a wide range of physical parameters is typically used to generate thousands of test cases...

Restrepo, Carolina 1982-

2011-05-20T23:59:59.000Z

62

Monte Carlo Domain Decomposition for Robust Nuclear Reactor Analyses

Science Journals Connector (OSTI)

Abstract: Monte Carlo (MC) neutral particle transport codes are considered the gold-standard for nuclear simulations, but they cannot be robustly applied to high-fidelity nuclear reactor analysis without accommodating several terabytes of materials and tally data. While this is not a large amount of aggregate data for a typical high performance computer, MC methods are only embarrassingly parallel when the key data structures are replicated for each processing element, an approach which is likely infeasible on future machines. The present work explores the use of spatial domain decomposition to make full-scale nuclear reactor simulations tractable with Monte Carlo methods, presenting a simple implementation in a production-scale code. Good performance is achieved for mesh-tallies of up to 2.39TB distributed across 512 compute nodes while running a full-core reactor benchmark on the Mira Blue Gene/Q supercomputer at the Argonne National Laboratory. In addition, the effects of load imbalances are explored with an updated performance model that is empirically validated against observed timing results. Several load balancing techniques are also implemented to demonstrate that imbalances can be largely mitigated, including a new and efficient way to distribute extra compute resources across coarse domain meshes.

Nicholas Horelik; Andrew Siegel; Benoit Forget; Kord Smith

2014-01-01T23:59:59.000Z

63

Monte Carlo source convergence and the Whitesides problem

The issue of fission source convergence in Monte Carlo eigenvalue calculations is of interest because of the potential consequences of erroneous criticality safety calculations. In this work, the authors compare two different techniques to improve the source convergence behavior of standard Monte Carlo calculations applied to challenging source convergence problems. The first method, super-history powering, attempts to avoid discarding important fission sites between generations by delaying stochastic sampling of the fission site bank until after several generations of multiplication. The second method, stratified sampling of the fission site bank, explicitly keeps the important sites even if conventional sampling would have eliminated them. The test problems are variants of Whitesides' Criticality of the World problem in which the fission site phase space was intentionally undersampled in order to induce marginally intolerable variability in local fission site populations. Three variants of the problem were studied, each with a different degree of coupling between fissionable pieces. Both the superhistory powering method and the stratified sampling method were shown to improve convergence behavior, although stratified sampling is more robust for the extreme case of no coupling. Neither algorithm completely eliminates the loss of the most important fissionable piece, and if coupling is absent, the lost piece cannot be recovered unless its sites from earlier generations have been retained. Finally, criteria for measuring source convergence reliability are proposed and applied to the test problems.
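A toy illustration of why stratified sampling of the fission site bank helps in loosely coupled problems: conventional multinomial resampling can drop a weakly coupled unit entirely, whereas a stratified scheme guarantees that every stratum carrying weight keeps at least one site. The zone names and the deterministic comb-style splitting below are my own simplifications, not the algorithm of the paper:

```python
def stratified_resample(bank, n_total):
    """Toy stratified resampling of a fission-site bank: sites are grouped
    by zone, every zone that carries any weight is guaranteed at least one
    site in the new bank, and total source weight is preserved.  (A sketch
    of the idea only, not the VIM/MCNP algorithm.)"""
    zone_weight = {}
    for zone, w in bank:
        zone_weight[zone] = zone_weight.get(zone, 0.0) + w
    total = sum(zone_weight.values())
    new_bank = []
    for zone, w in zone_weight.items():
        n = max(1, round(n_total * w / total))   # never sample a zone to zero
        new_bank.extend((zone, w / n) for _ in range(n))
    return new_bank

# A loosely coupled array: one dominant unit plus one weak "outlier" unit.
bank = [("core", 1.0) for _ in range(999)] + [("tank", 0.001)]
new = stratified_resample(bank, n_total=500)
print({z for z, _ in new})   # the weak 'tank' zone survives resampling
```

Under multinomial sampling of 500 sites from this bank, the single low-weight site would almost always be lost, which is precisely the anomaly the abstract describes for uncoupled fissionable pieces.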

Blomquist, R. N.

2000-02-25T23:59:59.000Z

64

Adjoint and forward Monte Carlo coupled weight window generator for variance reduction

Among the various variance-reduction methods in Monte Carlo calculations, one of the most widely used techniques is the weight window method. The MCNP code provides a weight window generator (WWG) option. In WWG of MCNP, the importance of a cell is estimated by the virtual sampling method during normal Monte Carlo calculation. But, the performance of WWG tends to deteriorate in deep penetration problems. To enhance the performance of the weight window method, importance estimation by the deterministic adjoint calculation has been proposed. However, this approach is possible only when the related deterministic code and interface program are available. The midway coupling method is a surface tally technique that calculates detector response based on the reciprocity theorem, but it does not provide an importance generator. In this paper, a new weight window generation method, called adjoint and forward Monte Carlo coupled WWG, is proposed to overcome the drawbacks of the current WWG in MCNP and the deterministic adjoint calculation method.
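However the window bounds are generated (virtual sampling, adjoint calculation, or the coupled method proposed here), the weight-window game they feed is the same splitting/roulette rule. A generic sketch (bounds and survival rule are illustrative, not MCNP's):

```python
import random

def apply_weight_window(w, w_low, w_high, rng):
    """Apply a cell's weight window to one particle: split above the
    window, play Russian roulette below it, pass through unchanged inside.
    Returns the list of surviving particle weights (possibly empty)."""
    if w > w_high:
        n = int(w / w_high) + 1            # split into n lighter tracks
        return [w / n] * n
    if w < w_low:
        w_surv = 0.5 * (w_low + w_high)    # roulette toward the window centre
        if rng.random() < w / w_surv:
            return [w_surv]                # survivor carries boosted weight
        return []                          # killed; weight preserved on average
    return [w]

# Sanity check: the game is unbiased, i.e. it conserves weight in expectation.
rng = random.Random(4)
total_in = total_out = 0.0
for _ in range(100_000):
    w = rng.uniform(0.001, 5.0)
    total_in += w
    total_out += sum(apply_weight_window(w, 0.5, 2.0, rng))
print(f"in: {total_in:.0f}  out: {total_out:.0f}")
```

The roulette branch survives with probability w / w_surv and pays w_surv on survival, so its expected weight is exactly w; this is the property that lets weight windows reduce variance without biasing tallies.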

Ahn, J.G.; Cho, N.Z.

2000-07-01T23:59:59.000Z

65

A midway forward-adjoint coupling method for neutron and photon Monte Carlo transport

The midway Monte Carlo method for calculating detector responses combines a forward and an adjoint Monte Carlo calculation. In both calculations, particle scores are registered at a surface to be chosen by the user somewhere between the source and detector domains. The theory of the midway response determination is developed within the framework of transport theory for external sources and for criticality theory. The theory is also developed for photons, which are generated at inelastic scattering or capture of neutrons. In either the forward or the adjoint calculation a so-called black absorber technique can be applied; i.e., particles need not be followed after passing the midway surface. The midway Monte Carlo method is implemented in the general-purpose MCNP Monte Carlo code. The midway Monte Carlo method is demonstrated to be very efficient in problems with deep penetration, small source and detector domains, and complicated streaming paths. All the problems considered pose difficult variance reduction challenges. Calculations were performed using existing variance reduction methods of normal MCNP runs and using the midway method. The performed comparative analyses show that the midway method appears to be much more efficient than the standard techniques in an overwhelming majority of cases and can be recommended for use in many difficult variance reduction problems of neutral particle transport.

Serov, I.V.; John, T.M.; Hoogenboom, J.E. [Delft Univ. of Technology (Netherlands). Interfaculty Reactor Inst.

1999-09-01T23:59:59.000Z

66

An Overview of Geometry Representation in Monte Carlo Codes

National Nuclear Security Administration (NNSA)

Geometry Representation in Monte Carlo Codes. R.P. Kensek,* B.C. Franke,* T.W. Laub,* L.J. Lorence,* M.R. Martin,* S. Warren† (*Sandia National Laboratories, P.O. Box 5800, MS 1179, Albuquerque, NM 87185; †Kansas State University, Manhattan, KS 66506). Geometry representations in production Monte Carlo radiation transport codes used for linear-transport simulations are traditionally limited to combinatorial geometry (CG) topologies. While CG representations of input geometries are efficient to query, they are difficult to construct. In the Integrated-TIGER-Series (ITS) Monte Carlo code suite, a new approach for radiation transport geometry engines has been implemented that allows for Computer Aided Design (CAD), facetted approximations, and other geometry types to simultaneously define an input geometry.

67

The Monte Carlo Independent Column Approximation Model Intercomparison

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

The Monte Carlo Independent Column Approximation Model Intercomparison Project (McMIP). Barker, Howard (Meteorological Service of Canada); Cole, Jason (Meteorological Service of Canada); Raisanen, Petri (Finnish Meteorological Institute); Pincus, Robert (NOAA-CIRES Climate Diagnostics Center); Morcrette, Jean-Jacques (European Centre for Medium-Range Weather Forecasts); Li, Jiangnan (Canadian Center for Climate Modelling); Stephens, Graeme (Colorado State University); Vaillancourt, Paul (Environment Canada); Oreopoulos, Lazaros (JCET/UMBC and NASA/GSFC); Siebesma, Pier (KNMI); Los, Alexander (KNMI); Clothiaux, Eugene (The Pennsylvania State University); Randall, David (Colorado State University); Iacono, Michael (Atmospheric & Environmental Research, Inc.). Category: Radiation. The Monte Carlo Independent Column Approximation (McICA) method for ...

68

Fast neutron fluxes in pressure vessels using Monte Carlo methods

The objective of this project is to determine the feasibility of calculating the fast neutron flux in the pressure vessel of a pressurized water reactor by Monte Carlo methods. Neutron reactions reduce the ductility of the steel and thus limit the useful life of this important reactor component. This work was performed for Virginia Power (VEPCO). VIM is a continuous-energy Monte Carlo code which provides a versatile geometrical capability and a neutron physics data base closely representing the ENDF/B-IV data from which it was derived.

Edlund, M.C.; Thomas, J.R.

1986-01-01T23:59:59.000Z

69

Improved convergence of Monte Carlo generated multi-group scattering moments

This paper introduces an improved method of obtaining multi-group scattering moments from a Monte Carlo neutron transport code for use in deterministic transport solvers. The new method increases the information obtained from scattering events and therefore has more useful convergence characteristics than the currently used analog techniques. A prototype of the improved method was implemented in the OpenMC Monte Carlo transport code to compare the accuracy and convergence characteristics of the new method. The prototype showed that accuracy was retained (or improved) while increasing the figure-of-merit for the generation of multi-group scattering moments. (authors)

Nelson, A. G.; Martin, W. R. [University of Michigan, Department of Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Boulevard, Ann Arbor, MI 48104 (United States)

2013-07-01T23:59:59.000Z

70

A Monte Carlo synthetic-acceleration method for solving the thermal radiation diffusion equation

We present a novel synthetic-acceleration-based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three spatial dimensions. The algorithm performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple material problem. Our results show that our Monte Carlo method is an effective solver for sparse matrix systems. For solutions converged to the same tolerance, it performs competitively with deterministic methods including preconditioned conjugate gradient and GMRES. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.
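Using Monte Carlo as a linear solver rests on the von Neumann-Ulam random-walk estimate of the Neumann series; the sketch below shows that underlying idea for a tiny fixed-point system, not the paper's synthetic-acceleration algorithm (the uniform transition probabilities and all parameters are my own choices):

```python
import random

def mc_solve_component(H, b, i0, n_walks, p_stop=0.3, seed=6):
    """Estimate component i0 of the solution of x = H x + b (spectral
    radius of H below 1) by forward random walks: each walk scores b
    along its path while its weight tracks the terms of the Neumann
    series x = b + H b + H^2 b + ..."""
    rng = random.Random(seed)
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        i, w, score = i0, 1.0, 0.0
        while True:
            score += w * b[i]
            if rng.random() < p_stop:          # absorb the walk
                break
            j = rng.randrange(n)               # uniform transition
            w *= H[i][j] * n / (1.0 - p_stop)  # unbiasing weight
            i = j
        total += score
    return total / n_walks

H = [[0.2, 0.1],
     [0.1, 0.3]]
b = [1.0, 2.0]
# Exact solution of (I - H) x = b is x = (1.6364, 3.0909).
print(f"x[0] ~ {mc_solve_component(H, b, 0, 100_000):.3f}")
```

Because each walk estimates one component independently, such solvers parallelize trivially; the cost of the plain scheme is slow convergence, which is what synthetic acceleration addresses.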

Evans, Thomas M., E-mail: evanstm@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831 (United States); Mosher, Scott W., E-mail: moshersw@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831 (United States); Slattery, Stuart R., E-mail: sslattery@wisc.edu [University of Wisconsin–Madison, 1500 Engineering Dr., Madison, WI 53716 (United States); Hamilton, Steven P., E-mail: hamiltonsp@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831 (United States)

2014-02-01T23:59:59.000Z

71

Monte Carlo Estimation of Time Mismatch Effect in an OFDM EER Architecture

Monte Carlo Estimation of Time Mismatch Effect in an OFDM EER Architecture. J-F. Bercher, A. Diet, C. ... technique due to non-linearities of the power amplification operation. EER architecture can be used to solve ... non-linearities in the radio-frequency transmitter. Linearization methods are necessary. EER (Envelope Elimination ...

Baudoin, GeneviÃ¨ve

72

Monte Carlo Characterization of a Pulsed Laser-Wakefield Driven Monochromatic

Monte Carlo Characterization of a Pulsed Laser-Wakefield Driven Monochromatic X-Ray Source. S. D... determination of the incident X-ray energy by using unfolding techniques. I. INTRODUCTION: The Diocles laser ... light from the same laser system, producing monochromatic X-rays with energy and spectral width ...

Umstadter, Donald

73

E-Print Network 3.0 - atlas monte carlo Sample Search Results

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

di - Dipartimento di Fisica, Quantum Optics Group Collection: Physics 50 Das Ising-Modell und Monte-Carlo-Simulation Summary:...

74

E-Print Network 3.0 - accelerated monte carlo Sample Search Results

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Nuclear Technologies ; Environmental Sciences and Ecology ; Biology and Medicine 36 Das Ising-Modell und Monte-Carlo-Simulation Summary:...

75

E-Print Network 3.0 - applicator monte carlo Sample Search Results

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

of Electrical and Computer Engineering, Virginia Tech Collection: Engineering 85 Das Ising-Modell und Monte-Carlo-Simulation Summary:...

76

E-Print Network 3.0 - adaptive monte carlo Sample Search Results

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

and Statistics, University of New South Wales Collection: Mathematics 47 Das Ising-Modell und Monte-Carlo-Simulation Summary:...

77

E-Print Network 3.0 - accelerating monte carlo Sample Search...

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Nuclear Technologies ; Environmental Sciences and Ecology ; Biology and Medicine 36 Das Ising-Modell und Monte-Carlo-Simulation Summary:...

78

E-Print Network 3.0 - anatomy monte carlo Sample Search Results

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

de Liège Collection: Power Transmission, Distribution and Plants ; Engineering 19 Das Ising-Modell und Monte-Carlo-Simulation Summary:...

79

E-Print Network 3.0 - absolute monte carlo Sample Search Results

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Engineering, University of California at San Diego Collection: Engineering 37 Das Ising-Modell und Monte-Carlo-Simulation Summary:...

80

Science Journals Connector (OSTI)

...infusion. Simulations with our...equilibration by modeling and to derive...Monte Carlo simulations. Drusano...bone was rapid for both...using a fully automated extraction...pharmacokinetic modeling and Monte Carlo simulation. Antimicrob...

Cornelia B. Landersdorfer; Martina Kinzig; Jürgen B. Bulitta; Friedrich F. Hennig; Ulrike Holzgrabe; Fritz Sörgel; Johannes Gusinde

2009-03-23T23:59:59.000Z

81

Monte Carlo Simulations of Thermal Conductivity in Nanoporous Si Membranes

Monte Carlo Simulations of Thermal Conductivity in Nanoporous Si Membranes. Stefanie Wolf ... candidates for thermoelectric materials as they can provide extremely low thermal conductivity ... of boundary scattering on the thermal conductivity. We show that the material porosity strongly affects ...

82

Impact of random numbers on parallel Monte Carlo application

A number of graduate students are involved at various levels of research in this project. We investigate the basic issues in materials using Monte Carlo simulations, with specific interest in heterogeneous materials. Attempts have been made to seek collaborations with the DOE laboratories. Specific details are given.

Pandey, Ras B.

2002-10-22T23:59:59.000Z

83

Monte Carlo calibration of avalanches described as Coulomb fluid flows

Science Journals Connector (OSTI)

...different mathematical issues to invert a relationship in the form...crude current knowledge of snow rheology and avalanche physics as well...positioned on each side of 0 and a flat asymmetric tail extending over...Theoretical Monte Carlo Method Motion Rheology methods Snow chemistry Static...

2005-01-01T23:59:59.000Z

84

Difficulties in vector-parallel processing of Monte Carlo codes

Experiences with vectorization of production-level Monte Carlo codes such as KENO-IV, MCNP, VIM, and MORSE have shown that it is difficult to attain high speedup ratios on vector processors because of indirect addressing, nests of conditional branches, short vector length, cache misses, and operations for realization of robustness and generality. A previous work has already shown that the first, second, and third difficulties can be resolved by using special computer hardware for vector processing of Monte Carlo codes. Here, the fourth and fifth difficulties are discussed in detail using the results for a vectorized version of the MORSE code. As for the fourth difficulty, it is shown that the cache miss-hit ratio affects execution times of the vectorized Monte Carlo codes and the ratio strongly depends on the number of the particles simultaneously tracked. As for the fifth difficulty, it is shown that remarkable speedup ratios are obtained by removing operations that are not essential to the specific problem being solved. These experiences have shown that if a production-level Monte Carlo code system had a capability to selectively construct source coding that complements the input data, then the resulting code could achieve much higher performance.

Higuchi, Kenji; Asai, Kiyoshi [Japan Atomic Energy Research Inst., Tokyo (Japan). Center for Promotion of Computational Science and Engineering; Hasegawa, Yukihiro [Research Organization for Information Science and Technology, Tokai, Ibaraki (Japan)

1997-09-01T23:59:59.000Z

85

Using random number generators in Monte Carlo simulations

Science Journals Connector (OSTI)

One of the standard tests for Monte Carlo algorithms and for testing random number generators is the two-dimensional Ising model. We show that at least in the present case, where we study the two-state clock model, good random number generators can give inconsistent values for the critical temperature.

F. J. Resende and B. V. Costa

1998-10-01T23:59:59.000Z

86

Evolutionary Monte Carlo for protein folding simulations Faming Lianga)

Evolutionary Monte Carlo for protein folding simulations. Faming Liang (Department of Statistics) ... to simulations of protein folding on simple lattice models, and to finding the ground state of a protein. In all ... structures in protein folding. The numerical results show that it is drastically superior to other methods ...

Liang, Faming

87

ccsd-00003115: Coupled Electron Ion Monte Carlo Calculations of Atomic ...

ccsd-00003115, version 1, 21 Oct 2004. Coupled Electron Ion Monte Carlo Calculations of Atomic ... ground state calculations where both electronic and protonic degrees of freedom are treated quantum ... zero temperature with a QMC calculation for the electronic energies where the Born-Oppenheimer approximation helps ...

88

Faster Fermions in the Tempered Hybrid Monte Carlo Algorithm

Tempering is used to change the quark mass while remaining in equilibrium between the trajectories of a standard hybrid Monte Carlo simulation of four flavours of staggered fermions. The algorithm is faster for small enough quark masses, and particularly so when more than one mass is required.

G. Boyd

1997-01-20T23:59:59.000Z

89

Monte Carlo sampling from the quantum state space. II

High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the Markov-chain Monte Carlo method known as Hamiltonian Monte Carlo, or Hybrid Monte Carlo, can be adapted to this context. It is applicable when an efficient parameterization of the state space is available. The resulting random walk is entirely inside the physical parameter space, and the Hamiltonian dynamics enable us to take big steps, thereby avoiding strong correlations between successive sample points while enjoying a high acceptance rate. We use examples of single and double qubit measurements for illustration.

Yi-Lin Seah; Jiangwei Shang; Hui Khoon Ng; David John Nott; Berthold-Georg Englert

2014-07-29T23:59:59.000Z
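The Hamiltonian (Hybrid) Monte Carlo recipe summarized in this abstract (resample a momentum, integrate Hamiltonian dynamics with a leapfrog scheme, then accept or reject on the total energy) can be sketched for a one-dimensional toy target. This is a standard normal rather than the paper's quantum state space, and all names below are illustrative:

```python
import math
import random

def hmc_sample(logp, grad_logp, x0, n_samples, step=0.15, n_leap=20, seed=1):
    """Hamiltonian (Hybrid) Monte Carlo for a 1-D target density."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                     # fresh momentum each step
        x_new, p_new = x, p
        p_new += 0.5 * step * grad_logp(x_new)      # leapfrog: opening half kick
        for _ in range(n_leap - 1):
            x_new += step * p_new
            p_new += step * grad_logp(x_new)
        x_new += step * p_new
        p_new += 0.5 * step * grad_logp(x_new)      # closing half kick
        h_old = -logp(x) + 0.5 * p * p              # total energy before...
        h_new = -logp(x_new) + 0.5 * p_new * p_new  # ...and after the trajectory
        if math.log(rng.random()) < h_old - h_new:  # Metropolis accept/reject
            x = x_new
        samples.append(x)
    return samples

# Toy target: standard normal, logp(x) = -x^2/2 up to a constant.
chain = hmc_sample(lambda x: -0.5 * x * x, lambda x: -x, 0.0, 5000)
```

Longer leapfrog trajectories give the "big steps" the abstract mentions, weakening correlations between successive samples while the near-conservation of energy keeps the acceptance rate high.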

90

SciTech Connect: Fast Monte Carlo for radiation therapy: the...

Office of Scientific and Technical Information (OSTI)

MEDICINE, BASIC STUDIES; RADIOTHERAPY; PLANNING; COMPUTER CALCULATIONS; RADIATION DOSE DISTRIBUTIONS; MONTE CARLO METHOD; THREE-DIMENSIONAL CALCULATIONS; COMPUTERIZED TOMOGRAPHY...

91

Improving computational efficiency of Monte Carlo simulations with variance reduction

CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

Turner, A. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, 0X14 3DB (United Kingdom); Davis, A. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, 0X14 3DB (United Kingdom); University of Wisconsin-Madison, Madison, WI 53706 (United States)

2013-07-01T23:59:59.000Z
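The energy-dependent weight-window mechanics referred to above (splitting particles whose weight exceeds the window, playing Russian roulette on those below it) can be illustrated with a minimal sketch. The cap on the number of splits mirrors the idea of limiting the extreme splitting behind "long histories"; none of this is MCNP's actual implementation:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng, max_split=10):
    """One weight-window check for a particle (an illustrative sketch).

    Returns the weights of the particles that continue the history."""
    if weight > w_high:
        # Split into n near-equal copies; capping n mimics the idea of
        # de-optimising the window to avoid long histories.
        n = min(int(weight / w_high) + 1, max_split)
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_surv,
        # so the expected total weight is conserved.
        w_surv = 0.5 * (w_low + w_high)
        if rng.random() < weight / w_surv:
            return [w_surv]
        return []
    return [weight]

# A weight of 5 against the window [0.5, 2.0] is split three ways.
parts = apply_weight_window(5.0, 0.5, 2.0, random.Random(0))
```

Both branches preserve the expected weight, which is why testing against a simple geometry should show no bias in the tallied result.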

92

Monte-Carlo Simulations for the optimisation of a TOF-MIEZE Instrument

The MIEZE (Modulation of Intensity with Zero Effort) technique is a variant of neutron resonance spin echo (NRSE), which has proven to be a unique neutron scattering technique for measuring with high energy resolution in magnetic fields. Its limitations in terms of flight path differences have already been investigated analytically for neutron beams with vanishing divergence. In the present work Monte-Carlo simulations for quasi-elastic MIEZE experiments taking into account beam divergence as well as the sample dimensions are presented. One application of the MIEZE technique could be a dedicated NRSE-MIEZE instrument at the European Spallation Source (ESS) in Sweden. The optimisation of a particular design based on Montel mirror optics with the help of Monte Carlo simulations will be discussed here in detail.

Weber, T; Georgii, R; Häußler, W; Weichselbaumer, S; Böni, P; 10.1016/j.nima.2013.03.010

2013-01-01T23:59:59.000Z

93

Uncertainties beyond statistics in Monte Carlo simulations

Science Journals Connector (OSTI)

......temperatures relevant to reactor operating conditions...hydrogen in light water, deuterium in heavy water, hydrogen in solid...beginning to include advanced statistical techniques...Temperature Gas Cooled Reactor(7). Figure-3......

H. Grady Hughes

2007-08-01T23:59:59.000Z

94

Load Balancing Of Parallel Monte Carlo Transport Calculations

National Nuclear Security Administration (NNSA)

Load Balancing Of Parallel Monte Carlo Transport Calculations. R.J. Procassini, M. J. O'Brien and J.M. Taylor, Lawrence Livermore National Laboratory, P. O. Box 808, Livermore, CA 94551. The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, each cycle this algorithm determines if dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality
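The idea of assigning processors to the most worked domains can be illustrated with a simple greedy (longest-processing-time) assignment. The LLNL algorithm is dynamic and moves particles between cycles, so this is only a static sketch with made-up work loads:

```python
import heapq

def balance_domains(work, n_procs):
    """Greedy longest-processing-time assignment of per-domain work loads
    to processors (illustrative; not the paper's dynamic algorithm)."""
    heap = [(0.0, p) for p in range(n_procs)]   # (assigned load, proc id)
    heapq.heapify(heap)
    assign = {}
    # Place the heaviest domains first, always on the least-loaded processor.
    for dom, w in sorted(enumerate(work), key=lambda t: -t[1]):
        load, p = heapq.heappop(heap)
        assign[dom] = p
        heapq.heappush(heap, (load + w, p))
    return assign

# Hypothetical particle work loads for six domains on three processors.
assign = balance_domains([9.0, 7.0, 6.0, 5.0, 2.0, 1.0], n_procs=3)
```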

95

Beyond the Born-Oppenheimer approximation with quantum Monte Carlo

In this work we develop tools that enable the study of non-adiabatic effects with variational and diffusion Monte Carlo methods. We introduce a highly accurate wave function ansatz for electron-ion systems that can involve a combination of both fixed and quantum ions. We explicitly calculate the ground state energies of H$_{2}$, LiH, H$_{2}$O and FHF$^{-}$ using fixed-node quantum Monte Carlo with wave function nodes that explicitly depend on the ion positions. The obtained energies implicitly include the effects arising from quantum nuclei and electron-nucleus coupling. We compare our results to the best theoretical and experimental results available and find excellent agreement.

Tubman, Norm M; Hammes-Schiffer, Sharon; Ceperley, David M

2014-01-01T23:59:59.000Z

96

Monte Carlo simulation of gamma ray scanning gauge

A gamma ray scanning gauge was simulated with Monte Carlo to study the properties of gamma scanning gauges and to resolve the counts coming from a $^{235}$U source from those coming from a contaminant ($^{232}$U) whose daughters emit high energy gamma rays. The simulation has been used to infer the amount of the $^{232}$U contaminant in a $^{235}$U source and to select the best size for the NaI(Tl) detector crystal to minimize the effect of the contaminant. The results demonstrate that Monte Carlo simulation provides a systematic tool for designing a gauge with desired properties and for estimating properties of the gamma source from measured count rates.

Hartfield, G.L.; Freeman, L.B.; Dei, D.E.; Emert, C.J.; Glickstein, S.S.; Kahler, A.C.; Niedzwecki, P.F.

1990-12-31T23:59:59.000Z

97

The hybrid Monte Carlo Algorithm and the chiral transition

In this talk the author describes tests of the Hybrid Monte Carlo Algorithm for QCD done in collaboration with Greg Kilcup and Stephen Sharpe. We find that the acceptance in the global Metropolis step for staggered fermions can be tuned and kept large without having to make the step-size prohibitively small. We present results for the finite temperature transition on $4^4$ and $4 \times 6^3$ lattices using this algorithm.

Gupta, R.

1987-01-01T23:59:59.000Z

98

Improved diffusion coefficients generated from Monte Carlo codes

Monte Carlo codes are becoming more widely used for reactor analysis. Some of these applications involve the generation of diffusion theory parameters including macroscopic cross sections and diffusion coefficients. Two approximations used to generate diffusion coefficients are assessed using the Monte Carlo code MC21. The first is the method of homogenization; whether to weight either fine-group transport cross sections or fine-group diffusion coefficients when collapsing to few-group diffusion coefficients. The second is a fundamental approximation made to the energy-dependent P1 equations to derive the energy-dependent diffusion equations. Standard Monte Carlo codes usually generate a flux-weighted transport cross section with no correction to the diffusion approximation. Results indicate that this causes noticeable tilting in reconstructed pin powers in simple test lattices with L2 norm error of 3.6%. This error is reduced significantly to 0.27% when weighting fine-group diffusion coefficients by the flux and applying a correction to the diffusion approximation. Noticeable tilting in reconstructed fluxes and pin powers was reduced when applying these corrections. (authors)

Herman, B. R.; Forget, B.; Smith, K. [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Aviles, B. N. [Knolls Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P.O. Box 1072, Schenectady, NY 12301-1072 (United States)

2013-07-01T23:59:59.000Z

99

A new effective Monte Carlo Midway coupling method in MCNP applied to a well logging problem

Science Journals Connector (OSTI)

The background of the Midway forward–adjoint coupling method including the black absorber technique for efficient Monte Carlo determination of radiation detector responses is described. The method is implemented in the general purpose MCNP Monte Carlo code. The utilization of the method is fairly straightforward and does not require any substantial extra expertise. The method was applied to a standard neutron well logging porosity tool problem. The results exhibit reliability and high efficiency of the Midway method. For the studied problem the efficiency gain is considerably higher than for a normal forward calculation, which is already strongly optimized by weight-windows. No additional effort is required to adjust the Midway model if the position of the detector or the porosity of the formation is changed. Additionally, the Midway method can be used with other variance reduction techniques if extra gain in efficiency is desired.

I.V. Serov; T.M. John; J.E. Hoogenboom

1998-01-01T23:59:59.000Z

100

Markov chain Monte Carlo (MCMC) techniques represent an extremely flexible and powerful approach to Bayesian modeling. This work illustrates the application of such techniques to time-dependent reliability of components with repair. The WinBUGS package is used to illustrate, via examples, how Bayesian techniques can be used for parametric statistical modeling of time-dependent component reliability. Additionally, the crucial, but often overlooked subject of model validation is discussed, and summary statistics for judging the model’s ability to replicate the observed data are developed, based on the posterior predictive distribution for the parameters of interest.

D. L. Kelly

2007-06-01T23:59:59.000Z


101

Multiple quadrature by Monte Carlo techniques

......manner in a Fortran program the statement: Y = FRN(a) will cause a floating point random number X (0 < X < 1) to be placed in location Y. The argument a may be a constant or variable of any mode; it is employed to allow reference to FRN as a...function subprogram. By using FRN(a) we can generate a sequence of uniformly distributed pseudo-random numbers between zero and one. Since the distribution is uniform, the probability of a number of the sequence lying between X and (X+DX) is DX, i...

Voss, John Dietrich

2012-06-07T23:59:59.000Z
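The thesis's use of a uniform pseudo-random generator for multiple quadrature translates directly to modern languages. Here is a sketch with Python's random.random standing in for the Fortran FRN routine:

```python
import random

def mc_quadrature(f, dim, n, seed=7):
    """Crude Monte Carlo estimate of the integral of f over the unit
    hypercube, using uniform pseudo-random points as FRN would supply."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]  # uniform point in (0,1)^dim
        total += f(x)
    return total / n

# Integral of x*y over the unit square is exactly 1/4.
est = mc_quadrature(lambda x: x[0] * x[1], dim=2, n=200000)
```

The estimator's standard error falls as $1/\sqrt{n}$ regardless of dimension, which is the usual argument for Monte Carlo over product quadrature rules in many variables.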

102

Monte Carlo techniques applied to PERT networks

......results, and (3) what bias exists in the system. The basic tool used to investigate these questions was the simulation of actual conditions on the IBM 709 computer. In some instances, the simulation method was combined with mathematical analysis. A...

McGowan, Lawrence Lee

2012-06-07T23:59:59.000Z

103

Properties of Reactive Oxygen Species by Quantum Monte Carlo

The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of Chemistry, Biology and Atmospheric Science. Nevertheless, the electronic structure of such species is a challenge for ab-initio computational approaches because of the difficulties to correctly describe the statical and dynamical correlation effects in presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has been recently shown to effectively describe the statical and dynamical correlation of different molecular systems. In particular we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as $N^3-N^4$, where $N$ is the number of electrons. This work is therefore opening the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles.

Andrea Zen; Bernhardt L. Trout; Leonardo Guidoni

2014-03-11T23:59:59.000Z

104

Monte Carlo study of the performance of a time-of-flight multichopper spectrometer

The Monte Carlo method is a powerful technique for neutron transport studies. While it has been applied for many years to the study of nuclear systems, there are few codes available for neutron transport in the optical regime. The recent surge of interest in so-called next generation spallation neutron sources and the desire to design new and optimized instruments for these facilities has led us to develop a Monte Carlo code geared toward the simulation of neutron scattering instruments. The time-of-flight multichopper spectrometer, of which IN5 at the ILL is the prototypical example, is the first spectrometer studied with the code. Some of the results of a comparison between the IN5 performance at a reactor and at a Long Pulse Spallation Source (LPSS) are summarized here.

Daemen, L.L.; Eckert, J.; Pynn, R. [and others

1995-12-01T23:59:59.000Z

105

Monte Carlo Tools for charged Higgs boson production

In this short review we discuss two implementations of the charged Higgs boson production process in association with a top quark in Monte Carlo event generators at next-to-leading order in QCD. We introduce the MC@NLO and the POWHEG method of matching next-to-leading order matrix elements with parton showers and compare both methods analyzing the charged Higgs boson production process in association with a top quark. We shortly discuss the case of a light charged Higgs boson where the associated charged Higgs production interferes with the charged Higgs production via t tbar-production and subsequent decay of the top quark.

Kovarik, K

2014-01-01T23:59:59.000Z

106

Protein folding bottlenecks: A lattice Monte Carlo simulation

Science Journals Connector (OSTI)

Results of Monte Carlo simulations of folding of a model ‘‘protein,’’ which is a freely joined 27-monomer chain on a simple cubic lattice with nearest-neighbor interactions, are reported. All compact self-avoiding conformations on this chain have been enumerated, and the conformation (‘‘native’’) corresponding to the global minimum of energy is known for each sequence. Only one out of thirty sequences folds and finds the global minimum. For this sequence, the folding process has a two-stage character, with a rapid noncooperative compactization followed by a slower transition over a free-energy barrier to the global minimum. The evolutionary implications of the results are discussed.

E. Shakhnovich; G. Farztdinov; A. M. Gutin; M. Karplus

1991-09-16T23:59:59.000Z

107

Monte Carlo Tools for charged Higgs boson production

In this short review we discuss two implementations of the charged Higgs boson production process in association with a top quark in Monte Carlo event generators at next-to-leading order in QCD. We introduce the MC@NLO and the POWHEG method of matching next-to-leading order matrix elements with parton showers and compare both methods analyzing the charged Higgs boson production process in association with a top quark. We shortly discuss the case of a light charged Higgs boson where the associated charged Higgs production interferes with the charged Higgs production via t tbar-production and subsequent decay of the top quark.

K. Kovarik

2014-12-18T23:59:59.000Z

108

Multijet and single diffraction dissociation Monte Carlo generator

Science Journals Connector (OSTI)

We have built a Monte Carlo generator for simulating propagation of cosmic ray particles in the atmosphere. The core of the generator is a p-air nuclear interaction model in which SD and NSD processes are included in the inelastic collisions. Based on QCD partonic theory, multiple minijet production is described in detail in the NSD process. A phase-space model is used for the SD process in our work. This generator reproduces cosmic ray experimental data well at very high energies.

J. C. Chen; Q. Q. Zhu; A. X. Huo

1997-05-01T23:59:59.000Z

109

Temperature and density extrapolations in canonical ensemble Monte Carlo simulations

We show how to use the multiple histogram method to combine canonical ensemble Monte Carlo simulations made at different temperatures and densities. The method can be applied to study systems of particles with arbitrary interaction potential and to compute the thermodynamic properties over a range of temperatures and densities. The calculation of the Helmholtz free energy relative to some thermodynamic reference state enables us to study phase coexistence properties. We test the method on the Lennard-Jones fluids for which many results are available.

A. L. Ferreira; M. A. Barroso

1999-06-14T23:59:59.000Z
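The core of the multiple histogram method is reweighting samples drawn at one temperature to estimate averages at another. A single-histogram sketch on a two-level toy system (not the Lennard-Jones fluid of the paper) shows the mechanism:

```python
import math
import random

def reweight_mean_energy(energies, beta_from, beta_to):
    """Histogram reweighting: estimate <E> at beta_to from canonical
    samples drawn at beta_from (a minimal single-histogram sketch)."""
    d = beta_to - beta_from
    w = [math.exp(-d * e) for e in energies]           # reweighting factors
    return sum(wi * e for wi, e in zip(w, energies)) / sum(w)

# Two-level toy system with energies {0, 1}: exact <E>(beta) = 1/(1+e^beta).
rng = random.Random(3)
beta1 = 1.0
p1 = 1.0 / (1.0 + math.exp(beta1))                     # P(E=1) at beta1
sample = [1.0 if rng.random() < p1 else 0.0 for _ in range(100000)]
est = reweight_mean_energy(sample, beta_from=1.0, beta_to=1.5)
exact = 1.0 / (1.0 + math.exp(1.5))
```

The multiple histogram method extends this by combining runs at several temperatures and densities with optimal weights, which is what enables the free-energy and phase-coexistence calculations described in the abstract.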

110

Monte Carlo Fundamentals, F. B. BROWN and T. M. SUTTON

Office of Scientific and Technical Information (OSTI)

......Parallel & vector processing are now "routine" & necessary for high-performance computing......Vector & Parallel Monte Carlo - Introduction......Characterize...

111

We used the combination of molecular dynamics and Monte Carlo method to investigate protein folding problems. The environments of proteins are very big, and often…

Liao, Jun-min

2006-01-01T23:59:59.000Z

112

Protein folding and phylogenetic tree reconstruction using stochastic approximation Monte Carlo.

Recently, the stochastic approximation Monte Carlo algorithm has been proposed by Liang et al. (2005) as a general-purpose stochastic optimization and simulation algorithm. An annealing…

Cheon, Sooyoung

2007-01-01T23:59:59.000Z

113

Science Journals Connector (OSTI)

Modulation of light by ultrasound in turbid media is investigated by modified public domain software based on the Monte Carlo algorithm. Apart from the recognized modulation...

Elazar, Jovan M; Steshenko, Oleg

2008-01-01T23:59:59.000Z

114

Improved criticality convergence via a modified Monte Carlo iteration method

Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction of $|k_3|/k_1$ instead of $|k_2|/k_1$. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must sum somehow to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this sometimes can be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has some stability problems. We also show that a simple method deals with this in an effective, but ad hoc manner.

Booth, Thomas E [Los Alamos National Laboratory; Gubernatis, James E [Los Alamos National Laboratory

2009-01-01T23:59:59.000Z
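The standard power iteration that the modified method builds on converges to the dominant eigenpair at a rate set by the ratio of the second to the first eigenvalue. A deterministic sketch on a small matrix (illustrative, not a transport calculation):

```python
def power_iteration(a, n_iter=200):
    """Plain power iteration for the dominant eigenvalue/eigenvector of a
    small matrix a, given as a list of rows.  The modified Monte Carlo
    method in the paper accelerates exactly this kind of iteration."""
    n = len(a)
    x = [1.0] + [0.0] * (n - 1)                    # arbitrary starting vector
    k = 1.0
    for _ in range(n_iter):
        y = [sum(a[i][j] * x[j] for j in range(n)) for i in range(n)]
        k = max(abs(v) for v in y)                 # eigenvalue estimate
        x = [v / k for v in y]                     # renormalise the iterate
    return k, x

# Symmetric 2x2 test matrix with eigenvalues 3 and 1.
k1, vec = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```

Here the error shrinks by a factor 1/3 per iteration; subtracting out the second eigenfunction, as the paper does, replaces that factor by the smaller third-to-first ratio.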

115

Monte Carlo simulation of quantum Zeno effect in the brain

Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.

Danko Georgiev

2014-12-11T23:59:59.000Z

116

Monte Carlo Simulation of Massive Absorbers for Cryogenic Calorimeters

There is a growing interest in cryogenic calorimeters with macroscopic absorbers for applications such as dark matter direct detection and rare event search experiments. The physics of energy transport in calorimeters with absorber masses exceeding several grams is made complex by the anisotropic nature of the absorber crystals as well as the changing mean free paths as phonons decay to progressively lower energies. We present a Monte Carlo model capable of simulating anisotropic phonon transport in cryogenic crystals. We have initiated the validation process and discuss the level of agreement between our simulation and experimental results reported in the literature, focusing on heat pulse propagation in germanium. The simulation framework is implemented using Geant4, a toolkit originally developed for high-energy physics Monte Carlo simulations. Geant4 has also been used for nuclear and accelerator physics, and applications in medical and space sciences. We believe that our current work may open up new avenues for applications in material science and condensed matter physics.

Brandt, D.; Asai, M.; Brink, P.L.; /SLAC; Cabrera, B.; /Stanford U.; Silva, E.do Couto e; Kelsey, M.; /SLAC; Leman, S.W.; McArthy, K.; /MIT; Resch, R.; Wright, D.; /SLAC; Figueroa-Feliciano, E.; /MIT

2012-06-12T23:59:59.000Z

117

Quantum Monte Carlo Simulation of the High-Pressure Molecular-Atomic Crossover in Fluid Hydrogen

Science Journals Connector (OSTI)

A first-order liquid-liquid phase transition in high-pressure hydrogen between molecular and atomic fluid phases has been predicted in computer simulations using ab initio molecular dynamics approaches. However, experiments indicate that molecular dissociation may occur through a continuous crossover rather than a first-order transition. Here we study the nature of molecular dissociation in fluid hydrogen using an alternative simulation technique in which electronic correlation is computed within quantum Monte Carlo methods, the so-called coupled electron-ion Monte Carlo method. We find no evidence for a first-order liquid-liquid phase transition.

Kris T. Delaney; Carlo Pierleoni; D. M. Ceperley

2006-12-06T23:59:59.000Z

118

A honeycomb probe was designed to measure the optical properties of biological tissues using single Monte Carlo method. The ongoing project is intended to be a multi-wavelength, real time, and in-vivo technique to detect breast cancer. Preliminary...

Bendele, Travis Henry

2013-02-22T23:59:59.000Z

119

Monte Carlo simulations of free chains in end-linked polymer networks

......be significantly altered. This occurs because the microscopic structure including network defects......networks prepared in the presence of inert linear chain solvent were investigated with Monte Carlo......

Gilra, Nisha

120

Self-assembly of surfactants in a supercritical solvent from lattice Monte Carlo simulations

......self-assembly of surfactants in a supercritical solvent by large-scale Monte Carlo simulations......Surfactant molecules used in scCO2 have two mutually incompatible components: a CO2-philic tail...

Lisal, Martin


121

Quantum Monte Carlo study of a disordered 2D Josephson junction array

W.A. Al-Saidi, D. Stroud. PACS: 74.25.Dw; 05.30.Jp; 85.25.Cp. Keywords: Josephson junctions; Quantum Monte Carlo; Disorder......A Josephson junction array (JJA) consists of a collection of superconducting islands connected...

Stroud, David

122

Optimization of quantum Monte Carlo wave functions using analytical energy derivatives

Xi Lin......of the local energy......If the wave function were the exact ground eigenstate, the local energy would......An algorithm is proposed to optimize quantum Monte Carlo (QMC) wave functions based on Newton...

Lin, Xi

123

Status of the VIM Monte Carlo neutron/photon transport code.

Recent work on the VIM Monte Carlo code has aimed at advanced data libraries, ease of use, availability to users outside of Argonne, and fission source convergence algorithms in eigenvalue calculations. VIM is one of three US Monte Carlo codes in the USDOE Nuclear Criticality Safety Program, and is available through RSICC and the NEA Data Bank.

Blomquist, R.N.

2002-01-22T23:59:59.000Z

124

Fluid simulations with localized boltzmann upscaling by direct simulation Monte-Carlo

Science Journals Connector (OSTI)

In the present work, we present a novel numerical algorithm to couple the Direct Simulation Monte Carlo method (DSMC) for the solution of the Boltzmann equation with a finite volume like method for the solution of the Euler equations. Recently we presented ... Keywords: Boltzmann equation, Kinetic-fluid coupling, Monte Carlo methods, Multiscale problems

Pierre Degond; Giacomo Dimarco

2012-03-01T23:59:59.000Z

125

Shell model Monte Carlo investigation of rare earth nuclei

Science Journals Connector (OSTI)

We utilize the shell model Monte Carlo method to study the structure of rare earth nuclei. This work demonstrates the first systematic full oscillator shell with intruder calculations in such heavy nuclei. Exact solutions of a pairing plus quadrupole Hamiltonian are compared with the static path approximation in several dysprosium isotopes from A=152 to 162, including the odd mass A=153. Some comparisons are also made with Hartree-Fock-Bogoliubov results from Baranger and Kumar. Basic properties of these nuclei at various temperatures and spin are explored. These include energy, deformation, moments of inertia, pairing channel strengths, band crossing, and evolution of shell model occupation numbers. Exact level densities are also calculated and, in the case of 162Dy, compared with experimental data.

J. A. White; S. E. Koonin; D. J. Dean

2000-02-14T23:59:59.000Z

126

Hydrogen molecule ion: Path-integral Monte Carlo approach

The path-integral Monte Carlo approach is used to study the coupled quantum dynamics of the electron and nuclei in hydrogen molecule ion. The coupling effects are demonstrated by comparing differences in adiabatic Born-Oppenheimer and nonadiabatic simulations, and inspecting projections of the full three-body dynamics onto the adiabatic Born-Oppenheimer approximation. Coupling of the electron and nuclear quantum dynamics is clearly seen. The nuclear pair correlation function is found to broaden by $0.040\,a_0$, and the average bond length is larger by $0.056\,a_0$. Also, a nonadiabatic correction to the binding energy is found. The electronic distribution is affected less than the nuclear one upon inclusion of nonadiabatic effects.

Kylaenpaeae, I.; Leino, M.; Rantala, T. T. [Institute of Physics, Tampere University of Technology, P.O. Box 692, FI-33101 Tampere (Finland)

2007-11-15T23:59:59.000Z

127

Monte Carlo sampling of negative-temperature plasma states

Science Journals Connector (OSTI)

A Monte Carlo procedure is used to generate N-particle configurations compatible with two-temperature canonical equilibria in two dimensions, with particular attention to nonlinear plasma gyrokinetics. An unusual feature of the problem is the importance of a nontrivial probability density function P0(?), the probability of realizing a set ? of Fourier amplitudes associated with an ensemble of uniformly distributed, independent particles. This quantity arises because the equilibrium distribution is specified in terms of ?, whereas the sampling procedure naturally produces particle states ?; ? and ? are related via a gyrokinetic Poisson equation, highly nonlinear in its dependence on ?. Expansion and asymptotic methods are used to calculate P0(?) analytically; excellent agreement is found between the large-N asymptotic result and a direct numerical calculation. The algorithm is tested by successfully generating a variety of states of both positive and negative temperature, including ones in which either the longest- or shortest-wavelength modes are excited to relatively large amplitudes.

John A. Krommes and Sharadini Rath

2003-06-09T23:59:59.000Z

128

Velocity renormalization in graphene from lattice Monte Carlo

We compute the Fermi velocity of the Dirac quasiparticles in clean graphene at the charge neutrality point for strong Coulomb coupling alpha_g. We perform a Lattice Monte Carlo calculation within the low-energy Dirac theory, which includes an instantaneous, long-range Coulomb interaction. We find a renormalized Fermi velocity v_FR > v_F, where v_F = c/300. Our results are consistent with a momentum-independent v_FR which increases approximately linearly with alpha_g, although a logarithmic running with momentum cannot be excluded at present. At the predicted critical coupling alpha_gc for the semimetal-insulator transition due to excitonic pair formation, we find v_FR/v_F = 3.3, which we discuss in light of experimental findings for v_FR/v_F at the charge neutrality point in ultra-clean suspended graphene.

Joaquín E. Drut; Timo A. Lähde

2014-03-26T23:59:59.000Z

129

Monte Carlo modeling of spallation targets containing uranium and americium

Neutron production and transport in spallation targets made of uranium and americium are studied with a Geant4-based code MCADS (Monte Carlo model for Accelerator Driven Systems). A good agreement of MCADS results with experimental data on neutron- and proton-induced reactions on $^{241}$Am and $^{243}$Am nuclei allows to use this model for simulations with extended Am targets. Several geometry options and material compositions (U, U+Am, Am, Am$_2$O$_3$) are considered for spallation targets to be used in Accelerator Driven Systems. It was demonstrated that MCADS model can be reliably used for calculating critical masses of fissile materials. All considered options operate as deep subcritical targets having neutron multiplication factor of $k \\sim 0.5$. It is found that more than 4 kg of Am can be burned in one spallation target during the first year of operation.

Malyshkin, Yury; Mishustin, Igor; Greiner, Walter

2013-01-01T23:59:59.000Z

130

Monte Carlo modeling of spallation targets containing uranium and americium

Neutron production and transport in spallation targets made of uranium and americium are studied with MCADS (Monte Carlo model for Accelerator Driven Systems), a Geant4-based code. Good agreement of MCADS results with experimental data on neutron- and proton-induced reactions on $^{241}$Am and $^{243}$Am nuclei allows this model to be used for simulations with extended Am targets. It was demonstrated that the MCADS model can be used for calculating the critical masses of $^{233,235}$U, $^{237}$Np, $^{239}$Pu and $^{241}$Am. Several geometry options and material compositions (U, U+Am, Am, Am$_2$O$_3$) are considered for spallation targets to be used in Accelerator Driven Systems. All considered options operate as deeply subcritical targets with a neutron multiplication factor of $k \sim 0.5$. It is found that more than 4 kg of Am can be burned in one spallation target during the first year of operation.

Yury Malyshkin; Igor Pshenichnov; Igor Mishustin; Walter Greiner

2014-05-02T23:59:59.000Z

131

Lifting -- A Nonreversible Markov Chain Monte Carlo Algorithm

Markov Chain Monte Carlo algorithms are invaluable numerical tools for exploring the stationary properties of physical systems, particularly when direct sampling is not feasible. They are widely used in many areas of physics and other sciences. Most common implementations use reversible Markov chains, i.e., chains that obey detailed balance. Reversibility is sufficient for the physical system to relax to equilibrium, but it is not necessary. Here we review several works that use "lifted" or nonreversible Markov chains, which violate detailed balance yet still converge to the correct stationary distribution because they obey the global balance condition. In certain cases the acceleration is at most a square-root improvement over conventional reversible Markov chains. We introduce the problem in a way that makes it accessible to non-specialists. We illustrate the method on several representative examples (sampling on a ring, sampling on a torus, an Ising model on a complete graph...
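
The ring example mentioned in this abstract can be sketched in a few lines. The following is an illustrative implementation (not taken from the paper): the walker's state is augmented with a direction variable, so the chain keeps drifting one way and reverses only rarely, violating detailed balance while preserving the uniform stationary distribution on the ring. The function name and the `p_flip` parameter are our own choices.

```python
import random
from collections import Counter

def lifted_ring_walk(n_states, n_steps, p_flip=0.1, seed=1):
    """Nonreversible 'lifted' walk on a ring of n_states sites.

    The state is augmented with a direction variable (+1 or -1).  The
    walker keeps moving in its current direction and reverses only with
    small probability p_flip, which violates detailed balance while
    preserving the uniform stationary distribution on the ring.
    """
    rng = random.Random(seed)
    x, direction = 0, 1
    visits = Counter()
    for _ in range(n_steps):
        if rng.random() < p_flip:
            direction = -direction       # rare reversals keep the chain ergodic
        x = (x + direction) % n_states   # deterministic drift otherwise
        visits[x] += 1
    return visits
```

Because the walker moves ballistically over stretches of roughly `1/p_flip` sites rather than diffusively, it covers the ring much faster than a reversible random walk, which is the intuition behind the square-root speedup discussed in the review.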

Vucelja, Marija

2015-01-01T23:59:59.000Z

132

MONTE-CARLO BURNUP CALCULATION UNCERTAINTY QUANTIFICATION AND PROPAGATION DETERMINATION

MONTEBURNS is a Monte-Carlo depletion routine utilizing MCNP and ORIGEN 2.2. Uncertainties exist in the MCNP transport calculation, but this information is neither passed to the depletion calculation in ORIGEN nor saved. To quantify this transport uncertainty and determine how it propagates between burnup steps, a statistical analysis of multiple repeated depletion runs was performed. The reactor model chosen is the Oak Ridge Research Reactor (ORR) in a single-assembly, infinite-lattice configuration. This model was burned for a 25.5-day cycle broken down into three steps. The output isotopics as well as the effective multiplication factor (k-effective) were tabulated, and histograms were created at each burnup step using the Scott method to determine the bin width. It was expected that the gram-quantity and k-effective histograms would be normally distributed, since they were produced by a Monte-Carlo routine, but some of the results are not. The standard deviation at each burnup step was consistent between fission-product isotopes, as expected, while the uranium isotopes produced some unique results. The variation in the quantity of uranium was small enough that round-off error in the reaction-rate MCNP tally produced a set of repeated results with only slight variation. Statistical analyses were performed using the χ² test against a normal distribution for several isotopes and for the k-effective results. While the isotopes failed to reject the null hypothesis of normality, the χ² statistic grew through the steps of the k-effective test, and the null hypothesis was rejected in the later steps. These results suggest that, for a high-accuracy solution, MCNP cell material quantities of less than 100 grams and larger kcode parameters are needed to minimize uncertainty propagation and round-off effects.
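
The statistical machinery described here (Scott's rule for bin widths plus a χ² comparison against a fitted normal) is generic and can be sketched independently of MONTEBURNS. Below is a minimal stdlib-only illustration under our own naming; it is not the authors' analysis code.

```python
import math
from statistics import NormalDist, mean, stdev

def scott_bin_width(data):
    """Scott's rule for histogram bin width: h = 3.49 * sigma * n**(-1/3)."""
    return 3.49 * stdev(data) * len(data) ** (-1 / 3)

def chi2_vs_normal(data):
    """Chi-squared statistic of a histogram against a fitted normal.

    Bins are sized with Scott's rule; expected counts come from the
    normal CDF evaluated with the sample mean and standard deviation.
    """
    mu, sigma = mean(data), stdev(data)
    h = scott_bin_width(data)
    lo, hi = min(data), max(data)
    n_bins = max(1, math.ceil((hi - lo) / h))
    counts = [0] * n_bins
    for x in data:
        counts[min(int((x - lo) / h), n_bins - 1)] += 1
    norm = NormalDist(mu, sigma)
    chi2 = 0.0
    for i, observed in enumerate(counts):
        a, b = lo + i * h, lo + (i + 1) * h
        expected = len(data) * (norm.cdf(b) - norm.cdf(a))
        if expected > 0:
            chi2 += (observed - expected) ** 2 / expected
    return chi2, n_bins
```

For truly normal data the statistic stays near the number of degrees of freedom; a statistic that grows from step to step, as reported for k-effective above, signals departure from normality.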

Nichols, T.; Sternat, M.; Charlton, W.

2011-05-08T23:59:59.000Z

133

The light propagation in highly scattering turbid media composed of particles with different size distributions is studied using a Monte Carlo simulation model implemented in Standard C. The Monte Carlo method has been widely utilized to study...

Koh, Wonshill

2013-02-22T23:59:59.000Z

134

Science Journals Connector (OSTI)

Benchmarking and validation of a new Monte Carlo code for dose calculations in microbeam radiation therapy are described.

Cornelius, I.

2014-04-03T23:59:59.000Z

135

Transport anisotropy of the pnictides studied via Monte Carlo simulations of the Spin-Fermion model

An undoped three-orbital spin-fermion model for the Fe-based superconductors is studied via Monte Carlo techniques in two-dimensional clusters. At low temperatures, the magnetic and one-particle spectral properties are in agreement with neutron and photoemission experiments. Our main results are the resistance versus temperature curves that display the same features observed in BaFe₂As₂ detwinned single crystals (under uniaxial stress), including a low-temperature anisotropy between the two directions followed by a peak at the magnetic ordering temperature, that qualitatively appears related to short-range spin order and concomitant Fermi surface orbital order.

Liang, Shuhua [ORNL; Alvarez, Gonzalo [ORNL; Sen, Cengiz [ORNL; Moreo, Adriana [ORNL; Dagotto, Elbio R [ORNL

2012-01-01T23:59:59.000Z

136

Generation of SFR few-group constants using the Monte Carlo code Serpent

In this study, the Serpent Monte Carlo code was used as a tool for the preparation of homogenized few-group cross sections for nodal diffusion analysis of sodium-cooled fast reactor (SFR) cores. Few-group constants for two reference SFR cores were generated by Serpent and then employed by the nodal diffusion code DYN3D in 2D full-core calculations. The DYN3D results were verified against the reference full-core Serpent Monte Carlo solutions. Good agreement between the reference Monte Carlo and nodal diffusion results was observed, demonstrating the feasibility of using Serpent to generate few-group constants for deterministic SFR analysis. (authors)

Fridman, E.; Rachamin, R. [Helmholz-Zentrum Dresden-Rossendorf, POB 510119, Dresden, 01314 (Germany); Shwageraus, E. [Ben-Gurion University, POB 653, 84105 Beer-Sheva (Israel)

2013-07-01T23:59:59.000Z

137

Coupled Electron-Ion Monte Carlo Calculations of Dense Metallic Hydrogen

Science Journals Connector (OSTI)

We present an efficient new Monte Carlo method which couples path integrals for finite temperature protons with quantum Monte Carlo calculations for ground state electrons, and we apply it to metallic hydrogen for pressures beyond molecular dissociation. We report data for the equation of state for temperatures across the melting of the proton crystal. Our data exhibit more structure and higher melting temperatures of the proton crystal than do Car-Parrinello molecular dynamics results. This method fills the gap between high temperature electron-proton path integral and ground state diffusion Monte Carlo methods and should have wide applicability.

Carlo Pierleoni; David M. Ceperley; Markus Holzmann

2004-09-27T23:59:59.000Z

138

Quantum Monte Carlo for electronic structure: Recent developments and applications

Quantum Monte Carlo (QMC) methods have been found to give excellent results when applied to chemical systems. The main goal of the present work is to use QMC to perform electronic structure calculations. In QMC, a Monte Carlo simulation is used to solve the Schroedinger equation, taking advantage of its analogy to a classical diffusion process with branching. In the present work the author focuses on how to extend the usefulness of QMC to more meaningful molecular systems. This study is aimed at questions concerning polyatomic and large atomic number systems. The accuracy of the solution obtained is determined by the accuracy of the trial wave function's nodal structure. Efforts in the group have placed great emphasis on finding optimized wave functions for the QMC calculations. Little work had been done to systematically examine a family of systems and see how the best wave functions evolve with system size. In this work the author presents a study of trial wave functions for C, CH, C₂H and C₂H₂. The goal is to study how to build wave functions for larger systems by accumulating knowledge from the wave functions of their fragments, as well as to gain some knowledge on the usefulness of multi-reference wave functions. In an MC calculation of a heavy atom, most moves for core electrons are rejected at reasonable time steps, so true equilibration is rarely achieved. A method proposed by Batrouni and Reynolds modifies the way the simulation is performed without altering the final steady-state solution. It introduces an acceleration matrix chosen so that all coordinates (i.e., of core and valence electrons) propagate at comparable speeds. A study of the results obtained using their proposed matrix suggests that it may not be the optimum choice. In this work the author has found that the desired mixing of coordinates between core and valence electrons is not achieved when using this matrix. A bibliography of 175 references is included.

Rodriguez, M.M.S. [Univ. of California, Berkeley, CA (United States). Dept. of Chemistry]|[Lawrence Berkeley Lab., CA (United States). Chemical Sciences Div.

1995-04-01T23:59:59.000Z

139

Complete Monte Carlo Simulation of Neutron Scattering Experiments

In the past, it was not possible to accurately correct for the finite geometry and the finite sample size of a neutron scattering set-up. The limited calculation power of early computers, the lack of powerful Monte Carlo codes, and the limitations of the databases available then prevented a complete simulation of the actual experiment. Using e.g. the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can now be simulated almost completely, with a high degree of precision, on a modern PC, which has ten thousand times the computing power of a supercomputer of the early 1970s. Thus, (better) corrections can also be obtained easily for previously published data, provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g. atomic mass, relativistic correction, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments on liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two-thirds of all differential scattering cross section data for ³He(n,n)³He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interaction in the air, the interaction with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are smaller by a factor of at least five, making this work relevant.
Even more important are the corrections to the Karlsruhe data due to the inclusion of the missing outgoing self-attenuation that amounts to up to 15%.

Drosg, M. [Faculty of Physics, University of Vienna, Boltzmanngasse 5, A-1090 Wien (Austria)

2011-12-13T23:59:59.000Z

140

This is the user's manual for DEGAS 2, a Monte Carlo code for the study of neutral atom ... The theory of neutral particle kinetics [1] treats the transport of mass, momentum, and energy in a plasma; Monte Carlo neutral transport codes can build on the techniques developed for neutron transport.

Karney, Charles

141

The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified layout of the original thermal VENUS critical facility is coupled to an accelerator, built by the French agency CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified layout of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The reactivity (in dollar units) inferred by the Area-ratio method shows overall agreement between the deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)

Bianchini, G.; Burgio, N.; Carta, M. [ENEA C.R. CASACCIA, via Anguillarese, 301, 00123 S. Maria di Galeria Roma (Italy); Peluso, V. [ENEA C.R. BOLOGNA, Via Martiri di Monte Sole, 4, 40129 Bologna (Italy); Fabrizio, V.; Ricci, L. [Univ. of Rome La Sapienza, C/o ENEA C.R. CASACCIA, via Anguillarese, 301, 00123 S. Maria di Galeria Roma (Italy)

2012-07-01T23:59:59.000Z

142

Monte Carlo wave packet approach to dissociative multiple ionization in diatomic molecules

A detailed description of the Monte Carlo wave packet technique applied to dissociative multiple ionization of diatomic molecules in short intense laser pulses is presented. The Monte Carlo wave packet technique relies on the Born-Oppenheimer separation of electronic and nuclear dynamics and provides a consistent theoretical framework for treating simultaneously both ionization and dissociation. By simulating the detection of continuum electrons and collapsing the system onto either the neutral, singly ionized or doubly ionized states in every time step the nuclear dynamics can be solved separately for each molecular charge state. Our model circumvents the solution of a multiparticle Schroedinger equation and makes it possible to extract the kinetic energy release spectrum via the Coulomb explosion channel as well as the physical origin of the different structures in the spectrum. The computational effort is restricted and the model is applicable to any molecular system where electronic Born-Oppenheimer curves, dipole moment functions, and ionization rates as a function of nuclear coordinates can be determined.

Leth, Henriette Astrup; Madsen, Lars Bojer; Moelmer, Klaus [Lundbeck Foundation Theoretical Center for Quantum System Research, Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C (Denmark)

2010-05-15T23:59:59.000Z

143

Charged-Particle Thermonuclear Reaction Rates: I. Monte Carlo Method and Statistical Distributions

A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering in the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended "classical" rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless "minimum" (or "lower limit") and "maximum" (or "upper limit") reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters μ and σ. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this series (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this series (Paper III). In the fourth paper of this series (Paper IV) we compare our new reaction rates to previous results.
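
The quantile scheme described in this abstract reduces, in its simplest form, to sampling a rate many times and reading off the 0.16, 0.50, and 0.84 quantiles. The sketch below assumes a single lognormally distributed input as a stand-in for the full set of nuclear inputs; the function name and parameters are illustrative only.

```python
import math
import random

def monte_carlo_rate(mu, sigma, n_samples=20000, seed=7):
    """Monte Carlo rate distribution from one lognormally distributed input.

    Each sampled rate is exp(mu + sigma*z) with z standard normal, mimicking
    the scheme of drawing every input from its probability density and
    accumulating rate samples.  Returns the 0.16 quantile (low rate), the
    median ("Monte Carlo" rate), and the 0.84 quantile (high rate).
    """
    rng = random.Random(seed)
    rates = sorted(math.exp(mu + sigma * rng.gauss(0, 1))
                   for _ in range(n_samples))
    quantile = lambda p: rates[int(p * n_samples)]
    return quantile(0.16), quantile(0.50), quantile(0.84)
```

For a lognormal input with parameters μ = 0 and σ = 0.5 the median rate converges to exp(0) = 1, while the low and high rates approach roughly exp(∓0.5), illustrating why the 0.16/0.84 pair carries a well-defined statistical meaning that ad hoc "lower/upper limit" rates lack.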

Richard Longland; Christian Iliadis; Art Champagne; Joe Newton; Claudio Ugalde; Alain Coc; Ryan Fitzgerald

2010-04-23T23:59:59.000Z

144

An efficient approach to ab initio Monte Carlo simulation

We present a Nested Markov chain Monte Carlo (NMC) scheme for building equilibrium averages based on accurate potentials such as density functional theory. Metropolis sampling of a reference system, defined by an inexpensive but approximate potential, was used to substantially decorrelate configurations at which the potential of interest was evaluated, thereby dramatically reducing the number needed to build ensemble averages at a given level of precision. The efficiency of this procedure was maximized on the fly through variation of the reference-system thermodynamic state (characterized here by its inverse temperature β⁰), which was otherwise unconstrained. Local density approximation results are presented for shocked states of argon at pressures from 4 to 60 GPa, where, depending on the quality of the reference-system potential, acceptance probabilities were enhanced by factors of 1.2–28 relative to unoptimized NMC. The optimization procedure compensated strongly for reference-potential shortcomings, as evidenced by significantly higher speedups when using a reference potential of lower quality. The efficiency of optimized NMC is shown to be competitive with that of standard ab initio molecular dynamics in the canonical ensemble.
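
The nested structure (a cheap reference sub-chain whose endpoint is accepted into the outer chain with a correction for the reference approximation) can be sketched with toy one-dimensional potentials in place of DFT. This is our own illustrative sketch, not the authors' implementation, and omits the on-the-fly optimization of the reference state.

```python
import math
import random

def nested_mcmc(u_true, u_ref, n_outer=4000, n_inner=10, step=0.8, seed=3):
    """Nested Markov chain Monte Carlo (NMC) sketch with toy potentials.

    The inner loop runs cheap Metropolis steps on the reference potential
    u_ref; the decorrelated endpoint is then accepted into the outer chain
    with probability min(1, exp(-[dU_true - dU_ref])), so the outer chain
    samples exp(-u_true) while evaluating u_true only once per sub-chain.
    """
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_outer):
        y = x
        for _ in range(n_inner):             # cheap reference sub-chain
            z = y + rng.uniform(-step, step)
            if rng.random() < math.exp(min(0.0, u_ref(y) - u_ref(z))):
                y = z
        # outer accept/reject corrects for the reference approximation
        log_a = (u_true(x) - u_true(y)) - (u_ref(x) - u_ref(y))
        if rng.random() < math.exp(min(0.0, log_a)):
            x = y
        samples.append(x)
    return samples
```

Because the inner kernel is reversible with respect to the reference distribution, the outer acceptance ratio restores detailed balance with respect to the true distribution, which is the essential correctness argument behind NMC.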

Leiding, Jeff; Coe, Joshua D., E-mail: jcoe@lanl.gov [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)

2014-01-21T23:59:59.000Z

145

Ensemble bayesian model averaging using markov chain Monte Carlo sampling

Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.

Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

2008-01-01T23:59:59.000Z

146

Cohesion Energetics of Carbon Allotropes: Quantum Monte Carlo Study

We have performed quantum Monte Carlo calculations to study the cohesion energetics of carbon allotropes, including sp3-bonded diamond, sp2-bonded graphene, sp-sp2 hybridized graphynes, and sp-bonded carbyne. The computed cohesive energies of diamond and graphene are found to be in excellent agreement with the corresponding values determined experimentally for diamond and graphite, respectively, when the zero-point energies, along with the interlayer binding in the case of graphite, are included. We have also found that the cohesive energy of graphyne decreases systematically as the ratio of sp-bonded carbon atoms increases. The cohesive energy of -graphyne, the most energetically stable graphyne, turns out to be 6.766(6) eV/atom, which is smaller than that of graphene by 0.698(12) eV/atom. Experimental difficulty in synthesizing graphynes could be explained by their significantly smaller cohesive energies. Finally we conclude that the cohesive energy of a newly-proposed two-dimensional carbon network can be accurately estimated with the carbon-carbon bond energies determined from the cohesive energies of graphene and three different graphynes.

Shin, Hyeondeok [Konkuk University, South Korea] [Konkuk University, South Korea; Kang, Sinabro [Konkuk University, South Korea] [Konkuk University, South Korea; Koo, Jahyun [Konkuk University, South Korea] [Konkuk University, South Korea; Lee, Hoonkyung [Konkuk University, South Korea] [Konkuk University, South Korea; Kim, Jeongnim [ORNL] [ORNL; Kwon, Yongkyung [Konkuk University, South Korea] [Konkuk University, South Korea

2014-01-01T23:59:59.000Z

147

Random Number Generation for Petascale Quantum Monte Carlo

The quality of random number generators can affect the results of Monte Carlo computations, especially when a large number of random numbers are consumed. Furthermore, correlations present between different random number streams in a parallel computation can further affect the results. The SPRNG software, which the author had developed earlier, has pseudo-random number generators (PRNGs) capable of producing large numbers of streams with large periods. However, they had previously been empirically tested on only a thousand streams. In the work summarized here, we tested the SPRNG generators with over a hundred thousand streams, with some tests consuming over 10^14 random numbers. We also tested the popular Mersenne Twister. We believe that these are the largest tests of PRNGs, both in terms of the number of streams tested and the number of random numbers consumed. We observed defects in some of these generators, including the Mersenne Twister, while a few generators appeared to perform well. We also corrected an error in the implementation of one of the SPRNG generators.

Ashok Srinivasan

2010-03-16T23:59:59.000Z

148

Monte Carlo sampling from the quantum state space. I

High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima, or evaluating an integral over a region in the quantum state space, are but two of many exemplary applications. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which require good sampling of the parameter space in accordance with the relevant target distribution. We show how the standard strategies of rejection sampling, importance sampling, and Markov-chain sampling can be adapted to this context, where the samples must obey the constraints imposed by the positivity of the statistical operator. For a comparison of these sampling methods, we generate sample points in the probability space for two-qubit states probed with a tomographically incomplete measurement, and then use the sample for the calculation of the size and credibility of the recently introduced optimal error regions [see New J. Phys. 15 (2013) 123026]. Another illustration is the computation of the fractional volume of separable two-qubit states.
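
The positivity constraint that makes this sampling problem nontrivial can be illustrated with rejection sampling in the simplest case, a single qubit, where positivity of ρ = (I + r·σ)/2 is equivalent to the Bloch vector r lying in the unit ball. This is a deliberately simpler analogue of the paper's two-qubit setting, with names of our own choosing.

```python
import random

def sample_qubit_states(n, seed=11):
    """Rejection-sample single-qubit density matrices.

    A qubit state rho = (I + r . sigma)/2 is a positive operator exactly
    when the Bloch vector r lies in the unit ball.  Drawing r uniformly
    from the cube [-1,1]^3 and rejecting points with |r| > 1 enforces
    the positivity constraint on the statistical operator.
    """
    rng = random.Random(seed)
    states = []
    while len(states) < n:
        r = [rng.uniform(-1, 1) for _ in range(3)]
        if sum(c * c for c in r) <= 1.0:        # positivity of rho
            x, y, z = r
            rho = [[(1 + z) / 2, (x - 1j * y) / 2],
                   [(x + 1j * y) / 2, (1 - z) / 2]]
            states.append(rho)
    return states
```

Every accepted matrix has unit trace and nonnegative determinant, i.e., it is a valid density operator; in higher dimensions the accepted fraction shrinks rapidly, which is why the paper also develops importance and Markov-chain samplers.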

Jiangwei Shang; Yi-Lin Seah; Hui Khoon Ng; David John Nott; Berthold-Georg Englert

2014-07-29T23:59:59.000Z

149

Science Journals Connector (OSTI)

A three-dimensional Monte Carlo description of the neutral gas transport in pipe configurations with almost arbitrary torsion and curvature is presented. To avoid quadratic or even transcendental expressions describing the pipe surfaces confining and ...

A. Nicolai

1993-06-01T23:59:59.000Z

150

Science Journals Connector (OSTI)

Fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements ...

P. J. Burns; M. Christon; R. Schweitzer; O. M. Lubeck; H. J. Wasserman

1989-08-01T23:59:59.000Z

151

A semi Monte Carlo calculation of the flux of high-energy muons in air showers

Science Journals Connector (OSTI)

A semi Monte Carlo method has been used to calculate the flux of muons of energy ≥ 180 GeV associated with air showers at ... of nucleon and pion interactions at ultra-high energies. Various aspects of these muons

Siddheshwar Lal

1967-03-21T23:59:59.000Z

152

APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula

An analysis of uncertainty quantification for the PWR LBLOCA by Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined from the PIRT results of the BEMUSE project. The Monte Carlo method shows that the 95th-percentile PCT value can be obtained reliably with a 95% confidence level using Wilks' formula. However, the extra margin of Wilks' formula over the true 95th-percentile PCT from the Monte Carlo method was rather large: even using the 3rd-order formula, the value calculated with Wilks' formula is nearly 100 K above the true value. It is shown that, with ever-increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame. (authors)
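
The comparison performed here can be sketched with a toy stand-in for the peak cladding temperature (PCT) model: first-order Wilks takes the maximum of 59 runs as a 95/95 bound, third-order takes the 3rd-largest of 124 runs, and direct Monte Carlo estimates the true 95th percentile from a large sample. The toy model and all names below are our own assumptions, not the plant model used in the paper.

```python
import random

def toy_pct(rng):
    """Stand-in peak cladding temperature model, nonlinear in two inputs."""
    power = rng.gauss(1.0, 0.05)       # relative power, assumed normal
    gap = rng.uniform(0.8, 1.2)        # relative gap conductance, assumed uniform
    return 1000.0 + 400.0 * power ** 2 / gap

def wilks_vs_monte_carlo(seed=5):
    """Compare Wilks 95/95 estimates with the direct Monte Carlo percentile.

    First-order Wilks: the maximum of 59 runs bounds the 95th percentile
    with 95% confidence; third-order uses the 3rd-largest of 124 runs.
    The 'true' percentile comes from a large direct Monte Carlo sample.
    """
    rng = random.Random(seed)
    big = sorted(toy_pct(rng) for _ in range(100000))
    true_p95 = big[int(0.95 * len(big))]
    wilks1 = max(toy_pct(rng) for _ in range(59))
    wilks3 = sorted(toy_pct(rng) for _ in range(124))[-3]
    return true_p95, wilks1, wilks3
```

The Wilks estimates are order statistics that sit above the true 95th percentile in about 95% of trials, which is exactly the conservative margin the abstract quantifies at roughly 100 K for the real plant model.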

Hwang, M.; Bae, S.; Chung, B. D. [Korea Atomic Energy Research Inst., 150 Dukjin-dong, Yuseong-gu, Daejeon (Korea, Republic of)

2012-07-01T23:59:59.000Z

153

Modeling of Asymmetry between Gasoline and Crude Oil Prices: A Monte Carlo Comparison

Science Journals Connector (OSTI)

An Engle–Granger two-step procedure is commonly used to estimate cointegrating vectors and consequently asymmetric error-correction models. This study uses Monte Carlo methods and demonstrates that the Engle–G...

Afshin Honarvar

2010-10-01T23:59:59.000Z

154

Monte Carlo Simulations of Microchannel Plate Based, Fast-Gated X-Ray Imagers

This is a chapter in a book titled Applications of Monte Carlo Method in Science and Engineering Edited by: Shaul Mordechai ISBN 978-953-307-691-1, Hard cover, 950 pages Publisher: InTech Publication date: February 2011

Wu, M.; Kruschwitz, C.

2011-02-01T23:59:59.000Z

155

Protein folding and phylogenetic tree reconstruction using stochastic approximation Monte Carlo

folding problems. The numerical results indicate that it outperforms simulated annealing and conventional Monte Carlo algorithms as a stochastic optimization algorithm. We also propose one method for the use of secondary structures in protein folding...

Cheon, Sooyoung

2007-09-17T23:59:59.000Z

156

Science Journals Connector (OSTI)

Probabilistic wind speed forecasts for tropical cyclones from Monte Carlo–type simulations are assessed within a theoretical framework for a simple unbiased Gaussian system that is based on feature size and location error that mimic tropical ...

Michael E. Splitt; Steven M. Lazarus; Sarah Collins; Denis N. Botambekov; William P. Roeder

2014-10-01T23:59:59.000Z

157

Monte Carlo and Analytical Methods for Forced Outage Rate Calculations of Peaking Units

(unavailability) of such units. This thesis examines the representation of peaking units using a four-state model and performs the analytical calculations and Monte Carlo simulations to examine whether such a model does indeed represent the peaking units...

Rondla, Preethi 1988-

2012-10-26T23:59:59.000Z

158

A Monte Carlo Approach To Generator Portfolio Planning And Carbon Emissions Assessments Of Systems With Large Penetrations Of Variable Renewables

A new generator portfolio planning model is described that is capable of quantifying the carbon emissions associated with systems that include very high penetrations of variable renewables. The model combines a deterministic renewable portfolio planning module with a Monte Carlo simulation of system operation that determines the expected least-cost

159

In this thesis research, a coherent scattering model for microwave remote sensing of vegetation canopy is developed on the basis of Monte Carlo simulations. An accurate model of vegetation structure is essential for the ...

Wang, Li-Fang, Ph. D. Massachusetts Institute of Technology

2007-01-01T23:59:59.000Z

160

Science Journals Connector (OSTI)

The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. Results for five experiments are used ...

P. Räisänen; H. W. Barker; J. N. S. Cole

2005-11-01T23:59:59.000Z

161

Purpose: Microbeam radiation therapy (MRT) is an experimental radiotherapy technique that has shown potent antitumor effects with minimal damage to normal tissue in animal studies. This unique form of radiation is currently only produced in a few large synchrotron accelerator research facilities in the world. To promote widespread translational research on this promising treatment technology we have proposed, and are in the initial development stages of, a compact MRT system based on carbon nanotube field emission x-ray technology. We report on a Monte Carlo-based feasibility study of the compact MRT system design. Methods: Monte Carlo calculations were performed using EGSnrc-based codes. The proposed small animal research MRT device design includes carbon nanotube cathodes shaped to match the corresponding MRT collimator apertures, a common reflection anode with filter, and an MRT collimator. Each collimator aperture is sized to deliver a beam width ranging from 30 to 200 µm at 18.6 cm source-to-axis distance. Design parameters studied with Monte Carlo include electron energy, cathode design, anode angle, filtration, and collimator design. Calculations were performed for single and multibeam configurations. Results: Increasing the energy from 100 kVp to 160 kVp increased the photon fluence through the collimator by a factor of 1.7. Both energies produced a largely uniform fluence along the long dimension of the microbeam, with 5% decreases in intensity near the edges. The isocentric dose rate for 160 kVp was calculated to be 700 Gy/min/A in the center of a 3 cm diameter target. Scatter contributions resulting from collimator size were found to produce only small (<7%) changes in the dose rate for field widths greater than 50 µm. Dose vs depth was weakly dependent on filtration material. The peak-to-valley ratio varied from 10 to 100 as the separation between adjacent microbeams was varied from 150 to 1000 µm.
Conclusions: Monte Carlo simulations demonstrate that the proposed compact MRT system design is capable of delivering a sufficient dose rate and peak-to-valley ratio for small animal MRT studies.

Schreiber, Eric C.; Chang, Sha X. [Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina 27599 (United States)

2012-08-15T23:59:59.000Z

162

The effect of load imbalances on the performance of Monte Carlo algorithms in LWR analysis

A model is developed to predict the impact of particle load imbalances on the performance of domain-decomposed Monte Carlo neutron transport algorithms. Expressions for upper bound performance “penalties” are derived in terms of simple machine characteristics, material characterizations and initial particle distributions. The hope is that these relations can be used to evaluate tradeoffs among different memory decomposition strategies in next generation Monte Carlo codes, and perhaps as a metric for triggering particle redistribution in production codes.

Siegel, A.R., E-mail: siegela@mcs.anl.gov [Argonne National Laboratory, Nuclear Engineering Division (United States); Argonne National Laboratory, Mathematics and Computer Science Division (United States); Smith, K., E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering (United States); Romano, P.K., E-mail: romano7@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering (United States); Forget, B., E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering (United States); Felker, K., E-mail: felker@mcs.anl.gov [Argonne National Laboratory, Mathematics and Computer Science Division (United States)

2013-02-15T23:59:59.000Z

163

Science Journals Connector (OSTI)

Abstract Existing Monte Carlo burnup codes suffer from instabilities caused by spatial xenon oscillations. These oscillations can be prevented by forcing equilibrium between the neutron flux and the saturated xenon distribution. The equilibrium calculation can be integrated into Monte Carlo neutronics, which provides a simple and lightweight solution that can be used with any of the existing burnup calculation algorithms. The stabilizing effect of this approach, as well as its limitations, is demonstrated using the reactor physics code Serpent.

A.E. Isotalo; J. Leppänen; J. Dufek

2013-01-01T23:59:59.000Z

164

Comparison of value-added models for school ranking and classification: a Monte Carlo study

COMPARISON OF VALUE-ADDED MODELS FOR SCHOOL RANKING AND CLASSIFICATION: A MONTE CARLO STUDY A Dissertation by ZHONGMIAO WANG Submitted to the Office of Graduate Studies of Texas A&M University... AND CLASSIFICATION: A MONTE CARLO STUDY A Dissertation by ZHONGMIAO WANG Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY Approved by: Co...

Wang, Zhongmiao

2009-05-15T23:59:59.000Z

165

PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

Iandola, F N; O'Brien, M J; Procassini, R J

2010-11-29T23:59:59.000Z

166

Energy density matrix formalism for interacting quantum systems: a quantum Monte Carlo study

We develop an energy density matrix that parallels the one-body reduced density matrix (1RDM) for many-body quantum systems. Just as the density matrix gives access to the number density and occupation numbers, the energy density matrix yields the energy density and orbital occupation energies. The eigenvectors of the matrix provide a natural orbital partitioning of the energy density while the eigenvalues comprise a single particle energy spectrum obeying a total energy sum rule. For mean-field systems the energy density matrix recovers the exact spectrum. When correlation becomes important, the occupation energies resemble quasiparticle energies in some respects. We explore the occupation energy spectrum for the finite 3D homogeneous electron gas in the metallic regime and an isolated oxygen atom with ground state quantum Monte Carlo techniques implemented in the QMCPACK simulation code. The occupation energy spectrum for the homogeneous electron gas can be described by an effective mass below the Fermi level. Above the Fermi level evanescent behavior in the occupation energies is observed in similar fashion to the occupation numbers of the 1RDM. A direct comparison with total energy differences demonstrates a quantitative connection between the occupation energies and electron addition and removal energies for the electron gas. For the oxygen atom, the association between the ground state occupation energies and particle addition and removal energies becomes only qualitative. The energy density matrix provides a new avenue for describing energetics with quantum Monte Carlo methods which have traditionally been limited to total energies.

Krogel, Jaron T [ORNL]; Kim, Jeongnim [ORNL]; Reboredo, Fernando A [ORNL]

2014-01-01T23:59:59.000Z

167

A new class of accelerated kinetic Monte Carlo algorithms

Kinetic (aka dynamic) Monte Carlo (KMC) is a powerful method for numerical simulations of time dependent evolution applied in a wide range of contexts including biology, chemistry, physics, nuclear sciences, financial engineering, etc. Generally, in a KMC the time evolution takes place one event at a time, where the sequence of events and the time intervals between them are selected (or sampled) using random numbers. While details of the method implementation vary depending on the model and context, there exist certain common issues that limit KMC applicability in almost all applications. Among these is the notorious 'flicker problem' where the same states of the system are repeatedly visited but otherwise no essential evolution is observed. In its simplest form the flicker problem arises when two states are connected to each other by transitions whose rates far exceed the rates of all other transitions out of the same two states. In such cases, the model will endlessly hop between the two states, otherwise producing no meaningful evolution. In most situations of practical interest, the trapping cluster includes more than two states, making the flicker somewhat more difficult to detect and to deal with. Several methods have been proposed to overcome or mitigate the flicker problem, exactly [1-3] or approximately [4,5]. Of the exact methods, the one proposed by Novotny [1] is perhaps most relevant to our research. Novotny formulates the problem of escaping from a trapping cluster as a Markov system with absorbing states. Given an initial state inside the cluster, it is in principle possible to solve the Master Equation for the time dependent probabilities to find the walker in a given state (transient or absorbing) of the cluster at any time in the future. Novotny then proceeds to demonstrate implementation of his general method to trapping clusters containing the initial state plus one or two transient states and all of their absorbing states. 
Similar methods have subsequently been proposed in [refs] but applied in a different context. The most serious deficiency of the earlier methods is that the size of the trapping cluster is fixed and often too small to bring substantial simulation speedup. Furthermore, the overhead associated with solving for the probability distribution on the trapping cluster sometimes makes such simulations less efficient than the standard KMC. Here we report on a general and exact accelerated kinetic Monte Carlo algorithm applicable to arbitrary Markov models. Two different implementations are attempted, both based on incremental expansion of a trapping subset of Markov states: (1) numerical solution of the Master Equation with absorbing states and (2) incremental graph reduction followed by randomization. Of the two implementations, the second performs better, allowing, for the first time, trapping basins spanning several million Markov states to be overcome. The new method is used for simulations of anomalous diffusion on a 2D substrate and of the kinetics of diffusive first-order phase transformations in binary alloys. Depending on temperature and (alloy) super-saturation conditions, speedups of 3 to 7 orders of magnitude are demonstrated, with no compromise of simulation accuracy.
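
The event loop described in this abstract (one sampled transition and one sampled waiting time per step) can be illustrated with a minimal residence-time KMC sketch. This is not the authors' accelerated algorithm; the three-state rate table, with fast A-B hops and a slow escape to C, is a hypothetical example chosen to exhibit the flicker trap.

```python
import random

# Hypothetical rate table: states A and B are connected by fast
# transitions (rate 1e3) while escape to the absorbing state C is slow
# (rate 1.0), producing the "flicker" trap described in the abstract.
rates = {
    "A": {"B": 1e3, "C": 1.0},
    "B": {"A": 1e3, "C": 1.0},
    "C": {},
}

def kmc_step(state, rng=random):
    """One residence-time KMC step: sample the waiting time and the next state."""
    channels = rates[state]
    total = sum(channels.values())
    if total == 0.0:                      # absorbing state: no exit channels
        return state, float("inf")
    dt = rng.expovariate(total)           # exponentially distributed waiting time
    r = rng.uniform(0.0, total)           # pick a channel proportionally to its rate
    acc = 0.0
    for target, rate in channels.items():
        acc += rate
        if r < acc:
            break
    return target, dt

state, t, hops = "A", 0.0, 0
while state != "C":
    state, dt = kmc_step(state)
    t += dt
    hops += 1
```

With these rates the walker typically makes on the order of a thousand A-B hops before the single rare escape event, which is the wasted work the accelerated methods above are designed to avoid.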

Bulatov, V V; Oppelstrup, T; Athenes, M

2011-11-30T23:59:59.000Z

168

Quantum Monte Carlo methods and lithium cluster properties

Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with experimental results shown in brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

Owen, R.K.

1990-12-01T23:59:59.000Z

169

Monte Carlo Simulations of the Corrosion of Aluminoborosilicate Glasses

Aluminum is one of the most common components included in nuclear waste glasses. Therefore, Monte Carlo (MC) simulations were carried out to investigate the influence of aluminum on the rate and mechanism of dissolution of sodium borosilicate glasses in static conditions. The glasses studied were in the compositional range (70 - 2x)% SiO2, x% Al2O3, 15% B2O3, (15 + x)% Na2O, where 0 ≤ x ≤ 15%. The simulation results show that increasing amounts of aluminum in the pristine glasses slow down the initial rate of dissolution as determined from the rate of boron release. However, the extent of corrosion as measured by the total amount of boron release initially increases with addition of Al2O3, up to 5 mol% Al2O3, but subsequently decreases with further Al2O3 addition. The MC simulations reveal that this behavior is due to the interplay between two opposing mechanisms: (1) aluminum slows down the kinetics of hydrolysis/condensation reactions that drive the reorganization of the glass surface and eventual formation of a blocking layer; and (2) aluminum strengthens the glass thereby increasing the lifetime of the upper part of its surface and allowing for more rapid formation of a blocking layer. Additional MC simulations were performed whereby a process representing the formation of a secondary aluminosilicate phase was included. Secondary phase formation draws dissolved glass components out of the aqueous solution, thereby diminishing the rate of condensation and delaying the formation of a blocking layer. As a result, the extent of corrosion is found to increase continuously with increasing Al2O3 content, as observed experimentally. 
For Al2O3 < 10 mol%, the MC simulations also indicate that, because the secondary phase solubility eventually controls the aluminum content in the part of the altered layer in contact with the bulk aqueous solution, the dissolved aluminum and silicon concentrations at steady state are not dependent on the Al2O3 content of the pristine aluminoborosilicate glass.

Kerisit, Sebastien [Pacific Northwest National Laboratory (PNNL)]; Ryan, Joseph V [Pacific Northwest National Laboratory (PNNL)]; Pierce, Eric M [ORNL]

2013-01-01T23:59:59.000Z

170

Monte Carlo Simulations of the Corrosion of Aluminoborosilicate Glasses

Aluminum is one of the most common components included in nuclear waste glasses. Therefore, Monte Carlo (MC) simulations were carried out to investigate the influence of aluminum on the rate and mechanism of dissolution of sodium borosilicate glasses in static conditions. The glasses studied were in the compositional range (70-2x)% SiO2, x% Al2O3, 15% B2O3, (15+x)% Na2O, where 0 ≤ x ≤ 15%. The simulation results show that increasing amounts of aluminum in the pristine glasses slow down the initial rate of dissolution as determined from the rate of boron release. However, the extent of corrosion - as measured by the total amount of boron release - initially increases with addition of Al2O3, up to 5 mol% Al2O3, but subsequently decreases with further Al2O3 addition. The MC simulations reveal that this behavior is due to the interplay between two opposing mechanisms: (1) aluminum slows down the kinetics of hydrolysis/condensation reactions that drive the reorganization of the glass surface and eventual formation of a blocking layer; and (2) aluminum strengthens the glass thereby increasing the lifetime of the upper part of its surface and allowing for more rapid formation of a blocking layer. Additional MC simulations were performed whereby a process representing the formation of a secondary aluminosilicate phase was included. Secondary phase formation draws dissolved glass components out of the aqueous solution, thereby diminishing the rate of condensation and delaying the formation of a blocking layer. As a result, the extent of corrosion is found to increase continuously with increasing Al2O3 content, as observed experimentally. 
For Al2O3 < 10 mol%, the MC simulations also indicate that, because the secondary phase solubility eventually controls the aluminum content in the part of the altered layer in contact with the bulk aqueous solution, the dissolved aluminum and silicon concentrations at steady state are not dependent on the Al2O3 content of the pristine aluminoborosilicate glass.

Kerisit, Sebastien N.; Ryan, Joseph V.; Pierce, Eric M.

2013-10-15T23:59:59.000Z

171

The National Nuclear Data Center is continuing its program to improve the nuclear data base used as input for commercial reactor analysis and design. In the most recent phase of this project the Monte Carlo program SAM-CE, developed by the Mathematical Applications Group, Inc. (MAGI), was made operational at BNL. This program was implemented on the BNL-CDC-7600 Computer, and also on the PDP-10 in-house computer. The NNDC made operational and developed techniques for processing ENDF/B-V cross sections for SAM-CE. A limited ENDF/B-V based library was produced. Use of the SAM-CE program in thermal reactor problems was validated using detailed comparisons of results with other Monte Carlo codes such as RECAP, RCP01 and VIM as well as with experimental data.

Beer, M.; Rose, P.

1981-04-01T23:59:59.000Z

172

A new effective Monte Carlo midway coupling method in MCNP and J. E. Hoogenboom, A midway forward-adjoint coupling and described the "midway" forward-adjoint coupling method

Hayakawa, Carole K.; Spanier, Jerome; Venugopalan, Vasan

2007-01-01T23:59:59.000Z

173

Tests of Monte Carlo Independent Column Approximation With a Mixed-Layer Ocean Model

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Tests of Monte Carlo Independent Column Approximation With a Mixed-Layer Ocean Model Petri Simo Järvenoja Heikki Järvinen Räisänen Finnish Meteorological Institute Figure 1. Root-mean-square sampling errors in local instantaneous total (LW+SW) net flux at the surface and total radiative heating rate for the 1COL, CLDS, and REF approaches. Global rms values are given at the upper right hand corner of the plots. 1. Introduction The Monte Carlo Independent Column Approximation (McICA) separates the description of unresolved cloud structure from the radiative transfer solver very flexible ! unbiased with respect to ICA ! However, the radiative fluxes and heating rates contain conditional random errors ("McICA noise"). ? The topic of this poster: All previous tests of McICA

174

Tests of Monte Carlo Independent Column Approximation in the ECHAM5

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Tests of Monte Carlo Independent Column Approximation in the ECHAM5 Atmospheric GCM Raisanen, Petri Finnish Meteorological Institute Jarvenoja, Simo Finnish Meteorological Institute Jarvinen, Heikki Finnish Meteorological Institute Category: Modeling The Monte Carlo Independent Column Approximation (McICA) was recently introduced as a new approach for parametrizing broadband radiative fluxes in global climate models (GCMs). The McICA allows a flexible description of unresolved cloud structure, and it is unbiased with respect to the full ICA, but its results contain conditional random errors (i.e., noise). In this work, McICA and a stochastic cloud generator have been implemented in the Max Planck Institute for Meteorology's ECHAM5 atmospheric GCM. The

175

A Proposal for a Standard Interface Between Monte Carlo Tools And One-Loop Programs

Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently been undergoing enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarises the discussion of the next-to-leading order multi-leg (NLM) working group on this issue which has been taking place during the workshop on Physics at TeV Colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs.

Binoth, T.; /Edinburgh U.; Boudjema, F.; /Annecy, LAPP; Dissertori, G.; Lazopoulos, A.; /Zurich, ETH; Denner, A.; /PSI, Villigen; Dittmaier, S.; /Freiburg U.; Frederix, R.; Greiner, N.; Hoeche, Stefan; /Zurich U.; Giele, W.; Skands, P.; Winter, J.; /Fermilab; Gleisberg, T.; /SLAC; Archibald, J.; Heinrich, G.; Krauss, F.; Maitre, D.; /Durham U., IPPP; Huber, M.; /Munich, Max Planck Inst.; Huston, J.; /Michigan State U.; Kauer, N.; /Royal Holloway, U. of London; Maltoni, F.; /Louvain U., CP3 /Milan Bicocca U. /INFN, Turin /Turin U. /Granada U., Theor. Phys. Astrophys. /CERN /NIKHEF, Amsterdam /Heidelberg U. /Oxford U., Theor. Phys.

2011-11-11T23:59:59.000Z

176

A proposal for a standard interface between Monte Carlo tools and one-loop programs

Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently been undergoing enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarizes the discussion of the next-to-leading order multi-leg (NLM) working group on this issue which has been taking place during the workshop on Physics at TeV colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs.

Binoth, T.; Boudjema, F.; Dissertori, G.; Lazopoulos, A.; Denner, A.; Dittmaier, S.; Frederix, R.; Greiner, N.; Hoche, S.; Giele, W.; Skands, P.

2010-01-01T23:59:59.000Z

177

Calculation of radiation therapy dose using all particle Monte Carlo transport

The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.

Chandler, William P. (Tracy, CA); Hartmann-Siantar, Christine L. (San Ramon, CA); Rathkopf, James A. (Livermore, CA)

1999-01-01T23:59:59.000Z

178

Postimplant Dosimetry Using a Monte Carlo Dose Calculation Engine: A New Clinical Standard

Purpose: To use the Monte Carlo (MC) method as a dose calculation engine for postimplant dosimetry. To compare the results with clinically approved data for a sample of 28 patients. Two effects not taken into account by the clinical calculation, interseed attenuation and tissue composition, are being specifically investigated. Methods and Materials: An automated MC program was developed. The dose distributions were calculated for the target volume and organs at risk (OAR) for 28 patients. Additional MC techniques were developed to focus specifically on the interseed attenuation and tissue effects. Results: For the clinical target volume (CTV) D90 parameter, the mean difference between the clinical technique and the complete MC method is 10.7 Gy, with cases reaching up to 17 Gy. For all cases, the clinical technique overestimates the deposited dose in the CTV. This overestimation is mainly from a combination of two effects: the interseed attenuation (average, 6.8 Gy) and tissue composition (average, 4.1 Gy). The deposited dose in the OARs is also overestimated in the clinical calculation. Conclusions: The clinical technique systematically overestimates the deposited dose in the prostate and in the OARs. To reduce this systematic inaccuracy, the MC method should be considered in establishing a new standard for clinical postimplant dosimetry and dose-outcome studies in the near future.

Carrier, Jean-Francois [Departement de Radio-Oncologie, et Centre de Recherche du CHUM, Hopital Notre-Dame du CHUM, Montreal, Quebec (Canada) and Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de Universite Laval, CHUQ Pavillon Hotel-Dieu de Quebec, Quebec (Canada)]. E-mail: jean-francois.carrier.chum@ssss.gouv.qc.ca; D'Amours, Michel [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de Universite Laval, CHUQ Pavillon Hotel-Dieu de Quebec, Quebec (Canada); Verhaegen, Frank [Medical Physics Unit, McGill University, Montreal, Quebec (Canada); Reniers, Brigitte [Medical Physics Unit, McGill University, Montreal, Quebec (Canada); Martin, Andre-Guy [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de Universite Laval, CHUQ Pavillon Hotel-Dieu de Quebec, Quebec (Canada); Vigneault, Eric [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de Universite Laval, CHUQ Pavillon Hotel-Dieu de Quebec, Quebec (Canada); Beaulieu, Luc [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de Universite Laval, CHUQ Pavillon Hotel-Dieu de Quebec, Quebec (Canada)

2007-07-15T23:59:59.000Z

179

Magnetic properties of carbon doped CdS: A first-principles and Monte Carlo study

Science Journals Connector (OSTI)

Carbon doping of CdS is studied using first-principles calculations and Monte Carlo simulation. Our calculations predict ferromagnetism in C doped CdS, resulting from carbon substitution of sulfur. A single carbon substitution of sulfur favors a spin-polarized state with a magnetic moment of 1.22 μB. Ferromagnetic coupling is generally observed between these magnetic moments. A transition temperature of 270 K is predicted through Monte Carlo simulation. The ferromagnetism of C doped CdS can be explained by the hole-mediated double exchange mechanism.

Hui Pan; Yuan Ping Feng; Qin Yun Wu; Zhi Gao Huang; Jianyi Lin

2008-03-13T23:59:59.000Z

180

Pseudo-random number generators for Monte Carlo simulations on Graphics Processing Units

Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPU). The performance results of the realized generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU are discussed. The obtained speed-up factor is hundreds of times in comparison with the CPU. The RANLUX generator is found to be the most appropriate for use on GPU in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is presented.
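
For illustration, the XOR128 generator named above can be sketched in a few lines. This pure-Python rendering of Marsaglia's 32-bit xorshift128 recurrence, seeded with his published default state, is an assumption about the variant benchmarked, not code from the paper; a GPU implementation would carry one such four-word state per thread.

```python
class Xorshift128:
    """Sketch of Marsaglia's 32-bit xorshift128 ("XOR128") generator."""

    MASK = 0xFFFFFFFF  # keep arithmetic in 32 bits, as in the C original

    def __init__(self, x=123456789, y=362436069, z=521288629, w=88675123):
        # Default seeds are the values from Marsaglia's 2003 paper.
        self.x, self.y, self.z, self.w = x, y, z, w

    def next_u32(self):
        """Advance the state and return the next 32-bit unsigned integer."""
        t = (self.x ^ (self.x << 11)) & self.MASK
        self.x, self.y, self.z = self.y, self.z, self.w
        self.w = (self.w ^ (self.w >> 19) ^ t ^ (t >> 8)) & self.MASK
        return self.w

    def next_float(self):
        """Uniform variate in [0, 1), as a Monte Carlo kernel would consume it."""
        return self.next_u32() / 4294967296.0
```

The recurrence uses only shifts and XORs, which is why generators of this family map so well onto GPU threads.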

Demchik, Vadim

2010-01-01T23:59:59.000Z


181

Pseudo-random number generators for Monte Carlo simulations on Graphics Processing Units

Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPU). The performance results of the realized generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU are discussed. The obtained speed-up factor is hundreds of times in comparison with the CPU. The RANLUX generator is found to be the most appropriate for use on GPU in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is presented.

Vadim Demchik

2010-03-09T23:59:59.000Z

182

We have released Version 2 of Milagro, an object-oriented, C++ code that performs radiative transfer using Fleck and Cummings' Implicit Monte Carlo method. Milagro, a part of the Jayenne program, is a stand-alone driver code used as a methods research vehicle and to verify its underlying classes. These underlying classes are used to construct Implicit Monte Carlo packages for external customers. Milagro-2 represents a design overhaul that allows better parallelism and extensibility. New features in Milagro-2 include verified momentum deposition, restart capability, graphics capability, exact energy conservation, and improved load balancing and parallel efficiency. A users' guide also describes how to configure, make, and run Milagro-2.

T.J. Urbatsch; T.M. Evans

2006-02-15T23:59:59.000Z

183

Monte Carlo study of a compressible Ising antiferromagnet on a triangular lattice Lei Gu the compressible antiferromagnetic Ising model on a triangular lattice using Monte Carlo simulations. Broken symmetries are found: the Ising symmetry and a three-state Potts symmetry characteristic of the triangular

Garrido, Pedro L.

184

Inverted List Kinetic Monte Carlo with Rejection applied to Directed Self-Assembly of Epitaxial of subsequently deposited material using a kinetic Monte Carlo algorithm that combines the use of inverted lists finding is that the relative performance of the inverted list algorithm improves with increasing system

Schulze, Tim

185

Quasi-Monte Carlo simulation of the light environment of plants

Quasi-Monte Carlo simulation of the light environment of plants Mikolaj Cieslak, Christiane-based CARIBU software (Chelle et al. 2004), and we show that these two programs produce consistent results. We also assessed the performance of the RQMC path tracing algorithm by comparing it with Monte Carlo path tracing

Prusinkiewicz, Przemyslaw

186

Science Journals Connector (OSTI)

In this study, the application of the two-dimensional direct simulation Monte Carlo (DSMC) method using an MPI-CUDA parallelization paradigm on Graphics Processing Units (GPUs) clusters is presented. An all-device (i.e. GPU) computational approach is ... Keywords: Graphics Processing Unit (GPU), MPI-CUDA, Parallel direct simulation Monte Carlo, Rarefied gas dynamics, Very large-scale simulation

C. -C. Su; M. R. Smith; F. -A. Kuo; J. -S. Wu; C. -W. Hsieh; K. -C. Tseng

2012-10-01T23:59:59.000Z

187

Energetics of carbon clusters C8 and C10 from all-electron quantum Monte Carlo calculations

Energetics of carbon clusters C8 and C10 from all-electron quantum Monte Carlo calculations Yuri calculations. The total electronic energies obtained are 0.4-1.2 hartrees lower than those of the lowest of the scaling of computational effort with the number of electrons Ne for these quantum Monte Carlo calculations

Anderson, James B.

188

While the use of Monte Carlo method has been prevalent in nuclear engineering, it has yet to fully blossom in the study of solute transport in porous media. By using an etched-glass micromodel, an attempt is made to apply Monte Carlo method...

Chung, Kiwhan

2012-06-07T23:59:59.000Z

189

Monte-Carlo simulation of Ising droplets in correlated site-bond percolation

L-99 Monte-Carlo simulation of Ising droplets in correlated site-bond percolation D. Stauffer ... by computer the definition of Coniglio and Klein for the droplets of the Ising model, on the square and doubled lattices. Abstract. The definition of droplets in the Ising model by Coniglio and Klein

Paris-Sud XI, Université de

190

Ising nematic phase in ultrathin magnetic films: A Monte Carlo study Sergio A. Cannas,1,

Ising nematic phase in ultrathin magnetic films: A Monte Carlo study Sergio A. Cannas,1, * Mateus F-dimensional Ising model with competing ferromagnetic exchange and dipolar interactions, which models an ultrathin at different temperatures with an intermediate Ising nematic phase between the stripe and the tetragonal ones

Stariolo, Daniel Adrián

191

Simulating and Visualising Phase Transitions: Small-World Effects on the Monte Carlo Ising Model

Simulating and Visualising Phase Transitions: Small-World Effects on the Monte Carlo Ising Model K Science, Institute of Information & Mathematical Sciences, Massey University, Albany The Ising Model Many temperature and the material becomes magnetic. A simulation model such as the Ising model has been widely used
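
Ising studies like those in the entries above are conventionally driven by the single-spin-flip Metropolis algorithm; a minimal 2D sketch follows. Lattice size, temperature, and sweep count here are illustrative choices, not parameters from any of the listed works.

```python
import math
import random

# Single-spin-flip Metropolis Monte Carlo for the 2D Ising model
# (ferromagnetic coupling J = 1, periodic boundaries).
L, beta = 16, 0.5               # lattice side and inverse temperature (illustrative)
rng = random.Random(42)
spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def metropolis_sweep(spins, beta, rng):
    """One sweep: L*L attempted single-spin flips with Metropolis acceptance."""
    n = len(spins)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        # Energy change for flipping spin (i, j): dE = 2 * s_ij * sum of neighbors.
        nn = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
              + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2 * spins[i][j] * nn
        # Accept downhill moves always, uphill moves with probability exp(-beta*dE).
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

for _ in range(100):
    metropolis_sweep(spins, beta, rng)
magnetization = abs(sum(sum(row) for row in spins)) / (L * L)
```

Scanning beta through the 2D critical coupling (about 0.4407 for J = 1) and watching the magnetization change sharply is the phase-transition behavior such studies simulate and visualize.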

Hawick, Ken

192

Monte-Carlo Simulations of Linear Polarization in Clumpy OB-Star Winds

Monte-Carlo Simulations of Linear Polarization in Clumpy OB-Star Winds Rich Townsend & Nick Mast Department of Astronomy, University of Wisconsin-Madison Observations of linear polarization in OB-star winds can in principle be used to constrain the characteristics of wind clumping. However, models exploring

Townsend, Richard

193

Path-Integral Monte Carlo And The Squeezed Trapped Bose-Einstein Gas

Path-Integral Monte Carlo and the Squeezed Trapped Bose-Einstein Gas. Juan Pablo Fernández ... the gas becomes effectively two-dimensional (2D). We confirm the plausibility of this result by performing different estimates for the condensate fraction. For the ideal gas, we find that the PIMC column density ...

Mullin, William J.

194

Quantum Monte Carlo for large chemical systems: Implementing efficient strategies for petascale platforms and beyond. Anthony Scemama, Michel Caffarel, Laboratoire de Chimie et Physique Quantiques, CNRS ... Strategies for running QMC simulations efficiently for large chemical systems are presented. These include: i.) the introduction ...

Paris-Sud XI, Université de

195

Multi-level Monte-Carlo Wiener-Hopf simulation for Lévy processes.

Multi-level Monte-Carlo Wiener-Hopf simulation for Lévy processes. Andreas Kyprianou, University of Bath. Friday, May 23, 2014, 15:00-16:00, Salle/Room PK-5115, Pavillon Président-Kennedy, UQAM ... for a large family of Lévy processes that is based on the Wiener-Hopf decomposition. We pursue this idea ...

Leclercq, Remi

196

... to the many-body Schrödinger equation and proceeds to use Monte Carlo methods to calculate the perturbations in the internal electron field to determine the aforementioned processes. Results are computed for molecular water in the form of linear energy loss...

Madsen, Jonathan R

2013-08-13T23:59:59.000Z

197

A Java-Based Direct Monte Carlo Simulation of a Nano-Scale Pulse Detonation Engine

A Java-Based Direct Monte Carlo Simulation of a Nano-Scale Pulse Detonation Engine. Darryl J. ... Here, the pulse detonation engine is proposed as a means of propulsion for micro-air vehicles and nano ... attempting to implement the pulse detonation engine at such small length scales is the dominance of the wall ...

198

... wavelengths, which can be more efficiently converted to electricity by a PV cell. To achieve this, most ... re-emission events. This is also a big advantage over conventional single-material semiconductor nanoparticles ... semiconductor-based LSCs in detail we employ Monte Carlo simulations (see Sec. II) using the measured data ...

Ilan, Boaz

199

Monte Carlo simulation methodology of the ghost interface theory for the planar surface tension

Monte Carlo simulation methodology of the ghost interface theory for the planar surface tension. October 2003. A novel ``ghost interface'' expression for the surface tension of a planar liquid ... coexisting phases. Results generated from the ghost interface theory for the surface tension are presented ...

Attard, Phil

200

Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh

Science Journals Connector (OSTI)

A Monte Carlo algorithm for alpha particle tracking and energy deposition on an RZ cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight-line approximation is used to follow propagation of “Monte Carlo particles”, which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal, severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show the Monte Carlo transport method predicts ignition about 30 picoseconds earlier than the diffusion method and generates a higher hot-spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.

J. Yuan; G. A. Moses; P. W. McKenty

2005-10-10T23:59:59.000Z
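The straight-line, continuous-slowing-down scheme described in the abstract above can be illustrated in miniature. This is a toy sketch, not the paper's code: the 1D mesh, the `track_and_deposit` helper, and the constant stopping power `dedx` are all assumptions standing in for the real RZ mesh and plasma stopping power.

```python
# Illustrative sketch: straight-line tracking with continuous slowing down
# (CSDA) energy deposition in a 1D mesh of cells. A particle loses energy at
# a constant rate dE/dx until it ranges out.

def track_and_deposit(energy, cell_edges, dedx):
    """Deposit energy along a straight track crossing the cells in order."""
    deposits = [0.0] * (len(cell_edges) - 1)
    for i in range(len(deposits)):
        length = cell_edges[i + 1] - cell_edges[i]
        de = min(energy, dedx * length)   # energy lost inside this cell
        deposits[i] += de
        energy -= de
        if energy <= 0.0:                 # particle has ranged out
            break
    return deposits, energy

# A 3.5-unit-energy particle crossing four unit-length cells with dE/dx = 1:
dep, remaining = track_and_deposit(3.5, [0.0, 1.0, 2.0, 3.0, 4.0], dedx=1.0)
```

The particle deposits one energy unit per cell and ranges out halfway through the fourth cell, so the deposition tally sums exactly to the initial energy.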

201

Quasi-Monte Carlo Simulation of the Light Environment of Plants

Quasi-Monte Carlo Simulation of the Light Environment of Plants. Mikolaj Cieslak, Christiane Lemieux ... and Food Research Institute of New Zealand Limited. Running title: QMC Simulation of the Light Environment. In this paper, we outline the RQMC path tracing algorithm that we use in our light environment program ...

Lemieux, Christiane

202

Uncertainty of Oil Field GHG Emissions Resulting from Information Gaps: A Monte Carlo Approach

Science Journals Connector (OSTI)

Uncertainty of Oil Field GHG Emissions Resulting from Information Gaps: A Monte Carlo Approach ... Regulations on greenhouse gas (GHG) emissions from liquid fuel production generally work with incomplete data about oil production operations. ... We study the effect of incomplete information on estimates of GHG emissions from oil production operations. ...

Kourosh Vafi; Adam R. Brandt

2014-08-10T23:59:59.000Z

203

Use of single-scatter electron Monte Carlo transport for medical radiation sciences

The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.

Svatos, Michelle M. (Oakland, CA)

2001-01-01T23:59:59.000Z

204

Monte Carlo Study of the Spin Transport in Magnetic Materials

Monte Carlo Study of the Spin Transport in Magnetic Materials. Y. Magnin, K. Akabli, H. T. ... Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Kita-ku, Okayama 700-8530, Japan. Abstract: The resistivity in magnetic materials has been theoretically shown to depend on the spin ...

205

VIM Monte Carlo versus CASMO comparisons for BWR advanced fuel designs

Eigenvalues and two-dimensional fission rate distributions computed with the CASMO-3G lattice physics code and the VIM Monte Carlo code are compared. The cases assessed are two advanced commercial BWR pin bundle designs. Generally, the two codes show good agreement in K{sub inf}, fission rate distributions, and control rod worths.

Pallotta, A.S. [Commonwealth Edison Co., Chicago, IL (United States); Blomquist, R.N. [Argonne National Lab., IL (United States)

1994-03-01T23:59:59.000Z

206

Quantum Monte Carlo calculations of electronic excitation energies: the case of the singlet n → π*(CO) transition in acrolein

Julien Toulouse, Michel Caffarel, Peter Reinhardt, Philip E. Hoggan, and C. J. ... State-of-the-art quantum Monte Carlo calculations of the singlet n → π*(CO) vertical excitation energy in the acrolein molecule, without reoptimization of the determinantal part of the wave function ...

Paris-Sud XI, Université de

207

Monte Carlo Simulation of Electromagnetic Interactions of Radiation with Liquid Water in

Monte Carlo Simulation of Electromagnetic Interactions of Radiation with Liquid Water ... nevertheless, the concept of dose is not adequate to estimate the radiation effects when microscopic entities ... They address a physics domain relevant to the simulation of radiation effects in biological systems, where ...

Paris-Sud XI, Université de

208

Monte Carlo calculations of pair production in high-intensity laser-plasma interactions

Gamma-ray and electron-positron pair production will figure prominently in laser-plasma experiments with next-generation lasers. Using a Monte Carlo approach, we show that straggling effects, arising from the finite recoil an electron experiences when it emits a high-energy photon, increase the number of pairs produced on further interaction with the laser fields.

Roland Duclous; John Kirk; Anthony Bell

2010-10-21T23:59:59.000Z

209

Direct Simulation Monte Carlo of Inductively Coupled Plasma and Comparison with Experiments

Direct Simulation Monte Carlo of Inductively Coupled Plasma and Comparison with Experiments. Justine ... Department of Chemical Engineering, University of Houston, Houston, Texas 77204-4792, USA. Abstract: Direct simulation ... -density inductively coupled reactor with chlorine (electronegative) chemistry. Electron density and temperature were ...

Economou, Demetre J.

210

Optimisation of masked ion irradiation damage profiles in YBCO thin films by Monte Carlo simulation

Optimisation of masked ion irradiation damage profiles in YBCO thin films by Monte Carlo simulation ... production with a given mask structure. The results suggest that minimum ion scattering broadening tails ... with beam energy up to a few hundred keV, though the throughput is intrinsically low [1]. A combination ...

Webb, Roger P.

211

Collective enhancement of nuclear state densities by the shell model Monte Carlo approach

The shell model Monte Carlo (SMMC) approach allows for the microscopic calculation of statistical and collective properties of heavy nuclei using the framework of the configuration-interaction shell model in very large model spaces. We present recent applications of the SMMC method to the calculation of state densities and their collective enhancement factors in rare-earth nuclei.

Özen, C; Nakada, H

2015-01-01T23:59:59.000Z

212

The effects of mapping CT images to Monte Carlo materials on GEANT4 proton simulation accuracy

Purpose: Monte Carlo simulations of radiation therapy require conversion from Hounsfield units (HU) in CT images to an exact tissue composition and density. The number of discrete densities (or density bins) used in this mapping affects the simulation accuracy, execution time, and memory usage in GEANT4 and other Monte Carlo codes. The relationship between the number of density bins and CT noise was examined in general for all simulations that use HU conversion to density. Additionally, the effect of this on simulation accuracy was examined for proton radiation. Methods: Relative uncertainty from CT noise was compared with uncertainty from density binning to determine an upper limit on the number of density bins required in the presence of CT noise. Error propagation analysis was also performed on continuous slowing down approximation range calculations to determine the proton range uncertainty caused by density binning. These results were verified with Monte Carlo simulations. Results: In the presence of even modest CT noise (5 HU or 0.5%), 450 density bins were found to cause only a 5% increase in the density uncertainty (i.e., 95% of density uncertainty from CT noise, 5% from binning). Larger numbers of density bins are not required, as CT noise will prevent increased density accuracy; this applies across all types of Monte Carlo simulations. Examining uncertainty in proton range, only 127 density bins are required for a proton range error of <0.1 mm in most tissue and <0.5 mm in low density tissue (e.g., lung). Conclusions: By considering CT noise and actual range uncertainty, the number of required density bins can be restricted to a very modest 127, depending on the application. Reducing the number of density bins provides large memory and execution time savings in GEANT4 and other Monte Carlo packages.

Barnes, Samuel; McAuley, Grant; Slater, James [Department of Radiation Medicine, Loma Linda University, Loma Linda, California 92350 (United States); Wroe, Andrew [Department of Radiation Medicine, Loma Linda University Medical Center, Loma Linda, California 92350 (United States)

2013-04-15T23:59:59.000Z
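The density-binning idea in the abstract above can be sketched with a toy conversion. This is an illustration, not a clinical curve: the linear `hu_to_density` model, the HU window, and the bin layout are invented; real Monte Carlo codes use scanner-specific piecewise calibration curves.

```python
# Hypothetical sketch: quantize a continuous HU-to-density conversion into
# n_bins discrete density bins, as done when building a material table for a
# Monte Carlo transport code from CT data.

def hu_to_density(hu):
    """Toy linear HU -> density (g/cm^3) model; a stand-in, not clinical."""
    return 1.0 + hu / 1000.0

def binned_density(hu, hu_min=-1000, hu_max=2000, n_bins=127):
    """Snap the converted density to the center of one of n_bins bins."""
    width = (hu_max - hu_min) / n_bins
    idx = min(int((hu - hu_min) / width), n_bins - 1)
    center_hu = hu_min + (idx + 0.5) * width
    return hu_to_density(center_hu)

# The binning error is bounded by half a bin width mapped through the curve:
err = abs(binned_density(40.0) - hu_to_density(40.0))
```

With 127 bins over a 3000 HU window, the quantization error per voxel stays below half a bin width in density, which is the kind of bound the abstract compares against CT noise.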

213

Path integral Monte Carlo and density functional molecular dynamics simulations of hot, dense helium

Science Journals Connector (OSTI)

Two first-principles simulation techniques, path integral Monte Carlo (PIMC) and density functional molecular dynamics (DFT-MD), are applied to study hot, dense helium in the density-temperature range of 0.387–5.35 g cm⁻³ and 500 K–1.28×10⁸ K. One coherent equation of state is derived by combining DFT-MD data at lower temperatures with PIMC results at higher temperatures. Good agreement between both techniques is found in an intermediate-temperature range. For the highest temperatures, the PIMC results converge to the Debye-Hückel limiting law. In order to derive the entropy, a thermodynamically consistent free-energy fit is used that reproduces the internal energies and pressure derived from the first-principles simulations. The equation of state is presented in the form of a table as well as a fit and is compared with different free-energy models. Pair-correlation functions and the electronic density of states are discussed. Shock Hugoniot curves are compared with recent laser shock-wave experiments.

B. Militzer

2009-04-08T23:59:59.000Z

214

In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)

Millman, D. L. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States)]; Griesheimer, D. P.; Nease, B. R. [Bechtel Marine Propulsion Corporation, Bettis Atomic Power Laboratory (United States)]; Snoeyink, J. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States)]

2012-07-01T23:59:59.000Z

215

Treating realistically the ambient water is one of the main difficulties in applying Monte Carlo methods to protein folding. The solvent-accessible area method, a popular method for treating water implicitly, is investigated by means of Metropolis simulations of the brain peptide Met-Enkephalin. For the phenomenological energy function ECEPP/2, nine atomic solvation parameter (ASP) sets are studied that had been proposed by previous authors. The simulations are compared with each other, with simulations with a distance-dependent electrostatic permittivity $\epsilon(r)$, and with vacuum simulations ($\epsilon = 2$). Parallel tempering and a recently proposed biased Metropolis technique are employed and their performances are evaluated. The measured observables include energy and dihedral probability densities (pds), integrated autocorrelation times, and acceptance rates. Two of the ASP sets turn out to be unsuitable for these simulations. For all other sets, selected configurations are minimized in search of the global energy minima. Unique minima are found for the vacuum and the $\epsilon(r)$ system, but for none of the ASP models. Other observables show a remarkable dependence on the ASPs. In particular, autocorrelation times vary dramatically with the ASP parameters. Three ASP sets have much smaller autocorrelations at 300 K than the vacuum simulations, opening the possibility that simulations can be sped up vastly by judiciously choosing details of the force field.

Hsiao-Ping Hsu; Bernd A. Berg; Peter Grassberger

2004-08-26T23:59:59.000Z

216

In the OSTI Collections: Monte Carlo Methods | OSTI, US Dept of Energy

Office of Scientific and Technical Information (OSTI)

Monte Carlo Methods. "The first thoughts and attempts I made ... were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than 'abstract thinking' might not be to lay it out say one hundred times and simply observe and count the number of successful plays. This was already possible to envisage with the beginning of the new era of fast computers, and I immediately thought of problems of neutron diffusion and other questions of mathematical physics, ..."
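Ulam's solitaire idea can be played out in miniature: estimate a probability by running many random trials and counting successes. The "game" below is a stand-in, not Canfield solitaire: whether a shuffled 52-card deck leaves no card in its original position (a derangement), whose true probability is known to be close to 1/e.

```python
# Monte Carlo estimation in Ulam's spirit: play the game many times and
# count the wins, instead of attempting an exact combinatorial calculation.

import math
import random

def is_derangement(perm):
    """True if no card sits in its original position."""
    return all(card != pos for pos, card in enumerate(perm))

def estimate_derangement_prob(trials, seed=0):
    rng = random.Random(seed)
    deck = list(range(52))
    hits = 0
    for _ in range(trials):
        rng.shuffle(deck)
        hits += is_derangement(deck)
    return hits / trials

p = estimate_derangement_prob(20000)   # should land near 1/e ≈ 0.368
```

With 20,000 trials the statistical error is a few parts in a thousand, which is exactly the trade-off Ulam describes: a rough but practical answer from repeated play.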

217

Calculating alpha Eigenvalues in a Continuous-Energy Infinite Medium with Monte Carlo

The {alpha} eigenvalue has implications for time-dependent problems where the system is sub- or supercritical. We present methods and results from calculating the {alpha}-eigenvalue spectrum for a continuous-energy infinite medium with a simplified Monte Carlo transport code. We formulate the {alpha}-eigenvalue problem, detail the Monte Carlo code physics, and provide verification and results. We have a method for calculating the {alpha}-eigenvalue spectrum in a continuous-energy infinite-medium. The continuous-time Markov process described by the transition rate matrix provides a way of obtaining the {alpha}-eigenvalue spectrum and kinetic modes. These are useful for the approximation of the time dependence of the system.

Betzler, Benjamin R. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory

2012-09-04T23:59:59.000Z

218

Study of Monte Carlo approach to experimental uncertainty propagation with MSTW 2008 PDFs

We investigate the Monte Carlo approach to propagation of experimental uncertainties within the context of the established "MSTW 2008" global analysis of parton distribution functions (PDFs) of the proton at next-to-leading order in the strong coupling. We show that the Monte Carlo approach using replicas of the original data gives PDF uncertainties in good agreement with the usual Hessian approach using the standard Delta(chi^2) = 1 criterion, then we explore potential parameterisation bias by increasing the number of free parameters, concluding that any parameterisation bias is likely to be small, with the exception of the valence-quark distributions at low momentum fractions x. We motivate the need for a larger tolerance, Delta(chi^2) > 1, by making fits to restricted data sets and idealised consistent or inconsistent pseudodata. Instead of using data replicas, we alternatively produce PDF sets randomly distributed according to the covariance matrix of fit parameters including appropriate tolerance values,...

Watt, G

2012-01-01T23:59:59.000Z

219

We present the Monte Carlo with Absorbing Markov Chains (MCAMC) method for extremely long kinetic Monte Carlo simulations. The MCAMC algorithm does not modify the system dynamics. It is extremely useful for models with discrete state spaces when low-temperature simulations are desired. To illustrate the strengths and limitations of this algorithm we introduce a simple model involving random walkers on an energy landscape. This simple model has some of the characteristics of protein folding and could also be experimentally realizable in domain motion in nanoscale magnets. We find that even the simplest MCAMC algorithm can speed up calculations by many orders of magnitude. More complicated MCAMC simulations can gain further increases in speed by orders of magnitude.

M. A. Novotny; Shannon M. Wheeler

2002-11-02T23:59:59.000Z
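The core trick behind the MCAMC abstract above can be sketched in its simplest (n-fold-way-like) form: rather than simulating a long run of rejected low-temperature Metropolis attempts one by one, draw the (geometrically distributed) number of attempts until the first acceptance, then choose which accepted move occurred. The two candidate rates below are toy values, not from the paper, and the full MCAMC method generalizes this with absorbing Markov chain matrices.

```python
# Minimal rejection-free step: sample the waiting time to the first accepted
# move from a geometric law, then pick the move proportionally to its rate.

import math
import random

def mcamc_step(rates, rng):
    """rates: per-attempt acceptance probabilities of the candidate moves."""
    total = sum(rates)
    u = 1.0 - rng.random()               # u in (0, 1]
    # Number of attempts until the first acceptance (geometric law).
    wait = int(math.log(u) / math.log(1.0 - total)) + 1
    # Choose which accepted move occurred, proportional to its rate.
    r, acc = rng.random() * total, 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            return wait, i

rng = random.Random(1)
# Deep in a low-temperature valley both escape rates are tiny, so the
# waiting time is typically thousands of attempts, simulated in O(1) work.
wait, move = mcamc_step([1e-4, 3e-4], rng)
```

This is where the "many orders of magnitude" speedup in the abstract comes from: the dynamics are unchanged, but the rejected attempts are never simulated individually.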

220

Adaptive kinetic Monte Carlo simulation of methanol decomposition on Cu(100)

The adaptive kinetic Monte Carlo method was used to calculate the dynamics of methanol decomposition on Cu(100) at room temperature over a time scale of minutes. Mechanisms of reaction were found using min-mode following saddle point searches based upon forces and energies from density functional theory. Rates of reaction were calculated with harmonic transition state theory. The dynamics followed a pathway from CH3-OH, CH3-O, CH2-O, CH-O and finally C-O. Our calculations confirm that methanol decomposition starts with breaking the O-H bond followed by breaking C-H bonds in the dehydrogenated intermediates until CO is produced. The bridge site on the Cu(100) surface is the active site for scissoring chemical bonds. Reaction intermediates are mobile on the surface which allows them to find this active reaction site. This study illustrates how the adaptive kinetic Monte Carlo method can model the dynamics of surface chemistry from first principles.

Xu, Lijun; Mei, Donghai; Henkelman, Graeme A.

2009-12-31T23:59:59.000Z
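The harmonic transition state theory rates mentioned in the abstract above take the Arrhenius-like form k = ν exp(−Ea / kB T). A minimal sketch follows; the prefactor and the ~0.7 eV barrier are illustrative placeholders, not the paper's DFT values for methanol on Cu(100).

```python
# Harmonic transition state theory (hTST) rate used to parameterize kinetic
# Monte Carlo moves: k = prefactor * exp(-barrier / (kB * T)).

import math

KB_EV = 8.617333262e-5          # Boltzmann constant in eV/K

def htst_rate(prefactor_hz, barrier_ev, temperature_k):
    return prefactor_hz * math.exp(-barrier_ev / (KB_EV * temperature_k))

# An O-H-scission-like barrier (assumed value) at room temperature:
rate = htst_rate(1e13, 0.7, 300.0)
```

Rates of this magnitude (tens of events per second) are why KMC, rather than molecular dynamics, is needed to reach minute time scales.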

221

The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.

Beer, M.

1980-12-01T23:59:59.000Z
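The estimator discussed in the abstract above can be sketched directly: for correlated estimates x with covariance matrix C, the maximum-likelihood (minimum-variance) combination is the weighted mean with weights w = C⁻¹1 / (1ᵀC⁻¹1). The two eigenvalue estimates and their covariance below are made-up numbers for illustration.

```python
# Minimum-variance combination of correlated Monte Carlo eigenvalue
# estimates under a multivariate-normal model.

import numpy as np

def ml_combine(x, cov):
    cinv = np.linalg.inv(cov)
    ones = np.ones(len(x))
    w = cinv @ ones / (ones @ cinv @ ones)   # ML weights, summing to 1
    est = w @ x
    var = 1.0 / (ones @ cinv @ ones)         # variance of the combination
    return est, var

x = np.array([1.002, 0.998])                 # two correlated k estimates
cov = np.array([[4e-6, 1e-6],
                [1e-6, 4e-6]])
est, var = ml_combine(x, cov)
```

For this symmetric toy case the weights are 0.5 each and the combined variance (2.5e-6) is smaller than either individual variance (4e-6), the kind of variance reduction the abstract reports.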

222

Science Journals Connector (OSTI)

Abstract: Geant4 Monte Carlo code simulations were used to solve experimental and theoretical complications in the calculation of mass energy-absorption coefficients of elements, air, and compounds. The mass energy-absorption coefficients for nuclear track detectors were computed for the first time using the Geant4 Monte Carlo code for energies of 1 keV–20 MeV. Very good agreement of the simulated mass energy-absorption coefficients for carbon, nitrogen, silicon, sodium iodide, and nuclear track detectors was observed in comparison with the values reported in the literature. Kerma relative to air for energies of 1 keV–20 MeV, and energy absorption buildup factors for energies of 50 keV–10 MeV up to 10 mfp penetration depths, of the selected nuclear track detectors were also calculated to evaluate the absorption of the gamma photons. Geant4 simulation can be utilized for estimation of mass energy-absorption coefficients in elements and composite materials.

Vishwanath P. Singh; M.E. Medhat; N.M. Badiger

2015-01-01T23:59:59.000Z

223

Geometric representations in the developmental Monte Carlo transport code MC21

The geometry kernel of the developmental Monte Carlo transport code MC21 is designed as a combination of the geometry capabilities of several existing Monte Carlo codes. This combination of capabilities is intended to meet efficiently the general requirements associated with in-core design products and, at the same time, be flexible enough to support highly general geometric models. This paper provides a description of the different geometry representations of MC21 and outlines how the geometric data is stored internally through the use of Fortran-90 data structures. Finally, two alternative geometric representations of a published BWR unit assembly model are discussed. Results for the two representations are contrasted, including k-effective results, relative memory footprints, and relative computational speeds. While total memory footprint is not noticeably reduced, results show significant speed advantages of one representation. (authors)

Donovan, T. [KAPL, Inc. - A Lockheed Martin Company, Schenectady, NY (United States); Tyburski, L. [Bechtel Bettis, Inc., West Mifflin, PA (United States)

2006-07-01T23:59:59.000Z

224

Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the Full Configuration Interaction Quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself), and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, ...

Overy, Catherine; Blunt, N S; Shepherd, James; Cleland, Deidre; Alavi, Ali

2014-01-01T23:59:59.000Z

225

Validation of a Monte Carlo Based Depletion Methodology Using HFIR Post-Irradiation Measurements

Post-irradiation uranium isotopic atomic densities within the core of the High Flux Isotope Reactor (HFIR) were calculated and compared to uranium mass spectrographic data measured in the late 1960s and early 1970s [1]. This study was performed in order to validate a Monte Carlo based depletion methodology for calculating the burn-up dependent nuclide inventory, specifically the post-irradiation uranium ...

Chandler, David [ORNL; Maldonado, G Ivan [ORNL; Primm, Trent [ORNL

2009-11-01T23:59:59.000Z

226

Hybrid Monte Carlo with Wilson Dirac operator on the Fermi GPU

In this article we present our implementation of a Hybrid Monte Carlo algorithm for Lattice Gauge Theory using two degenerate flavours of Wilson-Dirac fermions on a Fermi GPU. We find that using registers instead of global memory speeds up the code by almost an order of magnitude. To map the array variables to scalars, so that the compiler puts them in the registers, we use code generators. Our final program is more than 10 times faster than a generic single CPU.

Abhijit Chakrabarty; Pushan Majumdar

2012-07-10T23:59:59.000Z
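The Hybrid Monte Carlo algorithm in the abstract above can be illustrated far from lattice gauge theory on a toy target: a 1D standard Gaussian with potential U(q) = q²/2 in place of the Wilson-Dirac fermion action. Everything here (step size, trajectory length, the toy potential) is an assumption for illustration.

```python
# Minimal Hybrid (Hamiltonian) Monte Carlo: momentum refresh, leapfrog
# integration of Hamilton's equations, then a Metropolis accept/reject on
# the discretization error in the total energy.

import math
import random

def hmc_sample(n_samples, step=0.2, n_leap=10, seed=0):
    rng = random.Random(seed)
    q, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)          # fresh Gaussian momentum
        q_new, p_new = q, p
        # Leapfrog: half kick, alternating drifts/kicks, closing half kick.
        p_new -= 0.5 * step * q_new      # dU/dq = q for U(q) = q^2/2
        for _ in range(n_leap):
            q_new += step * p_new
            p_new -= step * q_new
        p_new += 0.5 * step * q_new      # undo half of the last kick
        h_old = 0.5 * (q * q + p * p)
        h_new = 0.5 * (q_new * q_new + p_new * p_new)
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new                     # accept the trajectory endpoint
        samples.append(q)
    return samples

s = hmc_sample(5000)   # samples should resemble a standard Gaussian
```

Because the leapfrog integrator is reversible and volume-preserving, the accept/reject step makes the chain exact despite the finite step size, which is the same structure the GPU implementation accelerates.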

227

SU-FF-T-109: Automation of Monte Carlo Simulations for a Proton Therapy System

Science Journals Connector (OSTI)

Purpose: To develop a code system to automate the processes associated with Monte Carlo simulations of a clinical proton therapy system. Method and Materials: A software system was developed that accepts a clinical prescription (beam range, range modulation, and field size) and generates a complete Monte Carlo simulation input file that includes all major components in the M. D. Anderson passively scattered treatment head plus one of several user-selectable phantoms. The simulations are automatically submitted to a 130-node dual-CPU cluster. Post-processing scripts were also developed to analyze the simulation results and generate required configuration data for the Varian Eclipse treatment planning system. Quality assurance procedures such as design inspection, unit testing, incremental integration testing, regression testing, and integration testing were performed to ensure the code system produces correct results. The code system was written mainly in C, with some shell scripts, and runs on the Linux operating system. Results: A code system has been developed to automatically generate MCNPX input files, run simulations, and perform post-processing of simulation results for a proton therapy system. The code system has been used to simulate dose profiles and generate required data for commissioning the M. D. Anderson proton therapy system. Over one thousand dose profiles were generated for different beam configurations by the code system in two months. Example beam data will be presented. Conclusion: The automated Monte Carlo code system has proved to be a useful tool for simulations of clinical applications in proton therapy. It allows for rapid modeling of proton therapy systems, and the results of this study suggest that data from Monte Carlo simulations will play an increasingly prominent role in proton therapy projects, i.e., pre-clinical design, commissioning studies, and routine clinical tasks.

Y Zheng; J Fontenot; N Koch

2006-01-01T23:59:59.000Z

228

The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units

We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. -- Highlights: •We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet. •The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation. •Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. •The testbed involves a polymeric system of oligopyrroles in the condensed phase. •The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.

Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela (E-mail: blaisten@gmu.edu) [Computational Materials Science Center and School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States)]

2014-02-01T23:59:59.000Z

229

Path-integral Monte Carlo calculation of the kinetic energy of condensed lithium

Science Journals Connector (OSTI)

We report path-integral Monte Carlo calculations of the kinetic energy of condensed lithium for several temperatures in both the solid and liquid phases. The excess kinetic energy of lithium decreases from about 10.4% of the classical value at 300 K to 3.2% at 520 K, indicating a very slow decay with temperature. A Wigner-Kirkwood perturbation treatment of quantum effects to order $\hbar^2$ gives satisfactory agreement with the path-integral results.

Claudia Filippi and David M. Ceperley

1998-01-01T23:59:59.000Z

230

Modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program

This paper describes the modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program. This effort represents a complete 'white sheet of paper' rewrite of the code. In this paper, the motivation driving this project, the design objectives for the new version of the program, and the design choices and their consequences will be discussed. The design itself will also be described, including the important subsystems as well as the key classes within those subsystems.

Moskowitz, B.S.

2000-02-01T23:59:59.000Z

231

A unified Monte Carlo approach to fast neutron cross section data evaluation.

A unified Monte Carlo (UMC) approach to fast neutron cross section data evaluation that incorporates both model-calculated and experimental information is described. The method is based on applications of Bayes Theorem and the Principle of Maximum Entropy as well as on fundamental definitions from probability theory. This report describes the formalism, discusses various practical considerations, and examines a few numerical examples in some detail.
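
The combination of model and experiment via Bayes Theorem that the report describes can be illustrated with a deliberately tiny sketch: draw samples from a model-based prior, weight each by its likelihood against a measurement, and form weighted posterior moments. All numbers below are hypothetical, and this shows only the generic importance-weighting idea, not the report's full formalism:

```python
import math
import random

random.seed(1)

prior_mean, prior_sd = 2.0, 0.5      # hypothetical model prediction (barns)
exp_value, exp_sd = 2.3, 0.2         # hypothetical measurement (barns)

samples, weights = [], []
for _ in range(50000):
    sigma = random.gauss(prior_mean, prior_sd)      # sample from the prior
    chi2 = ((sigma - exp_value) / exp_sd) ** 2      # misfit to the experiment
    samples.append(sigma)
    weights.append(math.exp(-0.5 * chi2))           # likelihood weight

wsum = sum(weights)
post_mean = sum(w * s for w, s in zip(weights, samples)) / wsum
post_var = sum(w * (s - post_mean) ** 2 for w, s in zip(weights, samples)) / wsum
# With a Gaussian prior and likelihood the exact posterior mean is
# (prior_mean/prior_sd**2 + exp_value/exp_sd**2) / (1/prior_sd**2 + 1/exp_sd**2),
# here about 2.26, so the Monte Carlo estimate can be checked against it.
```

The weighted estimate pulls the model prediction toward the (more precise) measurement, which is the essential behavior of the evaluation method described.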

Smith, D.; Nuclear Engineering Division

2008-03-03T23:59:59.000Z

232

of the thesis is written with the intent of reviewing some of the significant pieces of literature relating to Monte Carlo simulated REDT and exploratory data analysis Box Plots. In 1964 David Hertz published an article in the Harvard Business Review... entitled, "Risk Analysis in Capital Investment" (Hertz 1964). While this article does not directly discuss range estimating, it is the foundation for the current REDT theory. In his article, Hertz discussed the problems associated with estimating...

Clutter, David John

1992-01-01T23:59:59.000Z

233

Monte Carlo method is an invaluable tool in the field of radiation protection, used to calculate shielding effectiveness, as well as dose for medical applications. With few exceptions, most of the objects currently simulated have been homogeneous...

Tutt, Teresa Elizabeth

2009-05-15T23:59:59.000Z

234

Science Journals Connector (OSTI)

In this paper, electron emission from non-planar potential barrier structures is analyzed using a Monte Carlo electron transport model. Compared to the planar structures, about twice as large an emission current ca...

Z. Bian; A. Shakouri

2006-01-01T23:59:59.000Z

235

Monte Carlo Study of Patchy Nanostructures Self-Assembled from a Single Multiblock Chain

We present a lattice Monte Carlo simulation for a multiblock copolymer chain of length N=240 and microarchitecture $(10-10)_{12}$. The simulation was performed using the Metropolis algorithm. We measured the average energy, heat capacity, mean squared radius of gyration, and the histogram of the cluster count distribution. These quantities were investigated as a function of temperature and of the incompatibility between segments, quantified by the parameter $\omega$. We determined the temperature of the coil-globule transition and constructed the phase diagram exhibiting a variety of patchy nanostructures. The presented results are in qualitative agreement with those of the off-lattice Monte Carlo method reported earlier, with a significant exception for small incompatibilities $\omega$ and low temperatures, where 3-cluster patchy nanostructures are observed in contrast to the 2-cluster structures observed for the off-lattice $(10-10)_{12}$ chain. We attribute this difference to the considerable stiffness of lattice chains in comparison to that of the off-lattice chains.

Jakub Krajniak; Michal Banaszak

2014-10-15T23:59:59.000Z

236

MC21 analysis of the nuclear energy agency Monte Carlo performance benchmark problem

Due to the steadily decreasing cost and wider availability of large scale computing platforms, there is growing interest in the prospects for the use of Monte Carlo for reactor design calculations that are currently performed using few-group diffusion theory or other low-order methods. To facilitate the monitoring of the progress being made toward the goal of practical full-core reactor design calculations using Monte Carlo, a performance benchmark has been developed and made available through the Nuclear Energy Agency. A first analysis of this benchmark using the MC21 Monte Carlo code was reported on in 2010, and several practical difficulties were highlighted. In this paper, a newer version of MC21 that addresses some of these difficulties has been applied to the benchmark. In particular, the confidence-interval-determination method has been improved to eliminate source correlation bias, and a fission-source-weighting method has been implemented to provide a more uniform distribution of statistical uncertainties. In addition, the Forward-Weighted, Consistent-Adjoint-Driven Importance Sampling methodology has been applied to the benchmark problem. Results of several analyses using these methods are presented, as well as results from a very large calculation with statistical uncertainties that approach what is needed for design applications. (authors)

Kelly, D. J.; Sutton, T. M. [Knolls Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P. O. Box 1072, Schenectady, NY 12301-1072 (United States); Wilson, S. C. [Bettis Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P. O. Box 79, West Mifflin, PA 15122-0079 (United States)

2012-07-01T23:59:59.000Z

237

Spatial homogenization of thermal feedback regions in Monte Carlo reactor calculations

An integrated thermal-hydraulic feedback module has previously been developed for the Monte Carlo transport solver, MC21. The module incorporates a flexible input format that allows the user to describe heat transfer and coolant flow paths within the geometric model at any level of spatial detail desired. The effect that varying levels of spatial homogenization of the thermal regions have on the accuracy of the Monte Carlo simulations is examined in this study. Six thermal feedback mappings are constructed from the same geometric model of the Calvert Cliffs core. The spatial homogenization of the thermal regions is varied, giving each scheme a different level of detail, and the adequacy of the spatial homogenization is determined based on the eigenvalue produced by each Monte Carlo calculation. The purpose of these numerical experiments is to determine the level of detail necessary to accurately capture the thermal feedback effect on reactivity. Several different core models are considered: axial-flow only, axial and lateral flow, asymmetry due to control rod insertion, and fuel heating (temperature-dependent cross sections). The thermal results generated by the MC21 thermal feedback module are consistent with expectations. Based upon the numerical experiments conducted, it is concluded that the amount of spatial detail necessary to accurately capture the feedback effect on reactivity is relatively small. Homogenization at the assembly level for the Calvert Cliffs PWR model results in a power defect similar to that calculated with individual pin-cells modeled as explicit thermal regions. (authors)

Hanna, B. R.; Gill, D. F.; Griesheimer, D. P. [Bettis Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P.O. Box 79, West Mifflin, PA 15122 (United States)

2012-07-01T23:59:59.000Z

238

Movable geometry and eigenvalue search capability in the MC21 Monte Carlo code

A robust and flexible movable geometry implementation in the Monte Carlo code MC21 is described, along with a search algorithm that can be used in conjunction with the movable geometry capability to perform eigenvalue searches based on the position of some geometric component. The natural use of the combined movement and search capability is searching to critical through variation of control rod (or control drum) position. The movable geometry discussion provides the mathematical framework for moving surfaces in the MC21 combinatorial solid geometry description. The interface between the movable geometry system and the user is also described, particularly the ability to create a hierarchy of movable groups. Combined with the hierarchical geometry description in MC21, the movable group framework provides a very powerful system for inline geometry modification. The eigenvalue search algorithm implemented in MC21 is also described. The foundation of this algorithm is a regula falsi search, though several considerations are made in an effort to increase the efficiency of the algorithm for use with Monte Carlo. Specifically, criteria are developed to determine after each batch whether the Monte Carlo calculation should be continued, the search iteration can be rejected, or the search iteration has converged. These criteria seek to minimize the amount of time spent per iteration. Results for the regula falsi method are shown, illustrating that the method as implemented is indeed convergent and that the optimizations made ultimately reduce the total computational expense. (authors)
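
The regula falsi (false position) iteration underlying the search algorithm keeps a bracketing pair of points and moves to the root of the secant line through them. A generic sketch, with a hypothetical linear stand-in for the Monte Carlo-computed k-effective as a function of rod position (the real search must additionally cope with statistical noise, which the batch-acceptance criteria above address):

```python
def regula_falsi(f, a, b, target=0.0, tol=1e-6, max_iter=50):
    """False-position iteration: keep a bracketing pair (a, b) and place the
    next point at the root of the secant line through (a, f(a)), (b, f(b))."""
    fa, fb = f(a) - target, f(b) - target
    assert fa * fb < 0, "initial points must bracket the target"
    for _ in range(max_iter):
        x = b - fb * (b - a) / (fb - fa)   # secant-line root
        fx = f(x) - target
        if abs(fx) < tol:
            return x
        if fa * fx < 0:                    # keep the sub-interval that brackets
            b, fb = x, fx
        else:
            a, fa = x, fx
    return x

# Hypothetical smooth stand-in for k_eff as a function of rod position (cm):
k_eff = lambda z: 1.05 - 0.002 * z
critical_z = regula_falsi(k_eff, 0.0, 100.0, target=1.0)
# Root of 1.05 - 0.002*z = 1.0 is z = 25.0.
```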

Gill, D. F.; Nease, B. R.; Griesheimer, D. P. [Bettis Atomic Power Laboratory, PO Box 79, West Mifflin, PA 15122 (United States)

2013-07-01T23:59:59.000Z

239

Multiphoton Monte Carlo event generator for Bhabha scattering at small angles

Science Journals Connector (OSTI)

We describe in this paper the application of the theory of Yennie, Frautschi, and Suura (YFS) to construct a Monte Carlo (MC) event generator with multiple-photon production for Bhabha scattering at low angles. The respective generator provides the four-momenta of the electron and positron and of all soft and hard photons with a proper treatment of the phase space and conservation of the total four-momentum. The final-state electron and positron are assumed to be visible above some minimum angle with respect to the beams (double tag). The QED matrix element in the algorithm is taken according to the YFS exponentiation scheme. The Monte Carlo program will be helpful in luminosity determination at experiments at the SLAC Linear Collider and the CERN collider LEP; it takes into account QED O(?) and the leading higher-order corrections. The important difference with the existing MC procedures is that the minimum energy above which photons are generated may be set arbitrarily low. Sample Monte Carlo data are illustrated in our discussion.

Stanislaw Jadach and B. F. L. Ward

1989-12-01T23:59:59.000Z

240

MCNPX Monte Carlo burnup simulations of the isotope correlation experiments in the NPP obrigheim.

This paper describes the simulation work of the Isotope Correlation Experiment (ICE) using the MCNPX Monte Carlo computer code package. The Monte Carlo simulation results are compared with the ICE experimental measurements for burnup up to 30 GWD/t. The comparison shows the good capabilities of the MCNPX computer code package for predicting the depletion of the uranium fuel and the buildup of the plutonium isotopes in a PWR thermal reactor. The Monte Carlo simulation results also show good agreement with the experimental data for several long-lived and stable fission products. However, for the americium and curium actinides, it is difficult to judge the prediction capabilities due to the large uncertainties in the ICE experimental data. In the MCNPX numerical simulations, a pin cell model is utilized to simulate the fuel lattice of the nuclear power reactor. Temperature-dependent libraries based on JEFF3.1 nuclear data files are utilized for the calculations. In addition, temperature-dependent libraries based on ENDF/B-VII nuclear data files are utilized, and the obtained results are very close to the JEFF3.1 results, except for {approx}10% differences in the prediction of the minor actinide isotopes buildup.

Cao, Y.; Gohar, Y.; Broeders, C. (Nuclear Engineering Division); (Inst. for Neutron Physics and Reactor Technology)

2010-10-01T23:59:59.000Z

241

Monte Carlo depletion calculations using VESTA 2.1 new features and perspectives

VESTA is a Monte Carlo depletion interface code that is currently under development at IRSN. With VESTA, the emphasis lies on both accuracy and performance, so that the code will be capable of providing accurate and complete answers in an acceptable amount of time compared to other Monte Carlo depletion codes. From its inception, VESTA has been intended to be a generic interface code, ultimately capable of using any Monte Carlo code or depletion module and of being tailored to the user's needs. A new version of the code (version 2.1.x) will be released in 2012. The most important additions to the code are a burnup-dependent isomeric branching ratio treatment, to improve the prediction of metastable nuclides such as {sup 242m}Am, and the integration of the PHOENIX point depletion module (also developed at IRSN), to overcome some of the limitations of the ORIGEN 2.2 module. Extracting and visualising the basic results, as well as calculating physical quantities or other data that can be derived from the basic output provided by VESTA, will be the task of the AURORA depletion analysis tool, which will be released at the same time as VESTA 2.1.x. The experimental validation database has also been extended for this new version; it now contains a total of 35 samples with chemical assay data and 34 assembly decay heat measurements. (authors)

Haeck, W.; Cochet, B.; Aguiar, L. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay-aux-Roses Cedex (France)

2012-07-01T23:59:59.000Z

242

We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly non-equilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 3--4 (1992)], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially-varying electric fields. All of the results are found to be in excellent agreement with an independent multi-term Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems.

Tattersall, W J; Boyle, G J; White, R D

2015-01-01T23:59:59.000Z

243

Measured and Monte Carlo calculated k{sub Q} factors: Accuracy and comparison

Purpose: The journal Medical Physics recently published two papers that determine beam quality conversion factors, k{sub Q}, for large sets of ion chambers. In the first paper [McEwen Med. Phys. 37, 2179-2193 (2010)], k{sub Q} was determined experimentally, while the second paper [Muir and Rogers Med. Phys. 37, 5939-5950 (2010)] provides k{sub Q} factors calculated using Monte Carlo simulations. This work investigates a variety of additional consistency checks to verify the accuracy of the k{sub Q} factors determined in each publication and a comparison of the two data sets. Uncertainty introduced in calculated k{sub Q} factors by possible variation of W/e with beam energy is investigated further. Methods: The validity of the experimental set of k{sub Q} factors relies on the accuracy of the NE2571 reference chamber measurements to which k{sub Q} factors for all other ion chambers are correlated. The stability of NE2571 absorbed dose to water calibration coefficients is determined and comparison to other experimental k{sub Q} factors is analyzed. Reliability of Monte Carlo calculated k{sub Q} factors is assessed through comparison to other publications that provide Monte Carlo calculations of k{sub Q}, as well as an analysis of the sleeve effect, the effect of cavity length, and self-consistency between graphite-walled Farmer chambers. Comparison between the two data sets is given in terms of the percent difference between the k{sub Q} factors presented in both publications. Results: Monitoring of the absorbed dose calibration coefficients for the NE2571 chambers over a period of more than 15 years exhibits consistency at a level better than 0.1%. Agreement of the NE2571 k{sub Q} factors with a quadratic fit to all other experimental data from standards labs for the same chamber is observed within 0.3%. Monte Carlo calculated k{sub Q} factors are in good agreement with most other Monte Carlo calculated k{sub Q} factors. Expected results are observed for the sleeve effect and the effect of cavity length on k{sub Q}. The mean percent differences between experimental and Monte Carlo calculated k{sub Q} factors are -0.08, -0.07, and -0.23% for the Elekta 6, 10, and 25 MV nominal beam energies, respectively. An upper limit on the variation of W/e in photon beams from cobalt-60 to 25 MV is determined as 0.4% with 95% confidence. The combined uncertainty on Monte Carlo calculated k{sub Q} factors is reassessed and amounts to between 0.40 and 0.49%, depending on the wall material of the chamber. Conclusions: Excellent agreement (mean percent difference of only 0.13% for the entire data set) between experimental and calculated k{sub Q} factors is observed. For some chamber types, k{sub Q} is measured for only one chamber; the level of agreement observed in this study would suggest that for those chambers the measured k{sub Q} values are generally representative of the chamber type.

Muir, B. R.; McEwen, M. R.; Rogers, D. W. O. [Ottawa Medical Physics Institute (OMPI), Ottawa Carleton Institute for Physics, Carleton University Campus, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada); Institute for National Measurement Standards, National Research Council of Canada, Ottawa, Ontario K1A 0R6 (Canada); Ottawa Medical Physics Institute (OMPI), Ottawa Carleton Institute for Physics, Carleton University Campus, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada)

2011-08-15T23:59:59.000Z

244

The applicability of certain Monte Carlo methods to the analysis of interacting polymers

The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivoting algorithm of Madras and Sokal and Metroplis rejection to locate the phase transition, which is known to occur at {beta}{sub crit} {approx} 0.99, and to recalculate the known value of the critical exponent {nu} {approx} 0.58 of the system for {beta} = {beta}{sub crit}. Although the pivoting-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint lead to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of {nu}. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of {beta}{sub crit} using smaller values of N is 1.01 {+-} 0.01, and the estimate for {nu} at this value of {beta} is 0.59 {+-} 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions, can in practice fail to sample phase space accurately and thus not allow accurate estimations of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
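
The pivot algorithm of Madras and Sokal mentioned above can be sketched as follows: pick a random site of the walk, apply a lattice symmetry to the tail beyond it, and reject the move if the result intersects itself. The sketch below uses a square lattice for simplicity (the thesis works on a hexagonal lattice) and omits the Metropolis energy test:

```python
import random

# The four rotations of the square lattice, applied to displacement vectors:
ROTATIONS = [lambda dx, dy: (dx, dy), lambda dx, dy: (-dy, dx),
             lambda dx, dy: (-dx, -dy), lambda dx, dy: (dy, -dx)]

def pivot_move(walk, rng=random):
    """One pivot move: rotate the tail of the walk about a random pivot site,
    rejecting the move if the new walk is not self-avoiding."""
    i = rng.randrange(1, len(walk) - 1)
    rot = rng.choice(ROTATIONS)
    px, py = walk[i]
    tail = []
    for (x, y) in walk[i + 1:]:
        dx, dy = rot(x - px, y - py)
        tail.append((px + dx, py + dy))
    new_walk = walk[:i + 1] + tail
    if len(set(new_walk)) == len(new_walk):   # self-avoidance check
        return new_walk
    return walk                                # rejected: keep the old walk

random.seed(2)
walk = [(k, 0) for k in range(20)]             # straight initial walk
for _ in range(200):
    walk = pivot_move(walk)
```

The acceptance fraction of such moves is exactly what degrades for long interacting walks in the thesis: adding the Metropolis energy test on top of the self-avoidance constraint rejects nearly every pivot at large N.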

Krapp, D.M. Jr. [Univ. of California, Berkeley, CA (United States)

1998-05-01T23:59:59.000Z

245

AIM: We have recently developed a microscopic Monte Carlo approach to study surface chemistry on interstellar grains and the morphology of ice mantles. The method is designed to eliminate the problems inherent in the rate-equation formalism to surface chemistry. Here we report the first use of this method in a chemical model of cold interstellar cloud cores that includes both gas-phase and surface chemistry. The surface chemical network consists of a small number of diffusive reactions that can produce molecular oxygen, water, carbon dioxide, formaldehyde, methanol and assorted radicals. METHOD: The simulation is started by running a gas-phase model including accretion onto grains but no surface chemistry or evaporation. The starting surface consists of either flat or rough olivine. We introduce the surface chemistry of the three species H, O and CO in an iterative manner using our stochastic technique. Under the conditions of the simulation, only atomic hydrogen can evaporate to a significant extent. Althoug...

Chang, Q; Herbst, E

2007-01-01T23:59:59.000Z

246

SIM-RIBRAS: A Monte-Carlo simulation package for RIBRAS system

SIM-RIBRAS is a Root-based Monte-Carlo simulation tool designed to help RIBRAS users with experiment planning and with the enhancement and characterization of experimental setups. It is divided into two main programs: CineRIBRAS, which addresses beam kinematics, and SolFocus, which addresses beam optics. SIM-RIBRAS replaces other methods and programs used in the past, providing more complete and accurate results and requiring much less manual labour. Moreover, the user can easily make modifications in the codes, adapting them to the specific requirements of an experiment.

Leistenschneider, E.; Lepine-Szily, A.; Lichtenthaeler, R. [Departamento de Fisica Nuclear, Instituto de Fisica, Universidade de Sao Paulo (Brazil)

2013-05-06T23:59:59.000Z

247

Introduction to the Latest Version of the Test-Particle Monte Carlo Code Molflow+

The Test-Particle Monte Carlo code Molflow+ is receiving more and more attention from the scientific community needing detailed 3D calculations of vacuum in the molecular flow regime, mainly, but not limited to, in the particle accelerator field. Substantial changes, bug fixes, geometry-editing and modelling features, and computational speed improvements have been made to the code in the last couple of years. This paper will outline some of these new features and show examples of applications to the design and analysis of vacuum systems at CERN and elsewhere.

Ady, M

2014-01-01T23:59:59.000Z

248

The purpose of this paper is to confirm the feasibility of acquiring a three-dimensional single photon emission computed tomography image from boron neutron capture therapy using Monte Carlo simulation. The prompt gamma ray (478 keV) was used to reconstruct the image with the ordered subsets expectation maximization method. From analysis of the receiver operating characteristic curves, the area-under-curve values of the three boron regions were 0.738, 0.623, and 0.817. The differences between the distances separating the centers of the boron regions and the distances between the maximum-count points were 0.3 cm, 1.6 cm, and 1.4 cm.
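
The ordered subsets expectation maximization (OSEM) reconstruction named above applies a multiplicative EM update restricted to one subset of the measured projections at a time. A toy sketch with a hypothetical 2-pixel, 4-measurement system matrix (not the paper's SPECT geometry):

```python
# OSEM sub-iteration over a subset S of projections y with system matrix A:
#   x_j <- x_j / sum_{i in S} a_ij * sum_{i in S} a_ij * y_i / (A x)_i

def osem(A, y, subsets, x, n_iter=50):
    for _ in range(n_iter):
        for S in subsets:
            # Forward-project the current image for the rows in this subset:
            fp = {i: sum(A[i][j] * x[j] for j in range(len(x))) for i in S}
            for j in range(len(x)):
                norm = sum(A[i][j] for i in S)
                if norm > 0:
                    x[j] *= sum(A[i][j] * y[i] / fp[i] for i in S) / norm
    return x

# Toy 2-pixel, 4-measurement problem with a known image [3, 1]:
A = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.25, 0.75]]
true_x = [3.0, 1.0]
y = [sum(A[i][j] * true_x[j] for j in range(2)) for i in range(4)]
x = osem(A, y, subsets=[[0, 2], [1, 3]], x=[1.0, 1.0], n_iter=50)
# With noiseless, consistent data the iterates approach the true image.
```

Cycling through subsets gives OSEM its characteristic speedup over plain MLEM, since each sub-iteration already moves the image toward agreement with part of the data.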

Yoon, Do-Kun; Jung, Joo-Young; Suk Suh, Tae, E-mail: suhsanta@catholic.ac.kr [Department of Biomedical Engineering and Research Institute of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul 505 (Korea, Republic of); Jo Hong, Key [Molecular Imaging Program at Stanford (MIPS), Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, California 94305 (United States)

2014-02-24T23:59:59.000Z

249

Monte-Carlo study of the phase transition in the AA-stacked bilayer graphene

A tight-binding model of the AA-stacked bilayer graphene with screened electron-electron interactions has been studied using Hybrid Monte Carlo simulations on the original double-layer hexagonal lattice. The instantaneous screened Coulomb potential is taken into account using a Hubbard-Stratonovich transformation. G-type antiferromagnetic ordering has been studied, and a phase transition with spontaneous generation of the mass gap has been observed. The dependence of the antiferromagnetic condensate on the on-site electron-electron interaction is examined.

Nikolaev, A A

2014-01-01T23:59:59.000Z

250

Monte-Carlo study of the phase transition in the AA-stacked bilayer graphene

A tight-binding model of the AA-stacked bilayer graphene with screened electron-electron interactions has been studied using Hybrid Monte Carlo simulations on the original double-layer hexagonal lattice. The instantaneous screened Coulomb potential is taken into account using a Hubbard-Stratonovich transformation. G-type antiferromagnetic ordering has been studied, and a phase transition with spontaneous generation of the mass gap has been observed. The dependence of the antiferromagnetic condensate on the on-site electron-electron interaction is examined.

A. A. Nikolaev; M. V. Ulybyshev

2014-12-03T23:59:59.000Z

251

3D Monte Carlo Modeling of the EEDF in Negative Hydrogen Ion Sources

For optimization and accurate prediction of the amount of H{sup -} ion production in negative ion sources, analysis of electron energy distribution function (EEDF) is necessary. We are developing a numerical code which analyzes EEDF in tandem-type arc-discharge sources. It is a three-dimensional Monte Carlo simulation code with the realistic geometry and magnetic configuration. Coulomb collision between electrons is treated with 'Binary Collision' model and collisions with hydrogen species are treated with 'Null-collision (NC)' method. We have applied this code to the analysis of the JAEA 10 ampere negative ion source. The numerical result shows that the obtained EEDFs reasonably agree with experimental results.
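
The 'Null-collision' technique cited above pads the true, velocity-dependent collision rate up to a constant majorant so that free-flight times can be drawn from a simple exponential; each tentative event is then accepted as a real collision with probability nu/nu_max, and otherwise counted as a fictitious null collision. A sketch with a purely illustrative rate model (not the source's cross sections):

```python
import random

random.seed(3)

nu_max = 5.0                               # constant majorant rate
nu = lambda v: 5.0 * v / (1.0 + v)         # hypothetical true rate, < nu_max

def time_to_real_collision(v, rng=random):
    """Null-collision sampling: exponential flights at the majorant rate,
    accepting each tentative event as real with probability nu(v)/nu_max."""
    t = 0.0
    while True:
        t += rng.expovariate(nu_max)       # flight to the next tentative event
        if rng.random() < nu(v) / nu_max:  # real collision
            return t                       # otherwise: null collision, continue

# The resulting times are exponential with the *true* rate nu(v):
v = 1.0                                    # nu(1.0) = 2.5
times = [time_to_real_collision(v) for _ in range(20000)]
mean_t = sum(times) / len(times)           # should approach 1/nu(v) = 0.4
```

The trick avoids inverting a velocity-dependent survival integral at every flight, which is why it is standard in EEDF codes like the one described.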

Terasaki, R.; Hatayama, A.; Shibata, T. [Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi, Yokohama 223-8522 (Japan); Inoue, T. [Japan Atomic Energy Agency, 801-1 Mukouyama, Naka 311-0193 (Japan)

2011-09-26T23:59:59.000Z

252

A Monte Carlo implementation of the predictor-corrector Quasi-Static method

The Quasi-Static method (QS) is a useful tool for solving reactor transients since it allows for larger time steps when updating neutron distributions. Because of the beneficial attributes of Monte Carlo (MC) methods (exact geometries and continuous energy treatment), it is desirable to develop a MC implementation for the QS method. In this work, the latest version of the QS method known as the Predictor-Corrector Quasi-Static method is implemented. Experiments utilizing two energy-groups provide results that show good agreement with analytical and reference solutions. The method as presented can easily be implemented in any continuous energy, arbitrary geometry, MC code. (authors)

Hackemack, M. W.; Ragusa, J. C. [Department of Nuclear Engineering, Texas A and M University, 337 Zachry Engineering Building, College Station, TX 77843 (United States); Griesheimer, D. P.; Pounders, J. M. [Bettis Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P.O. Box 79, West Mifflin, PA 15122 (United States)

2013-07-01T23:59:59.000Z

253

Role of collisional broadening in Monte Carlo simulations of terahertz quantum cascade lasers

Using a generalized version of Fermi's golden rule, collisional broadening is self-consistently implemented into ensemble Monte Carlo carrier transport simulations, and its effect on the transport and optical properties of terahertz quantum cascade lasers is investigated. The inclusion of broadening yields improved agreement with the experiment, without a significant increase of the numerical load. Specifically, this effect is crucial for a correct modeling at low biases. In the lasing regime, broadening can lead to significantly reduced optical gain and output power, affecting the obtained current-voltage characteristics.

Matyas, Alpar; Lugli, Paolo; Jirauschek, Christian [Institute for Nanoelectronics, Technische Universitaet Muenchen, D-80333 Munich (Germany)] [Institute for Nanoelectronics, Technische Universitaet Muenchen, D-80333 Munich (Germany)

2013-01-07T23:59:59.000Z

254

Anisotropic transverse flow introduction in Monte Carlo generators for heavy ion collisions

Science Journals Connector (OSTI)

Anisotropic transverse flow patterns that are observed in relativistic heavy ion collisions can be added to the available microscopic Monte Carlo event generators as a final state modification to the azimuthal angles of the particles, which are generated isotropically. The method proposed for this purpose by A. M. Poskanzer and S. A. Voloshin [Phys. Rev. C 58, 1671 (1998)] is valid only for small values of the Fourier coefficients vn and therefore it is not suitable for simulations with large values of anisotropy such as the ones predicted for Pb-Pb collisions at the LHC. We present here a possible solution to treat the cases of large anisotropies.
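
The Poskanzer-Voloshin prescription referenced in this abstract shifts each isotropically generated azimuthal angle by dphi = -sum_n (2 v_n / n) sin(n (phi - Psi_n)), which imprints the coefficients v_n to first order; as the abstract notes, it is valid only for small v_n. A sketch for a single harmonic with a small, hypothetical v_2:

```python
import math
import random

random.seed(4)

def add_flow(phi, v2, psi2=0.0):
    """Small-v2 afterburner shift: dphi = -2 * (v2 / 2) * sin(2 * (phi - psi2))."""
    return phi - 2.0 * (v2 / 2.0) * math.sin(2.0 * (phi - psi2))

v2_in = 0.05                               # small coefficient: method applies
phis = [add_flow(random.uniform(0.0, 2.0 * math.pi), v2_in)
        for _ in range(200000)]
# Measure v2 back as <cos(2*phi)> (event plane Psi_2 = 0 here):
v2_out = sum(math.cos(2.0 * p) for p in phis) / len(phis)
```

For large v_n the recovered coefficient deviates from the input at order v_n^3, which is the failure mode motivating the alternative treatment this paper proposes.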

M. Masera; G. Ortona; M. G. Poghosyan; F. Prino

2009-06-26T23:59:59.000Z

255

Thermonuclear reaction rate of $^{18}$Ne($\alpha$,$p$)$^{21}$Na from Monte-Carlo calculations

The $^{18}$Ne($\alpha$,$p$)$^{21}$Na reaction impacts the break-out from the hot CNO-cycles to the $rp$-process in type I X-ray bursts. We present a revised thermonuclear reaction rate, which is based on the latest experimental data. The new rate is derived from Monte-Carlo calculations, taking into account the uncertainties of all nuclear physics input quantities. In addition, we present the reaction rate uncertainty and probability density versus temperature. Our results are also consistent with estimates obtained using different indirect approaches.
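
The Monte Carlo rate procedure described above draws each nuclear input from its uncertainty distribution, recomputes the rate for every draw, and reports the median rate with a coverage interval. A sketch with a single hypothetical narrow resonance whose strength is lognormally distributed (the numbers and the prefactor-free rate formula are illustrative, not the paper's):

```python
import math
import random

random.seed(5)

def narrow_resonance_rate(omega_gamma, T9, e_res=1.0):
    """Narrow-resonance rate ~ omega_gamma * exp(-11.605 * E_res / T9)
    (arbitrary prefactor omitted; E_res in MeV, T9 in GK)."""
    return omega_gamma * math.exp(-11.605 * e_res / T9)

mu, sigma = math.log(1.0e-3), 0.3          # lognormal resonance strength (MeV)
rates = sorted(narrow_resonance_rate(random.lognormvariate(mu, sigma), T9=1.0)
               for _ in range(10000))
low, med, high = (rates[int(0.16 * len(rates))],   # 16th percentile
                  rates[len(rates) // 2],          # median rate
                  rates[int(0.84 * len(rates))])   # 84th percentile
# With a single lognormal input the rate inherits its spread:
# high/med should be close to exp(sigma * 0.9945) ~ 1.35.
```

With many resonances and non-Gaussian inputs the output distribution is no longer analytic, which is exactly why the Monte Carlo propagation and the reported probability density versus temperature are useful.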

P. Mohr; R. Longland; C. Iliadis

2014-12-09T23:59:59.000Z

256

Thermonuclear reaction rate of $^{18}$Ne($\alpha$,$p$)$^{21}$Na from Monte-Carlo calculations

The $^{18}$Ne($\alpha$,$p$)$^{21}$Na reaction impacts the break-out from the hot CNO-cycles to the $rp$-process in type I X-ray bursts. We present a revised thermonuclear reaction rate, which is based on the latest experimental data. The new rate is derived from Monte-Carlo calculations, taking into account the uncertainties of all nuclear physics input quantities. In addition, we present the reaction rate uncertainty and probability density versus temperature. Our results are also consistent with estimates obtained using different indirect approaches.

P. Mohr; R. Longland; C. Iliadis

2014-12-14T23:59:59.000Z

257

Thermonuclear reaction rate of $^{18}$Ne($\alpha$,$p$)$^{21}$Na from Monte-Carlo calculations

The $^{18}$Ne($\alpha$,$p$)$^{21}$Na reaction impacts the break-out from the hot CNO-cycles to the $rp$-process in type I X-ray bursts. We present a revised thermonuclear reaction rate, which is based on the latest experimental data. The new rate is derived from Monte-Carlo calculations, taking into account the uncertainties of all nuclear physics input quantities. In addition, we present the reaction rate uncertainty and probability density versus temperature. Our results are also consistent with estimates obtained using different indirect approaches.

Mohr, P; Iliadis, C

2014-01-01T23:59:59.000Z

258

Science Journals Connector (OSTI)

We used Monte Carlo modeling to calculate the organs doses due to out-of field photons during radiation therapy of the nasopharynx.

Asghar Mesbahi; Farshad Seyednejad; Amir Gasemi-Jangjoo

2010-06-01T23:59:59.000Z

259

Statistical Exploration of Electronic Structure of Molecules from Quantum Monte-Carlo Simulations

In this report, we present results from analysis of Quantum Monte Carlo (QMC) simulation data with the goal of determining the internal structure of the 3N-dimensional phase space of an N-electron molecule. We are interested in mining the simulation data for patterns that might be indicative of bond rearrangement as molecules change electronic states. We examined simulation output that tracks the positions of two coupled electrons in the singlet and triplet states of an H2 molecule. The electrons trace out a trajectory, which was analyzed with a number of statistical techniques. This project was intended to address the following scientific questions: (1) Do high-dimensional phase spaces characterizing electronic structure of molecules tend to cluster in any natural way? Do we see a change in clustering patterns as we explore different electronic states of the same molecule? (2) Since it is hard to understand the high-dimensional space of trajectories, can we project these trajectories to a lower dimensional subspace to gain a better understanding of patterns? (3) Do trajectories inherently lie in a lower-dimensional manifold? Can we recover that manifold? After extensive statistical analysis, we are now in a better position to respond to these questions. (1) We definitely see clustering patterns, and differences between the H2 and H2tri datasets. These are revealed by the pamk method in a fairly reliable manner and can potentially be used to distinguish bonded and non-bonded systems and get insight into the nature of bonding. (2) Projecting to a lower dimensional subspace (≈4-5) using PCA or Kernel PCA reveals interesting patterns in the distribution of scalar values, which can be related to the existing descriptors of electronic structure of molecules. Also, these results can be immediately used to develop robust tools for analysis of noisy data obtained during QMC simulations. (3) All dimensionality reduction and estimation techniques that we tried seem to indicate that one needs 4 or 5 components to account for most of the variance in the data; hence this 5D dataset does not necessarily lie on a well-defined, low-dimensional manifold. In terms of specific clustering techniques, K-means was generally useful in exploring the dataset. The partition around medoids (pam) technique produced the most definitive results for our data, showing distinctive patterns for both a sample of the complete data and time-series. The gap statistic with the Tibshirani criterion did not provide any distinction across the two datasets. The gap statistic with the DandF criterion, model-based clustering, and hierarchical modeling simply failed to run on our datasets. Thankfully, the vanilla PCA technique was successful in handling our entire dataset. PCA revealed some interesting patterns for the scalar value distribution. Kernel PCA techniques (vanilladot, RBF, Polynomial) and MDS failed to run on the entire dataset, or even a significant fraction of the dataset, and we resorted to creating an explicit feature map followed by conventional PCA. Clustering using K-means and PAM in the new basis set seems to produce promising results. Understanding the new basis set in the scientific context of the problem is challenging, and we are currently working to further examine and interpret the results.
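
The "4 or 5 components" finding above comes from vanilla PCA; a minimal sketch of that kind of analysis on synthetic stand-in data (a 4-D signal embedded in 6-D with small noise, not the actual H2 trajectories) looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a walker trajectory: 6-D points that mostly live
# in a 4-D subspace plus small isotropic noise (an assumption made purely
# for illustration, not the QMC simulation data from the report).
n, d, k = 5000, 6, 4
basis = np.linalg.qr(rng.normal(size=(d, k)))[0]   # orthonormal 4-D basis
traj = rng.normal(size=(n, k)) @ basis.T + 0.05 * rng.normal(size=(n, d))

# Vanilla PCA via SVD of the centered data matrix.
X = traj - traj.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)

cum = np.cumsum(explained)
n_comp = int(np.searchsorted(cum, 0.95)) + 1
print("variance explained:", np.round(explained, 3))
print("components for 95% of variance:", n_comp)
```

The cumulative explained-variance curve is the standard diagnostic for deciding how many components carry the signal.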

Prabhat, Mr; Zubarev, Dmitry; Lester, Jr., William A.

2010-12-22T23:59:59.000Z

260

Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator Physics

FLUKA is a general-purpose Monte Carlo code capable of handling all radiation components from thermal energies (for neutrons) or 1 keV (for all other particles) to cosmic ray energies and can be applied in many different fields. Presently the code is maintained on Linux. The validity of the physical models implemented in FLUKA has been benchmarked against a variety of experimental data over a wide energy range, from accelerator data to cosmic ray showers in the Earth's atmosphere. FLUKA is widely used for studies related both to basic research and to applications in particle accelerators, radiation protection and dosimetry, including the specific issue of radiation damage in space missions, radiobiology (including radiotherapy) and cosmic ray calculations. After a short description of the main features that make FLUKA valuable for these topics, the present paper summarizes some of the recent applications of the FLUKA Monte Carlo code in nuclear as well as high-energy physics. In particular, it addresses such topics as accelerator-related applications.

Battistoni, Giuseppe; /INFN, Milan /Milan U.; Broggi, Francesco; /INFN, Milan /Milan U.; Brugger, Markus; /CERN; Campanella, Mauro; /INFN, Milan /Milan U.; Carboni, Massimo; /INFN, Legnaro; Empl, Anton; /Houston U.; Fasso, Alberto; /SLAC; Gadioli, Ettore; /INFN, Milan /Milan U.; Cerutti, Francesco; /CERN; Ferrari, Alfredo; /CERN; Ferrari, Anna; /Frascati; Lantz, Matthias; /Nishina Ctr., RIKEN; Mairani, Andrea; /INFN, Milan /Milan U.; Margiotta, M.; /INFN, Bologna /Bologna U.; Morone, Christina; /Rome U.,Tor Vergata /INFN, Rome2; Muraro, Silvia; /INFN, Milan /Milan U.; Parodi, Katerina; /HITS, Heidelberg; Patera, Vincenzo; /Frascati; Pelliccioni, Maurizio; /Frascati; Pinsky, Lawrence; /Houston U.; Ranft, Johannes; /Siegen U. /CERN /Seibersdorf, Reaktorzentrum /INFN, Milan /Milan U. /SLAC /INFN, Legnaro /INFN, Bologna /Bologna U. /CERN /HITS, Heidelberg /CERN /CERN /Frascati /CERN /CERN /CERN /CERN /NASA, Houston

2012-04-17T23:59:59.000Z

261

A Monte Carlo Study of Multiplicity Fluctuations in Pb-Pb Collisions at LHC Energies

With large volumes of data available from the LHC, it has become possible to study the multiplicity distributions for the various possible behaviours of multiparticle production in relativistic heavy-ion collisions, where a system of dense and hot partons is created. In this context it is both important and interesting to check how well Monte Carlo generators can describe the behaviour of multiparticle production processes. One such possible behaviour is self-similarity in particle production, which can be studied with intermittency analyses and, further, with chaoticity/erraticity, in heavy-ion collisions. We analyse the behaviour of the erraticity index in central Pb-Pb collisions at a centre-of-mass energy of 2.76 TeV per nucleon using the AMPT Monte Carlo event generator, following the recent proposal by R.C. Hwa and C.B. Yang concerning local multiplicity fluctuations as a signature of critical hadronization in heavy-ion collisions. We report ...
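
Intermittency and erraticity analyses of this kind are built on normalized factorial moments $F_q$ of the bin multiplicities; as a hedged illustration (toy Poisson events, not AMPT output), $F_2$ can be computed as:

```python
import numpy as np

rng = np.random.default_rng(7)

def factorial_moment(counts, q):
    """Horizontally averaged normalized factorial moment F_q.

    counts: (events, bins) array of multiplicities per azimuthal bin.
    F_q = <n(n-1)...(n-q+1)> / <n>^q; purely statistical (Poisson)
    fluctuations give F_q = 1, while dynamical fluctuations such as
    intermittency make F_q grow as the binning is made finer.
    """
    n = counts.astype(float)
    fq = np.ones_like(n)
    for i in range(q):
        fq *= n - i          # n(n-1)...(n-q+1)
    return float(fq.mean() / n.mean() ** q)

# Toy event sample: 64 azimuthal bins with Poisson-distributed counts
# (an illustrative stand-in for generator output, not AMPT events).
events = rng.poisson(lam=5.0, size=(20_000, 64))
f2 = factorial_moment(events, 2)
print(f"F2 = {f2:.3f}")
```

A generator-level analysis would repeat this for increasing bin resolution and study how $F_q$ scales, which is what intermittency and erraticity indices summarize.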

Gupta, Ramni

2015-01-01T23:59:59.000Z

262

Nuclear data processing for energy release and deposition calculations in the MC21 Monte Carlo code

With the recent emphasis on performing multiphysics calculations using Monte Carlo transport codes such as MC21, the need for accurate estimates of the energy deposition, and the subsequent heating, has increased. However, the availability and quality of data necessary to enable accurate neutron and photon energy deposition calculations can be an issue. A comprehensive method for handling the nuclear data required for energy deposition calculations in MC21 has been developed using the NDEX nuclear data processing system and leveraging the capabilities of NJOY. The method provides a collection of data to the MC21 Monte Carlo code supporting the computation of a wide variety of energy release and deposition tallies while also allowing calculations with different levels of fidelity to be performed. Detailed discussions on the usage of the various components of the energy release data are provided to demonstrate novel methods in borrowing photon production data, correcting for negative energy release quantities, and adjusting Q values when necessary to preserve energy balance. Since energy deposition within a reactor is a result of both neutron and photon interactions with materials, a discussion on the photon energy deposition data processing is also provided. (authors)

Trumbull, T. H. [Knolls Atomic Power Laboratory, PO Box 1072, Schenectady, NY 12301 (United States)

2013-07-01T23:59:59.000Z

263

Massively parallel Monte Carlo for many-particle simulations on GPUs

Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
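
A serial hard-disk Metropolis run of the kind used here for validation can be sketched as follows; all parameters (box size, diameter, step size) are illustrative, and a hard-disk trial move is simply accepted iff it creates no overlap, which trivially satisfies detailed balance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal serial hard-disk Metropolis Monte Carlo in a periodic box -- the
# kind of high-precision serial reference the GPU checkerboard method is
# validated against.  All parameters are illustrative, not the paper's.
L_box, sigma, n_side = 10.0, 0.5, 8      # box length, disk diameter, 8x8 start
pos = (np.stack(np.meshgrid(np.arange(n_side), np.arange(n_side)), -1)
       .reshape(-1, 2) + 0.5) * (L_box / n_side)
N = len(pos)

def overlaps(i, trial):
    """True if disk i placed at `trial` would overlap any other disk (PBC)."""
    d = pos - trial
    d -= L_box * np.round(d / L_box)     # minimum-image convention
    r2 = np.einsum('ij,ij->i', d, d)
    r2[i] = np.inf                       # ignore self
    return bool(np.any(r2 < sigma**2))

sweeps, accepted = 200, 0
for _ in range(sweeps * N):
    i = int(rng.integers(N))
    trial = (pos[i] + rng.uniform(-0.2, 0.2, 2)) % L_box
    if not overlaps(i, trial):           # hard disks: accept iff no overlap
        pos[i] = trial
        accepted += 1

rate = accepted / (sweeps * N)
print(f"acceptance rate: {rate:.2f}")
```

The parallel method in the paper decomposes the box into a checkerboard of cells and updates non-adjacent cells concurrently, with moves confined to their cell so that concurrent updates cannot conflict.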

Anderson, Joshua A.; Jankowski, Eric [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)] [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Grubb, Thomas L. [Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)] [Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Engel, Michael [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)] [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Glotzer, Sharon C., E-mail: sglotzer@umich.edu [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)

2013-12-01T23:59:59.000Z

264

We compute the non-zero temperature conductivity of conserved flavor currents in conformal field theories (CFTs) in 2+1 spacetime dimensions. At frequencies much greater than the temperature, $\\hbar\\omega \\gg k_B T$, the $\\omega$ dependence can be computed from the operator product expansion (OPE) between the currents and operators which acquire a non-zero expectation value at T > 0. Such results are found to be in excellent agreement with quantum Monte Carlo studies of the O(2) Wilson-Fisher CFT. Results for the conductivity and other observables are also obtained in vector 1/N expansions. We match these large $\\omega$ results to the corresponding correlators of holographic representations of the CFT: the holographic approach then allows us to extrapolate to small $\\hbar \\omega/(k_B T)$. Other holographic studies implicitly only used the OPE between the currents and the energy-momentum tensor, and this yields the correct leading large $\\omega$ behavior for a large class of CFTs. However, for the Wilson-Fisher CFT a relevant "thermal" operator must also be considered, and then consistency with the Monte Carlo results is obtained without a previously needed ad hoc rescaling of the T value. We also establish sum rules obeyed by the conductivity of a wide class of CFTs.

Emanuel Katz; Subir Sachdev; Erik S. Sorensen; William Witczak-Krempa

2014-09-12T23:59:59.000Z

265

Study of Monte Carlo approach to experimental uncertainty propagation with MSTW 2008 PDFs

We investigate the Monte Carlo approach to propagation of experimental uncertainties within the context of the established "MSTW 2008" global analysis of parton distribution functions (PDFs) of the proton at next-to-leading order in the strong coupling. We show that the Monte Carlo approach using replicas of the original data gives PDF uncertainties in good agreement with the usual Hessian approach using the standard Delta(chi^2) = 1 criterion, then we explore potential parameterisation bias by increasing the number of free parameters, concluding that any parameterisation bias is likely to be small, with the exception of the valence-quark distributions at low momentum fractions x. We motivate the need for a larger tolerance, Delta(chi^2) > 1, by making fits to restricted data sets and idealised consistent or inconsistent pseudodata. Instead of using data replicas, we alternatively produce PDF sets randomly distributed according to the covariance matrix of fit parameters including appropriate tolerance values, then we demonstrate a simpler method to produce an arbitrary number of random predictions on-the-fly from the existing eigenvector PDF sets. Finally, as a simple example application, we use Bayesian reweighting to study the effect of recent LHC data on the lepton charge asymmetry from W boson decays.
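
The replica-versus-Hessian comparison generalizes beyond PDF fits; the toy linear chi-square fit below (invented data and model, not MSTW's global analysis) shows the two experimental-error estimates agreeing:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy version of the replica-vs-Hessian comparison: fit y = a + b*x to
# noisy data, then propagate the experimental errors (i) from the
# least-squares parameter covariance ("Hessian" route) and (ii) by
# refitting Monte Carlo replicas of the data.  Numbers are illustrative.
x = np.linspace(0.0, 1.0, 20)
A = np.vstack([np.ones_like(x), x]).T
sigma = 0.1
y = 1.0 + 2.0 * x + rng.normal(0.0, sigma, x.size)

# Hessian route: parameter covariance of the chi^2 fit.
cov = np.linalg.inv(A.T @ A / sigma**2)
best = cov @ (A.T @ y / sigma**2)
x0 = np.array([1.0, 0.5])                 # predict y at x = 0.5
hessian_err = np.sqrt(x0 @ cov @ x0)

# Monte Carlo route: refit pseudodata replicas y_k = y + noise.
reps = np.array([
    np.linalg.lstsq(A, y + rng.normal(0.0, sigma, y.size), rcond=None)[0]
    for _ in range(2000)
])
mc_err = np.std(reps @ x0)

print(f"prediction at x=0.5: {best @ x0:.3f} "
      f"+/- {hessian_err:.4f} (Hessian) vs +/- {mc_err:.4f} (replicas)")
```

For a linear model with Gaussian errors and Delta(chi^2) = 1, the two routes agree up to replica sampling noise, which mirrors the paper's finding for the PDF fit.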

G. Watt; R. S. Thorne

2012-05-17T23:59:59.000Z

266

Monte Carlo approach for hadron azimuthal correlations in high energy proton and nuclear collisions

We use a Monte Carlo approach to study hadron azimuthal angular correlations in high energy proton-proton and central nucleus-nucleus collisions at the BNL Relativistic Heavy Ion Collider (RHIC) energies at mid-rapidity. We build a hadron event generator that incorporates the production of $2\\to 2$ and $2\\to 3$ parton processes and their evolution into hadron states. For nucleus-nucleus collisions we include the effect of parton energy loss in the Quark-Gluon Plasma using a modified fragmentation function approach. In the presence of the medium, for the case when three partons are produced in the hard scattering, we analyze the Monte Carlo sample in parton and hadron momentum bins to reconstruct the angular correlations. We characterize this sample by the number of partons that are able to hadronize by fragmentation within the selected bins. In the nuclear environment the model allows hadronization by fragmentation only for partons with momentum above a threshold $p_T^{{\\tiny{thresh}}}=2.4$ GeV. We argue that...

Ayala, Alejandro; Jalilian-Marian, Jamal; Magnin, J; Tejeda-Yeomans, Maria Elena

2012-01-01T23:59:59.000Z

267

Monte Carlo approach for hadron azimuthal correlations in high energy proton and nuclear collisions

We use a Monte Carlo approach to study hadron azimuthal angular correlations in high energy proton-proton and central nucleus-nucleus collisions at the BNL Relativistic Heavy Ion Collider (RHIC) energies at mid-rapidity. We build a hadron event generator that incorporates the production of $2\\to 2$ and $2\\to 3$ parton processes and their evolution into hadron states. For nucleus-nucleus collisions we include the effect of parton energy loss in the Quark-Gluon Plasma using a modified fragmentation function approach. In the presence of the medium, for the case when three partons are produced in the hard scattering, we analyze the Monte Carlo sample in parton and hadron momentum bins to reconstruct the angular correlations. We characterize this sample by the number of partons that are able to hadronize by fragmentation within the selected bins. In the nuclear environment the model allows hadronization by fragmentation only for partons with momentum above a threshold $p_T^{{\\tiny{thresh}}}=2.4$ GeV. We argue that one should treat properly the effect of those partons with momentum below the threshold, since their interaction with the medium may lead to showers of low momentum hadrons along the direction of motion of the original partons as the medium becomes diluted.

Alejandro Ayala; Isabel Dominguez; Jamal Jalilian-Marian; J. Magnin; Maria Elena Tejeda-Yeomans

2012-07-31T23:59:59.000Z

268

Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII

Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.

McKinney, Gregg W [Los Alamos National Laboratory

2012-07-17T23:59:59.000Z

269

Basic physical and chemical information needed for development of Monte Carlo codes

It is important to view track structure analysis as an application of a branch of theoretical physics (i.e., statistical physics and physical kinetics in the language of the Landau school). Monte Carlo methods and transport equation methods represent two major approaches. In either approach, it is of paramount importance to use as input the cross section data that best represent the elementary microscopic processes. Transport analysis based on unrealistic input data must be viewed with caution, because results can be misleading. Work toward establishing the cross section data, which demands a wide scope of knowledge and expertise, is being carried out through extensive international collaborations. In track structure analysis for radiation biology, the need for cross sections for the interactions of electrons with DNA and neighboring protein molecules seems to be especially urgent. Finally, it is important to interpret results of Monte Carlo calculations fully and adequately. To this end, workers should document input data as thoroughly as possible and report their results in detail in many ways. Workers in analytic transport theory are then likely to contribute to the interpretation of the results.

Inokuti, M.

1993-08-01T23:59:59.000Z

270

MONTE CARLO SIMULATION MODEL OF ENERGETIC PROTON TRANSPORT THROUGH SELF-GENERATED ALFVEN WAVES

A new Monte Carlo simulation model for the transport of energetic protons through self-generated Alfven waves is presented. The key point of the model is that, unlike the previous ones, it employs the full form (i.e., includes the dependence on the pitch-angle cosine) of the resonance condition governing the scattering of particles off Alfven waves, the process that approximates the wave-particle interactions in the framework of quasilinear theory. This allows us to model the wave-particle interactions in weak turbulence more adequately, in particular, to implement anisotropic particle scattering instead of isotropic scattering, which the previous Monte Carlo models were based on. The developed model is applied to study the transport of flare-accelerated protons in an open magnetic flux tube. Simulation results for the transport of monoenergetic protons through the spectrum of Alfven waves reveal that the anisotropic scattering leads to spatially more distributed wave growth than isotropic scattering. This result can have important implications for diffusive shock acceleration, e.g., affect the scattering mean free path of the accelerated particles in, and the size of, the foreshock region.

Afanasiev, A.; Vainio, R., E-mail: alexandr.afanasiev@helsinki.fi [Department of Physics, University of Helsinki (Finland)

2013-08-15T23:59:59.000Z

271

Explicit temperature treatment in Monte Carlo neutron tracking routines - First results

This article discusses the preliminary implementation of the new explicit temperature treatment method in the development version of the Monte Carlo reactor physics code Serpent 2 and presents the first practical results calculated using the method. The explicit temperature treatment method, as introduced in [1], is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. The method is based on explicit treatment of the motion of target nuclei at collision sites and requires cross sections at 0 K temperature only, regardless of the number of temperatures in the problem geometry. The method includes a novel capability of modelling continuous temperature distributions. Test calculations are performed for two test cases, a PWR pin-cell and a HTGR system. The resulting k_eff and flux spectra are compared to a reference solution calculated using Serpent 1.1.16 with Doppler-broadening rejection correction [2]. The results are in very good agreement with the reference, and the increase in calculation time due to the new method is at an acceptable level, although not fully insignificant. On the basis of the current study, the explicit treatment method can be considered feasible for practical calculations. (authors)
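
The core of on-the-fly temperature treatment is sampling the target nucleus velocity from a Maxwell-Boltzmann distribution at each collision site and evaluating the 0 K cross section at the resulting relative energy. A minimal sketch with invented units and parameters (not Serpent's implementation):

```python
import numpy as np

rng = np.random.default_rng(5)

# Sample target velocities from Maxwell-Boltzmann at the collision site
# and form the neutron-target relative energy, at which the 0 K cross
# section would be evaluated.  Units: m_n = 1 and E = v^2/2, energies in
# eV; A is the target mass in neutron masses.  Numbers are illustrative.
kT = 0.0253          # eV (room temperature)
A = 16.0             # e.g. an oxygen-like target
E_n = 0.1            # incident neutron energy (eV)
N = 200_000

v_n = np.array([np.sqrt(2 * E_n), 0.0, 0.0])       # neutron velocity
v_t = rng.normal(0.0, np.sqrt(kT / A), (N, 3))     # MB target velocities
E_rel = 0.5 * np.sum((v_n - v_t) ** 2, axis=1)     # relative energy

print(f"mean relative energy: {E_rel.mean():.4f} eV (incident {E_n} eV)")
print(f"spread (1 sigma):     {E_rel.std():.4f} eV")
```

The spread in relative energy is what Doppler broadening encodes in pre-broadened libraries; sampling it explicitly per collision is what removes the need for cross sections at every material temperature.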

Tuomas, V.; Jaakko, L. [VTT Technical Research Centre of Finland, P.O. Box 1000, FI-02044 VTT (Finland)

2012-07-01T23:59:59.000Z

272

CPMC-Lab: A Matlab package for Constrained Path Monte Carlo calculations

Abstract We describe CPMC-Lab, a Matlab program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in Matlab with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps. Program summary Program title: CPMC-Lab Catalogue identifier: AEUD_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEUD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2850 No. of bytes in distributed program, including test data, etc.: 24 838 Distribution format: tar.gz Programming language: Matlab.
Computer: The non-interactive scripts can be executed on any computer capable of running Matlab with all Matlab versions. The GUI requires Matlab R2010b (version 7.11) and above. Operating system: Windows, Mac OS X, Linux. RAM: Variable. Classification: 7.3. External routines: Matlab Nature of problem: Obtaining ground state energy of a repulsive Hubbard model in a supercell in any number of dimensions. Solution method: In the Constrained Path Monte Carlo (CPMC) method, the ground state of a many-fermion system is projected from an initial trial wave function by a branching random walk in an overcomplete basis of Slater determinants. Constraining the determinants according to a trial wave function $|\\Psi_T\\rangle$ removes the exponential decay of the signal-to-noise ratio characteristic of the sign problem. The method is exact if $|\\Psi_T\\rangle$ is exact. Unusual features: Direct and interactive environment with a Graphical User Interface for beginners to learn and study the Constrained Path Monte Carlo method with minimal overhead for setup. Running time: The sample provided takes a few seconds to run, the batch sample a few minutes.

Huy Nguyen; Hao Shi; Jie Xu; Shiwei Zhang

2014-01-01T23:59:59.000Z

273

A novel approach in electron beam radiation therapy of lips carcinoma: A Monte Carlo study

Purpose: Squamous cell carcinoma (SCC) is commonly treated by electron beam radiotherapy (EBRT) followed by a boost via brachytherapy. Considering the limitations associated with brachytherapy, in this study, a novel boosting technique in EBRT of lip carcinoma using an internal shield as an internal dose enhancer tool (IDET) was evaluated. An IDET refers to a partially covered internal shield located behind the lip. It was intended to show that while the backscattered electrons are absorbed in the portion covered with a low atomic number material, they will enhance the target dose in the uncovered area. Methods: Monte-Carlo models of 6 and 8 MeV electron beams were developed using the BEAMnrc code and were validated against experimental measurements. Using the developed models, dose distributions in a lip phantom were calculated and the effect of an IDET on target dose enhancement was evaluated. Typical lip thicknesses of 1.5 and 2.0 cm were considered. A 5 × 5 cm^2 piece of lead covered by 0.5 cm of polystyrene was used as an internal shield, while a 4 × 4 cm^2 uncovered area of the shield was used as the dose enhancer. Results: Using the IDET, the maximum dose enhancement as a percentage of dose at d_max of the unshielded field was 157.6% and 136.1% for the 6 and 8 MeV beams, respectively. The best outcome was achieved for a lip thickness of 1.5 cm and a target thickness of less than 0.8 cm. For lateral dose coverage of the planning target volume, the 80% isodose curve at the lip-IDET interface showed a 1.2 cm expansion compared to the unshielded field. Conclusions: This study showed that a boost concomitant with EBRT of the lip is possible by modifying an internal shield into an IDET. This boosting method is especially applicable to cases in which brachytherapy faces limitations, such as small lip thicknesses and targets located at the buccal surface of the lip.

Shokrani, Parvaneh [Medical Physics and Medical Engineering Department, School of Medicine, Isfahan University of Medical Sciences, Isfahan 81746-73461 (Iran, Islamic Republic of); Baradaran-Ghahfarokhi, Milad [Medical Physics and Medical Engineering Department, School of Medicine, Isfahan University of Medical Sciences, Isfahan 81746-73461, Iran and Medical Radiation Engineering Department, Faculty of Advanced Sciences and Technologies, Isfahan University, Isfahan 81746-73441 (Iran, Islamic Republic of); Zadeh, Maryam Khorami [Medical Physics Department, School of Medicine, Ahwaz Jundishapour University of Medical Sciences, Ahwaz 15794-61357 (Iran, Islamic Republic of)

2013-04-15T23:59:59.000Z

274

To evaluate the bootstrap current in nonaxisymmetric toroidal plasmas quantitatively, a δf Monte Carlo method is incorporated into the moment approach. From the drift-kinetic equation with the pitch-angle scattering collision operator, the bootstrap current and neoclassical conductivity coefficients are calculated. The neoclassical viscosity is evaluated from these two monoenergetic transport coefficients. Numerical results obtained by the δf Monte Carlo method for a model heliotron are in reasonable agreement with asymptotic formulae and with the results obtained by the variational principle.

Matsuyama, A. [Graduate School of Energy Science, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Isaev, M. Yu. [Nuclear Fusion Institute, RRC Kurchatov Institute, 123182 Moscow (Russian Federation); Watanabe, K. Y.; Suzuki, Y.; Nakajima, N. [National Institute for Fusion Science, Toki, Gifu 509-5292 (Japan); Hanatani, K. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Cooper, W. A.; Tran, T. M. [Centre de Recherches en Physique des Plasmas, Association Euratom-Suisse, Ecole Polytechnique Federale de Lausanne, CH1015 Lausanne (Switzerland)

2009-05-15T23:59:59.000Z

275

Surface tension of an electrolyte-air interface: a Monte Carlo study

J. Phys.: Condens. Matter 24 (2012) 284115 (5pp), doi:10.1088/0953-8984/24/28/284115. A method for calculating the surface tension of an electrolyte-air interface using Monte Carlo (MC) simulations.

Levin, Yan

276

Overview of Geometry Representation in Monte Carlo Codes Ronald P. Kensek

National Nuclear Security Administration (NNSA)

Overview of Geometry Representation in Monte Carlo Codes. Ronald P. Kensek, Brian C. Franke, Thomas W. Laub, Leonard J. Lorence, Matthew R. Martin (Sandia National Laboratories); Steve Warren (Kansas State University). Joint Russian-American Five-Laboratory Conference on Computational Mathematics/Physics, Vienna, Austria, June 19-23, 2005. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States National Nuclear Security Administration and the Department of Energy under contract DE-AC04-94AL85000. Problem setup: engineering designs, CG vs. CAD. Combinatorial Geometry (CG): engineering designs are not typically created in this format; no general automatic translation from CAD to CG yet exists; problem setup is difficult: Creation

277

Coil-bridge transition and Monte Carlo simulation of a stretched polymer

The structure of a system consisting of a grafted self-avoiding polymer chain, attracted by a short-ranged force to the surface layer of a flat wall a distance away, is investigated. A first-order transition is determined between the coil state at a low attraction energy and the bridge state at a high attraction energy. The transition properties of the system are obtained by a Monte Carlo simulation, which uses the inverse density of states as the transition weight and is reweighted back to a canonical ensemble. The determination of the density of states follows a revised Wang-Landau procedure in which the center-of-mass distance from the grafted site is used as the variable. Scaling arguments are also given for the observed numerical results.
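
Wang-Landau sampling of a density of states, the machinery underlying the procedure above, can be demonstrated on a toy system with a known answer (two dice, "energy" = sum of faces); this toy replaces the polymer system entirely:

```python
import numpy as np

rng = np.random.default_rng(11)

# Wang-Landau on a toy system: two dice, "energy" E = sum of faces.
# The exact density of states g(E) for E = 2..12 is 1,2,3,4,5,6,5,4,3,2,1.
ln_g = np.zeros(11)          # running estimate of ln g(E), E = 2..12
hist = np.zeros(11)          # visit histogram for the flatness check
state = np.array([1, 1])
f = 1.0                      # modification factor, reduced as g converges

while f > 1e-4:
    for _ in range(5000):
        trial = state.copy()
        trial[rng.integers(2)] = rng.integers(1, 7)   # reroll one die
        e_old = int(state.sum()) - 2
        e_new = int(trial.sum()) - 2
        # Accept with min(1, g(E_old)/g(E_new)): drives a flat E histogram.
        if rng.random() < np.exp(ln_g[e_old] - ln_g[e_new]):
            state = trial
        e = int(state.sum()) - 2
        ln_g[e] += f
        hist[e] += 1
    if hist.min() > 0.8 * hist.mean():                # crude flatness test
        f /= 2.0
        hist[:] = 0

g = np.exp(ln_g - ln_g[0])   # normalize so g(E=2) = 1
print(np.round(g, 1))
```

In the polymer study the binning variable is the center-of-mass distance rather than an energy, and the recovered density of states supplies the inverse-density-of-states transition weights that are reweighted back to the canonical ensemble.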

Jeff Z. Y. Chen

2011-10-25T23:59:59.000Z
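The Wang-Landau procedure mentioned in the abstract above can be illustrated on a much smaller problem. The sketch below is a hypothetical toy, not the polymer model of the entry: it runs the basic flat-histogram Wang-Landau iteration for ten independent up/down spins with energy E = number of up spins, so the exact density of states is the binomial coefficient C(10, E).

```python
# Wang-Landau estimate of the density of states g(E) for a toy system of
# 10 independent spins, E = number of up spins, exact g(E) = C(10, E).
# Minimal illustration of the flat-histogram update only.
import math
import random

random.seed(1)
N = 10                      # number of spins
spins = [0] * N             # state: 0 = down, 1 = up
E = 0                       # energy = number of up spins
ln_g = [0.0] * (N + 1)      # running estimate of ln g(E)
ln_f = 1.0                  # modification factor, reduced toward 0

while ln_f > 1e-5:
    hist = [0] * (N + 1)
    flat = False
    while not flat:
        for _ in range(1000):
            i = random.randrange(N)
            E_new = E + (1 - 2 * spins[i])      # a flip changes E by +-1
            # accept the move with probability min(1, g(E)/g(E_new))
            if math.log(random.random()) < ln_g[E] - ln_g[E_new]:
                spins[i] = 1 - spins[i]
                E = E_new
            ln_g[E] += ln_f                     # update the visited bin
            hist[E] += 1
        # flatness criterion: every bin above 80% of the mean count
        flat = min(hist) > 0.8 * (sum(hist) / len(hist))
    ln_f /= 2.0                                 # refine the estimate

ln_g = [x - ln_g[0] for x in ln_g]   # normalise so that g(0) = 1 (exact)
```

After normalization, ln_g[E] should track ln C(10, E), e.g. ln_g[5] near ln 252; the reweighting back to a canonical ensemble described in the abstract then only needs these ln g values.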

278

Kinetics of electron-positron pair plasmas using an adaptive Monte Carlo method

A new algorithm for implementing the adaptive Monte Carlo method is given. It is used to solve the relativistic Boltzmann equations that describe the time evolution of a nonequilibrium electron-positron pair plasma containing high-energy photons and pairs. The collision kernels for the photons as well as pairs are constructed for Compton scattering, pair annihilation and creation, bremsstrahlung, and Bhabha and Møller scattering. For a homogeneous and isotropic plasma, analytical equilibrium solutions are obtained in terms of the initial conditions. For two non-equilibrium models, the time evolution of the photon and pair spectra is determined using the new method. The asymptotic numerical solutions are found to be in good agreement with the analytical equilibrium states. Astrophysical applications of this scheme are discussed.

Ravi P. Pilla; Jacob Shaham

1997-02-21T23:59:59.000Z

279

Reversible jump Markov chain Monte Carlo computation and Bayesian model determination

Markov chain Monte Carlo methods for Bayesian computation have until recently been restricted to problems where the joint distribution of all variables has a density with respect to some fixed standard underlying measure. They have therefore not been available for application to Bayesian model determination, where the dimensionality of the parameter vector is typically not fixed. This article proposes a new framework for the construction of reversible Markov chain samplers that jump between parameter subspaces of differing dimensionality, which is flexible and entirely constructive. It should therefore have wide applicability in model determination problems. The methodology is illustrated with applications to multiple change-point analysis in one and two dimensions, and to a Bayesian comparison of binomial experiments.

Peter J. Green

1995-01-01T23:59:59.000Z
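The dimension-jumping construction summarized above can be sketched in a few lines. The toy target below is invented for illustration and is far simpler than the paper's change-point applications: two nested Gaussian models with prior odds 1:2, a "birth" move that draws the extra coordinate u ~ N(0,1) through an identity mapping (so the Jacobian is 1), and the matching "death" move.

```python
# Minimal reversible-jump sampler on a toy trans-dimensional target:
# model k=1 has one N(0,1) parameter, model k=2 has two i.i.d. N(0,1)
# parameters, with prior probabilities P(k=1)=1/3 and P(k=2)=2/3.
import math
import random

random.seed(0)

def ln_phi(x):                      # log of the standard normal density
    return -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

prior = {1: 1.0 / 3.0, 2: 2.0 / 3.0}
k, theta = 1, [0.0]
visits = {1: 0, 2: 0}

for _ in range(200000):
    if k == 1:
        # birth move: (theta,) -> (theta, u); identity map, Jacobian = 1
        u = random.gauss(0.0, 1.0)
        # target ratio over proposal density; phi(u) appears on both sides
        ln_a = (math.log(prior[2]) + ln_phi(theta[0]) + ln_phi(u)) \
             - (math.log(prior[1]) + ln_phi(theta[0]) + ln_phi(u))
        if math.log(random.random()) < ln_a:
            k, theta = 2, [theta[0], u]
    else:
        # death move: drop the second coordinate; the reverse birth
        # would have proposed it with density phi(u)
        u = theta[1]
        ln_a = (math.log(prior[1]) + ln_phi(theta[0]) + ln_phi(u)) \
             - (math.log(prior[2]) + ln_phi(theta[0]) + ln_phi(u))
        if math.log(random.random()) < ln_a:
            k, theta = 1, [theta[0]]
    visits[k] += 1

frac_k2 = visits[2] / (visits[1] + visits[2])   # should approach 2/3
```

Because everything else cancels, the acceptance ratio reduces to the prior odds here; the point of the sketch is the bookkeeping (proposal density for the new coordinate, Jacobian of the mapping) that a real application fills with data likelihoods.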

280

Monte Carlo simulation of the electrical properties of electrolytes adsorbed in charged slit-systems

We study the adsorption of primitive model electrolytes into a layered slit system using grand canonical Monte Carlo simulations. The slit system contains a series of charged membranes. The ions are forbidden from the membranes, while they are allowed to be adsorbed into the slits between the membranes. We focus on the electrical properties of the slit system. We show concentration, charge, electric field, and electrical potential profiles. We show that the potential difference between the slit system and the bulk phase is mainly due to the double layers formed at the boundaries of the slit system, but polarization of external slits also contributes to the potential drop. We demonstrate that the electrical work necessary to bring an ion into the slit system can be studied only if we simulate the slit together with the bulk phases in one single simulation cell.

R. Kovács; M. Valiskó; D. Boda

2012-07-13T23:59:59.000Z
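The grand canonical Monte Carlo machinery behind the simulations above is easiest to see for a non-interacting gas. This is a toy, not the primitive-model electrolyte of the paper: the paper's ion-ion energies would multiply both acceptance ratios by a Boltzmann factor, and the activity-times-volume value below is a placeholder.

```python
# Grand canonical Monte Carlo for an ideal gas in a box, showing only
# the insertion/deletion acceptance rules.  For an ideal gas the
# stationary distribution of N is Poisson with mean z*V.
import random

random.seed(42)
z_V = 50.0          # activity z times volume V (placeholder value)
N = 0               # current particle number
total, count = 0, 0

for _ in range(200000):
    if random.random() < 0.5:                 # attempt an insertion
        # accept with probability min(1, z V / (N + 1))
        if random.random() < z_V / (N + 1):
            N += 1
    elif N > 0:                               # attempt a deletion
        # accept with probability min(1, N / (z V))
        if random.random() < N / z_V:
            N -= 1
    total += N
    count += 1

mean_N = total / count    # should be close to z_V = 50
```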

281

This work illustrates a methodology based on photon interrogation and coincidence counting for determining the characteristics of fissile material. The feasibility of the proposed methods was demonstrated using a Monte Carlo code system to simulate the full statistics of the neutron and photon field generated by the photon interrogation of fissile and non-fissile materials. Time correlation functions between detectors were simulated for photon beam-on and photon beam-off operation. In the latter case, the correlation signal is obtained via delayed neutrons from photofission, which induce further fission chains in the nuclear material. An analysis methodology was demonstrated based on features selected from the simulated correlation functions and on the use of artificial neural networks. We show that the methodology can reliably differentiate between highly enriched uranium and plutonium. Furthermore, the mass of the material can be determined with a relative error of about 12%. Keywords: MCNP, MCNP-PoliMi, Artificial neural network, Correlation measurement, Photofission

Pozzi, Sara A [ORNL]; Downar, Thomas J [ORNL]; Padovani, Enrico [Nuclear Engineering Department, Politecnico di Milano, Milan, Italy]; Clarke, Shaun D [ORNL]

2006-01-01T23:59:59.000Z

282

Validation of GEANT4 Monte Carlo Models with a Highly Granular Scintillator-Steel Hadron Calorimeter

Calorimeters with a high granularity are a fundamental requirement of the Particle Flow paradigm. This paper focuses on the prototype of a hadron calorimeter with analog readout, consisting of thirty-eight scintillator layers alternating with steel absorber planes. The scintillator plates are finely segmented into tiles individually read out via Silicon Photomultipliers. The presented results are based on data collected with pion beams in the energy range from 8GeV to 100GeV. The fine segmentation of the sensitive layers and the high sampling frequency allow for an excellent reconstruction of the spatial development of hadronic showers. A comparison between data and Monte Carlo simulations is presented, concerning both the longitudinal and lateral development of hadronic showers and the global response of the calorimeter. The performance of several GEANT4 physics lists with respect to these observables is evaluated.

C. Adloff; J. Blaha; J. -J. Blaising; C. Drancourt; A. Espargilière; R. Gaglione; N. Geffroy; Y. Karyotakis; J. Prast; G. Vouters; K. Francis; J. Repond; J. Schlereth; J. Smith; L. Xia; E. Baldolemar; J. Li; S. T. Park; M. Sosebee; A. P. White; J. Yu; T. Buanes; G. Eigen; Y. Mikami; N. K. Watson; G. Mavromanolakis; M. A. Thomson; D. R. Ward; W. Yan; D. Benchekroun; A. Hoummada; Y. Khoulaki; J. Apostolakis; A. Dotti; G. Folger; V. Ivantchenko; V. Uzhinskiy; M. Benyamna; C. Cârloganu; F. Fehr; P. Gay; S. Manen; L. Royer; G. C. Blazey; A. Dyshkant; J. G. R. Lima; V. Zutshi; J. -Y. Hostachy; L. Morin; U. Cornett; D. David; G. Falley; K. Gadow; P. Göttlicher; C. Günter; B. Hermberg; S. Karstensen; F. Krivan; A. -I. Lucaci-Timoce; S. Lu; B. Lutz; S. Morozov; V. Morgunov; M. Reinecke; F. Sefkow; P. Smirnov; M. Terwort; A. Vargas-Trevino; N. Feege; E. Garutti; I. Marchesinik; M. Ramilli; P. Eckert; T. Harion; A. Kaplan; H. -Ch. Schultz-Coulon; W. Shen; R. Stamen; B. Bilki; E. Norbeck; Y. Onel; G. W. Wilson; K. Kawagoe; P. D. Dauncey; A. -M. Magnan; V. Bartsch; M. Wing; F. Salvatore; E. Calvo Alamillo; M. -C. Fouz; J. Puerta-Pelayo; B. Bobchenko; M. Chadeeva; M. Danilov; A. Epifantsev; O. Markin; R. Mizuk; E. Novikov; V. Popov; V. Rusinov; E. Tarkovsky; N. Kirikova; V. Kozlov; P. Smirnov; Y. Soloviev; P. Buzhan; A. Ilyin; V. Kantserov; V. Kaplin; A. Karakash; E. Popova; V. Tikhomirov; C. Kiesling; K. Seidel; F. Simon; C. Soldner; M. Szalay; M. Tesar; L. Weuste; M. S. Amjad; J. Bonis; S. Callier; S. Conforti di Lorenzo; P. Cornebise; Ph. Doublet; F. Dulucq; J. Fleury; T. Frisson; N. van der Kolk; H. Li; G. Martin-Chassard; F. Richard; Ch. de la Taille; R. Pöschl; L. Raux; J. Rouëné; N. Seguin-Moreau; M. Anduze; V. Boudry; J-C. Brient; D. Jeans; P. Mora de Freitas; G. Musat; M. Reinhard; M. Ruan; H. Videau; B. Bulanek; J. Zacek; J. Cvach; P. Gallus; M. Havranek; M. Janata; J. Kvasnicka; D. Lednicky; M. Marcisovsky; I. Polak; J. Popule; L. Tomasek; M. Tomasek; P. Ruzicka; P. Sicho; J. Smolik; V. Vrba; J. Zalesak; B. Belhorma; H. Ghazlane; T. Takeshita; S. Uozumi; M. Götze; O. Hartbrich; J. Sauer; S. Weber; C. Zeitnitz

2014-06-15T23:59:59.000Z

283

Uncertainties associated with the use of the KENO Monte Carlo criticality codes

The KENO multi-group Monte Carlo criticality codes have earned the reputation of being efficient, user friendly tools especially suited for the analysis of situations commonly encountered in the storage and transportation of fissile materials. Throughout their twenty years of service, a continuing effort has been made to maintain and improve these codes to meet the needs of the nuclear criticality safety community. Foremost among these needs is the knowledge of how to utilize the results safely and effectively. Therefore it is important that code users be aware of uncertainties that may affect their results. These uncertainties originate from approximations in the problem data, methods used to process cross sections, and assumptions, limitations and approximations within the criticality computer code itself. 6 refs., 8 figs., 1 tab.

Landers, N.F.; Petrie, L.M. (Oak Ridge National Lab., TN (USA))

1989-01-01T23:59:59.000Z

284

Monte-Carlo study of quasiparticle dispersion relation in monolayer graphene

The density of electronic one-particle states in monolayer graphene is studied by performing the Hybrid Monte-Carlo simulations of the tight-binding model for electrons on the pi orbitals of carbon atoms which make up the graphene lattice. Density of states is approximated as a derivative of the number of particles over the chemical potential at sufficiently small temperature. Simulations are performed in the partially quenched approximation, in which virtual particles and holes have zero chemical potential. It is found that the Van Hove singularity becomes much sharper than in the free tight-binding model. Simulation results also suggest that the Fermi velocity increases with interaction strength up to the transition to the phase with spontaneously broken chiral symmetry.

P. V. Buividovich

2013-01-07T23:59:59.000Z

285

Monte Carlo Study on Distortion of the Space-Dimension in COBE Monopole Data

A concise explanation of studies on distortion of the space-time dimension is first briefly introduced. Second, we obtain the limits (i.e., bounded values) of the dimensionless chemical potential $\mu$, the Sunyaev-Zeldovich (SZ) effect $y$, and the distortion of the space-dimension $\varepsilon$ by Monte Carlo (MC) analysis of the parameter set ($T$, $d=3+\varepsilon$, $\mu$, and $y$) in cosmic microwave data, assuming that the SZ effect is positive ($y>0$). In this analysis, the magnitude of the space-dimension $d$ with distortion $\varepsilon$ is defined by $d=3+\varepsilon$. The limits of $\mu$ and $y$ are determined as $|\mu| |y|$. The estimated limit of $|y| < 5\times 10^{-6}$ appears to be related to re-ionization processes occurring at redshift $z_{ri}\sim 10$. We also present data analysis assuming a relativistic SZ effect.

Minoru Biyajima; Takuya Mizoguchi

2013-05-31T23:59:59.000Z

286

2 × 2 commensurate-incommensurate transition in Ising models: Monte Carlo simulation

Science Journals Connector (OSTI)

Phase diagrams of Ising models with antiferromagnetic nearest-neighbor (NN) and next-nearest-neighbor (NNN) interactions are obtained by Monte Carlo simulations. For the triangular lattice a paramagnetic (P)-2×2 commensurate (C) phase transition is found, which is second order when the NN interaction is small. The exponents are consistent with the ones of the four-state Potts model. For large NNN interactions the transition becomes first order. For three-dimensional stacking of triangular layers an incommensurate (I) phase is found in addition. The P-C and I-C transitions are of first order whereas the P-I transition seems to be of second order. The model is used to interpret the P-I-C transitions in β-eucryptite.

Y. Saito

1981-12-01T23:59:59.000Z

287

Monte Carlo study of living polymers with the bond-fluctuation method

Science Journals Connector (OSTI)

The highly efficient bond-fluctuation method for Monte Carlo simulations of both static and dynamic properties of polymers is applied to a system of living polymers. Parallel to stochastic movements of monomers, which result in Rouse dynamics of the macromolecules, the polymer chains break, or associate at chain ends with other chains and single monomers, in the process of equilibrium polymerization. We study the changes in equilibrium properties, such as molecular-weight distribution, average chain length, and radius of gyration, and specific heat with varying density and temperature of the system. The results of our numeric experiments indicate a very good agreement with the recently suggested description in terms of the mean-field approximation. The coincidence of the specific heat maximum position at kBT=V/4 in both theory and simulation suggests the use of calorimetric measurements for the determination of the scission-recombination energy V in real experiments.

Yannick Rouault and Andrey Milchev

1995-06-01T23:59:59.000Z

288

Dynamical Monte Carlo study of equilibrium polymers: Effects of high density and ring formation

Science Journals Connector (OSTI)

An off-lattice Monte Carlo algorithm for solutions of equilibrium polymers (EPs) is proposed. At low and moderate densities this is shown to reproduce faithfully the (static) properties found recently for flexible linear EPs using a lattice model. The molecular weight distribution (MWD) is well described in the dilute limit by a Schultz-Zimm distribution and becomes purely exponential in the semidilute limit. Additionally, very concentrated molten systems are studied. The MWD remains a pure exponential in contrast to recent claims. The mean chain mass is found to increase faster with density than in the semidilute regime due to additional entropic interactions generated by the dense packing of spheres. We also consider systems in which the formation of rings is allowed so that both the linear chains and the rings compete for the monomers. In agreement with earlier predictions the MWD of the rings reveals a strong singularity whereas the MWD of the coexisting linear chains remains essentially unaffected.

A. Milchev; J. P. Wittmer; D. P. Landau

2000-03-01T23:59:59.000Z

289

Simulation of Cone Beam CT System Based on Monte Carlo Method

Adaptive Radiation Therapy (ART) was developed from Image-guided Radiation Therapy (IGRT) and is the trend in photon radiation therapy. To make better use of Cone Beam CT (CBCT) images for ART, a CBCT system model was established with a Monte Carlo program and validated against measurement. The BEAMnrc program was used to model the kV x-ray tube. Both ISOURCE-13 and ISOURCE-24 were chosen to simulate the path of beam particles. The measured Percentage Depth Dose (PDD) and lateral dose profiles at 1 cm water depth were compared with the dose calculated by the DOSXYZnrc program. The calculated PDD agreed to better than 1% within a depth of 10 cm, and more than 85% of the points on the calculated lateral dose profiles were within 2%. The validated CBCT system model helps to improve CBCT image quality for dose verification in ART and to assess the concomitant dose risk of CBCT imaging.

Wang, Yu; Cao, Ruifen; Hu, Liqin; Li, Bingbing

2014-01-01T23:59:59.000Z

290

A Monte Carlo approach to forecasting the demand for offshore supply vessels

Science Journals Connector (OSTI)

In the near future, the demand for offshore supply vessels in Brazil will be driven by the activities induced by the bids carried out by the regulatory agency, ANP. The likely tendency is to increase the number of bids and consequently, the demand for vessels in the coming years. The proposed model consists of a Monte Carlo simulation of the offshore oil exploration and production projects. The model considers some parameters that aim at capturing the effect of the operators patterns, water depth, duration of seismic research and exploration and drilling work, number of wells, geographic location and geological risk. An estimate is obtained for the additional offshore supply vessels demand, for the period of 2006-2008.

Floriano C.M. Pires Jr; Augusto R. Antoun

2012-01-01T23:59:59.000Z
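The simulation logic described above amounts to propagating project-level uncertainty into a demand distribution. A minimal sketch follows, with entirely hypothetical distributions and parameter values standing in for the paper's ANP-bid model (operator patterns, water depth, geological risk, etc. are all collapsed into two placeholder draws).

```python
# Monte Carlo demand forecast sketch: both the number of awarded
# projects per year and the vessels each project needs are uncertain,
# so total vessel demand is simulated rather than summed
# deterministically.  All distributions here are hypothetical.
import random

random.seed(9)

def yearly_demand():
    projects = random.randint(5, 15)    # awarded projects in the year
    # vessels required by each project
    return sum(random.randint(1, 4) for _ in range(projects))

runs = sorted(yearly_demand() for _ in range(50000))
mean = sum(runs) / len(runs)            # expected value is 10 * 2.5 = 25
p90 = runs[int(0.9 * len(runs))]        # 90th-percentile vessel demand
```

Reporting a high percentile such as p90 alongside the mean is the practical payoff: a fleet sized to the mean would be short of vessels in a large fraction of simulated years.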

291

Monte Carlo procedure for protein folding in lattice model. Conformational rigidity

A rigorous Monte Carlo method for protein-folding simulation on a lattice model is introduced. We show that a parameter, which can be seen as the rigidity of the conformations, has to be introduced in order to satisfy the detailed-balance condition. Its properties are discussed and its role during the folding process is elucidated. The method is applied to small chains on a two-dimensional lattice. A Bortz-Kalos-Lebowitz-type algorithm, which allows one to study the kinetics of the chains at very low temperature, is implemented within the presented method. We show that the coefficients of the Arrhenius law are in good agreement with the value of the main potential barrier of the system.

Olivier Collet

1999-07-19T23:59:59.000Z
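The Bortz-Kalos-Lebowitz idea referenced above is to pick an event in proportion to its rate and advance the clock by an exponential waiting time, so arbitrarily slow (low-temperature) kinetics cost no rejected moves. A sketch on a two-state toy, not the lattice-protein model of the entry, where the exact stationary occupancy of state 1 is k21/(k12+k21):

```python
# Rejection-free (Bortz-Kalos-Lebowitz) kinetic Monte Carlo for a
# two-state system with rates k12 (1 -> 2) and k21 (2 -> 1).
import math
import random

random.seed(7)
k12, k21 = 0.01, 0.03     # slow rates, as at low temperature
state, t = 1, 0.0
time_in_1 = 0.0

for _ in range(200000):
    rate = k12 if state == 1 else k21        # total escape rate
    dt = -math.log(random.random()) / rate   # exponential waiting time
    if state == 1:
        time_in_1 += dt
    t += dt
    state = 2 if state == 1 else 1           # only one event is possible

occ1 = time_in_1 / t       # should approach k21/(k12+k21) = 0.75
```

With several competing events, the only change is a rate-weighted selection among them before the same exponential time advance; note that occupancies must be time-weighted, not event-counted.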

292

Interfaces in partly compatible polymer mixtures: A Monte Carlo simulation approach

The structure of polymer coils near interfaces between coexisting phases of symmetrical polymer mixtures (AB) is discussed, as well as the structure of symmetric diblock copolymers of the same chain length N adsorbed at the interface. The problem is studied by Monte Carlo simulations of the bond fluctuation model on the simple cubic lattice, using massively parallel computers (CRAY T3D). While homopolymer coils in the strong segregation limit are oriented parallel to the interface, the diblocks form ``dumbbells'' oriented perpendicular to the interface. However, in the dilute case (``mushroom regime'' rather than ``brush regime''), the diblocks are only weakly stretched. Distribution functions for monomers at the chain ends and in the center of the polymer are obtained, and a comparison to the self consistent field theory is made.

K. Binder; M. Mueller; F. Schmid; A. Werner

1997-06-27T23:59:59.000Z

293

Reliable theoretical predictions of noncovalent interaction energies, which are important e.g. in drug-design and hydrogen-storage applications, belong to longstanding challenges of contemporary quantum chemistry. In this respect, the fixed-node diffusion Monte Carlo (FN-DMC) is a promising alternative to the commonly used ``gold standard'' coupled-cluster CCSD(T)/CBS method for its benchmark accuracy and favourable scaling, in contrast to other correlated wave function approaches. This work is focused on the analysis of protocols and possible tradeoffs for FN-DMC estimations of noncovalent interaction energies and proposes a significantly more efficient yet accurate computational protocol using simplified explicit correlation terms. Its performance is illustrated on a number of weakly bound complexes, including water dimer, benzene/hydrogen, T-shape benzene dimer and stacked adenine-thymine DNA base pair complex. The proposed protocol achieves excellent agreement ($\\sim$0.2 kcal/mol) with respect to the reli...

Dubecký, Matúš; Jurečka, Petr; Mitas, Lubos; Hobza, Pavel; Otyepka, Michal

2014-01-01T23:59:59.000Z

294

Shell-model Monte Carlo studies of fp-shell nuclei

Science Journals Connector (OSTI)

We study the gross properties of even-even and N=Z nuclei with A=48–64 using shell-model Monte Carlo methods. Our calculations account for all $0\hbar\omega$ configurations in the fp shell and employ the modified Kuo-Brown interaction KB3. We find good agreement with data for masses and total B(E2) strengths, the latter employing effective charges ep=1.35e and en=0.35e. The calculated total Gamow-Teller strengths agree consistently with the B(GT+) values deduced from (n,p) data if the shell-model results are renormalized by 0.64, as has already been established for sd-shell nuclei. The present calculations therefore suggest that this renormalization (i.e., gA=1 in the nuclear medium) is universal.

K. Langanke; D. J. Dean; P. B. Radha; Y. Alhassid; S. E. Koonin

1995-08-01T23:59:59.000Z

295

An Auxiliary-Field Quantum Monte Carlo Study of the Chromium Dimer

The chromium dimer (Cr2) presents an outstanding challenge for many-body electronic structure methods. Its complicated nature of binding, with a formal sextuple bond and an unusual potential energy curve, is emblematic of the competing tendencies and delicate balance found in many strongly correlated materials. We present a near-exact calculation of the potential energy curve (PEC) and ground state properties of Cr2, using the auxiliary-field quantum Monte Carlo (AFQMC) method. Unconstrained, exact AFQMC calculations are first carried out for a medium-sized but realistic basis set. Elimination of the remaining finite-basis errors and extrapolation to the complete basis set (CBS) limit is then achieved with a combination of phaseless and exact AFQMC calculations. Final results for the PEC and spectroscopic constants are in excellent agreement with experiment.

Purwanto, Wirawan; Krakauer, Henry

2014-01-01T23:59:59.000Z

296

MaGe - a Geant4-based Monte Carlo framework for low-background experiments

A Monte Carlo framework, MaGe, has been developed based on the Geant4 simulation toolkit. Its purpose is to simulate physics processes in low-energy and low-background radiation detectors, specifically for the Majorana and Gerda $^{76}$Ge neutrinoless double-beta decay experiments. This jointly-developed tool is also used to verify the simulation of physics processes relevant to other low-background experiments in Geant4. The MaGe framework contains simulations of prototype experiments and test stands, and is easily extended to incorporate new geometries and configurations while still using the same verified physics processes, tunings, and code framework. This reduces duplication of efforts and improves the robustness of and confidence in the simulation output.

Yuen-Dat Chan; Jason A. Detwiler; Reyco Henning; Victor M. Gehman; Rob A. Johnson; David V. Jordan; Kareem Kazkaz; Markus Knapp; Kevin Kroninger; Daniel Lenz; Jing Liu; Xiang Liu; Michael G. Marino; Akbar Mokhtarani; Luciano Pandola; Alexis G. Schubert; Claudia Tomei

2008-02-06T23:59:59.000Z

297

Monte Carlo simulation on the resistivity and magnetization in anisotropic layered structure

Science Journals Connector (OSTI)

An anisotropic layered model structure composed of line groups, as an approach to anisotropic bilayered manganites, is constructed based on the elementary interactions existing in the bilayered manganites. The anisotropic electronic transport and magnetic behaviors of the model structure are investigated using Monte Carlo simulation and the microscopic resistor-network scheme in the Ising model. The simulation reproduces qualitatively the main characteristic transport behaviors of bilayered manganites. The significant anisotropy in resistivity and ferromagnetic orderings along different orientations is observed, and the underlying physics is discussed in the framework of spatial correlation of the microscopic metallic resistor network. The simulated results are believed to cast some light on the understanding of the anomaly in the transport behaviors of bilayered manganites, which are gaining more and more importance.

X. Y. Yao; Sh. Dong; H. Zhu; H. Yu; J.-M. Liu

2005-01-01T23:59:59.000Z

298

Science Journals Connector (OSTI)

Shear stresses on a rough seabed under irregular waves plus current are calculated. Parameterized models valid for regular waves plus current have been used in Monte Carlo simulations, assuming the wave amplitudes to be Rayleigh-distributed. Numerical estimates of the probability distribution functions are presented. For waves only, the shear stress maxima follow a Weibull distribution, while for waves plus current, both the maximum and time-averaged shear stresses are well represented by a three-parameter Weibull distribution. The behaviour of the maximum shear stresses under a wide range of wave-current conditions has been investigated, and it appears that under certain conditions, the current has a significant influence on the maximum shear stresses. Results of comparison between predictions and measurements of the maximum bottom shear stresses from laboratory and field experiments are presented.

Lars Erik Holmedal; Dag Myrhaug; Håvard Rue

2000-01-01T23:59:59.000Z
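The wave-only case above can be mimicked with a toy parameterization: Rayleigh-distributed amplitudes pushed through a quadratic friction law give shear-stress maxima that are exactly Weibull distributed (here with shape 1, i.e. exponential). The coefficients rho, f_w, and omega below are placeholders, not values from the paper.

```python
# Monte Carlo sketch: Rayleigh wave amplitudes A through a quadratic
# friction law tau = 0.5 * rho * f_w * (A * omega)**2.  Since A**2 is
# exponentially distributed, so is tau, with mean rho*f_w*(omega*sigma)**2.
import math
import random

random.seed(3)
rho, f_w, omega = 1025.0, 0.02, 1.0   # hypothetical constants
sigma = 1.0                           # Rayleigh scale of the amplitudes

taus = []
for _ in range(100000):
    A = sigma * math.sqrt(-2.0 * math.log(random.random()))  # Rayleigh
    taus.append(0.5 * rho * f_w * (A * omega) ** 2)

# compare the empirical CDF with an exponential of the same mean
mean_tau = sum(taus) / len(taus)
taus.sort()
max_dev = max(abs((i + 1) / len(taus) - (1.0 - math.exp(-x / mean_tau)))
              for i, x in enumerate(taus))
```

The three-parameter Weibull fits reported in the paper for the combined wave-plus-current case generalize this shape-1 limit; the sketch only shows why a Weibull family is the natural candidate.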

299

Ab-initio molecular dynamics simulation of liquid water by Quantum Monte Carlo

Although liquid water is ubiquitous in chemical reactions, at the roots of life and of the climate on earth, the prediction of its properties by high-level ab initio molecular dynamics simulations still represents a formidable task for quantum chemistry. In this article we present a room-temperature simulation of liquid water based on the potential energy surface obtained by a many-body wave function through quantum Monte Carlo (QMC) methods. The simulated properties are in excellent agreement with recent neutron scattering and X-ray experiments, particularly concerning the position of the oxygen-oxygen peak in the radial distribution function, at variance with previous Density Functional Theory attempts. Given the excellent performance of QMC on large-scale supercomputers, this work opens new perspectives for predictive and reliable ab-initio simulations of complex chemical systems.

Zen, Andrea; Mazzola, Guglielmo; Guidoni, Leonardo; Sorella, Sandro

2014-01-01T23:59:59.000Z

300

Ab-initio molecular dynamics simulation of liquid water by Quantum Monte Carlo

Although liquid water is ubiquitous in chemical reactions, at the roots of life and of the climate on earth, the prediction of its properties by high-level ab initio molecular dynamics simulations still represents a formidable task for quantum chemistry. In this article we present a room-temperature simulation of liquid water based on the potential energy surface obtained by a many-body wave function through quantum Monte Carlo (QMC) methods. The simulated properties are in excellent agreement with recent neutron scattering and X-ray experiments, particularly concerning the position of the oxygen-oxygen peak in the radial distribution function, at variance with previous Density Functional Theory attempts. Given the excellent performance of QMC on large-scale supercomputers, this work opens new perspectives for predictive and reliable ab-initio simulations of complex chemical systems.

Andrea Zen; Ye Luo; Guglielmo Mazzola; Leonardo Guidoni; Sandro Sorella

2014-12-09T23:59:59.000Z

301

Using fast lattice Monte Carlo (FLMC) simulations [Q. Wang, Soft Matter 5, 4564 (2009)] and the corresponding lattice self-consistent field (LSCF) calculations, we studied a model system of grafted homopolymers, in both the brush and mushroom regimes, in an explicit solvent compressed by an impenetrable surface. Direct comparisons between FLMC and LSCF results, both of which are based on the same Hamiltonian (thus without any parameter-fitting between them), unambiguously and quantitatively reveal the fluctuations/correlations neglected by the latter. We studied both the structure (including the canonical-ensemble averages of the height and the mean-square end-to-end distances of grafted polymers) and thermodynamics (including the ensemble-averaged reduced energy density and the related internal energy per chain, the differences in the Helmholtz free energy and entropy per chain from the uncompressed state, and the pressure due to compression) of the system. In particular, we generalized the method for calculating pressure in lattice Monte Carlo simulations proposed by Dickman [J. Chem. Phys. 87, 2246 (1987)], and combined it with the Wang-Landau–Optimized Ensemble sampling [S. Trebst, D. A. Huse, and M. Troyer, Phys. Rev. E 70, 046701 (2004)] to efficiently and accurately calculate the free energy difference and the pressure due to compression. While we mainly examined the effects of the degree of compression, the distance between the nearest-neighbor grafting points, the reduced number of chains grafted at each grafting point, and the system fluctuations/correlations in an athermal solvent, the θ-solvent is also considered in some cases.

Zhang, Pengfei; Wang, Qiang, E-mail: q.wang@colostate.edu [Department of Chemical and Biological Engineering, Colorado State University, Fort Collins, Colorado 80523-1370 (United States)]

2014-01-28T23:59:59.000Z

302

Implementation of the probability table method in a continuous-energy Monte Carlo code system

RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.

Sutton, T.M.; Brown, F.B. [Lockheed Martin Corp., Schenectady, NY (United States)

1998-10-01T23:59:59.000Z
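The core operation the paper adds to RACER is easy to state in code: at a collision in the unresolved resonance range, a cross-section band is picked with its tabulated probability instead of using a dilute-average value. The table below is invented for illustration and is not ENDF/B data.

```python
# Sampling a cross section from a probability table.  Each band pairs a
# probability with a representative cross-section value at one incident
# energy; the numbers here are hypothetical.
import random

random.seed(5)
table = [(0.2, 10.0), (0.5, 30.0), (0.3, 80.0)]  # (probability, barns)

def sample_xs(table):
    """Pick a cross-section band with its tabulated probability."""
    u = random.random()
    cum = 0.0
    for p, xs in table:
        cum += p
        if u < cum:
            return xs
    return table[-1][1]          # guard against rounding when u ~ 1

n = 200000
mean_xs = sum(sample_xs(table) for _ in range(n)) / n
# the mean reproduces the dilute average 0.2*10 + 0.5*30 + 0.3*80 = 41,
# but the sampled fluctuations capture resonance self-shielding effects
# that the dilute-average treatment misses
```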

303

Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach.
We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
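
The variance benefit of such a coupling can be illustrated with the simplest non-spatial analogue: a finite-difference sensitivity estimate driven by common random numbers rather than independent samples. The sketch below is plain NumPy; the exponential observable and every parameter value are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def observable(theta, u):
    # Toy observable: exponential lifetime with rate theta,
    # generated from uniforms u by inverse transform
    return -np.log(u) / theta

theta, h, n = 2.0, 1e-2, 100_000
u1, u2 = rng.random(n), rng.random(n)

# Finite-difference sensitivity with independent samples
indep = (observable(theta + h, u1) - observable(theta, u2)) / h
# Coupled estimator: the same random numbers drive both processes
coupled = (observable(theta + h, u1) - observable(theta, u1)) / h

print(indep.mean(), indep.var())
print(coupled.mean(), coupled.var())
```

Both estimators target d/dθ E[X] = −1/θ² = −0.25, but the coupled one removes the O(1/h²) noise term, which is the effect the paper's goal-oriented couplings optimize in the spatial KMC setting.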

Arampatzis, Georgios, E-mail: garab@math.uoc.gr [Department of Applied Mathematics, University of Crete (Greece); Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)]; Katsoulakis, Markos A., E-mail: markos@math.umass.edu [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)]

2014-03-28T23:59:59.000Z

304

Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations

Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB© code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes and composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB© code was able to grow semi-realistic cell distributions (≈2 × 10^8 cells in 1 cm^3) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow for a more complete picture of cell damage to be developed.
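
The placement-with-overlap-rejection core of such a cell-packing algorithm can be sketched with spheres, for which the overlap test reduces to a distance check (the paper's eigenvalue test handles the general ellipsoid case). All sizes and counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def grow_volume(n_cells, box=100.0, r_min=2.0, r_max=4.0, max_tries=20_000):
    """Place non-overlapping spherical 'cells' with randomized centres and radii."""
    centres, radii = [], []
    tries = 0
    while len(centres) < n_cells and tries < max_tries:
        tries += 1
        c = rng.uniform(0.0, box, size=3)
        r = rng.uniform(r_min, r_max)
        # Overlap test: accept only if the centre distance to every
        # already-placed cell exceeds the sum of the two radii
        if all(np.linalg.norm(c - c2) > r + r2 for c2, r2 in zip(centres, radii)):
            centres.append(c)
            radii.append(r)
    return np.array(centres), np.array(radii)

centres, radii = grow_volume(200)
print(len(centres))
```

At the packing fractions reported in the paper a naive all-pairs check becomes the bottleneck; a spatial grid or k-d tree over candidate neighbours is the usual fix.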

Douglass, Michael; Bezak, Eva; Penfold, Scott [School of Chemistry and Physics, University of Adelaide, North Terrace, Adelaide 5005, South Australia (Australia) and Department of Medical Physics, Royal Adelaide Hospital, North Terrace, Adelaide 5000, South Australia (Australia)

2012-06-15T23:59:59.000Z

305

Analysis of ITU TRIGA Mark II research reactor using Monte Carlo method

Science Journals Connector (OSTI)

Abstract Research reactors include many complicated components with various shapes and sizes. Such complex parts, also present in the TRIGA core, are modelled by researchers as simplified physical geometries when a particle transport computer code is used to analyse the reactors. These models are used to gain information on possible modifications to the reactors at no cost except a certain computational time demand. Besides, they can be used to understand the fabrication uncertainties of the core components and the methodologies used in the design process. The main objective of this study is to build a detailed three-dimensional full-core model of the ITU (Istanbul Technical University) TRIGA Mark II research reactor for use with the Monte Carlo method and to compare the simulation with experimental observations. Where experimental values are not reported, Final Safety Analysis Report (FSAR) values are used as reference. Furthermore, it is aimed to observe possible influences of using various neutron cross-section libraries (ENDF/Bs and JEFFs) on the simulation results. The Monte Carlo simulations are carried out using the MCNP5 radiation transport code. All unsteady conditions are ignored, assuming the reactor operates at cold-zero power under steady-state conditions. For comparison, the effective core multiplication factor (k_eff) and the effective delayed neutron fraction (β_eff) are computed. Reactivity worth ($) of the control rods as a function of rod position is presented. Pin power distribution within the fuel elements, axial power peaking distribution along the fuel length and the normalized distribution of fast/thermal neutron flux throughout the core are analysed. The simulation results show that the MCNP5 model of the reactor is properly established, in sufficient detail, such that all simulation results are in excellent agreement with the experimental data (or FSAR values). Results also show that the model yields essentially the same values even when different neutron libraries are used.

Mehmet Türkmen; Üner Çolak

2014-01-01T23:59:59.000Z

306

Current Monte Carlo codes use one of three models for neutron scattering in the epithermal energy range: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. The free gas scattering model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei, where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that using the free gas scattering model in the vicinity of the resonances in the lower epithermal range can under-predict resonance absorption due to the up-scattering phenomenon. Existing methods all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame. In this paper, we present a new sampling methodology that (1) accounts for the energy-dependent scattering cross sections in the collision analysis and (2) acts in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on-the-fly during the random walk of the neutron. Results for criticality studies on fuel pin and fuel assembly calculations using these methods showed very close agreement with results using the reference Doppler-broadened rejection correction (DBRC) scheme. (authors)
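
The rejection structure behind such schemes can be illustrated in a simplified 1D sketch: target velocities are drawn from a Maxwellian and accepted with probability proportional to the relative speed times an energy-dependent cross section, which is precisely the factor free-gas sampling ignores. The toy resonance and all constants below are illustrative; this is neither Blackshaw's formulation nor the actual DBRC algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigma_s(v_rel):
    # Toy energy-dependent scattering cross section with a single
    # 'resonance' peaked near relative speed 1.0 (arbitrary units)
    return 1.0 + 5.0 / (1.0 + 200.0 * (v_rel - 1.0) ** 2)

def sample_target_velocities(v_n, kT_over_m=0.05, n=1000):
    """Rejection-sample 1D target velocities from a Maxwellian, weighted by
    v_rel * sigma_s(v_rel), the factor a constant-cross-section model drops."""
    s = np.sqrt(kT_over_m)
    # Conservative bound: (largest plausible v_rel) * (maximum of sigma_s)
    bound = (abs(v_n) + 5.0 * s) * sigma_s(1.0)
    out = []
    while len(out) < n:
        v_t = rng.normal(0.0, s)          # Maxwellian target velocity
        v_rel = abs(v_n - v_t)
        if rng.random() < v_rel * sigma_s(v_rel) / bound:
            out.append(v_t)
    return np.array(out)

v_targets = sample_target_velocities(1.0)
print(v_targets.mean(), v_targets.std())
```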

Sunny, E. E.; Martin, W. R. [University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor MI 48109 (United States)

2013-07-01T23:59:59.000Z

307

Correction of CT artifacts and its influence on Monte Carlo dose calculations

Computed tomography (CT) images of patients having metallic implants or dental fillings exhibit severe streaking artifacts. These artifacts may preclude tumor and organ delineation and compromise dose calculation outcomes in radiotherapy. We used a sinogram interpolation metal streaking artifact correction algorithm on several phantoms of exactly known composition and on a prostate patient with two hip prostheses. We compared original CT images and artifact-corrected images of both. To evaluate the effect of the artifact correction on dose calculations, we performed Monte Carlo dose calculation in the EGSnrc/DOSXYZnrc code. For the phantoms, we performed calculations in the exact geometry, in the original CT geometry and in the artifact-corrected geometry for photon and electron beams. The maximum errors in 6 MV photon beam dose calculation were found to exceed 25% in original CT images when the standard DOSXYZnrc/CTCREATE calibration is used but less than 2% in artifact-corrected images when an extended calibration is used. The extended calibration includes an extra calibration point for a metal. The patient dose volume histograms of a hypothetical target irradiated by five 18 MV photon beams in a hypothetical treatment differ significantly in the original CT geometry and in the artifact-corrected geometry. This was found to be mostly due to the misassignment of tissue voxels to air caused by metal artifacts. We also developed a simple Monte Carlo model for a CT scanner and simulated the contribution of scatter and beam hardening to metal streaking artifacts. We found that whereas beam hardening has a minor effect on metal artifacts, scatter is an important cause of these artifacts.

Bazalova, Magdalena; Beaulieu, Luc; Palefsky, Steven; Verhaegen, Frank [Medical Physics Department, McGill University, Montreal General Hospital, 1650 Cedar Avenue, Montreal, Quebec, H3G1A4 (Canada); Departement de Physique, de Genie Physique et d'Optique, Universite Laval, Quebec City, Quebec, G1K7P4 (Canada); Departement de Radio-Oncologie, Hotel Dieu de Quebec, Centre Hospitalier Universitaire de Quebec, Quebec City, Quebec, G1R2J6 (Canada)]

2007-06-15T23:59:59.000Z

308

Monte Carlo Simulations of the Dissolution of Borosilicate Glasses in Near-Equilibrium Conditions

Monte Carlo simulations were performed to investigate the mechanisms of glass dissolution as equilibrium conditions are approached in both static and flow-through conditions. The glasses studied are borosilicate glasses in the compositional range (80 - x)% SiO2, (10 + x/2)% B2O3, (10 + x/2)% Na2O, where 5 < x < 30%. In static conditions, dissolution/condensation reactions lead to the formation, for all compositions studied, of a blocking layer composed of polymerized Si sites with principally 4 connections to nearest Si sites. This layer forms atop the altered glass layer and shows similar composition and density for all glass compositions considered. In flow-through conditions, three main dissolution regimes are observed: at high flow rates, the dissolving glass exhibits a thin alteration layer and congruent dissolution; at low flow rates, a blocking layer is formed as in static conditions, but the simulations show that water can occasionally break through the blocking layer, causing the corrosion process to resume; and, at intermediate flow rates, the glasses dissolve incongruently with an increasingly deepening altered layer. The simulation results suggest that, in geological disposal environments, small perturbations or slow flows could be enough to prevent the formation of a permanent blocking layer. Finally, a comparison between predictions of the linear rate law and the Monte Carlo simulation results indicates that, in flow-through conditions, the linear rate law is applicable at high flow rates and deviations from it occur at low flow rates (e.g., at near-saturated conditions with respect to amorphous silica). This effect is associated with the complex dynamics of Si dissolution/condensation processes at the glass-water interface.

Kerisit, Sebastien [Pacific Northwest National Laboratory (PNNL); Pierce, Eric M [ORNL

2012-01-01T23:59:59.000Z

309

A generalized loop move (GLM) update for Monte Carlo simulations of frustrated Ising models on two-dimensional lattices (Yuan Wang and Hans De Sterck, Department of Applied Mathematics). Through implementation on several frustrated Ising models, the authors demonstrate the effectiveness of the GLM updates.

De Sterck, Hans

310

A classical fully frustrated honeycomb lattice Ising model studied using Markov-chain Monte Carlo methods (Shawn Andrews and Hans De Sterck, Department of Applied Mathematics), demonstrating effectiveness on the Ising Hamiltonian with and without perturbative interactions.

De Sterck, Hans

311

Ab Initio Geometry and Bright Excitation of Carotenoids: Quantum Monte Carlo and Many Body Green's Function Theory. Many Body Green's Function Theory (MBGFT) calculations of the vertical excitation energy and of the coupling with the Qy state of chlorophyll; measurements in several solvents have been reported.

Guidoni, Leonardo

312

Monte Carlo Simulation-based Sensitivity Analysis of the Model of a Thermal-Hydraulic Passive System

Passive systems are expected to improve the safety of nuclear power plants; however, uncertainties are present in the models describing them. Published in Reliability Engineering and System Safety 107 (2012) 90-106, DOI: 10.1016/j.ress.2011.08.006.

Paris-Sud XI, Université de

313

Calculation of Nonlinear Thermoelectric Coefficients of InAs1-xSbx Using Monte Carlo Method

InAs1-xSbx is a favorable thermoelectric material; nonlinear effects can increase the cooling power density when a lightly doped thermoelectric material is under a large electrical field with a local nonequilibrium charge distribution.

314

Molecular-level Monte Carlo simulation at fixed entropy

Corresponding author: William R. Smith, E-mail: william.smith@uoit.ca, Fax: +1 905 721 3304.

Lisal, Martin

315

Optics Letters, Vol. 26, No. 17, p. 1335 (September 1, 2001). Perturbation Monte Carlo methods yield derivatives with respect to perturbations in background tissue optical properties; this derivative information is then fed to a nonlinear optimization algorithm to determine the optical properties of the tissue heterogeneity.

Boas, David

316

ATLAS experiment. Figure 48: Monte Carlo simulation of a t-tbar event in a layout for the Inner Detector of the ATLAS experiment with four layers of silicon pixel detectors and five layers of silicon strip detectors.

317

F. A. Reboredo, R. Q. Hood, and P. R. C. Kent, Phys. Rev. B 79, 195117 (2009): a self-healing diffusion Monte Carlo (SH-DMC) method for calculating the ground-state properties of many-electron systems in electronic structure calculations. Accurately solving the many-electron Schrödinger equation for real materials is a holy grail.

318

Purpose: To present a new accelerated Monte Carlo code for CT-based dose calculations in high dose rate (HDR) brachytherapy. The new code (HDRMC) accounts for both tissue and nontissue heterogeneities (applicator and contrast medium). Methods: HDRMC uses a fast ray-tracing technique and detailed physics algorithms to transport photons through a 3D mesh of voxels representing the patient anatomy with applicator and contrast medium included. A precalculated phase space file for the 192Ir source is used as source term. HDRMC is calibrated to calculate absolute dose for real plans. A postprocessing technique is used to include the exact density and composition of nontissue heterogeneities in the 3D phantom. Dwell positions and angular orientations of the source are reconstructed using data from the treatment planning system (TPS). Structure contours are also imported from the TPS to recalculate dose-volume histograms. Results: HDRMC was first benchmarked against the MCNP5 code for a single source in homogenous water and for a loaded gynecologic applicator in water. The accuracy of the voxel-based applicator model used in HDRMC was also verified by comparing 3D dose distributions and dose-volume parameters obtained using 1-mm^3 versus 2-mm^3 phantom resolutions. HDRMC can calculate the 3D dose distribution for a typical HDR cervix case with 2-mm resolution in 5 min on a single CPU. Examples of heterogeneity effects for two clinical cases (cervix and esophagus) were demonstrated using HDRMC. The neglect of tissue heterogeneity for the esophageal case leads to overestimates of CTV D90, CTV D100, and spinal cord maximum dose by 3.2%, 3.9%, and 3.6%, respectively. Conclusions: A fast Monte Carlo code for CT-based dose calculations which does not require a prebuilt applicator model is developed for those HDR brachytherapy treatments that use CT-compatible applicators.
Tissue and nontissue heterogeneities should be taken into account in modern HDR brachytherapy planning.

Chibani, Omar, E-mail: omar.chibani@fccc.edu; C-M Ma, Charlie [Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States)]

2014-05-15T23:59:59.000Z

319

A Monte Carlo procedure for the construction of complementary cumulative distribution functions (CCDFs) for comparison with the US Environmental Protection Agency (EPA) release limits for radioactive waste disposal (40 CFR 191, Subpart B) is described and illustrated with results from a recent performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP). The Monte Carlo procedure produces CCDF estimates similar to those obtained with stratified sampling in several recent PAs for the WIPP. The advantages of the Monte Carlo procedure over stratified sampling include increased resolution in the calculation of probabilities for complex scenarios involving drilling intrusions and better use of the necessarily limited number of mechanistic calculations that underlie CCDF construction.
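
The heart of such a procedure is turning a set of sampled releases into an empirical CCDF that can be compared against the regulatory limit curve. A minimal sketch, using hypothetical lognormal release samples rather than WIPP results:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical normalized releases from 10,000 sampled futures
releases = rng.lognormal(mean=-2.0, sigma=1.5, size=10_000)

def ccdf(samples, thresholds):
    """Empirical complementary CDF: estimated P(release > R) for each R."""
    s = np.sort(samples)
    # searchsorted counts the samples <= R; the remainder exceed R
    exceed = len(s) - np.searchsorted(s, thresholds, side="right")
    return exceed / len(s)

thresholds = np.array([0.1, 1.0, 10.0])
print(ccdf(releases, thresholds))
```

With simple random sampling, as in the paper, the resolution of small exceedance probabilities is limited by the sample size; that trade-off against stratified sampling is exactly what the abstract discusses.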

Helton, J.C.; Shiver, A.W.

1994-10-01T23:59:59.000Z

320

Monte Carlo simulation based study of a proposed multileaf collimator for a telecobalt machine

Purpose: The objective of the present work was to propose a design of a secondary multileaf collimator (MLC) for a telecobalt machine and optimize its design features through Monte Carlo simulation. Methods: The proposed MLC design consists of 72 leaves (36 leaf pairs) with additional jaws perpendicular to leaf motion, having the capability of shaping a maximum square field size of 35 × 35 cm^2. The projected widths at isocenter of each of the central 34 leaf pairs and 2 peripheral leaf pairs are 10 and 5 mm, respectively. The ends of the leaves and the x-jaws were optimized to obtain acceptable values of dosimetric and leakage parameters. The Monte Carlo N-Particle code was used for generating beam profiles and depth dose curves and estimating the leakage radiation through the MLC. A water phantom of dimension 50 × 50 × 40 cm^3 with an array of voxels (4 cm × 0.3 cm × 0.6 cm = 0.72 cm^3) was used for the study of dosimetric and leakage characteristics of the MLC. Output files generated for beam profiles were exported to the PTW radiation field analyzer software through locally developed software for analysis of beam profiles in order to evaluate radiation field width, beam flatness, symmetry, and beam penumbra. Results: The optimized version of the MLC can define radiation fields of up to 35 × 35 cm^2 within the prescribed tolerance value of 2 mm. The flatness and symmetry were found to be well within the acceptable tolerance value of 3%. The penumbra for a 10 × 10 cm^2 field size is 10.7 mm, which is less than the generally acceptable value of 12 mm for a telecobalt machine. The maximum and average radiation leakage through the MLC were found to be 0.74% and 0.41%, which are well below the International Electrotechnical Commission recommended tolerance values of 2% and 0.75%, respectively. The maximum leakage through the leaf ends in closed condition was observed to be 8.6%, which is less than the values reported for other MLCs designed for medical linear accelerators. Conclusions: It is concluded that the dosimetric parameters and the leakage radiation of the optimized secondary MLC design are well below their recommended tolerance values. The optimized design of the proposed MLC can be integrated into a telecobalt machine by replacing the existing adjustable secondary collimator for conformal radiotherapy treatment of cancer patients.

Sahani, G.; Dash Sharma, P. K.; Hussain, S. A. [Radiological Safety Division, Atomic Energy Regulatory Board, Anushaktinagar, Mumbai-400094 (India); Dutt Sharma, Sunil [Radiological Physics and Advisory Division, Bhabha Atomic Research Centre, CT and CRS, Anushaktinagar, Mumbai-400094 (India); Sharma, D. N. [Health Safety and Environment Group, Bhabha Atomic Research Centre, Trombay, Mumbai-400085 (India)

2013-02-15T23:59:59.000Z

321

Organ doses are important quantities in assessing the radiation risk. In the case of children, estimation of this risk is of particular concern due to their significant radiosensitivity and the greater health detriment. The purpose of this study is to estimate the organ doses to paediatric patients undergoing barium meal and micturating cystourethrography examinations by clinical measurements and Monte Carlo simulation. In clinical measurements, dose–area products (DAPs) were assessed during examination of 50 patients undergoing barium meal and 90 patients undergoing cystourethrography examinations, divided equally among three age categories: newborn, 1 year old and 5 years old. Monte Carlo simulation of photon transport in male and female mathematical phantoms was applied using the MCNP5 code in order to estimate the equivalent organ doses. Regarding the micturating cystourethrography examinations, the organs receiving considerable amounts of radiation dose were the urinary bladder (1.87, 2.43 and 4.7 mSv for newborn, 1-year-old and 5-year-old patients, respectively), the large intestines (1.54, 1.8, 3.1 mSv), the small intestines (1.34, 1.56, 2.78 mSv), the stomach (1.46, 1.02, 2.01 mSv) and the gall bladder (1.46, 1.66, 2.18 mSv), depending upon the age of the child. Organs receiving considerable amounts of radiation during barium meal examinations were the stomach (9.81, 9.92, 11.5 mSv), the gall bladder (3.05, 5.74, 7.15 mSv), the rib bones (9.82, 10.1, 11.1 mSv) and the pancreas (5.8, 5.93, 6.65 mSv), depending upon the age of the child. DAP-to-organ/effective-dose conversion factors were derived for each age and examination in order to be compared with other studies.

A Dimitriadis; G Gialousis; T Makri; M Karlatira; P Karaiskos; E Georgiou; S Papaodysseas; E Yakoumakis

2011-01-01T23:59:59.000Z

322

A quantum Monte Carlo method that combines the second-order many-body perturbation theory and Monte Carlo (MC) integration has been developed for correlation and correlation-corrected (quasiparticle) energy bands of one-dimensional solids. The sum-of-product expressions of correlation energy and self-energy are transformed, with the aid of a Laplace transform, into high-dimensional integrals, which are subject to a highly scalable MC integration with the Metropolis algorithm for importance sampling. The method can compute correlation energies of polyacetylene and polyethylene within a few mEh and quasiparticle energy bands within a few tenths of an eV. It does not suffer from the fermion sign problem and its description can be systematically improved by raising the perturbation order.
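
The Metropolis importance-sampling ingredient of such a method can be shown in miniature: a random-walk Metropolis chain estimating a simple expectation. The target density and step size below are illustrative (here ⟨x²⟩ under a standard normal, whose exact value is 1), not the many-body integrands of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def metropolis(log_density, x0, n_steps, step=1.0):
    """Random-walk Metropolis sampler for an unnormalized log density."""
    x, logp = x0, log_density(x0)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.normal()
        logp_prop = log_density(prop)
        # Accept with probability min(1, p(prop)/p(x))
        if np.log(rng.random()) < logp_prop - logp:
            x, logp = prop, logp_prop
        samples[i] = x
    return samples

# Estimate <x^2> under a standard normal (exact value: 1)
xs = metropolis(lambda x: -0.5 * x * x, 0.0, 200_000)
estimate = (xs[10_000:] ** 2).mean()   # discard burn-in
print(estimate)
```

Because only density ratios enter the acceptance test, the normalization constant never needs to be known, which is what makes the approach scale to the high-dimensional Laplace-transformed integrals described in the abstract.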

Soohaeng Yoo Willow; Kwang S. Kim; So Hirata

2014-11-18T23:59:59.000Z

323

We investigate interfacial properties between two highly incompatible polymers of different stiffness. The extensive Monte Carlo simulations of the binary polymer melt yield detailed interfacial profiles and the interfacial tension via an analysis of capillary fluctuations. We extract an effective Flory-Huggins parameter from the simulations, which is used in self-consistent field calculations. These take due account of the chain architecture via a partial enumeration of the single chain partition function, using chain conformations obtained by Monte Carlo simulations of the pure phases. The agreement between the simulations and self-consistent field calculations is almost quantitative; however, we find deviations from the predictions of the Gaussian chain model for high incompatibilities or large stiffness. The interfacial width at very high incompatibilities is smaller than the prediction of the Gaussian chain model, and decreases upon increasing the statistical segment length of the semi-flexible component.

Marcus Mueller; Andreas Werner

1997-09-11T23:59:59.000Z

324

Motivated by the disagreement between recent diffusion Monte Carlo calculations and experiments on the phase transition pressure between the ambient and beta-Sn phases of silicon, we present a study of the HCP to BCC phase transition in beryllium. This lighter element provides an opportunity for directly testing many of the approximations required for calculations on silicon and may suggest a path towards increasing the practical accuracy of diffusion Monte Carlo calculations of solids in general. We demonstrate that the single largest approximation in these calculations is the pseudopotential approximation. After removing this we find excellent agreement with experiment for the ambient HCP phase and results similar to careful calculations using density functional theory for the phase transition pressure.

Shulenburger, Luke; Desjarlais, M P

2015-01-01T23:59:59.000Z

325

Adaptive δf Monte Carlo Method for Simulation of RF-heating and Transport in Fusion Plasmas

Essential for modeling heating and transport of fusion plasma is determining the distribution function of the plasma species. Characteristic for RF-heating is the creation of particle distributions with a high energy tail. In the high energy region the deviation from a Maxwellian distribution is large, while in the low energy region the distribution is close to a Maxwellian due to the velocity dependence of the collision frequency. Because of geometry and orbit topology, Monte Carlo methods are frequently used. To avoid simulating the thermal part, δf methods are beneficial. Here we present a new δf Monte Carlo method with an adaptive scheme for reducing the total variance and sources, suitable for calculating the distribution function for RF-heating.
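
The variance advantage of a δf split can be seen in a toy velocity-space example: the Maxwellian contribution to a moment is taken analytically, and Monte Carlo effort is spent only on the small high-energy deviation. The two-component mixture and all numbers below are illustrative, not a plasma model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy distribution: Maxwellian bulk plus a 5% high-energy tail,
# f(v) = 0.95 * N(0, 1) + 0.05 * N(0, 3^2); exact <v^2> = 0.95 + 0.05 * 9 = 1.4
eps, s_tail, n = 0.05, 3.0, 50_000

# Full-f Monte Carlo: sample the whole distribution
tail = rng.random(n) < eps
v_full = np.where(tail, s_tail * rng.normal(size=n), rng.normal(size=n))
full_f_est = (v_full ** 2).mean()

# delta-f Monte Carlo: the Maxwellian moment <v^2>_M = 1 is analytic, and
# sampling handles only the deviation delta f = eps * (tail - bulk)
v_t = s_tail * rng.normal(size=n)
v_b = rng.normal(size=n)
delta_f_est = 1.0 + eps * ((v_t ** 2).mean() - (v_b ** 2).mean())

print(full_f_est, delta_f_est)  # both near the exact value 1.4
```

Because the sampled correction is premultiplied by the small deviation amplitude, the δf estimator's statistical error shrinks with it, which is the motivation for the adaptive variance-reduction scheme in the paper.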

Hoeoek, J.; Hellsten, T. [Fusion Plasma Physics, School of Electrical Engineering, Royal Institute of Technology (KTH), SE-100 44, Stockholm, Association VR-Euratom (Sweden)

2009-11-26T23:59:59.000Z

326

Abstract In this paper, a Monte Carlo simulation based two-stage adaptive resonance theory mapping (MC-TSAM) model was developed to classify a given site into distinguished zones representing different levels of offshore Oil Spill Vulnerability Index (OSVI). It consisted of an adaptive resonance theory (ART) module, an ART Mapping module, and a centroid determination module. Monte Carlo simulation was integrated with the TSAM approach to address uncertainties that widely exist in site conditions. The applicability of the proposed model was validated by classifying a large coastal area, which was surrounded by potential oil spill sources, based on 12 features. Statistical analysis of the results indicated that the classification process was affected by multiple features instead of one single feature. The classification results also provided the least or desired number of zones which can sufficiently represent the levels of offshore OSVI in an area under uncertainty and complexity, saving time and budget in spill monitoring and response.

Pu Li; Bing Chen; Zelin Li; Xiao Zheng; Hongjing Wu; Liang Jing; Kenneth Lee

2014-01-01T23:59:59.000Z

327

The Savannah River Laboratory LTRIIA slightly-enriched uranium-D2O critical experiment was analyzed with ENDF/B-IV data and the RCP01 Monte Carlo program, which modeled the entire assembly in explicit detail. The integral parameters δ25 and δ28 showed good agreement with experiment. However, the calculated k_eff was 2 to 3% low, due primarily to an overprediction of U-238 capture. This is consistent with results obtained in similar analyses of the H2O-moderated TRX critical experiments. In comparisons with the VIM and MCNP2 Monte Carlo programs, good agreement was observed for calculated reaction rates in the B^2 = 0 cell.

Hardy, J. Jr.; Shore, J.M.

1981-11-01T23:59:59.000Z

328

Materials of high atomic number, such as gold, can provide a high probability of photon interaction through the photoelectric effect during radiation therapy. In cancer therapy, the object of brachytherapy, as a kind of radiotherapy, is to deliver an adequate radiation dose to the tumor while sparing surrounding healthy tissue. Several studies demonstrated that the preferential accumulation of gold nanoparticles within the tumor can enhance the dose absorbed by the tumor without increasing the radiation dose delivered externally. Accordingly, the time required for tumor irradiation decreases, as the estimated adequate radiation dose for the tumor is provided following this method. The dose delivered to healthy tissue is reduced when the time of irradiation is decreased. Here, GNP effects on choroidal melanoma dosimetry are discussed in a Monte Carlo study. Monte Carlo ophthalmic brachytherapy dosimetry is usually studied by simulation of a water phantom. Considering the composition and density of eye material instead of water in thes...

Asadi, Somayeh; Masoudi, S Farhad; Rahmani, Faezeh

2014-01-01T23:59:59.000Z

329

Abstract The ethylene cracking furnace tube is one of the most critical components in the petrochemical industry, used to crack molecules at high temperature. The furnace tube degrades easily during operation, which can cause equipment failure and lead to serious consequences such as fire and explosion. In this work, a quantitative analysis of failure probability for the ethylene cracking furnace tube is performed using the Monte Carlo method and API Risk-Based Inspection (RBI) technology. The results show that the operating life of the ethylene cracking furnace tube under the interaction of creep and carburization is less than that under creep alone, and the failure probability calculated based on API RBI technology is lower than that using the Monte Carlo method. Moreover, the comparative analysis results further prove that creep and carburization are the two main failure modes of furnace tube rupture. Therefore, it is necessary to provide reliable data to perform risk assessment and inspections on ethylene cracking furnace tubes.
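
A minimal Monte Carlo failure-probability estimate has the following shape: sample a capacity and a demand distribution and count limit-state violations. The limit-state model and every distribution parameter below are hypothetical, not the paper's creep/carburization model.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000

# Hypothetical limit state: failure when operating stress exceeds the
# (degraded) creep strength; all parameters illustrative, in MPa
strength = rng.lognormal(mean=np.log(200.0), sigma=0.10, size=n)
stress = rng.normal(120.0, 15.0, size=n)

p_fail = (stress > strength).mean()
se = np.sqrt(p_fail * (1.0 - p_fail) / n)   # standard error of the estimate
print(p_fail, se)
```

For the very small probabilities typical of pressure-part failure, plain sampling needs many scenarios per expected failure; importance sampling or subset simulation is the usual remedy.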

Wenhe Wang; Kaiwu Liang; Changyou Wang; Qingsheng Wang

2014-01-01T23:59:59.000Z

330

MOCABA is a combination of Monte Carlo sampling and Bayesian updating algorithms for the prediction of integral functions of nuclear data, such as reactor power distributions or neutron multiplication factors. Similarly to the established Generalized Linear Least Squares (GLLS) methodology, MOCABA offers the capability to utilize integral experimental data to reduce the prior uncertainty of integral observables. The MOCABA approach, however, does not involve any series expansions and, therefore, does not suffer from the breakdown of first-order perturbation theory for large nuclear data uncertainties. This is related to the fact that, in contrast to the GLLS method, the updating mechanism within MOCABA is applied directly to the integral observables without having to "adjust" any nuclear data. A central part of MOCABA is the nuclear data Monte Carlo program NUDUNA, which performs random sampling of nuclear data evaluations according to their covariance information and converts them into libraries for transpor...
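
The idea of updating the integral observable directly, without adjusting any nuclear data, can be sketched by weighting prior Monte Carlo samples of the observable with the Gaussian likelihood of one integral experiment. All numbers below are illustrative; the actual MOCABA/NUDUNA machinery is far richer.

```python
import numpy as np

rng = np.random.default_rng(7)

# Prior Monte Carlo sample of an integral observable (e.g. a multiplication
# factor), as produced by transport runs over sampled nuclear-data libraries
prior = rng.normal(1.000, 0.005, size=20_000)

# One integral experiment measuring the same observable
y_exp, sigma_exp = 0.998, 0.002

# Bayesian update applied directly to the observable: weight each prior
# sample by the Gaussian likelihood of the experimental result
w = np.exp(-0.5 * ((prior - y_exp) / sigma_exp) ** 2)
w /= w.sum()

post_mean = np.sum(w * prior)
post_std = np.sqrt(np.sum(w * (prior - post_mean) ** 2))
print(post_mean, post_std)  # pulled toward y_exp, with reduced spread
```

Because no series expansion is involved, the update remains valid for large prior uncertainties, which is the stated advantage over first-order GLLS adjustment.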

Hoefer, Axel; Hennebach, Maik; Schmid, Michael; Porsch, Dieter

2014-01-01T23:59:59.000Z

331

Science Journals Connector (OSTI)

Au-Cu bimetallic alloy clusters are produced in a laser vaporization source starting from Au-Cu alloy targets with different stoichiometric compositions. The clusters are deposited on two different substrates—amorphous carbon and crystalline MgO—and are characterized by electron diffraction and high-resolution electron microscopy. The experiments show that the overall chemical composition in the clusters is the same as the chemical composition of the target material; but the crystal structure of the Au-Cu alloy clusters differs from their known bulk crystal structure. Electron microscopy experiments provide evidence that no chemical ordering exists between Au and Cu atoms and that the clusters are solid solutions. Monte Carlo simulations using the second moment tight-binding approximation, however, predict Cu3Au clusters ordered in the core but with a disordered mantle. The possible origins of the differences between experiment and Monte Carlo simulations are discussed.

B. Pauwels; G. Van Tendeloo; E. Zhurkin; M. Hou; G. Verschoren; L. Theil Kuhn; W. Bouwen; P. Lievens

2001-03-27T23:59:59.000Z

332

Methodology of risk analysis by Monte Carlo Method applied to power generation with renewable energy

Science Journals Connector (OSTI)

Abstract This paper presents a methodology that uses the Monte Carlo Method (MCM) to estimate the behavior of economic parameters which may support decision making, considering the risk in project sustainability. In order to show how this methodology can be used, a Grid-Connected Photovoltaic System (GCPVS) of 1.575 kWp, located on the roof top of the laboratory building of the Grupo de Estudos e Desenvolvimento de Alternativas Energéticas – GEDAE, at the Universidade Federal do Pará – UFPA, Belém – Pará – Brazil, and operating since December 2007, is analyzed. This system was chosen because it was the first GCPVS installed in the Brazilian Amazon Region, and this is the first risk evaluation of a grid-connected renewable energy source in the region. This work also presents a similar treatment for the case of a stand-alone photovoltaic system (SAPVS) installed in the remote Santo Antônio Village, municipality of Breves, Pará, Brazil, considering the risk of investment assumed by an investor in power generation projects with similar characteristics or using other renewable energy sources. The latter case allows a better assessment of other important applications of renewable energy in the Amazon Region, where the demand for energy is growing, but is still costly and often not a priority in government actions.

Edinaldo José da Silva Pereira; João Tavares Pinho; Marcos André Barros Galhardo; Wilson Negrão Macêdo

2014-01-01T23:59:59.000Z
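
The core of such an MCM risk analysis can be sketched in a few lines: draw uncertain project parameters, compute the Net Present Value (NPV) for each draw, and read risk metrics off the resulting distribution. All cash-flow figures below are hypothetical, not the GCPVS/SAPVS data analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical project parameters (illustrative distributions only).
n = 100_000
capex = rng.normal(8_000.0, 800.0, n)          # installed cost (currency units)
annual_energy = rng.normal(2_000.0, 200.0, n)  # generated energy, kWh/year
tariff = rng.normal(0.50, 0.03, n)             # value of energy, currency/kWh
rate, years = 0.08, 20                         # discount rate and project life

# NPV: discounted annuity of yearly revenue minus the initial investment.
annuity = (1.0 - (1.0 + rate) ** -years) / rate   # present value of 1/year
npv = annual_energy * tariff * annuity - capex

prob_loss = (npv < 0.0).mean()                 # probability of a negative NPV
print(f"mean NPV = {npv.mean():.0f}, P(NPV < 0) = {prob_loss:.3f}")
```

The probability of a negative NPV is the kind of investor-facing risk metric the methodology is meant to produce.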

333

Elastic Constants of Solid Ar, Kr, and Xe: A Monte Carlo Study

Science Journals Connector (OSTI)

The elastic constants of classical systems of 108 particles arranged (with periodic boundary conditions) on an fcc lattice and interacting with pairwise-additive forces have been evaluated to an accuracy of about 2% by a Monte Carlo procedure closely related to that used by Hoover and his co-workers. For Ar(80 °K) and Kr(85 and 115 °K) we have used the Bobetic-Barker pair potentials and also included the corrections for the truncated tail of the pair potential, quantum effects, and three-body forces. For Ar(80 °K) and Xe(156 °K) we have carried out a similar calculation for the familiar Lennard-Jones 6:12 potential. Our 6:12 Ar(80 °K) elastic constants agree well with the previous work of Hoover et al. but unfortunately differ only slightly from the more realistic Bobetic-Barker Ar(80 °K) values. Bulk moduli for both potentials are compatible with the currently available experimental data. Comparison of our Kr results with experimental data indicates a need for refinement of the Bobetic-Barker Kr potential. The Xe(156 °K) results agree very well with the recent Brillouin-scattering work of Gornall and Stoicheff, which is to some extent disappointing because the same 6:12 potential is in poor agreement with the low-temperature heat capacity.

M. L. Klein and R. D. Murphy

1972-09-15T23:59:59.000Z

334

Monte Carlo uncertainty estimation for an oscillating-vessel viscosity measurement

This paper discusses the initial design and evaluation of a high temperature viscosity measurement system with a focus on the uncertainty assessment. Numerical simulation of the viscometer is used to estimate viscosity uncertainties through the Monte Carlo method. The simulation computes the system response for a particular set of inputs (viscosity, moment of inertia, spring constant and hysteretic damping), and the viscosity is calculated using two methods: the Roscoe approximate solution and a numerical-fit method. For numerical fitting, a residual function of the logarithmic decay of oscillation amplitude and oscillation period is developed to replace the residual function of angular oscillation, which is mathematically stiff. The results of this study indicate that the method using computational solution of the equations and fitting for the parameters should be used, since it almost always out-performs the Roscoe approximation in uncertainty. The hysteretic damping and spring stiffness uncertainties translate into viscosity uncertainties almost directly, whereas the moment of inertia and vessel-height uncertainties are magnified approximately two-fold. As the hysteretic damping increases, so does the magnification of its uncertainty; therefore, it should be minimized in the system design. The results of this study provide a general guide for the design and application of all oscillating-vessel viscosity measurement systems.

K. Horne; H. Ban; R. Fielding; R. Kennedy

2012-08-01T23:59:59.000Z
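
The Monte Carlo uncertainty propagation described above can be sketched with a toy model. The relation mu = d * I^2 / H^2 below is illustrative only: it is chosen so that the moment of inertia I and vessel height H have their relative uncertainties magnified roughly two-fold while the damping d propagates almost directly, mirroring the sensitivity pattern the study reports; it is not the Roscoe solution or the actual viscometer model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy measurement equation (assumption): quadratic in I and H, linear in d.
def viscosity(d, inertia, height):
    return d * inertia**2 / height**2

# Sample each input from its assumed uncertainty distribution.
n = 200_000
d = rng.normal(1.0, 0.01, n)        # hysteretic damping, 1% relative uncertainty
inertia = rng.normal(1.0, 0.01, n)  # moment of inertia, 1%
height = rng.normal(1.0, 0.01, n)   # vessel height, 1%

# Propagate through the model and summarize the output spread.
mu = viscosity(d, inertia, height)
rel = mu.std() / mu.mean()
# Linearized quadrature estimate: sqrt(1^2 + 2^2 + 2^2) % = 3% relative.
print(f"relative viscosity uncertainty = {rel:.4f}")
```

The sampled relative uncertainty should land near the 3% quadrature estimate, showing how quadratic dependencies double an input's contribution.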

335

MONTE CARLO SIMULATIONS OF NONLINEAR PARTICLE ACCELERATION IN PARALLEL TRANS-RELATIVISTIC SHOCKS

We present results from a Monte Carlo simulation of a parallel collisionless shock undergoing particle acceleration. Our simulation, which contains parameterized scattering and a particular thermal leakage injection model, calculates the feedback between accelerated particles ahead of the shock, which influence the shock precursor and 'smooth' the shock, and thermal particle injection. We show that there is a transition between nonrelativistic shocks, where the acceleration efficiency can be extremely high and the nonlinear compression ratio can be substantially greater than the Rankine-Hugoniot value, and fully relativistic shocks, where diffusive shock acceleration is less efficient and the compression ratio remains at the Rankine-Hugoniot value. This transition occurs in the trans-relativistic regime and, for the particular parameters we use, occurs around a shock Lorentz factor γ₀ = 1.5. We also find that nonlinear shock smoothing dramatically reduces the acceleration efficiency presumed to occur with large-angle scattering in ultra-relativistic shocks. Our ability to seamlessly treat the transition from ultra-relativistic to trans-relativistic to nonrelativistic shocks may be important for evolving relativistic systems, such as gamma-ray bursts and Type Ibc supernovae. We expect a substantial evolution of shock accelerated spectra during this transition from soft early on to much harder when the blast-wave shock becomes nonrelativistic.

Ellison, Donald C.; Warren, Donald C. [Physics Department, North Carolina State University, Box 8202, Raleigh, NC 27695 (United States); Bykov, Andrei M., E-mail: don_ellison@ncsu.edu, E-mail: ambykov@yahoo.com [Ioffe Institute for Physics and Technology, 194021 St. Petersburg (Russian Federation)

2013-10-10T23:59:59.000Z

336

MONTE CARLO SIMULATIONS OF THE PHOTOSPHERIC EMISSION IN GAMMA-RAY BURSTS

We studied the decoupling of photons from ultra-relativistic spherically symmetric outflows expanding with constant velocity by means of Monte Carlo simulations. For outflows with finite widths we confirm the existence of two regimes: photon-thick and photon-thin, introduced recently by Ruffini et al. (RSV). The probability density function of the last scattering of photons is shown to be very different in these two cases. We also obtained spectra as well as light curves. In the photon-thick case, the time-integrated spectrum is much broader than the Planck function and its shape is well described by the fuzzy photosphere approximation introduced by RSV. In the photon-thin case, we confirm the crucial role of photon diffusion, hence the probability density of decoupling has a maximum near the diffusion radius well below the photosphere. The time-integrated spectrum of the photon-thin case has a Band shape that is produced when the outflow is optically thick and its peak is formed at the diffusion radius.

Begue, D.; Siutsou, I. A.; Vereshchagin, G. V. [University of Roma ''Sapienza'', I-00185, p.le A. Moro 5, Rome (Italy)

2013-04-20T23:59:59.000Z

337

We compute the non-zero temperature conductivity of conserved flavor currents in conformal field theories (CFTs) in 2+1 spacetime dimensions. At frequencies much greater than the temperature, $\hbar\omega \gg k_B T$, the $\omega$ dependence can be computed from the operator product expansion (OPE) between the currents and operators which acquire a non-zero expectation value at T > 0. Such results are found to be in excellent agreement with quantum Monte Carlo studies of the O(2) Wilson-Fisher CFT. Results for the conductivity and other observables are also obtained in vector 1/N expansions. We match these large $\omega$ results to the corresponding correlators of holographic representations of the CFT: the holographic approach then allows us to extrapolate to small $\hbar\omega/(k_B T)$. Other holographic studies implicitly only used the OPE between the currents and the energy-momentum tensor, and this yields the correct leading large $\omega$ behavior for a large class of CFTs. However, for the Wilson-Fisher ...

Katz, Emanuel; Sorensen, Erik S; Witczak-Krempa, William

2014-01-01T23:59:59.000Z

338

A BAYESIAN MONTE CARLO ANALYSIS OF THE M–σ RELATION

We present an analysis of selection biases in the M_bh–σ relation using Monte Carlo simulations, including the sphere-of-influence resolution selection bias and a selection bias in the velocity dispersion distribution. We find that the sphere-of-influence selection bias has a significant effect on the measured slope of the M_bh–σ relation, modeled as β_intrinsic = −4.69 + 2.22 β_measured, where the measured slope is shallower than the model slope in the parameter range β > 4, with larger corrections for steeper model slopes. Therefore, when the sphere of influence is used as a criterion to exclude unreliable measurements, it also introduces a selection bias that needs to be modeled to restore the intrinsic slope of the relation. We find that the selection effect due to the velocity dispersion distribution of the sample, which might not follow the overall distribution of the population, is not important for slopes of β ≈ 4–6 of a logarithmically linear M_bh–σ relation, but it could impact some studies that measure low (e.g., β < 4) slopes. Combining the selection biases in velocity dispersions and the sphere-of-influence cut, we find that the uncertainty of the slope is larger than the value obtained without modeling these effects, and we estimate an intrinsic slope of β = 5.28 (+0.84/−0.55).

Morabito, Leah K.; Dai Xinyu, E-mail: morabito@nhn.ou.edu, E-mail: dai@nhn.ou.edu [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, Norman, OK 73019 (United States)

2012-10-01T23:59:59.000Z
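
The quoted bias model is linear in the slope, so applying it is a one-line correction; the sketch below simply evaluates the relation β_intrinsic = −4.69 + 2.22 β_measured (stated in the abstract for β > 4) at a few example measured slopes.

```python
# Selection-bias correction for the M_bh-sigma slope, as quoted in the
# abstract; valid in the regime beta > 4, where the measured slope
# understates the intrinsic one.
def intrinsic_slope(beta_measured):
    return -4.69 + 2.22 * beta_measured

# Example measured slopes (illustrative values, not fits from the paper):
for beta_m in (4.0, 4.5, 5.0):
    print(f"measured {beta_m:.1f} -> intrinsic {intrinsic_slope(beta_m):.2f}")
```

For instance, a measured slope of 4.5 maps to an intrinsic slope of 5.30, a substantially steeper relation.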

339

Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.

Vrugt, Jasper A [Los Alamos National Laboratory; Hyman, James M [Los Alamos National Laboratory; Robinson, Bruce A [Los Alamos National Laboratory; Higdon, Dave [Los Alamos National Laboratory; Ter Braak, Cajo J F [NETHERLANDS; Diks, Cees G H [UNIV OF AMSTERDAM

2008-01-01T23:59:59.000Z
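
The population-based proposal mechanism at the heart of DREAM can be sketched with a minimal differential-evolution Metropolis sampler. This is a deliberate simplification (no subspace crossover, no outlier-chain handling, a toy 1-D bimodal target): each chain's trial move is built from the difference of two other randomly chosen chains, so the proposal scale and orientation adapt to the evolving population.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative bimodal target: equal-weight unit normals at -3 and +3.
def log_target(x):
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

n_chains, n_steps = 10, 3000
gamma = 2.38 / np.sqrt(2.0)            # standard DE-MC jump factor for d = 1
x = rng.normal(0.0, 1.0, n_chains)     # initial population
logp = log_target(x)
history = []

for _ in range(n_steps):
    for i in range(n_chains):
        # Proposal from the difference of two other chains plus tiny jitter.
        a, b = rng.choice([j for j in range(n_chains) if j != i],
                          size=2, replace=False)
        prop = x[i] + gamma * (x[a] - x[b]) + rng.normal(0.0, 1e-3)
        lp = log_target(prop)
        if np.log(rng.random()) < lp - logp[i]:   # Metropolis acceptance
            x[i], logp[i] = prop, lp
    history.append(x.copy())

draws = np.concatenate(history[n_steps // 2:])    # discard burn-in
print(f"fraction of draws in the +3 mode: {(draws > 0).mean():.2f}")
```

Because chain differences occasionally span the two modes, the sampler can jump between them, which is exactly what a fixed narrow proposal struggles to do.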

340

Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods

The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.

Godfrey, Andrew T [ORNL; Gehin, Jess C [ORNL; Bekar, Kursat B [ORNL; Celik, Cihangir [ORNL

2014-01-01T23:59:59.000Z

341

Intra-Globular Structures in Multiblock Copolymer Chains from a Monte Carlo Simulation

Multiblock copolymer chains in implicit nonselective solvents are studied by a Monte Carlo method which employs a parallel tempering algorithm. Chains consisting of 120 $A$ and 120 $B$ monomers, arranged in three distinct microarchitectures: $(10-10)_{12}$, $(6-6)_{20}$, and $(3-3)_{40}$, collapse to globular states upon cooling, as expected. By varying both the reduced temperature $T^*$ and the compatibility between monomers $\omega$, numerous intra-globular structures are obtained: diclusters (handshake, spiral, torus with a core, etc.), triclusters, and $n$-clusters with $n>3$ (lamellar and other), which are reminiscent of the block copolymer nanophases for spherically confined geometries. Phase diagrams for various chains in the $(T^*, \omega)$-space are mapped. The structure factor $S(k)$, for a selected microarchitecture and $\omega$, is calculated. Since $S(k)$ can be measured in scattering experiments, it can be used to relate simulation results to experiment. Self-assembly in those systems is interpreted in terms of competition between minimization of the interfacial area separating different types of monomers and minimization of contacts between chain and solvent. Finally, the relevance of this model to protein folding is addressed.

Krzysztof Lewandowski; Michal Banaszak

2014-10-16T23:59:59.000Z
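
The parallel tempering algorithm mentioned above can be sketched with a minimal replica-exchange run on a double-well energy; this is an illustrative stand-in, not the multiblock-copolymer model itself. Replicas at several temperatures take local Metropolis moves, and neighboring replicas periodically attempt configuration swaps with the standard exchange acceptance min(1, exp[(1/T_i − 1/T_j)(E_i − E_j)]).

```python
import math, random

random.seed(4)

def energy(x):
    return (x * x - 1.0) ** 2            # wells at x = -1 and x = +1, barrier 1

temps = [0.05, 0.2, 0.8, 3.0]            # temperature ladder (cold -> hot)
xs = [1.0 for _ in temps]                # all replicas start in the + well
cold_signs = set()

for _ in range(20000):
    # Local Metropolis move in every replica.
    for i, T in enumerate(temps):
        trial = xs[i] + random.uniform(-0.5, 0.5)
        dE = energy(trial) - energy(xs[i])
        if dE <= 0.0 or random.random() < math.exp(-dE / T):
            xs[i] = trial
    # Attempt a configuration swap between one random neighboring pair.
    i = random.randrange(len(temps) - 1)
    d = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
    if d >= 0.0 or random.random() < math.exp(d):
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
    cold_signs.add(xs[0] > 0.0)

print(f"coldest replica visited both wells: {cold_signs == {True, False}}")
```

The cold replica cannot cross the barrier on its own at T = 0.05; it reaches the other well only via exchanges with hotter replicas, which is why tempering equilibrates collapsed, rugged-landscape systems like these globules.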

342

A Monte Carlo Analysis of Gas Centrifuge Enrichment Plant Process Load Cell Data

As uranium enrichment plants increase in number, capacity, and types of separative technology deployed (e.g., gas centrifuge, laser, etc.), more automated safeguards measures are needed to enable the IAEA to maintain safeguards effectiveness in a fiscally constrained environment. Monitoring load cell data can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards objective of confirming operations as declared (i.e., no undeclared activities), but care must be taken to fully protect the operator's proprietary and classified information related to operations. Staff at ORNL, LANL, JRC/ISPRA, and the University of Glasgow are investigating monitoring the process load cells at feed and withdrawal (F/W) stations to improve international safeguards at enrichment plants. A key question that must be resolved is the necessary frequency of recording data from the process F/W stations. Several studies have analyzed data collected at a fixed frequency. This paper contributes to load cell process monitoring research by presenting an analysis of Monte Carlo simulations to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.

Garner, James R [ORNL; Whitaker, J Michael [ORNL

2013-01-01T23:59:59.000Z
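
The sampling-frequency question can be illustrated with a tiny Monte Carlo model: a withdrawal cylinder fills at a constant rate and is changed out at a random time, so any mass accumulated after the last load-cell sample is missing from the declared transfer. The fill rate and time window below are illustrative numbers, not safeguards data.

```python
import numpy as np

rng = np.random.default_rng(5)

rate = 0.01                                   # assumed fill rate, kg/s
swaps = rng.uniform(0.0, 3600.0, 100_000)     # random change-out times, s

def mean_missed(dt):
    """Average unrecorded mass when sampling every dt seconds."""
    # Time since the last sample before the swap is (swap time mod dt).
    return float(np.mean(rate * (swaps % dt)))

for dt in (1.0, 60.0, 600.0):
    print(f"dt = {dt:6.0f} s -> mean unrecorded mass {mean_missed(dt):.4f} kg")
# The expected error grows linearly with the sampling period (~ rate*dt/2),
# which is the trade-off a material balance analysis must quantify.
```

Even this toy model shows why recording frequency matters: a ten-minute period leaves an expected three kilograms unrecorded per change-out at these assumed rates.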

343

Generation of an S(α,β) Covariance Matrix by Monte Carlo Sampling of the Phonon Frequency Spectrum

Science Journals Connector (OSTI)

Abstract Formats and procedures are currently established for representing covariances in the ENDF library for many reaction types. However, no standard exists for thermal neutron inelastic scattering cross section covariance data. These cross sections depend on the material's dynamic structure factor, or S(α,β). The structure factor is a function of the phonon density of states (DOS). Published ENDF thermal neutron scattering libraries are commonly produced by modeling codes, such as NJOY/LEAPR, which utilize the DOS as the fundamental input and directly output the S(α,β) matrix. To calculate covariances for the computed S(α,β) data, information about uncertainties in the DOS is required. The DOS may be viewed as a probability distribution function of available atomic vibrational energy states in a solid. In this work, density functional theory and lattice dynamics in the harmonic approximation were used to simulate the structure of silicon dioxide (α-quartz) to produce the DOS. A range for the variation in the partial DOS for silicon in α-quartz was established based on limits of variation in the crystal lattice parameters. Uncertainty in an experimentally derived DOS may also be incorporated with the same methodology. A description of possible variation in the DOS allowed Monte Carlo generation of a set of perturbed DOS spectra, which were sampled to produce the S(α,β) covariance matrix for scattering with silicon in α-quartz. With appropriate sensitivity matrices, it is shown that the S(α,β) covariance matrix can be propagated to generate covariance matrices for integrated cross sections, secondary energy distributions, and coupled energy-angle distributions.

J.C. Holmes; A.I. Hawari

2014-01-01T23:59:59.000Z
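
The covariance-generation step can be sketched as follows: perturb a nominal phonon DOS within an assumed pointwise variation band, renormalize each sample to unit integral, and take the sample covariance of the spectra. The Debye-like DOS shape and the 5% band below are assumptions for illustration, not the α-quartz DFT/lattice-dynamics result used in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Nominal DOS on a phonon energy grid (arbitrary units), unit-normalized.
e = np.linspace(0.01, 1.0, 50)
de = e[1] - e[0]
dos0 = e ** 2 * np.exp(-3.0 * e)          # toy Debye-like shape (assumption)
dos0 /= dos0.sum() * de

# Monte Carlo sampling: perturb each bin, renormalize each sampled spectrum.
n = 5_000
spectra = np.empty((n, e.size))
for k in range(n):
    sample = dos0 * (1.0 + rng.normal(0.0, 0.05, e.size))  # 5% pointwise band
    spectra[k] = sample / (sample.sum() * de)              # unit integral

cov = np.cov(spectra.T)                   # DOS covariance matrix
print(f"covariance matrix shape: {cov.shape}")
```

In the actual workflow each perturbed DOS would be run through the S(α,β) processing, so the covariance is accumulated over the output matrices rather than the DOS itself; the sampling and covariance mechanics are the same.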

344

Calculated criticality for ²³⁵U/graphite systems using the VIM Monte Carlo code

Calculations for highly enriched uranium and graphite systems gained renewed interest recently for the new production modular high-temperature gas-cooled reactor (MHTGR). Experiments to validate the physics calculations for these systems are being prepared for the Transient Reactor Test Facility (TREAT) reactor at Argonne National Laboratory (ANL-West) and in the Compact Nuclear Power Source facility at Los Alamos National Laboratory. The continuous-energy Monte Carlo code VIM, or equivalently the MCNP code, can utilize fully detailed models of the MHTGR and serve as benchmarks for the approximate multigroup methods necessary in full reactor calculations. Validation of these codes and their associated nuclear data did not exist for highly enriched ²³⁵U/graphite systems. Experimental data, used in development of more approximate methods, dates back to the 1960s. The authors have selected two independent sets of experiments for calculation with the VIM code. The carbon-to-uranium (C/U) ratios encompass the range of 2,000, representative of the new production MHTGR, to the ratio of 10,000 in the fuel of TREAT. Calculations used the ENDF/B-V data.

Collins, P.J.; Grasseschi, G.L.; Olsen, D.N. (Argonne National Lab.-West, Idaho Falls (United States)); Finck, P.J. (Argonne National Lab., IL (United States))

1992-01-01T23:59:59.000Z

345

Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle.' Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms.

Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory

2010-12-15T23:59:59.000Z

346

A high-fidelity Monte Carlo evaluation of CANDU-6 safety parameters

Important safety parameters such as the fuel temperature coefficient (FTC) and the power coefficient of reactivity (PCR) of the CANDU-6 (CANada Deuterium Uranium) reactor have been evaluated by using a modified MCNPX code. For accurate analysis of the parameters, the DBRC (Doppler Broadening Rejection Correction) scheme was implemented in MCNPX in order to account for the thermal motion of the heavy uranium nucleus in neutron-U scattering reactions. In this work, a standard fuel lattice has been modeled, the fuel is depleted by using MCNPX, and the FTC value is evaluated for several burnup points, including the mid-burnup representing a near-equilibrium core. The Doppler effect has been evaluated by using several cross section libraries such as ENDF/B-VI, ENDF/B-VII, JEFF, and JENDL. The PCR value is also evaluated at mid-burnup conditions to characterize safety features of the equilibrium CANDU-6 reactor. To improve the reliability of the Monte Carlo calculations, a huge number of neutron histories is considered in this work, and the standard deviation of the k-inf values is only 0.5–1 pcm. It has been found that the FTC is significantly enhanced by accounting for the Doppler broadening of scattering resonances and that the PCR is clearly improved. (authors)

Kim, Y.; Hartanto, D. [Korea Advanced Inst. of Science and Technology KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 305-701 (Korea, Republic of)

2012-07-01T23:59:59.000Z

347

Monte Carlo analysis of a monolithic interconnected module with a back surface reflector

Recently, the photon Monte Carlo code, RACER-X, was modified to include wavelength-dependent absorption coefficients and indices of refraction. This work was done in an effort to increase the code's capabilities and make it applicable to a wider range of problems. These new features make RACER-X useful for analyzing devices like monolithic interconnected modules (MIMs), which have etched surface features and incorporate a back surface reflector (BSR) for spectral control. A series of calculations was performed on various MIM structures to determine the impact that surface features and component reflectivities have on spectral utilization. The traditional concern of cavity photonics is replaced with intra-cell photonics in the MIM design. Like the cavity photonic problems previously discussed, small changes in optical properties and/or geometry can lead to large changes in spectral utilization. The calculations show that seemingly innocuous surface features (e.g., trenches and grid lines) can significantly reduce the spectral utilization due to the non-normal incident photon flux. Photons that enter the device through a trench edge are refracted onto a trajectory from which they will not escape. This leads to a reduction in the number of reflected below-bandgap photons that return to the radiator and reduces the spectral utilization. In addition, the trenches expose a lateral conduction layer in this particular series of calculations, which increases the absorption of above-bandgap photons in inactive material.

Ballinger, C.T.; Charache, G.W. [Lockheed Martin Corp., Schenectady, NY (United States); Murray, C.S. [Bettis Atomic Power Lab., West Mifflin, PA (United States)

1998-10-01T23:59:59.000Z

348

Increasing innovation in home energy efficiency: Monte Carlo simulation of potential improvements

Science Journals Connector (OSTI)

Despite the enormous potential for savings, there is little penetration of market-based solutions in the residential energy efficiency market. We hypothesize that there is a failure in the residential efficiency improvement market: due to a lack of customer knowledge and capital to invest in improvements, there are unrecovered savings. In this paper, we model a means of extracting profit from those unrecovered energy savings with a market-based residential energy services company, or RESCO. We use a Monte Carlo simulation of the cost and performance of various improvements, along with a hypothetical business model, to derive general information about the financial viability of these companies. Despite the large amount of energy savings potential, we find that an average contract length with residential customers needs to be nearly 35 years to recoup the cost of the improvements. However, our modeling of an installer knowledge parameter indicates that experience plays a large part in minimizing the time to profitability for each home. Large numbers of inexperienced workers driven by government investment in this area could result in the installation of improvements with long payback periods, whereas a free market might eliminate companies making poor decisions.

Kullapa Soratana; Joe Marriott

2010-01-01T23:59:59.000Z
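
The shape of such a payback simulation can be sketched in a few lines: draw improvement cost and realized annual savings per home, including an installer-knowledge factor that scales the savings actually achieved. All distributions and dollar figures below are hypothetical, not the paper's calibrated inputs.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-home distributions (assumptions for illustration).
n = 100_000
cost = rng.lognormal(mean=np.log(8_000.0), sigma=0.3, size=n)    # $ per home
savings = rng.lognormal(mean=np.log(400.0), sigma=0.4, size=n)   # $ per year
knowledge = rng.uniform(0.5, 1.0, n)   # inexperienced -> experienced installer

# Simple payback: years for realized savings to repay the improvement cost.
payback_years = cost / (savings * knowledge)
share_viable = (payback_years < 35.0).mean()   # recoverable within 35 years
print(f"median payback = {np.median(payback_years):.1f} years, "
      f"share under 35 years = {share_viable:.2f}")
```

Even with these made-up numbers, the median payback lands in the multi-decade range, consistent with the article's point that contract lengths must be very long unless installer experience lifts the realized savings.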

349

Kinetic Monte Carlo (KMC) simulation of fission product silver transport through TRISO fuel particle

Science Journals Connector (OSTI)

A mesoscale kinetic Monte Carlo (KMC) model developed to investigate the diffusion of silver through the pyrolytic carbon and silicon carbide containment layers of a TRISO fuel particle is described. The release of radioactive silver from TRISO particles has been studied for nearly three decades, yet the mechanisms governing silver transport are not fully understood. This model atomically resolves Ag, but provides a mesoscale medium of carbon and silicon carbide, which can include a variety of defects including grain boundaries, reflective interfaces, cracks, and radiation-induced cavities that can either accelerate silver diffusion or slow diffusion by acting as traps for silver. The key input parameters to the model (diffusion coefficients, trap binding energies, interface characteristics) are determined from available experimental data, or parametrically varied, until more precise values become available from lower length scale modeling or experiment. The predicted results, in terms of the time/temperature dependence of silver release during post-irradiation annealing and the variability of silver release from particle to particle have been compared to available experimental data from the German HTR Fuel Program (Gontard and Nabielek [1]) and Minato and co-workers (Minato et al. [2]).

G. Méric de Bellefon; B.D. Wirth

2011-01-01T23:59:59.000Z
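
The KMC machinery described above can be sketched with the residence-time (BKL) algorithm: one atom hops on a 1-D chain of 50 sites with a few trapping sites, a toy stand-in for silver migrating through a containment layer. The attempt frequency, migration barrier, and trap binding energy below are illustrative assumptions, not the fitted values from the paper.

```python
import math, random

random.seed(8)

def hop_rate(site, traps, nu=1e13, e_mig=1.0, e_trap=0.5, kT=0.15):
    """Arrhenius escape rate; traps deepen the barrier by the binding energy."""
    barrier = e_mig + (e_trap if site in traps else 0.0)
    return nu * math.exp(-barrier / kT)

traps = {10, 25, 40}                           # assumed trapping sites
site, t = 0, 0.0
while site < 50:                               # walk until the atom is released
    total_rate = 2.0 * hop_rate(site, traps)   # left and right hops, equal rates
    t += -math.log(random.random()) / total_rate   # exponential waiting time
    site += 1 if random.random() < 0.5 else -1     # choose a hop direction
    site = max(site, 0)                        # reflective boundary at the core

print(f"release after t = {t:.3e} s")
```

The exponential waiting-time draw is what lets KMC cover the huge span of timescales between ordinary lattice hops and long residences in deep traps, which is exactly the regime silver transport through PyC/SiC layers occupies.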

350

Noncovalent Interactions by Quantum Monte Carlo: A Speedup by a Smart Basis Set Reduction

A fixed-node diffusion Monte Carlo (FN-DMC) method provides a promising alternative to the commonly used coupled-cluster (CC) methods in the domain of benchmark noncovalent interaction energy calculations. This is mainly due to the low-order polynomial CPU-cost scaling of FN-DMC and favorable FN error cancellation, leading to benchmark interaction energies accurate to 0.1 kcal/mol. While it is empirically accepted that FN-DMC results depend weakly on the one-particle basis sets used to expand the guiding functions, the limits of this assumption remain elusive. Our recent work indicates that augmented triple-zeta basis sets are sufficient to achieve a benchmark level of 0.1 kcal/mol. Here we report on the possibility of significantly truncating the one-particle basis sets without any visible bias in the overall accuracy of the final FN-DMC energy differences. The approach is tested on a set of seven small noncovalent closed-shell complexes including a water dimer. The reported findings enable cheaper high-quali...

Dubecký, Matúš

2015-01-01T23:59:59.000Z

351

Investigating the potential of the Pan-Planets project using Monte Carlo simulations

Using Monte Carlo simulations we analyze the potential of the upcoming transit survey Pan-Planets. The analysis covers the simulation of realistic light curves (including the effects of ingress/egress and limb-darkening) with both correlated and uncorrelated noise as well as the application of a box-fitting-least-squares detection algorithm. In this work we show how simulations can be a powerful tool in defining and optimizing the survey strategy of a transiting planet survey. We find the Pan-Planets project to be competitive with all other existing and planned transit surveys with the main power being the large 7 square degree field of view. In the first year we expect to find up to 25 Jupiter-sized planets with periods below 5 days around stars brighter than V = 16.5 mag. The survey will also be sensitive to planets with longer periods and planets with smaller radii. After the second year of the survey, we expect to find up to 9 Warm Jupiters with periods between 5 and 9 days and 7 Very Hot Saturns around stars brighter than V = 16.5 mag as well as 9 Very Hot Neptunes with periods from 1 to 3 days around stars brighter than i' = 18.0 mag.

J. Koppenhoefer; C. Afonso; R. P. Saglia; Th. Henning

2008-12-08T23:59:59.000Z

352

Evaluation of Heliostat Field Global Tracking Error Distributions by Monte Carlo Simulations

Science Journals Connector (OSTI)

Abstract Several error sources can contribute to the global tracking error of heliostats. These sources can be, for instance, angular offset in the reference position of the tracking mechanisms, imperfect leveling of the heliostat pedestal, lack of perpendicularity between the tracking axes, lack of precise clock synchronization. All these possible errors are characterized by angles that have very specific numerical values for each heliostat in a central receiver installation. However, they are intrinsically random in nature, and the errors in different heliostats are independent from each other. In principle, the overall drift behavior of the heliostats can be characterized by a statistical distribution of tracking errors. This global distribution characterizes the angular deviation of the heliostat normal and is used in ray tracing simulations of heliostat fields. It is usually assumed to be Gaussian, although some authors argue in favor of other types of distributions. In the present work, the dependence of the global tracking error distribution on the above mentioned primary error sources is investigated by means of Monte Carlo simulations. Random values are assumed for the different error parameters, and the resulting global tracking error distributions are evaluated for different times of the year for a heliostat field.

L.A. Díaz-Félix; M. Escobar-Toledo; J. Waissman; N. Pitalúa-Díaz; C.A. Arancibia-Bulnes

2014-01-01T23:59:59.000Z
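
The Monte Carlo procedure the abstract describes, drawing random values for the primary error sources and accumulating the resulting deviation of the heliostat normal, can be sketched as follows. The error magnitudes, the zenith-pointing ideal normal, and the small-rotation model are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def rotate(v, axis, angle):
    """Rodrigues rotation of vector v about a unit axis by angle (rad)."""
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

# Illustrative 1-sigma magnitudes (mrad) for three primary error sources.
sigmas_mrad = {"pedestal_leveling": 1.0,
               "axis_perpendicularity": 0.8,
               "tracking_offset": 0.5}

n_heliostats = 20000
ideal = np.array([0.0, 0.0, 1.0])   # zenith-pointing ideal normal, for simplicity
deviation_mrad = np.empty(n_heliostats)

for i in range(n_heliostats):
    n = ideal.copy()
    # Each error source tilts the normal by a small random angle about a
    # random horizontal axis; errors are independent between heliostats.
    for sigma in sigmas_mrad.values():
        phi = rng.uniform(0.0, 2.0 * np.pi)
        axis = np.array([np.cos(phi), np.sin(phi), 0.0])
        n = rotate(n, axis, rng.normal(0.0, sigma * 1e-3))
    deviation_mrad[i] = 1e3 * np.arccos(np.clip(np.dot(n, ideal), -1.0, 1.0))

# With independent small rotations the two tilt components are near-Gaussian,
# so the magnitude of the deviation is approximately Rayleigh-distributed.
mean_dev = float(deviation_mrad.mean())
```

The histogram of `deviation_mrad` is the kind of "global tracking error distribution" the study feeds into ray tracing; the Rayleigh-like shape explains why a per-axis Gaussian model is the usual assumption.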

353

Monte Carlo Simulations of the Dissolution of Borosilicate Glasses in Near-Equilibrium Conditions

Monte Carlo simulations were performed to investigate the mechanisms of glass dissolution as equilibrium conditions are approached in both static and flow-through conditions. The glasses studied are borosilicate glasses in the compositional range (80-x)% SiO2 (10+x/2)% B2O3 (10+x/2)% Na2O, where 5 < x < 30%. In static conditions, dissolution/condensation reactions lead to the formation, for all compositions studied, of a blocking layer composed of polymerized Si sites with principally 4 connections to nearest Si sites. This layer forms atop the altered glass layer and shows similar composition and density for all glass compositions considered. In flow-through conditions, three main dissolution regimes are observed: at high flow rates, the dissolving glass exhibits a thin alteration layer and congruent dissolution; at low flow rates, a blocking layer is formed as in static conditions but the simulations show that water can occasionally break through the blocking layer causing the corrosion process to resume; and, at intermediate flow rates, the glasses dissolve incongruently with an increasingly deepening altered layer. The simulation results suggest that, in geological disposal environments, small perturbations or slow flows could be enough to prevent the formation of a permanent blocking layer.

Kerisit, Sebastien N.; Pierce, Eric M.

2012-05-15T23:59:59.000Z

354

The development of hybrid Monte Carlo-deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed a large number of times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

Abdel-Khalik, Hany S.; Gardner, Robin; Mattingly, John; Sood, Avneet

2014-05-20T23:59:59.000Z

355

Monte Carlo method for estimating backflashover rates on high voltage transmission lines

Science Journals Connector (OSTI)

Abstract This paper presents a novel Monte-Carlo based model for the analysis of the backflashover rate (BFOR) on high voltage transmission lines. The proposed model aims to take into account the following aspects of the BFOR phenomenon: transmission line (TL) route keraunic level(s), statistical depiction of lightning-current parameters (including statistical correlation), the electrogeometric model of lightning attachment, frequency dependence of TL parameters and electromagnetic coupling effects, tower geometry and surge impedance, tower grounding impulse impedance (with soil ionization), lightning-surge reflections from adjacent towers, non-linearity of the insulator string flashover characteristic, distribution of lightning strokes along the TL span, and power frequency voltage. In the analysis of the BFOR, special attention is given to the influences emanating from the insulator string flashover characteristic and lightning statistics. The model can be applied to the transmission line as a whole or to some of its portions, e.g. the first several towers emanating from the substation or several towers crossing a mountain ridge.

Petar Sarajcev

2015-01-01T23:59:59.000Z

356

Monte Carlo estimation of the rates of lightning strikes on power lines

Science Journals Connector (OSTI)

This paper reports the development of a general method for estimating the rates of lightning strikes on transmission lines using Monte Carlo simulation. Effects of towers, cross-arms, non-level ground, conductor sags and nearby structures are directly represented in the 3-dimensional electrogeometric model (EGM). The method developed is a general one that is applicable to any transmission line configuration and is independent of the EGM used. Tedious analytical derivation of more than 100 equations for each configuration, as required by analytical methods, is avoided altogether. The formulation and a flow chart are detailed in the paper. The shortest distances from the lightning leader tip to the individual structures that may be struck are evaluated and then compared with striking distances. The outcome of the comparison identifies the structure that will be struck in each simulation. The exposure area adopted in the simulation is determined on the basis of the transmission line route length and the maximum possible striking distance, to ensure that the simulation results in the maximum possible number of strikes on the overhead line. Strokes outside the exposure area will always miss the transmission line and, therefore, have no effect on the results. The method proposed and its software implementation are verified on the basis of results from analytical methods of earlier work, and from field data.

Roger Holt; Tam T. Nguyen

1999-01-01T23:59:59.000Z
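
The core loop described above (sample a stroke, compute distances, compare against striking distances, count attachments inside an exposure area sized by the maximum striking distance) can be illustrated for a simplified 2-D case of a single conductor over flat ground. The striking-distance law, lognormal current statistics, and flash density below are common textbook assumptions, not this paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(7)

h = 20.0   # conductor height above flat ground (m)

def striking_distance(i_peak_ka):
    """Common electrogeometric law r = 10 * I^0.65 (r in m, I in kA)."""
    return 10.0 * i_peak_ka ** 0.65

# Lognormal peak-current statistics (median 31 kA, sigma_ln ~ 0.66 is a
# widely used assumption in lightning performance studies).
n = 200000
i_peak = rng.lognormal(np.log(31.0), 0.66, n)
r = striking_distance(i_peak)

# Exposure area: leaders fall uniformly over a band wide enough that strokes
# outside it can never reach the line (maximum possible striking distance).
r_max = float(r.max())
y = rng.uniform(-r_max, r_max, n)   # lateral leader position (m)

# 2-D EGM: a vertically descending leader attaches to the conductor when it
# comes within r of the wire before coming within r of the ground, which
# happens for |y| <= sqrt(2 r h - h^2) (taking equal striking distances).
struck = np.abs(y) <= np.sqrt(np.maximum(2.0 * r * h - h * h, 0.0))

# Effective attractive width and a strike rate for an assumed flash density.
W_eff_m = struck.mean() * 2.0 * r_max
Ng = 1.0   # ground flash density (flashes / km^2 / yr), assumed
strikes_per_100km_yr = Ng * (W_eff_m / 1000.0) * 100.0
```

The paper's 3-D method replaces the closed-form capture width with direct shortest-distance comparisons against every structure, which is exactly what removes the need for per-configuration analytical derivations.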

357

Science Journals Connector (OSTI)

A New Kinetic Monte Carlo Algorithm for Heteroepitactical Growth: Case Study of C60 Growth on Pentacene ... Q and P sites cover the majority of the pentacene surface, and hence, we have defined the surface of pentacene as being represented as a set of P and Q sites, as shown in Figure 2. ... These sets of MD simulations were run for both the bulk and thin film phases of pentacene. ...

Rebecca A. Cantrell; Paulette Clancy

2012-01-19T23:59:59.000Z

358

The Physical Data Group of the Theoretical Physics Division of LLNL has developed and maintains several basic data files, several Monte Carlo transport codes, and the requisite processing codes that convert the basic data to the form required by our own transport codes and by other laboratory transport and burn codes. The data files (libraries) that we maintain are listed together with a few comments about each.

Howerton, R.J.

1983-10-18T23:59:59.000Z

359

The TSUNAMI computational sequences currently in the SCALE 5 code system provide an automated approach to performing sensitivity and uncertainty analysis for eigenvalue responses, using either one-dimensional discrete ordinates or three-dimensional Monte Carlo methods. This capability has recently been expanded to address eigenvalue-difference responses such as reactivity changes. This paper describes the methodology and presents results obtained for an example advanced CANDU reactor design. (authors)

Williams, M. L.; Gehin, J. C.; Clarno, K. T. [Oak Ridge National Laboratory, Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)

2006-07-01T23:59:59.000Z

360

Realistic calculations of the neutron and gamma-ray fluences in the TFTR diagnostic basement have been carried out with three-dimensional Monte Carlo models. Comparisons with measurements show that the results are well within the experimental uncertainties.

Liew, S.L.; Ku, L.P.; Kolibal, J.G.

1985-10-01T23:59:59.000Z


361

We analyze the density-functional theory (DFT) description of weak interactions by employing diffusion and reptation quantum Monte Carlo (QMC) calculations, for a set of benzene-molecule complexes. While the binding energies ...

Grossman, Jeffrey C.

362

MONTE CARLO SIMULATIONS OF SMALL H2SO4-H2O CLUSTERS* B.N. HALE AND S.M. KATHMANN

... are central to the understanding of many atmospheric processes, for example, gas to particle conversion, acid ...

Hale, Barbara N.

363

We present an approach to calculation of point-defect optical and thermal ionization energies based on the highly accurate quantum Monte Carlo methods. The use of an inherently many-body theory that directly treats electron ...

Ertekin, Elif

364

Science Journals Connector (OSTI)

For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions and comply with other criteria which cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a model pump of reduced scale based on active cryosorption on charcoal-coated panels at 4.5 K was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump by the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm (≈1.69 Pa m3/s at T = 0 °C) in the free molecular regime that the numerical simulation results are in good agreement with the pumping speeds measured. Meanwhile, the capture coefficient associated with the virtual region around the cryogenic panels and shields, which holds for higher throughputs, is calculated using this generic approach. This means that the test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also for the supply of the input parameters necessary for the future direct simulation Monte Carlo in the full flow regime.

Xueli Luo; Christian Day; Horst Haas; Stylianos Varoutis

2011-01-01T23:59:59.000Z

365

For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions and comply with other criteria which cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a model pump of reduced scale based on active cryosorption on charcoal-coated panels at 4.5 K was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump by the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm ({approx}1.69 Pa m{sup 3}/s at T = 0 deg. C) in the free molecular regime that the numerical simulation results are in good agreement with the pumping speeds measured. Meanwhile, the capture coefficient associated with the virtual region around the cryogenic panels and shields which holds for higher throughputs is calculated using this generic approach. This means that the test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also for the supply of the input parameters necessary for the future direct simulation Monte Carlo in the full flow regime.

Luo Xueli; Day, Christian; Haas, Horst; Varoutis, Stylianos [Karlsruhe Institute of Technology, Institute for Technical Physics, 76021 Karlsruhe (Germany)

2011-07-15T23:59:59.000Z
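
The test-particle Monte Carlo approach in free molecular flow can be illustrated on a toy geometry: particles enter a unit box through one open face, reflect diffusely (cosine law) from warm walls, and stick with some probability on a cold panel at the opposite face. The geometry and sticking probability are illustrative assumptions, not the ProVac3D model of the actual pump:

```python
import numpy as np

rng = np.random.default_rng(3)

STICK = 0.7   # assumed sticking probability on the charcoal-coated cold panel
N = 10000     # test particles

def cosine_dir(normal):
    """Sample a direction from the cosine (Lambert) law about a unit normal."""
    a = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(normal, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(normal, t1)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    sin_t = np.sqrt(rng.uniform())      # cosine-law polar angle
    cos_t = np.sqrt(1.0 - sin_t ** 2)
    return cos_t * normal + sin_t * (np.cos(phi) * t1 + np.sin(phi) * t2)

captured = 0
for _ in range(N):
    # Particles enter through the open x = 0 face of a unit box.
    p = np.array([0.0, rng.uniform(), rng.uniform()])
    d = cosine_dir(np.array([1.0, 0.0, 0.0]))
    while True:
        # Nearest bounding plane hit along direction d (free molecular flight).
        best_t, face = np.inf, None
        for ax in range(3):
            if d[ax] > 1e-12:
                t = (1.0 - p[ax]) / d[ax]
                if t < best_t:
                    best_t, face = t, (ax, 1)
            elif d[ax] < -1e-12:
                t = -p[ax] / d[ax]
                if t < best_t:
                    best_t, face = t, (ax, -1)
        p = p + best_t * d
        if face == (0, -1):
            break                                   # back out through the inlet
        if face == (0, 1) and rng.uniform() < STICK:
            captured += 1                           # adsorbed on the cryopanel
            break
        # Diffuse re-emission from a warm wall (or a non-sticking panel bounce).
        normal = np.zeros(3)
        normal[face[0]] = -face[1]
        d = cosine_dir(normal)

capture_coefficient = captured / N   # pumping probability per entering particle
```

In the real code the same ray-tracing loop runs over the pump's full 3-D CAD geometry; the capture coefficient it returns is the quantity the article extracts for the virtual region around the panels and shields.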

366

We report state-of-the-art quantum Monte Carlo calculations of the singlet n → π* (CO) vertical excitation energy in the acrolein molecule, extending the recent study of Bouabça et al. [J. Chem. Phys. 130, 114107 (2009)]. We investigate the effect of using a Slater basis set instead of a Gaussian basis set, and of using state-average versus state-specific complete-active-space (CAS) wave functions, with or without reoptimization of the coefficients of the configuration state functions (CSFs) and of the orbitals in variational Monte Carlo (VMC). It is found that, with the Slater basis set used here, both state-average and state-specific CAS(6,5) wave functions give an accurate excitation energy in diffusion Monte Carlo (DMC), with or without reoptimization of the CSF and orbital coefficients in the presence of the Jastrow factor. In contrast, the CAS(2,2) wave functions require reoptimization of the CSF and orbital coefficients to give a good DMC excitation energy. Our best estimates of ...

Toulouse, Julien; Reinhardt, Peter; Hoggan, Philip E; Umrigar, C J

2010-01-01T23:59:59.000Z

367

We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady-state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to make efficient use of a limited amount of kinetic Monte Carlo simulations. In general, the stochastic kinetic Monte Carlo results do not obey mass conservation, so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections based on a stoichiometry matrix and a least-squares problem, reduced to a non-singular set of linear equations, that is applicable to any surface-catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation, in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interactions at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from the literature.

Schaefer, C.; Jansen, A. P. J. [Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands)

2013-02-07T23:59:59.000Z
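
In its simplest form, a stoichiometry-constrained least-squares correction of noisy rate estimates reduces to an orthogonal projection onto the null space of the conservation constraints. A minimal sketch of that idea (the two-step reaction network and flux values are hypothetical, not the paper's mechanisms):

```python
import numpy as np

# Hypothetical steady-state balance for a surface network A -> B -> C:
# species B is produced by step 1 and consumed by step 2, so conservation
# requires r1 - r2 = 0.  C is the constraint matrix derived from the
# stoichiometry of the network.
C = np.array([[1.0, -1.0]])

# Noisy kinetic Monte Carlo estimates of the two rates violate the balance.
r_kmc = np.array([1.03, 0.98])

# Least-squares correction: the closest rate vector satisfying C r = 0 is
# the orthogonal projection of r_kmc onto the null space of C; the
# Lagrange-multiplier solution reduces to one small, non-singular solve.
r_corr = r_kmc - C.T @ np.linalg.solve(C @ C.T, C @ r_kmc)
```

The corrected rates split the imbalance evenly (here both become 1.005), which is the minimum-norm adjustment; a larger mechanism only enlarges `C`, the structure of the solve is unchanged.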

368

Quantum Monte Carlo benchmark of exchange-correlation functionals for bulk water

The accurate description of the thermodynamic and dynamical properties of liquid water from first principles is a very important challenge to the theoretical community. This represents not only a critical test of the predictive capabilities of first-principles methods, but it will also shed light on the microscopic properties of such an important substance. Density Functional Theory, the main workhorse in the field of first-principles methods, has so far been unable to properly describe water and its unusual properties in the liquid state. With the recent introduction of exact exchange and an improved description of dispersion interactions, the possibility of an accurate description of the liquid is finally within reach. Unfortunately, there is still no way to systematically improve exchange-correlation functionals, and the number of available functionals is very large. In this article we use highly accurate quantum Monte Carlo calculations to benchmark a selection of exchange-correlation functionals typically used in Density Functional Theory simulations of bulk water. This allows us to test the predictive capabilities of these functionals in water, giving us a way not only to choose optimal functionals for first-principles simulations, but also a route for the optimization of the functionals for the system at hand. We compare and contrast the importance of different features of functionals, including the hybrid component, the vdW component, and their importance within different aspects of the PES. In addition, we test a recently introduced scheme that combines Density Functional Theory with Coupled Cluster calculations through a many-body expansion of the energy, in order to correct the inaccuracies in the description of short-range interactions in the liquid.

Morales, Miguel A [Lawrence Livermore National Laboratory (LLNL); Gergely, John [University of Illinois, Urbana-Champaign; McMinis, Jeremy [Lawrence Livermore National Laboratory (LLNL); McMahon, Jeffrey [University of Illinois, Urbana-Champaign; Kim, Jeongnim [ORNL; Ceperley, David M. [University of Illinois, Urbana-Champaign

2014-01-01T23:59:59.000Z

369

Reaction mechanisms of ethanol decomposition on Rh(1 1 1) were elucidated by means of periodic density functional theory (DFT) calculations and kinetic Monte Carlo (KMC) simulations. We propose that the most probable reaction pathway is via CH{sub 3}CH{sub 2}O* on the basis of our mechanistic study: CH{sub 3}CH{sub 2}OH* {yields} CH{sub 3}CH{sub 2}O* {yields} CH{sub 2}CH{sub 2}O* {yields} CH{sub 2}CHO* {yields} CH{sub 2}CO* {yields} CHCO* {yields} CH* + CO* {yields} C* + CO*. In contrast, the contribution from the pathway via CH{sub 3}CHOH* is relatively small: CH{sub 3}CH{sub 2}OH* {yields} CH{sub 3}CHOH* {yields} CH{sub 3}CHO* {yields} CH{sub 3}CO* {yields} CH{sub 2}CO* {yields} CHCO* {yields} CH* + CO* {yields} C* + CO*. According to our calculations, one of the slow steps is the formation of the oxametallacycle CH{sub 2}CH{sub 2}O* species, which leads to the production of CHCO*, the precursor for C-C bond breaking. Finally, the decomposition of ethanol leads to the production of C and CO. Our calculations also show that, for ethanol combustion on Rh, the major obstacle is not C-C bond cleavage but C contamination on Rh(1 1 1). The strong C-Rh interaction may deactivate the Rh catalyst. The formation of Rh alloys with Pt and Pd weakens the C-Rh interaction, easing the removal of C and, as expected, in accordance with the experimental findings, facilitating ethanol combustion.

Liu, P.; Choi, Y.M.

2011-05-16T23:59:59.000Z

370

Monte Carlo simulation of nitrogen dissociation based on state-resolved cross sections

State-resolved analyses of N + N{sub 2} are performed using the direct simulation Monte Carlo (DSMC) method. In describing the elastic collisions by a state-resolved method, a state-specific total cross section is proposed. The state-resolved method is constructed from the state-specific total cross section and the rovibrational state-to-state transition cross sections for bound-bound and bound-free transitions taken from a NASA database. This approach makes it possible to analyze the rotational-to-translational, vibrational-to-translational, and rotational-to-vibrational energy transfers and the chemical reactions without relying on macroscopic properties and phenomenological models. In nonequilibrium heat bath calculations, the results of present state-resolved DSMC calculations are validated with those of the master equation calculations and the existing shock-tube experimental data for bound-bound and bound-free transitions. In various equilibrium and nonequilibrium heat bath conditions and 2D cylindrical flows, the DSMC calculations by the state-resolved method are compared with those obtained with previous phenomenological DSMC models. In these previous DSMC models, the variable soft sphere, phenomenological Larsen-Borgnakke, quantum kinetic, and total collision energy models are considered. From these studies, it is concluded that the state-resolved method can accurately describe the rotational-to-translational, vibrational-to-translational, and rotational-to-vibrational transfers and quasi-steady state of rotational and vibrational energies in nonequilibrium chemical reactions by state-to-state kinetics.

Kim, Jae Gang, E-mail: jaegkim@umich.edu; Boyd, Iain D., E-mail: iainboyd@umich.edu [Department of Aerospace Engineering, University of Michigan, 1320 Beal Avenue, Ann Arbor, Michigan 48109-2140 (United States)

2014-01-15T23:59:59.000Z

371

Argonne National Laboratory (ANL) of USA and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukraine nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is {approx}375 kW including the fission power of {approx}260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. The recent developments of Monte Carlo computer codes, the high speed of computer processors, and parallel computation techniques have made it possible to perform three-dimensional detailed burnup simulations. A full, detailed three-dimensional geometrical model is used for the burnup simulations, with continuous-energy nuclear data libraries for the transport calculations and 63-group or one-group cross-section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study.
MCNPX transports the electrons and the produced neutrons and photons, but the current version of MCNPX doesn't support depletion/burnup calculation of the subcritical system with the generated neutron source from the target. MCB can perform neutron transport and burnup calculations for a subcritical system using an external neutron source; however, it cannot perform electron transport calculations. To solve this problem, a hybrid procedure was developed by coupling these two computer codes. The user tally subroutine of MCNPX was developed and utilized to record the information of each generated neutron from the photonuclear reactions resulting from the electron beam interactions. MCB reads the recorded information of each generated neutron through the user source subroutine. In this way, the neutron source generated by electron reactions can be utilized in MCB calculations, without the need for MCB to transport the electrons. Using the source subroutines, MCB obtains the external neutron source prepared by MCNPX and performs the depletion calculation for the driven subcritical facility.

Gohar, Y.; Zhong, Z.; Talamo, A.; Nuclear Engineering Division

2009-06-09T23:59:59.000Z

372

Characteristics of elliptical sources in BEAMnrc Monte Carlo system: Implementation and application

Recently, several papers noticed that the electron focal spot of a linear accelerator (linac) could be elliptical, which would cause dosimetric discrepancies between measurements and Monte Carlo simulations. To resolve the mismatch, two elliptical source models were developed in the BEAMnrc code. The first was a parallel beam elliptical source with uniform distribution, where the shape of the source was primarily considered. The other was a parallel beam elliptical source with Gaussian distribution, whose source distribution follows the normal distribution. To validate the elliptical source models, uniform and Gaussian electron beams were impinged on a thin air target. Both models successfully reproduced the elliptical shapes and source distributions. Then, this study investigated the characteristics of the elliptical Gaussian source for a 6 MV photon beam in a Varian 21EX linac. The linac head model was implemented in the BEAMnrc/EGSnrc system and commissioned by comparing the lateral and depth dose profiles to the ion chamber measurements acquired from the annual quality assurance (QA). It was found that the circular Gaussian beam with 6 MeV/0.2 cm full width at half maximum (FWHM) produces the best matches to the QA data. To explore the characteristics of the elliptical Gaussian source, this study employed an elliptical Gaussian electron source with 0.1 cm FWHM in the x axis and 0.2 cm FWHM in the y axis, which was incident on the target of the linac head. Two circular Gaussian beams with 0.1 and 0.2 cm FWHM were employed to compare the differences between circular and elliptical sources. For all the sources, planar and energy fluences were acquired and analyzed. This study also compared the lateral and depth dose profiles in a water phantom by using the DOSXYZnrc user code. In the results, a constricted shoulder effect was observed in both planar and energy fluence plots when the FWHM value was increased and the field size was larger than 30x30 cm{sup 2}.
The same effect was also noticed in the lateral dose profiles, while the depth dose profile did not vary much.

Kim, Sangroh [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States)

2009-04-15T23:59:59.000Z

373

A Monte Carlo model of electron thermalization in inorganic scintillators, which was developed and applied to CsI in a previous publication [Wang et al., J. Appl. Phys. 110, 064903 (2011)], is extended to another material of the alkali halide class, NaI, and to two materials from the alkaline-earth halide class, CaF{sub 2} and BaF{sub 2}. This model includes electron scattering with both longitudinal optical (LO) and acoustic phonons as well as the effects of internal electric fields. For the four pure materials, a significant fraction of the electrons recombine with self-trapped holes and the thermalization distance distributions of the electrons that do not recombine peak between approximately 25 and 50 nm and extend up to a few hundreds of nanometers. The thermalization time distributions of CaF{sub 2}, BaF{sub 2}, NaI, and CsI extend to approximately 0.5, 1, 2, and 7 ps, respectively. The simulations show that the LO phonon energy is a key factor that affects the electron thermalization process. Indeed, the higher the LO phonon energy is, the shorter the thermalization time and distance are. The thermalization time and distance distributions show no dependence on the incident {gamma}-ray energy. The four materials also show different extents of electron-hole pair recombination due mostly to differences in their electron mean free paths (MFPs), LO phonon energies, initial densities of electron-hole pairs, and static dielectric constants. The effect of thallium doping is also investigated for CsI and NaI as these materials are often doped with activators. Comparison between CsI and NaI shows that both the larger size of Cs{sup +} relative to Na{sup +}, i.e., the greater atomic density of NaI, and the longer electron mean free path in NaI compared to CsI contribute to an increased probability for electron trapping at Tl sites in NaI versus CsI.

Wang Zhiguo; Gao Fei; Kerisit, Sebastien [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Xie Yulong [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Campbell, Luke W. [National Security Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States)

2012-07-01T23:59:59.000Z

374

A Monte Carlo model of electron thermalization in inorganic scintillators, which was developed and applied to CsI in a previous publication [Wang et al., J. Appl. Phys. 110, 064903 (2011)], is extended to another material of the alkali halide class, NaI, and to two materials from the alkaline-earth halide class, CaF2 and BaF2. This model includes electron scattering with both longitudinal optical (LO) and acoustic phonons as well as the effects of internal electric fields. For the four pure materials, a significant fraction of the electrons recombine with self-trapped holes and the thermalization distance distributions of the electrons that do not recombine peak between approximately 25 and 50 nm and extend up to a few hundreds of nanometers. The thermalization time distributions of CaF2, BaF2, NaI, and CsI extend to approximately 0.5, 1, 2, and 7 ps, respectively. The simulations show that the LO phonon energy is a key factor that affects the electron thermalization process. Indeed, the higher the LO phonon energy is, the shorter the thermalization time and distance are. The thermalization time and distance distributions show no dependence on the incident {gamma}-ray energy. The four materials also show different extents of electron-hole pair recombination due mostly to differences in their electron mean free paths (MFPs), LO phonon energies, initial densities of electron-hole pairs, and static dielectric constants. The effect of thallium doping is also investigated for CsI and NaI as these materials are often doped with activators. Comparison between CsI and NaI shows that both the larger size of Cs+ relative to Na+, i.e., the greater atomic density of NaI, and the longer electron mean free path in NaI compared to CsI contribute to an increased probability for electron trapping at Tl sites in NaI versus CsI.

Wang, Zhiguo; Xie, YuLong; Campbell, Luke W.; Gao, Fei; Kerisit, Sebastien N.

2012-07-01T23:59:59.000Z

375

Science Journals Connector (OSTI)

Prompt fission neutrons following the thermal and 0.5 MeV neutron-induced fission reaction of Pu239 are calculated using a Monte Carlo approach to the evaporation of the excited fission fragments. Exclusive data such as the multiplicity distribution P(ν), the average multiplicity as a function of fragment mass ν̄(A), and many others are inferred in addition to the most used average prompt fission neutron spectrum χ(Ein,Eout), as well as the average neutron multiplicity ν̄. Experimental information on these more exclusive data helps constrain the Monte Carlo model parameters. The calculated average total neutron multiplicity is ν̄c = 2.871, in very close agreement with the evaluated value ν̄e = 2.8725 present in the ENDF/B-VII.0 library. The neutron multiplicity distribution P(ν) is in very good agreement with the evaluation by Holden and Zucker. The calculated average spectrum differs in shape from the ENDF/B-VII.0 spectrum, evaluated with the Madland-Nix model. In particular, we predict more neutrons in the low-energy tail of the spectrum (below about 300 keV) than the Madland-Nix calculations, casting some doubt on how much scission neutrons contribute to the shape of the low-energy tail of the spectrum. The high-energy tail of the spectrum is very sensitive to the total kinetic energy distribution of the fragments as well as to the total excitation energy sharing at scission. Present experimental uncertainties on measured spectra above 6 MeV are too large to distinguish between various theoretical hypotheses. Finally, comparisons of the Monte Carlo results with experimental data on ν̄(A) indicate that more neutrons are emitted from the light fragments than the heavy ones, in agreement with previous works.

P. Talou; B. Becker; T. Kawano; M. B. Chadwick; Y. Danon

2011-06-23T23:59:59.000Z
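
The evaporation picture behind such calculations can be caricatured as sequential neutron emission from excited fragments until the remaining excitation energy drops below the neutron separation energy. A toy sketch, where the separation energy, fragment temperature, and excitation-energy numbers are rough assumptions rather than the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

S_N = 5.5   # assumed average neutron separation energy (MeV)
T = 0.9     # assumed fragment temperature for the evaporation spectrum (MeV)

def evaporate(e_star):
    """Emit neutrons sequentially until the fragment drops below S_N."""
    nu = 0
    while e_star > S_N:
        # Weisskopf evaporation spectrum ~ E exp(-E/T): a Gamma(2, T) draw.
        e_kin = rng.gamma(2.0, T)
        if e_kin > e_star - S_N:
            e_kin = rng.uniform(0.0, e_star - S_N)  # last neutron takes the rest
        e_star -= S_N + e_kin
        nu += 1
    return nu

# Share the total excitation energy between the two fragments event by event.
n_events = 50000
nu_tot = np.empty(n_events, dtype=int)
for k in range(n_events):
    e_total = max(rng.normal(21.0, 5.0), 0.0)   # crude TXE distribution (MeV)
    split = rng.uniform(0.4, 0.6)
    nu_tot[k] = evaporate(e_total * split) + evaporate(e_total * (1.0 - split))

nu_bar = float(nu_tot.mean())             # to be compared against evaluated data
p_nu = np.bincount(nu_tot) / n_events     # multiplicity distribution P(nu)
```

Even this crude model produces event-by-event quantities such as P(ν), which is the kind of exclusive observable the full calculation uses to constrain its parameters against experiment.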

376

Understanding materials degradation under intense irradiation is important for the development of next generation nuclear power plants. Here we demonstrate that defect microstructural evolution in molybdenum nanofoils in situ irradiated and observed on a transmission electron microscope can be reproduced with high fidelity using an object kinetic Monte Carlo (OKMC) simulation technique. Main characteristics of defect evolution predicted by OKMC, namely, defect density and size distribution as functions of foil thickness, ion fluence and flux, are in excellent agreement with those obtained from the in situ experiments and from previous continuum-based cluster dynamics modeling. The combination of advanced in situ experiments and high performance computer simulation/modeling is a unique tool to validate physical assumptions/mechanisms regarding materials response to irradiation, and to achieve the predictive power for materials stability and safety in nuclear facilities.

Xu Donghua; Wirth, Brian D. [Department of Nuclear Engineering, University of Tennessee, Knoxville, Tennessee 37996 (United States); Li Meimei [Division of Nuclear Engineering, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Kirk, Marquis A. [Division of Materials Science, Argonne National Laboratory, Argonne, Illinois 60439 (United States)

2012-09-03T23:59:59.000Z

377

A detailed characterization of an X-ray Si(Li) detector was performed to obtain the energy dependence of its efficiency in the photon energy range of 6.4-59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters of the detector were used in the simulation. A complete computerized tomography (CT) scan of the detector allowed us to determine the correct crystal dimensions and position inside the capsule. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.

Lopez-Pino, N.; Padilla-Cabal, F.; Garcia-Alvarez, J. A.; Vazquez, L.; D'Alessandro, K.; Correa-Alfonso, C. M. [Departamento de Fisica Nuclear, Instituto Superior de Tecnologia y Ciencias Aplicadas (InSTEC) Ave. Salvador Allende y Luaces. Quinta de los Molinos. Habana 10600. A.P. 6163, La Habana (Cuba); Godoy, W.; Maidana, N. L.; Vanin, V. R. [Laboratorio do Acelerador Linear, Instituto de Fisica - Universidade de Sao Paulo Rua do Matao, Travessa R, 187, 05508-900, SP (Brazil)

2013-05-06T23:59:59.000Z

378

The correlations between multiplicities in two separated rapidity windows are studied in the framework of a Monte Carlo model based on the picture of string formation in elementary collisions of colour dipoles. The hardness of an elementary collision is defined by the transverse size of the interacting dipoles. The dependence of the forward-backward correlation strength on the width and position of the pseudorapidity windows, as well as on the transverse momentum range of the observed particles, was studied. It is demonstrated that taking the string fusion effects into account improves the agreement with the available experimental data.
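The forward-backward correlation strength studied above is conventionally b = cov(n_F, n_B)/var(n_F), computed over events. A minimal sketch with a toy event model in which a shared number of "strings" per event induces the correlation; all numbers are illustrative:

```python
import random

def fb_correlation(pairs):
    """Forward-backward correlation coefficient
    b = cov(nF, nB) / var(nF) for per-event multiplicity pairs."""
    n = len(pairs)
    mF = sum(f for f, _ in pairs) / n
    mB = sum(b for _, b in pairs) / n
    cov = sum((f - mF) * (b - mB) for f, b in pairs) / n
    var = sum((f - mF) ** 2 for f, _ in pairs) / n
    return cov / var

# Toy events: each event has a random number of strings, and each
# string emits 0-2 particles into each window independently.
rng = random.Random(7)
events = []
for _ in range(5000):
    strings = rng.randint(1, 10)
    nF = sum(rng.randint(0, 2) for _ in range(strings))
    nB = sum(rng.randint(0, 2) for _ in range(strings))
    events.append((nF, nB))
b_corr = fb_correlation(events)
```

The shared event-by-event string number is what produces b > 0 even though the two windows are causally separated, which is the mechanism the dipole/string-fusion model refines.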

Kovalenko, Vladimir

2014-01-01T23:59:59.000Z

379

Search for New Heavy Higgs Boson in B-L model at the LHC using Monte Carlo Simulation

The aim of this work is to search for a new heavy Higgs boson in the B-L extension of the Standard Model at the LHC, using data produced by Monte Carlo event generator programs from simulated proton-proton collisions at different center-of-mass energies, in order to find new Higgs boson signatures at the LHC. We also study the production and decay channels of the Higgs boson in this model and its interactions with the other new particles of the model, namely the new massive neutral gauge boson and the new fermionic right-handed heavy neutrinos.

Hesham Mansour; Nady Bakhet

2013-04-24T23:59:59.000Z

380

Combined Monte Carlo and quantum mechanics study of the hydration of the guanine-cytosine base pair

Science Journals Connector (OSTI)

We present a computer simulation study of the hydration of the guanine-cytosine (GC) hydrogen-bonded complex. Using first-principles density-functional theory with gradient-corrected exchange-correlation and Monte Carlo simulation, we include thermal contributions, structural effects, solvent polarization, and the water-water and water-GC hydrogen bond interactions to show that the GC interaction in an aqueous environment is weakened to about 70% of the value obtained for an isolated complex. We also analyze in detail the preferred hydration sites of the GC pair and show that on average it makes around five hydrogen bonds with water.

Kaline Coutinho; Valdemir Ludwig; Sylvio Canuto

2004-06-01T23:59:59.000Z


381

We describe the study of thermodynamics of materials using replica-exchange Wang-Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang-Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parametrized with first principles calculations. We demonstrate that our framework leads to a significant speedup without compromising the accuracy and precision and facilitates the study of much larger systems than is possible with its serial counterpart.
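The serial kernel that REWL parallelizes is the ordinary Wang-Landau iteration: a random walk in state space accepted with probability min(1, g(E_old)/g(E_new)), while log g(E) is refined by a shrinking modification factor and the energy histogram is checked for flatness. A minimal single-walker sketch on a toy system (two dice, so the exact density of states is known), not the Heisenberg-model code itself:

```python
import math
import random

def wang_landau(energy_of, n_states, levels, f_final=1e-4, flat=0.8, seed=3):
    """Single-walker Wang-Landau sampling of log g(E) on a small
    discrete state space; 'levels' is the set of possible energies."""
    rng = random.Random(seed)
    lng = {E: 0.0 for E in levels}      # running estimate of log g(E)
    hist = {E: 0 for E in levels}
    state, lnf = 0, 1.0
    while lnf > f_final:
        for _ in range(10000):
            new = rng.randrange(n_states)
            E_old, E_new = energy_of(state), energy_of(new)
            if rng.random() < math.exp(lng[E_old] - lng[E_new]):
                state = new
            E = energy_of(state)
            lng[E] += lnf               # refine log g at the visited energy
            hist[E] += 1
        # halve the modification factor once the histogram is flat enough
        if min(hist.values()) > flat * (sum(hist.values()) / len(hist)):
            lnf /= 2.0
            hist = {E: 0 for E in levels}
    return lng

# Toy "system": 36 states = two dice; energy = sum of the faces.
states = [(a, b) for a in range(1, 7) for b in range(1, 7)]
E_of = lambda i: states[i][0] + states[i][1]
lng = wang_landau(E_of, 36, range(2, 13))
ratio = math.exp(lng[7] - lng[2])   # exact degeneracy ratio g(7)/g(2) is 6
```

REWL splits the energy range into overlapping windows, runs one such walker per window, and exchanges configurations between windows, which is what gives the speedup the abstract reports.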

Perera, Dilina; Eisenbach, Markus; Vogel, Thomas; Landau, David P

2014-01-01T23:59:59.000Z

382

We study electronic properties of graphene with a finite concentration of vacancies or other resonant scatterers by straightforward lattice Quantum Monte Carlo calculations. Taking into account the realistic long-range Coulomb interaction, we calculate the distribution of spin density associated with midgap states and demonstrate antiferromagnetic ordering. Energy gaps open due to the interaction effects, both in the bare graphene spectrum and in the vacancy/impurity bands. In the case of a 5% concentration of resonant scatterers, the latter gap is estimated as 0.7 eV and 1.1 eV for graphene on boron nitride and freely suspended graphene, respectively.

M. V. Ulybyshev; M. I. Katsnelson

2015-02-04T23:59:59.000Z

383

We study electronic properties of graphene with a finite concentration of vacancies or other resonant scatterers by straightforward lattice Quantum Monte Carlo calculations. Taking into account the realistic long-range Coulomb interaction, we calculate the distribution of spin density associated with midgap states and demonstrate antiferromagnetic ordering. Energy gaps open due to the interaction effects, both in the bare graphene spectrum and in the vacancy/impurity bands. In the case of a 5% concentration of resonant scatterers, the latter gap is estimated as 0.7 eV and 1.1 eV for graphene on boron nitride and freely suspended graphene, respectively.

Ulybyshev, M V

2015-01-01T23:59:59.000Z

384

Science Journals Connector (OSTI)

A new quantum propagator for asymmetric tops, exact for free tops, is applied to path-integral Monte Carlo simulations of quantum rotors. The algorithm does not suffer from the sign problem if the full density matrix is considered or if the identity representation for the density matrix is chosen. The method is applied to simulation of crystalline CH4, where the influence of quantum fluctuations in the lowering of the transition temperature from a plastic phase to an orientational ordered state is investigated. The possibility of an inverse hysteresis occurring at that transition is predicted and related to spin-statistical exchange effects.

M. H. Müser and B. J. Berne

1996-09-23T23:59:59.000Z

385

The major aim of this work is a sensitivity analysis related to the influence of the different nuclear data libraries on the k-infinity values and on the void coefficient estimations performed for various CANDU fuel projects, and on the simulations related to the replacement of the original stainless steel adjuster rods by cobalt assemblies in the CANDU reactor core. The computations are performed using the Monte Carlo transport codes MCNP5 and MONTEBURNS 1.0 for the actual, detailed geometry and material composition of the fuel bundles and reactivity devices. Some comparisons with deterministic and probabilistic codes involving the WIMS library are also presented.

Gugiu, E.D.; Dumitrache, I.; Constantin, M. [Institute for Nuclear Research, PO Box 78, 0300 Pitesti (Romania); Ellis, R.J. [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, Tennessee (United States)

2005-05-24T23:59:59.000Z

386

Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the results while the simulation is running.

O'Brien, M J; Procassini, R J; Joy, K I

2009-03-09T23:59:59.000Z

387

We report on Hybrid-Monte-Carlo simulations of the tight-binding model with long-range Coulomb interactions for the electronic properties of graphene. We investigate the spontaneous breaking of sublattice symmetry corresponding to a transition from the semimetal to an antiferromagnetic insulating phase. Our short-range interactions thereby include the partial screening due to electrons in higher energy states from ab initio calculations based on the constrained random phase approximation [T.O.Wehling {\\it et al.}, Phys.Rev.Lett.{\\bf 106}, 236805 (2011)]. In contrast to a similar previous Monte-Carlo study [M.V.Ulybyshev {\\it et al.}, Phys.Rev.Lett.{\\bf 111}, 056801 (2013)] we also include a phenomenological model which describes the transition to the unscreened bare Coulomb interactions of graphene at half filling in the long-wavelength limit. Our results show, however, that the critical coupling for the antiferromagnetic Mott transition is largely insensitive to the strength of these long-range Coulomb tails. They hence confirm the prediction that suspended graphene remains in the semimetal phase when a realistic static screening of the Coulomb interactions is included.
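The Hybrid (Hamiltonian) Monte Carlo update used in such lattice simulations alternates momentum refreshment, leapfrog integration of Hamiltonian dynamics, and a Metropolis accept-reject step on the energy error. A minimal one-dimensional sketch targeting a standard normal distribution rather than the graphene fermion action:

```python
import math
import random

def hmc_sample(logp_grad, x0, n_samples, eps=0.2, n_leap=10, seed=5):
    """Hybrid/Hamiltonian Monte Carlo for a 1-D target density.
    logp_grad(x) returns (log p(x), d log p / dx)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)               # refresh momentum
        lp, g = logp_grad(x)
        H_old = -lp + 0.5 * p * p
        x_new, p_new, g_new = x, p, g
        # Leapfrog integration: half-kick, n_leap drifts/kicks, half-kick.
        p_new += 0.5 * eps * g_new
        for step in range(n_leap):
            x_new += eps * p_new
            lp_new, g_new = logp_grad(x_new)
            if step < n_leap - 1:
                p_new += eps * g_new
        p_new += 0.5 * eps * g_new
        H_new = -lp_new + 0.5 * p_new * p_new
        if rng.random() < math.exp(min(0.0, H_old - H_new)):
            x = x_new                          # Metropolis accept
        samples.append(x)
    return samples

# Target: standard normal, log p = -x^2/2 up to a constant.
target = lambda x: (-0.5 * x * x, -x)
xs = hmc_sample(target, x0=3.0, n_samples=4000)
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)
```

In the lattice setting the "position" is the gauge/Hubbard field and the gradient comes from the fermion determinant, but the accept-reject structure is the same.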

Dominik Smith; Lorenz von Smekal

2014-03-14T23:59:59.000Z

388

MOCABA is a combination of Monte Carlo sampling and Bayesian updating algorithms for the prediction of integral functions of nuclear data, such as reactor power distributions or neutron multiplication factors. Similarly to the established Generalized Linear Least Squares (GLLS) methodology, MOCABA offers the capability to utilize integral experimental data to reduce the prior uncertainty of integral observables. The MOCABA approach, however, does not involve any series expansions and, therefore, does not suffer from the breakdown of first-order perturbation theory for large nuclear data uncertainties. This is related to the fact that, in contrast to the GLLS method, the updating mechanism within MOCABA is applied directly to the integral observables without having to "adjust" any nuclear data. A central part of MOCABA is the nuclear data Monte Carlo program NUDUNA, which performs random sampling of nuclear data evaluations according to their covariance information and converts them into libraries for transport code systems like MCNP or SCALE. What is special about MOCABA is that it can be applied to any integral function of nuclear data, and any integral measurement can be taken into account to improve the prediction of an integral observable of interest. In this paper we present two example applications of the MOCABA framework: the prediction of the neutron multiplication factor of a water-moderated PWR fuel assembly based on 21 criticality safety benchmark experiments and the prediction of the power distribution within a toy model reactor containing 100 fuel assemblies.
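The updating mechanism described, Bayesian conditioning applied directly to sampled integral observables, reduces in the Gaussian approximation to a Kalman-type formula on the Monte Carlo sample moments. A toy sketch with one application observable and one benchmark, using made-up numbers; this is not the NUDUNA/MOCABA implementation:

```python
import random

def bayes_update(samples_app, samples_bench, measured, sigma_meas):
    """Gaussian Bayesian update of an application observable using
    Monte Carlo samples of (application, benchmark) observable pairs:
    no nuclear-data adjustment, only conditioning on the measurement."""
    n = len(samples_app)
    ma = sum(samples_app) / n
    mb = sum(samples_bench) / n
    cab = sum((a - ma) * (b - mb)
              for a, b in zip(samples_app, samples_bench)) / (n - 1)
    vbb = sum((b - mb) ** 2 for b in samples_bench) / (n - 1)
    vaa = sum((a - ma) ** 2 for a in samples_app) / (n - 1)
    gain = cab / (vbb + sigma_meas ** 2)       # Kalman-type gain
    post_mean = ma + gain * (measured - mb)
    post_var = vaa - gain * cab
    return post_mean, post_var

# Toy model: one shared "nuclear data" perturbation drives both
# observables, plus independent noise (all numbers illustrative).
rng = random.Random(11)
apps, benches = [], []
for _ in range(2000):
    d = rng.gauss(0.0, 1.0)
    apps.append(1.00 + 0.01 * d + rng.gauss(0.0, 0.002))
    benches.append(0.99 + 0.01 * d + rng.gauss(0.0, 0.002))
post_mean, post_var = bayes_update(apps, benches, measured=1.0,
                                   sigma_meas=0.001)
```

Because the benchmark and the application share the underlying data perturbation, conditioning on the measured benchmark shrinks the posterior variance of the application observable below its prior variance, which is the gain MOCABA exploits with its 21 criticality benchmarks.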

Axel Hoefer; Oliver Buss; Maik Hennebach; Michael Schmid; Dieter Porsch

2014-11-12T23:59:59.000Z

389

Photon energy-modulated radiotherapy: Monte Carlo simulation and treatment planning study

Purpose: To demonstrate the feasibility of photon energy-modulated radiotherapy during beam-on time. Methods: A cylindrical device made of aluminum was conceptually proposed as an energy modulator. The frame of the device was connected to 20 tubes through which mercury could be injected or drained to adjust the thickness of mercury along the beam axis. In Monte Carlo (MC) simulations, the flattening filter of a 6 or 10 MV linac was replaced with the device. The thickness of mercury inside the device was varied from 0 to 40 mm at field sizes of 5 x 5 cm{sup 2} (FS5), 10 x 10 cm{sup 2} (FS10), and 20 x 20 cm{sup 2} (FS20). At least 5 billion histories were followed for each simulation to create phase space files at 100 cm source-to-surface distance (SSD). In-water beam data were acquired by additional MC simulations using the above phase space files. A treatment planning system (TPS) was commissioned to generate a virtual machine using the MC-generated beam data. Intensity-modulated radiation therapy (IMRT) plans for six clinical cases were generated using conventional 6 MV, 6 MV flattening-filter-free, and energy-modulated photon beams of the virtual machine. Results: As the thickness of mercury increased, the percentage depth doses (PDDs) of the modulated 6 and 10 MV beams beyond the depth of dose maximum continuously increased. The PDD increase at depths of 10 and 20 cm for the modulated 6 MV beam was 4.8% and 5.2% at FS5, 3.9% and 5.0% at FS10, and 3.2%-4.9% at FS20 as the thickness of mercury increased from 0 to 20 mm. The corresponding values for the modulated 10 MV beam were 4.5% and 5.0% at FS5, 3.8% and 4.7% at FS10, and 4.1% and 4.8% at FS20 as the thickness of mercury increased from 0 to 25 mm. The outputs of the modulated 6 MV beam with 20 mm of mercury and of the modulated 10 MV beam with 25 mm of mercury were reduced to 30% and 56% of those of the conventional linac, respectively.
The energy-modulated IMRT plans had lower integral doses than the 6 MV IMRT or 6 MV flattening-filter-free plans for tumors located in the periphery, while maintaining similar quality of target coverage, homogeneity, and conformity. Conclusions: The MC study of the designed energy modulator demonstrated the feasibility of energy-modulated photon beams available during beam-on time. The planning study showed an advantage of energy- and intensity-modulated radiotherapy in terms of integral dose without sacrificing any quality of the IMRT plan.

Park, Jong Min; Kim, Jung-in; Heon Choi, Chang; Chie, Eui Kyu; Kim, Il Han; Ye, Sung-Joon [Interdiciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, 110-744, Korea and Department of Radiation Oncology, Seoul National University Hospital, Seoul, 110-744 (Korea, Republic of); Interdiciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, 110-744 (Korea, Republic of); Department of Radiation Oncology, Seoul National University Hospital, Seoul, 110-744 (Korea, Republic of); Interdiciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, 110-744 (Korea, Republic of) and Department of Radiation Oncology, Seoul National University Hospital, Seoul, 110-744 (Korea, Republic of); Interdiciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, 110-744 (Korea, Republic of); Department of Radiation Oncology, Seoul National University Hospital, Seoul, 110-744 (Korea, Republic of) and Department of Intelligent Convergence Systems, Seoul National University, Seoul, 151-742 (Korea, Republic of)

2012-03-15T23:59:59.000Z

390

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

NUCLEAR DATA AND MEASUREMENT SERIES, ANL/NDM-166: A Unified Monte Carlo Approach to Fast Neutron Cross Section Data Evaluation. Donald L. Smith, January 2008. NUCLEAR ENGINEERING DIVISION, ARGONNE NATIONAL LABORATORY, 9700 SOUTH CASS AVENUE, ARGONNE, ILLINOIS 60439, U.S.A. About Argonne National Laboratory: Argonne is a U.S. Department of Energy laboratory managed by UChicago Argonne, LLC under contract DE-AC02-06CH11357. The Laboratory's main facility is in the suburbs of Chicago at 9700 South Cass Avenue, Argonne, Illinois 60439. For information about Argonne National Laboratory see http://www.anl.gov. Availability of this Report: This report is available, at no cost, at http://www.osti.gov/bridge. It is also available on paper from the U.S. Department of Energy and its contractors, for a processing fee, from:

391

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Application of Distribution Transformer Thermal Life Models to Electrified Vehicle Charging Loads Using Monte-Carlo Method: Preprint. Michael Kuss, Tony Markel, and William Kramer. Presented at the 25th World Battery, Hybrid and Fuel Cell Electric Vehicle Symposium & Exhibition, Shenzhen, China, November 5-9, 2010. Conference Paper NREL/CP-5400-48827, January 2011. NOTICE: The submitted manuscript has been offered by an employee of the Alliance for Sustainable Energy, LLC (Alliance), a contractor of the US Government under Contract No. DE-AC36-08GO28308. Accordingly, the US Government and Alliance retain a nonexclusive royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for US Government purposes.

392

The purpose of the present study is to introduce a compression algorithm for the CT (computed tomography) data used in Monte Carlo simulations. Performing simulations on CT data implies large computational costs as well as large memory requirements, since the number of voxels in such data typically reaches hundreds of millions. CT data, however, contain homogeneous regions which can be regrouped to form larger voxels without affecting the simulation's accuracy. Based on this property we propose an octree-based compression algorithm: in homogeneous regions the algorithm replaces groups of voxels with a smaller number of larger voxels. This reduces the number of voxels while keeping the critical high-density-gradient areas. Results obtained using the present algorithm on both phantom and clinical data show that compression rates up to 75% are possible without losing the dosimetric accuracy of the simulation.
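The octree idea can be illustrated in two dimensions with its quadtree analogue: recursively split a square region until every cell is homogeneous within a tolerance, keeping one merged "voxel" per homogeneous region. A minimal sketch on a synthetic 8x8 grid (the real algorithm works on 3-D CT density values):

```python
def compress(grid, x, y, size, tol=0.0):
    """Quadtree analogue of the octree CT compression: merge a square
    region into one cell if its values are homogeneous within tol,
    otherwise recurse into the four quadrants.  Returns a list of
    (x, y, size, value) merged cells."""
    vals = [grid[y + j][x + i] for j in range(size) for i in range(size)]
    if size == 1 or max(vals) - min(vals) <= tol:
        return [(x, y, size, vals[0])]        # one merged voxel
    half = size // 2
    cells = []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        cells.extend(compress(grid, x + dx, y + dy, half, tol))
    return cells

# 8x8 grid: uniform background with one dense 2x2 feature.
grid = [[1.0] * 8 for _ in range(8)]
grid[5][5] = grid[5][6] = grid[6][5] = grid[6][6] = 3.0
cells = compress(grid, 0, 0, 8)
```

Here 64 voxels collapse to 19 cells while the dense feature keeps full resolution, the same trade-off that preserves high-density-gradient areas in the CT case.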

Hubert-Tremblay, Vincent; Archambault, Louis; Tubic, Dragan; Roy, Rene; Beaulieu, Luc [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie, CHUQ, Pavillon L'Hotel-Dieu de Quebec, 11 Cote du Palais, Quebec, Quebec, G1R 2J6 (Canada) and Departement de Physique, de Genie Physique et d'Optique, Universite Laval, Quebec, Quebec, G1K 7P4 (Canada); Departement de Radio-Oncologie et Centre de Recherche en Cancerologie, CHUQ, Pavillon L'Hotel-Dieu de Quebec, 11 Cote du Palais, Quebec, Quebec, G1R 2J6 (Canada) and Laboratoire de Vision et Systemes Numeriques, Departement de Genie Electrique et de Genie Informatique, Universite Laval, Quebec, Quebec, G1K 7P4 (Canada); Departement de Physique, de Genie Physique et d' Optique, Universite Laval, Quebec, Quebec, G1K 7P4 (Canada); Departement de Radio-Oncologie et Centre de Recherche en Cancerologie, CHUQ, Pavillon L'Hotel-Dieu de Quebec, 11 Cote du Palais, Quebec, G1R 2J6 (Canada) and Departement de Physique, de Genie Physique et d'Optique, Universite Laval, Quebec, Quebec, G1K 7P4 (Canada)

2006-08-15T23:59:59.000Z

393

A novel way to attain three dimensional fluence rate maps from Monte-Carlo simulations of photon propagation is presented in this work. The propagation of light in a turbid medium is described by the radiative transfer equation and formulated in terms of radiance. For many applications, particularly in biomedical optics, the fluence rate is a more useful quantity and directly derived from the radiance by integrating over all directions. Contrary to the usual way which calculates the fluence rate from absorbed photon power, the fluence rate in this work is directly calculated from the photon packet trajectory. The voxel based algorithm works in arbitrary geometries and material distributions. It is shown that the new algorithm is more efficient and also works in materials with a low or even zero absorption coefficient. The capabilities of the new algorithm are demonstrated on a curved layered structure, where a non-scattering, non-absorbing layer is sandwiched between two highly scattering layers.
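Computing fluence directly from trajectories rather than from absorbed power amounts to a track-length estimator: the summed path length in a region divided by its volume. A deliberately simplified one-dimensional sketch with forward-streaming photons and absorption at collisions; all parameters are illustrative:

```python
import math
import random

def fluence_tally(n_photons, mu_t, albedo, slab, n_bins, seed=9):
    """Track-length fluence estimator in a 1-D slab of unit cross
    section: score the photon path length overlapping each bin, then
    divide by the bin volume.  Toy forward-only transport in which a
    photon is absorbed with probability (1 - albedo) at each collision."""
    rng = random.Random(seed)
    bin_w = slab / n_bins
    tally = [0.0] * n_bins
    for _ in range(n_photons):
        z, alive = 0.0, True
        while alive and z < slab:
            step = -math.log(rng.random()) / mu_t
            z_new = min(z + step, slab)
            for b in range(n_bins):           # score segment-bin overlap
                seg = min(z_new, (b + 1) * bin_w) - max(z, b * bin_w)
                if seg > 0.0:
                    tally[b] += seg
            if z_new >= slab or rng.random() > albedo:
                alive = False                 # escaped or absorbed
            z = z_new
    return [t / bin_w for t in tally]         # path length per unit volume

fl = fluence_tally(20000, mu_t=1.0, albedo=0.7, slab=3.0, n_bins=5)
```

Because the score is path length, the tally stays well defined even in bins whose absorption coefficient is zero, which is the property the paper exploits for non-absorbing layers.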

Böcklin, Christoph, E-mail: boecklic@ethz.ch; Baumann, Dirk; Fröhlich, Jürg [Institute of Electromagnetic Fields, ETH Zurich, 8092 Zurich (Switzerland)

2014-02-14T23:59:59.000Z

394

Parameters are studied for a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder reactor (MSBR) core, and a booster reactor analogous to the core of the BN-350 liquid-metal-cooled fast breeder reactor (LMFBR). It is shown by means of Monte Carlo modeling that the reactor under study provides safe operation modes (k_{eff}=0.94-0.98), is capable of effectively transmuting radioactive nuclear waste, and reduces by an order of magnitude the requirements on the accelerator beam current. Calculations show that the maximal neutron flux in the thermal zone is 10^{14} cm^{-2}·s^{-1} and in the fast booster zone is 5.12·10^{15} cm^{-2}·s^{-1} at k_{eff}=0.98 and proton beam current I=2.1 mA.

Bznuni, S A; Zhamkochyan, V M; Polanski, A; Sosnin, A N; Khudaverdyan, A H

2001-01-01T23:59:59.000Z

395

A theoretical study of the time-of-flight (TOF) distributions under pulsed laser evaporation in vacuum has been performed. A database of TOF distributions has been calculated by the direct simulation Monte Carlo (DSMC) method. It is shown that describing experimental TOF signals through the use of the calculated TOF database, combined with a simple analysis of evaporation, allows determining the irradiated surface temperature and the rate of evaporation. Analysis of experimental TOF distributions under laser ablation of niobium, copper, and graphite has been performed, with the evaluated surface temperature agreeing well with the results of thermal model calculations. General empirical dependences are proposed which allow identifying the regime of laser-induced thermal ablation from the TOF distributions for neutral particles without invoking the DSMC-calculated database.

Morozov, Alexey A., E-mail: morozov@itp.nsc.ru [Institute of Thermophysics SB RAS, 1 Lavrentyev Ave., 630090 Novosibirsk (Russian Federation)

2013-12-21T23:59:59.000Z

396

For optimization and accurate prediction of the amount of H{sup -} ion production in negative ion sources, analysis of the electron energy distribution function (EEDF) is necessary. We developed a numerical code which analyzes the EEDF in a tandem-type arc-discharge source. It is a three-dimensional Monte Carlo simulation code including the effects of cusp, filter, and extraction magnets. Coulomb collisions between electrons are treated with Takizuka's model, and several inelastic collisions are treated with the null-collision method. We applied this code to the JAEA 10 ampere negative ion source. The numerical result shows that the order of magnitude of the electron density is in good agreement with experimental results. In addition, the obtained EEDF is qualitatively in good agreement with experimental results.

Fujino, I.; Hatayama, A.; Takado, N.; Inoue, T. [Keio University, 3-14-1 Hiyoshi, Kouhoku-ku, Yokohama 223-8522 (Japan); Japan Atomic Energy Agency, 801-1 Mukouyama, Naka 311-0193 (Japan)

2008-02-15T23:59:59.000Z

397

The effect of silica nanoparticles on transient microemulsion networks made of microemulsion droplets and telechelic copolymer molecules in water is studied, as a function of droplet size and concentration, amount of copolymer, and nanoparticle volume fraction. The phase diagram is found to be affected, and in particular the percolation threshold characterized by rheology is shifted upon addition of nanoparticles, suggesting participation of the particles in the network. This leads to a peculiar reinforcement behaviour of such microemulsion nanocomposites, the silica influencing both the modulus and the relaxation time. The reinforcement is modelled based on nanoparticles connected to the network via droplet adsorption. Contrast-variation Small Angle Neutron Scattering coupled to a reverse Monte Carlo approach is used to analyse the microstructure. The rather surprising intensity curves are shown to be in good agreement with the adsorption of droplets on the nanoparticle surface.

Nicolas Puech; Serge Mora; Ty Phou; Gregoire Porte; Jacques Jestin; Julian Oberdisse

2010-12-04T23:59:59.000Z

398

Structure of Cu64.5Zr35.5 Metallic glass by reverse Monte Carlo simulations

Reverse Monte Carlo (RMC) simulations have been widely used to generate three-dimensional (3D) atomistic models of glass systems. To examine the reliability of the method for metallic glass, we use RMC to predict the atomic configurations of a "known" structure from molecular dynamics (MD) simulations, and then compare the structure obtained from RMC with the target structure from MD. We show that when the structure factors and partial pair correlation functions from the MD simulations are used as inputs for RMC simulations, the 3D atomistic structure of the glass obtained from RMC gives short- and medium-range order in good agreement with that of the target structure from the MD simulation. These results suggest that the 3D atomistic structure of metallic glass alloys can be reasonably well reproduced by the RMC method with a proper choice of input constraints.
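The RMC loop itself is simple: perturb one atom, recompute the fit statistic against the target data, and keep the move if the fit improves. A greedy one-dimensional sketch fitting a pair-distance histogram; full RMC also accepts some worsening moves, and all parameters here are illustrative:

```python
import random

def pair_hist(xs, box, n_bins):
    """Histogram of minimum-image pair distances (1-D periodic cell)."""
    h = [0] * n_bins
    w = (box / 2.0) / n_bins
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            d = abs(xs[i] - xs[j])
            d = min(d, box - d)
            h[min(int(d / w), n_bins - 1)] += 1
    return h

def chi2(h, target):
    return sum((a - b) ** 2 for a, b in zip(h, target))

def rmc(target, n_atoms, box, n_bins, n_steps, seed=13):
    """Greedy reverse Monte Carlo: random single-atom moves are kept
    only when they lower chi^2 against the target histogram."""
    rng = random.Random(seed)
    xs = [rng.random() * box for _ in range(n_atoms)]
    start = cur = chi2(pair_hist(xs, box, n_bins), target)
    for _ in range(n_steps):
        i = rng.randrange(n_atoms)
        old = xs[i]
        xs[i] = (old + rng.gauss(0.0, 0.5)) % box
        new = chi2(pair_hist(xs, box, n_bins), target)
        if new <= cur:
            cur = new
        else:
            xs[i] = old               # reject: restore the old position
    return xs, start, cur

# Target histogram comes from a "known" structure: equally spaced atoms,
# playing the role of the MD reference in the paper.
box, n_atoms, n_bins = 10.0, 10, 5
ref = [i * box / n_atoms for i in range(n_atoms)]
target = pair_hist(ref, box, n_bins)
xs, start_chi2, final_chi2 = rmc(target, n_atoms, box, n_bins, 2000)
```

Driving chi^2 down against structure factors and partial pair correlations is exactly the constraint-fitting step whose reliability the paper tests against the MD target.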

Fang, Xikui W. [Ames Laboratory; Huang, Li [Ames Laboratory; Wang, Cai-Zhuang [Ames Laboratory; Ho, Kai-Ming [Ames Laboratory; Ding, Z. J. [University of Science and Technology of China

2014-02-07T23:59:59.000Z

399

We develop a Monte Carlo event generator based on a combination of a parton production formula including the effects of parton saturation (called the DHJ formula) and a hadronization process based on the Lund string fragmentation model. This event generator is designed for the description of hadron production at forward rapidities and in a wide transverse momentum range in high-energy proton-proton collisions. We analyze transverse momentum spectra of charged hadrons as well as of identified particles (pions, kaons, and (anti-)protons) at RHIC energy, and ultra-forward neutral pion spectra from the LHCf experiment. We compare our results to those obtained in other models based on parton-hadron duality and fragmentation functions.

Deng, Wei-Tian; Itakura, Kazunori; Nara, Yasushi

2014-01-01T23:59:59.000Z

400

Monte Carlo study for optimal conditions in single-shot imaging with femtosecond x-ray laser pulses

Intense x-ray pulses from x-ray free electron lasers (XFELs) enable the unveiling of atomic structure in material and biological specimens via ultrafast single-shot exposures. As the radiation is intense enough to destroy the sample, a new sample must be provided for each x-ray pulse. These single-particle delivery schemes require careful optimization, though systematic study to find such optimal conditions is still lacking. We have investigated two major single-particle delivery methods: particle injection as flying objects and membrane-mount as fixed targets. The optimal experimental parameters were searched for via Monte Carlo simulations to discover that the maximum single-particle hit rate achievable is close to 40%.
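A hit-rate estimate of this kind reduces to geometric Monte Carlo: sample the particle's transverse position from the assumed delivery jitter and count how often it overlaps the beam focus. A minimal sketch with made-up beam and jitter parameters, not the injection/membrane models of the study:

```python
import random

def hit_rate(n_trials, beam_radius, jitter_sigma, particle_radius, seed=17):
    """Monte Carlo estimate of the single-particle hit rate: a shot is a
    'hit' when the particle's jittered transverse position overlaps the
    beam focus (all dimensions in arbitrary units, illustrative only)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        x = rng.gauss(0.0, jitter_sigma)   # transverse delivery jitter
        y = rng.gauss(0.0, jitter_sigma)
        if x * x + y * y <= (beam_radius + particle_radius) ** 2:
            hits += 1
    return hits / n_trials

rate = hit_rate(100000, beam_radius=1.0, jitter_sigma=1.5,
                particle_radius=0.1)
```

Scanning such a model over delivery parameters (jitter, focus size, repetition rate) is how an optimum like the ~40% maximum hit rate reported above can be located.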

Park, Jaehyun; Ishikawa, Tetsuya; Song, Changyong [RIKEN SPring-8 Center, 1-1-1 Kouto, Sayo, Hyogo 679-5148 (Japan)] [RIKEN SPring-8 Center, 1-1-1 Kouto, Sayo, Hyogo 679-5148 (Japan); Joti, Yasumasa [Japan Synchrotron Radiation Research Institute, 1-1-1 Kouto, Sayo, Hyogo 679-5198 (Japan)] [Japan Synchrotron Radiation Research Institute, 1-1-1 Kouto, Sayo, Hyogo 679-5198 (Japan)

2013-12-23T23:59:59.000Z


401

Science Journals Connector (OSTI)

Objective: To describe and validate the simulation of the basic features of the GE Millennium MG gamma camera using the GATE Monte Carlo platform. Material and methods: Crystal size and thickness, parallel-hole collimation, and a realistic energy acquisition window were simulated in the GATE platform. GATE results were compared to experimental data in the following imaging conditions: a point source of 99mTc at different positions during static imaging and tomographic acquisitions using two different energy windows. The agreement between the events expected and those detected by simulation was assessed with the Mann-Whitney-Wilcoxon test. Comparisons were made regarding the measurement of sensitivity and spatial resolution, static and tomographic. Simulated and experimental spatial resolutions for tomographic data were compared with the Kruskal-Wallis test to assess simulation accuracy for this parameter. Results: There was good agreement between simulated and experimental data. The number of decays expected, when compared with the number of decays registered, showed a small deviation (~0.007%). The sensitivity comparisons between static acquisitions for different distances from source to collimator (1, 5, 10, 20, 30 cm) with energy windows of 126-154 keV and 130-158 keV showed differences of 4.4%, 5.5%, 4.2%, 5.5%, 4.5% and 5.4%, 6.3%, 6.3%, 5.8%, 5.3%, respectively. For the tomographic acquisitions, the mean differences were 7.5% and 9.8% for the energy windows 126-154 keV and 130-158 keV. Comparison of simulated and experimental spatial resolutions for tomographic data showed no statistically significant differences at the 95% confidence level. Conclusions: Adequate simulation of the system's basic features using the GATE Monte Carlo simulation platform was achieved and validated.

L. Vieira; T.F. Vaz; D.C. Costa; P. Almeida

2014-01-01T23:59:59.000Z

402

SU-E-T-76: Monte Carlo Based Assessment of Matrixx Array as a QA Tool for HDR

Science Journals Connector (OSTI)

Purpose: Verification of HDR dose delivered from surface applicators is challenging because there is a lack of backscatter and most brachytherapy TPS dose calculations do not accurately model such situations. The IBA Matrixx™ provides a means to measure HDR dose, but brachytherapy sources are outside the manufacturer's energy specification. This work investigates the use of an ion chamber array for HDR treatment verification. Methods: A Harrison-Mick (HAM) applicator in conjunction with an Ir-192 afterloader was used to deliver a superficial plan with 9 x 9 dwells on a grid separated by 1 cm, normalized to 100 cGy at a depth of 3 mm in skin. The Matrixx™ array was used to measure the dose. The backscatter contribution to the measured dose was changed by varying layers of solid water on the HAM applicator. The brachytherapy module of EGSnrc was used to simulate this experimental setup. The microSelectron V2 Ir-192 source was carefully modeled following published literature. Rayleigh scattering, bound Compton scattering, and photoelectric absorption were all simulated. Results: Taking the HAM applicator's inherent backscatter of about 0.5 cm as reference, the dosimetric contribution saturates around 6 cm. An absence of backscatter material reduces the dose at 3 mm into skin by about 3.5-4%. Dose from the Monte Carlo simulation compares favorably with Matrixx™ measurements; beyond 5 cm of backscatter the increase in measured dose is less than 0.5%, and our results from Monte Carlo simulations reflect this. Conclusion: A maximum difference of 0.5% between measured and simulated doses for different amounts of backscatter material indicates that the Matrixx™ ion chamber array, even when used in the kV energy domain of the HDR source, can be a satisfactory clinical QA device for checking planar dose distributions.

2013-01-01T23:59:59.000Z

403

Quantum Monte Carlo calculation of the electronic binding energy in a C60 molecule

Sørensen, Erik S.

404

Light-induced oxygen-ordering dynamics in (Y,Pr)Ba2Cu3O6.7: A Raman spectroscopy and Monte Carlo study. An energy barrier impedes oxygen movement in the plane unless the oxygen atoms are excited by light; reordering of oxygen in the chain plane is proposed as the origin of Raman photobleaching and related effects.

Nabben, Reinhard

405

Phase Behavior of the Restricted Primitive Model and Square-Well Fluids from Monte Carlo. Chemical Engineering, Cornell University, Ithaca, NY 14853-5201; Institute for Physical Science and Technology and Department of Chemical Engineering, University of Maryland, College Park, MD 20742

406

The purpose of this study is to validate a Monte Carlo based depletion methodology by comparing calculated post-irradiation uranium isotopic compositions in the fuel elements of the High Flux Isotope Reactor (HFIR) core to values measured using uranium mass-spectrographic analysis. Three fuel plates were analyzed: two from the outer fuel element (OFE) and one from the inner fuel element (IFE). Fuel plates O-111-8, O-350-1, and I-417-24 from outer fuel elements 5-O and 21-O and inner fuel element 49-I, respectively, were selected for examination. Fuel elements 5-O, 21-O, and 49-I were loaded into HFIR during cycles 4, 16, and 35, respectively (mid to late 1960s). Approximately one year after each of these elements was irradiated, they were transferred to the High Radiation Level Examination Laboratory (HRLEL), where samples from these fuel plates were sectioned and examined via uranium mass-spectrographic analysis. The isotopic composition of each of the samples was used to determine the atomic percent of the uranium isotopes. A Monte Carlo based depletion computer program, ALEPH, which couples the MCNP and ORIGEN codes, was utilized to calculate the nuclide inventory at the end-of-cycle (EOC). A current ALEPH/MCNP input for HFIR fuel cycle 400 was modified to replicate cycles 4, 16, and 35. The control element withdrawal curves and flux trap loadings were revised, as were the radial zone boundaries and nuclide concentrations in the MCNP model. The calculated EOC uranium isotopic compositions for the analyzed plates were found to be in good agreement with measurements, which reveals that ALEPH/MCNP can accurately calculate burn-up dependent uranium isotopic concentrations for the HFIR core. The spatial power distribution in HFIR changes significantly as irradiation time increases due to control element movement.
Accurate calculation of the end-of-life uranium isotopic inventory is a good indicator that the power distribution variation as a function of space and time is accurately calculated, i.e. an integral check. Hence, the time dependent heat generation source terms needed for reactor core thermal hydraulic analysis, if derived from this methodology, have been shown to be accurate for highly enriched uranium (HEU) fuel.

Chandler, David [ORNL; Maldonado, G Ivan [ORNL; Primm, Trent [ORNL

2010-01-01T23:59:59.000Z

407

We propose a new Monte Carlo method to study extended X-ray sources with the European Photon Imaging Camera (EPIC) aboard XMM Newton. The Smoothed Particle Inference (SPI) technique, described in a companion paper, is applied here to the EPIC data for the clusters of galaxies Abell 1689, Centaurus and RXJ 0658-55 (the ''bullet cluster''). We aim to show the advantages of this method of simultaneous spectral-spatial modeling over traditional X-ray spectral analysis. In Abell 1689 we confirm our earlier findings about structure in temperature distribution and produce a high resolution temperature map. We also confirm our findings about velocity structure within the gas. In the bullet cluster, RXJ 0658-55, we produce the highest resolution temperature map ever to be published of this cluster allowing us to trace what looks like the motion of the bullet in the cluster. We even detect a south to north temperature gradient within the bullet itself. In the Centaurus cluster we detect, by dividing up the luminosity of the cluster in bands of gas temperatures, a striking feature to the north-east of the cluster core. We hypothesize that this feature is caused by a subcluster left over from a substantial merger that slightly displaced the core. We conclude that our method is very powerful in determining the spatial distributions of plasma temperatures and very useful for systematic studies in cluster structure.

Andersson, Karl E.; /Stockholm U. /SLAC; Peterson, J.R.; /Purdue U. /KIPAC, Menlo Park; Madejski, G.M.; /SLAC /KIPAC, Menlo Park

2007-04-17T23:59:59.000Z

408

The three-body dynamics of the ionization of atomic hydrogen by 30 keV antiproton impact has been investigated by calculation of fully differential cross sections (FDCS) using the classical trajectory Monte Carlo (CTMC) method. The results of the calculations are compared with the predictions of quantum mechanical descriptions: the semi-classical time-dependent close-coupling theory, the fully quantal, time-independent close-coupling theory, and the continuum-distorted-wave-eikonal-initial-state model. In the analysis, particular emphasis was put on the role played by the nucleus-nucleus (NN) interaction in the ionization process. For low-energy electron ejection, CTMC predicts a large NN interaction effect on the FDCS, in agreement with the quantum mechanical descriptions. By examining individual particle trajectories it was found that the relative motion between the electron and the nuclei is coupled very weakly with that between the nuclei; consequently the two motions can be treated independently. A simple ...

Sarkadi, L

2015-01-01T23:59:59.000Z

409

Science Journals Connector (OSTI)

Abstract An explosive detection system based on a Deuterium–Deuterium (D–D) neutron generator has been simulated using the Monte Carlo N-Particle Transport Code (MCNP5). Nuclear-based explosive detection methods can detect explosives by identifying their elemental components, especially nitrogen. Thermal neutron capture reactions have been used for detecting prompt gamma emission (10.82 MeV) following radiative neutron capture by 14N nuclei. The explosive detection system was built based on a fully high-voltage-shielded, axial D–D neutron generator with a radio frequency (RF) driven ion source and nominal yield of about 1010 fast neutrons per second (E=2.5 MeV). Polyethylene and paraffin were used as moderators with borated polyethylene and lead as neutron and gamma ray shielding, respectively. The shape and the thickness of the moderators and shields are optimized to produce the highest thermal neutron flux at the position of the explosive and the minimum total dose at the outer surfaces of the explosive detection system walls. In addition, simulation of the response functions of NaI, BGO, and LaBr3-based ?-ray detectors to different explosives is described.

K. Bergaoui; N. Reguigui; C.K. Gary; C. Brown; J.T. Cremer; J.H. Vainionpaa; M.A. Piestrup

2014-01-01T23:59:59.000Z

410

Science Journals Connector (OSTI)

We describe a 3-dimensional, time-dependent Monte Carlo model developed to analyze the chemical and physical nature of a cometary gas coma. Our model includes the necessary physics and chemistry to recreate the conditions applicable to Comet Hale–Bopp when the comet was near 1 AU from the Sun. Two base models were designed and are described here. The first is an isotropic model that emits particles (parents of the observed gases) from the entire nucleus; the second is a jet model that ejects parent particles solely from discrete active areas on the surface of the comet nucleus, resulting in coma jets. The two models are combined to produce the final model, which is compared with observations. The physical processes incorporated in both base models include: (1) isotropic ejection of daughter molecules (the observed gases) in the parent's frame of reference, (2) solar radiation pressure, (3) solar insolation effects, (4) collisions of daughter products with other molecules in the coma, and (5) acceleration of the gas in the coma. The observed daughter molecules are produced when a parent decays, which is represented by either an exponential decay distribution (photodissociation of the parent gas) or a triangular distribution (production from a grain extended source). Application of this model to the analysis the OH, C2 and CN gas jets observed in the coma of Comet Hale–Bopp is the focus of the accompanying paper [Lederer, S.M., Campins, H., Osip, D.J., 2008. Icarus, in press (this issue)].

S.M. Lederer; H. Campins; D.J. Osip

2009-01-01T23:59:59.000Z

411

An analysis of surface potential nonlinearity at metal oxide/electrolyte interfaces is presented. By using Grand Canonical Monte Carlo simulations of a simple lattice model of an interface, we show a correlation exists between ionic strength as well as surface site densities and the non-Nernstian response of a metal oxide electrode. We propose two approaches to deal with the nonlinearity: one based on a perturbative expansion of the Gibbs free energy and another based on an assumption about the pH-dependence of the surface potential slope. The theoretical analysis based on our new potential form gives excellent performance in extreme pH regions, where classical formulae based on the Poisson-Boltzmann equation fail. The new formula is general and independent of any underlying assumptions. For this reason, it can be directly applied to experimental surface potential measurements, including those for individual surfaces of single crystals, as we present for data reported by Kallay and Preocanin [Kallay, Preocanin, J. Colloid Interface Sci. 318 (2008) 290].

Zarzycki, Piotr P.; Rosso, Kevin M.

2010-01-01T23:59:59.000Z

412

The origin of ultra-fast outflows in AGN: Monte-Carlo simulations of the wind in PDS 456

Ultra-fast outflows (UFOs) are seen in many AGN, giving a possible mode for AGN feedback onto the host galaxy. However, the mechanism(s) for the launch and acceleration of these outflows are currently unknown, with UV line driving apparently strongly disfavoured as the material along the line of sight is so highly ionised that it has no UV transitions. We revisit this issue using the Suzaku X-ray data from PDS 456, an AGN with the most powerful UFO seen in the local Universe. We explore conditions in the wind by developing a new 3-D Monte-Carlo code for radiation transport. The code only handles highly ionised ions, but the data show the ionisation state of the wind is high enough that this is appropriate, and this restriction makes it fast enough to explore parameter space. We reproduce the results of earlier work, confirming that the mass loss rate in the wind is around 30% of the inferred inflow rate through the outer disc. We show for the first time that UV line driving is likely to be a major contributio...

Hagino, Kouichi; Done, Chris; Gandhi, Poshak; Watanabe, Shin; Sako, Masao; Takahashi, Tadayuki

2014-01-01T23:59:59.000Z

413

Two-dimensional axisymmetric particle-in-cell simulations with Monte Carlo collision calculations (PIC-MCC) have been conducted to investigate argon microplasma characteristics of a miniature inductively coupled plasma source with a 5-mm-diameter planar coil, where the radius and length are 5 mm and 6 mm, respectively. Coupling of the rf electromagnetic fields to the plasma is carried out based on a collisional model and a kinetic model. The former employs the cold-electron approximation and the latter incorporates warm-electron effects. The numerical analysis has been performed for pressures in the range 370-770 mTorr and at 450 MHz rf powers below 3.5 W, and the PIC-MCC results are compared with available experimental data and fluid simulation results. The results show that a considerably thick sheath structure can be seen compared with the plasma reactor size and that the electron energy distribution is non-Maxwellian over the entire plasma region. As a result, the distribution of the electron temperature is quite different from that obtained in the fluid model. The electron temperature as a function of rf power is in reasonable agreement with experimental data. The pressure dependence of the plasma density shows a different tendency between the collisional and kinetic models, implying noncollisional effects even at high pressures due to the high rf frequency, where the electron collision frequency is less than the rf driving frequency.

Takao, Yoshinori; Kusaba, Naoki; Eriguchi, Koji; Ono, Kouichi [Department of Aeronautics and Astronautics, Graduate School of Engineering, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)

2010-11-15T23:59:59.000Z

414

Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of- flight chopper spectrometers [A.A. Aczel et al, Nature Communications 3, 1124 (2012)]. These modes are well described by 3D isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for the nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states (PDOS), and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximate Q-independent background to the spectrum at the oscillator mode positions. Temperature dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T-dependence of the scattering from these modes is strongly influenced by the uranium lattice.

Lin, J. Y. Y. [California Institute of Technology, Pasadena]; Aczel, Adam A [ORNL]; Abernathy, Douglas L [ORNL]; Nagler, Stephen E [ORNL]; Buyers, W. J. L. [National Research Council of Canada]; Granroth, Garrett E [ORNL]

2014-01-01T23:59:59.000Z

415

The first off-lattice Monte Carlo kinetics model of interstellar dust-grain surface chemistry is presented. The positions of all surface particles are determined explicitly, according to the local potential minima resulting from the pair-wise interactions of contiguous atoms and molecules, rather than by a pre-defined lattice structure. The model is capable of simulating chemical kinetics on any arbitrary dust-grain morphology, as determined by the user-defined positions of each individual dust-grain atom. A simple method is devised for the determination of the most likely diffusion pathways and their associated energy barriers for surface species. The model is applied to a small, idealized dust grain, adopting various gas densities and using a small chemical network. Hydrogen and oxygen atoms accrete onto the grain, to produce H2O, H2, O2 and H2O2. The off-lattice method allows the ice structure to evolve freely; ice mantle porosity is found to be dependent on the gas density, which controls the accretion ra...

Garrod, Robin T

2013-01-01T23:59:59.000Z

416

United States Transuranium and Uranium Registries (USTUR) Case 0102 was the first whole-body donation to the USTUR (1979), of a worker affected by a substantial accidental 241Am intake(1). Half of this man's skeleton, encased in tissue-equivalent plastic, provides a unique human 'phantom' for calibrating in vivo counting systems. In this case, the 241Am skeletal activity was measured 25 y after the intake. Approximately 82 % of the 241Am remaining in the body was found in the bones and teeth. The 241Am activity concentration throughout the skeleton (in all types of bone) was fairly uniform(2). A protocol has been proposed by a group of in vivo laboratories from Europe [CIEMAT-Spain, IRSN-France and Helmholtz Zentrum München (HMGU)-Germany] and Canada (HML) participating in this DOS/USTUR intercomparison. The focus areas for the study included: (1) the efficiency pattern along the leg phantom using Germanium detectors (experimental and computational), (2) the comparison of Monte Carlo (MC) results with experimental values in counting efficiency data and (3) the influence of americium distribution in the bone material (volume or surface).

Lopez, M. A.; Broggio, D.; Capello, K.; Cardenas-Mendez, E.; El-Faramawy, N.; Franck, D.; James, Anthony C.; Kramer, Gary H.; Lacerenza, G.; Lynch, Timothy P.; Navarro, J. F.; Navarro, T.; Perez, B.; Ruhm, W.; Tolmachev, Sergei Y.; Weitzenegger, E.

2011-03-01T23:59:59.000Z

417

The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo J. Chem. Phys. {\\bf 136}, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. {\\bf 89}, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the one expanded by the lower energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace expanded by the small basis systematically converges towards the subspace expanded by the lowest energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can be also used to accelerate the calculation of the ground or excited states with Quantum Monte Carlo.

Kim, Jeongnim [ORNL] [ORNL; Reboredo, Fernando A [ORNL] [ORNL

2014-01-01T23:59:59.000Z

418

Monte Carlo Simulation of RPC-based PET with GEANT4

Resistive Plate Chambers (RPC) are low-cost charged-particle detectors with good timing resolution and potentially good spatial resolution. Using the RPC as a gamma detector provides an opportunity for application in positron emission tomography (PET). In this work, we use the GEANT4 simulation package to study various methods of improving the detection efficiency of a realistic RPC-based PET model for 511 keV photons: adding more detection units, changing the thickness of each layer, choosing different converters, and using the multi-gap RPC (MRPC) technique. A proper balance among these factors is discussed. It is found that although RPCs built from materials of high atomic number can reach a higher efficiency, they may contribute to a poorer spatial resolution and a higher background level.

Weizheng, Zhou; Cheng, Li; Hongfang, Chen; Yongjie, Sun; Tianxiang, Chen

2014-01-01T23:59:59.000Z

419

Monte Carlo Simulation of RPC-based PET with GEANT4

Resistive Plate Chambers (RPC) are low-cost charged-particle detectors with good timing resolution and potentially good spatial resolution. Using the RPC as a gamma detector provides an opportunity for application in positron emission tomography (PET). In this work, we use the GEANT4 simulation package to study various methods of improving the detection efficiency of a realistic RPC-based PET model for 511 keV photons: adding more detection units, changing the thickness of each layer, choosing different converters, and using the multi-gap RPC (MRPC) technique. A proper balance among these factors is discussed. It is found that although RPCs built from materials of high atomic number can reach a higher efficiency, they may contribute to a poorer spatial resolution and a higher background level.

Zhou Weizheng; Shao Ming; Li Cheng; Chen Hongfang; Sun Yongjie; Chen Tianxiang

2014-02-19T23:59:59.000Z

420

Purpose: Brachytherapy planning software relies on the Task Group report 43 dosimetry formalism. This formalism, based on a water approximation, neglects various heterogeneous materials present during treatment. Various studies have suggested that these heterogeneities should be taken into account to improve the treatment quality. The present study sought to demonstrate the feasibility of incorporating Monte Carlo (MC) dosimetry within an inverse planning algorithm to improve the dose conformity and increase the treatment quality. Methods and Materials: The method was based on precalculated dose kernels in full patient geometries, representing the dose distribution of a brachytherapy source at a single dwell position using MC simulations and the Geant4 toolkit. These dose kernels are used by the inverse planning by simulated annealing tool to produce a fast MC-based plan. A test was performed for an interstitial brachytherapy breast treatment using two different high-dose-rate brachytherapy sources: the microSelectron iridium-192 source and the electronic brachytherapy source Axxent operating at 50 kVp. Results: A research version of the inverse planning by simulated annealing algorithm was combined with MC to provide a method to fully account for the heterogeneities in dose optimization, using the MC method. The effect of the water approximation was found to depend on photon energy, with greater dose attenuation for the lower energies of the Axxent source compared with iridium-192. For the latter, an underdosage of 5.1% for the dose received by 90% of the clinical target volume was found. Conclusion: A new method to optimize afterloading brachytherapy plans that uses MC dosimetric information was developed. Including computed tomography-based information in MC dosimetry in the inverse planning process was shown to take into account the full range of scatter and heterogeneity conditions. 
This led to significant dose differences compared with the Task Group report 43 approach for the Axxent source.

D'Amours, Michel [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de l'Universite Laval, Hotel-Dieu de Quebec, Quebec, QC (Canada); Department of Physics, Physics Engineering, and Optics, Universite Laval, Quebec, QC (Canada); Pouliot, Jean [Department of Radiation Oncology, University of California, San Francisco, School of Medicine, San Francisco, CA (United States); Dagnault, Anne [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de l'Universite Laval, Hotel-Dieu de Quebec, Quebec, QC (Canada); Verhaegen, Frank [Department of Radiation Oncology, Maastro Clinic, GROW Research Institute, Maastricht University Medical Centre, Maastricht (Netherlands); Department of Oncology, McGill University, Montreal, QC (Canada); Beaulieu, Luc, E-mail: beaulieu@phy.ulaval.ca [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de l'Universite Laval, Hotel-Dieu de Quebec, Quebec, QC (Canada); Department of Physics, Physics Engineering, and Optics, Universite Laval, Quebec, QC (Canada)

2011-12-01T23:59:59.000Z

421

This study investigates the performance of the YALINA Booster subcritical assembly, located in Belarus, during operation with high (90%), medium (36%), and low (21%) enriched uranium fuels in the assembly's fast zone. The YALINA Booster is a zero-power, subcritical assembly driven by a conventional neutron generator. It was constructed for the purpose of investigating the static and dynamic neutronics properties of accelerator driven subcritical systems, and to serve as a fast neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor-actinides. The first part of this study analyzes the assembly's performance with several fuel types. The MCNPX and MONK Monte Carlo codes were used to determine effective and source neutron multiplication factors, effective delayed neutron fraction, prompt neutron lifetime, neutron flux profiles and spectra, and neutron reaction rates produced from the use of three neutron sources: californium, deuterium-deuterium, and deuterium-tritium. In the latter two cases, the external neutron source operates in pulsed mode. The results discussed in the first part of this report show that the use of low enriched fuel in the fast zone of the assembly diminishes neutron multiplication. Therefore, the discussion in the second part of the report focuses on finding alternative fuel loading configurations that enhance neutron multiplication while using low enriched uranium fuel. It was found that arranging the interface absorber between the fast and the thermal zones in a circular rather than a square array is an effective method of operating the YALINA Booster subcritical assembly without downgrading neutron multiplication relative to the original value obtained with the use of the high enriched uranium fuels in the fast zone.

Talamo, A.; Gohar, Y. (Nuclear Engineering Division)

2011-05-12T23:59:59.000Z

422

This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

2008-09-11T23:59:59.000Z

423

The tokamak Monte Carlo fast ion module NUBEAM in the National Transport Code Collaboration library

Science Journals Connector (OSTI)

The NUBEAM module is a comprehensive computational model for Neutral Beam Injection (NBI) in tokamaks. It is used to compute power deposition, driven current, momentum transfer, fueling, and other profiles in tokamak plasmas due to NBI. NUBEAM computes the time-dependent deposition and slowing down of the fast ions produced by NBI, taking into consideration beam geometry and composition, ion-neutral interactions (atomic physics), anomalous diffusion of fast ions, the effects of large scale instabilities, the effect of magnetic ripple, and finite Larmor radius effects. The NUBEAM module can also treat fusion product ions that contribute to alpha heating and ash accumulation, whether or not NBI is present. These physical phenomena are important in simulations of present day tokamaks and projections to future devices such as ITER. The NUBEAM module was extracted from the TRANSP integrated modeling code, using standards of the National Transport Code Collaboration (NTCC), and was submitted to the NTCC module library (http://w3.pppl.gov/NTCC). This paper describes the physical processes computed in the NUBEAM module, together with a summary of the numerical techniques that are used. The structure of the NUBEAM module is described, including its dependence on other NTCC library modules. Finally, a description of the procedure for setting up input data for the NUBEAM module and making use of the output is outlined.

Alexei Pankin; Douglas McCune; Robert Andre; Glenn Bateman; Arnold Kritz

2004-01-01T23:59:59.000Z

424

Purpose: The experimental determination of doses at proximal distances from radioactive sources is difficult because of the steepness of the dose gradient. The goal of this study was to determine the relative radial dose distribution for a low dose rate {sup 192}Ir wire source using electron paramagnetic resonance imaging (EPRI) and to compare the results to those obtained using Gafchromic EBT film dosimetry and Monte Carlo (MC) simulations. Methods: Lithium formate and ammonium formate were chosen as the EPR dosimetric materials and were used to form cylindrical phantoms. The dose distribution of the stable radiation-induced free radicals in the lithium formate and ammonium formate phantoms was assessed by EPRI. EBT films were also inserted inside in ammonium formate phantoms for comparison. MC simulation was performed using the MCNP4C2 software code. Results: The radical signal in irradiated ammonium formate is contained in a single narrow EPR line, with an EPR peak-to-peak linewidth narrower than that of lithium formate ({approx}0.64 and 1.4 mT, respectively). The spatial resolution of EPR images was enhanced by a factor of 2.3 using ammonium formate compared to lithium formate because its linewidth is about 0.75 mT narrower than that of lithium formate. The EPRI results were consistent to within 1% with those of Gafchromic EBT films and MC simulations at distances from 1.0 to 2.9 mm. The radial dose values obtained by EPRI were about 4% lower at distances from 2.9 to 4.0 mm than those determined by MC simulation and EBT film dosimetry. Conclusions: Ammonium formate is a suitable material under certain conditions for use in brachytherapy dosimetry using EPRI. In this study, the authors demonstrated that the EPRI technique allows the estimation of the relative radial dose distribution at short distances for a {sup 192}Ir wire source.

Kolbun, N.; Leveque, Ph.; Abboud, F.; Bol, A.; Vynckier, S.; Gallez, B. [Biomedical Magnetic Resonance Unit, Louvain Drug Research Institute, Universite catholique de Louvain, Avenue Mounier 73.40, B-1200 Brussels (Belgium); Molecular Imaging and Experimental Radiotherapy Unit, Institute of Experimental and Clinical Research, Universite catholique de Louvain, Avenue Hippocrate 55, B-1200 Brussels (Belgium); Biomedical Magnetic Resonance Unit, Louvain Drug Research Institute, Universite catholique de Louvain, Avenue Mounier 73.40, B-1200 Brussels (Belgium)

2010-10-15T23:59:59.000Z

425

A study of the contrast of a submerged disc using Monte Carlo techniques

The thesis derives expressions for the attenuation coefficient K and the apparent contrast of a submerged Secchi disc in terms of the single-scattering albedo, the backscattered fraction F, and the cosine of the scattering angle. [The equations and tabulated values in this excerpt were garbled in extraction; Figure 3 plots Apparent Contrast vs. Optical Depth of a Secchi Disc in a Rayleigh Ocean.] The dependence of the Secchi Disc depth on the albedo, the backscattered fraction, and the viewing geometry can be seen...

Hagan, Donald Frank

2012-06-07T23:59:59.000Z

426

fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY Approved by: Chair of Committee, John W. Poston, Sr. Committee Members, Leslie A. Braby John R. Ford, Jr.... for his patient guidance in my study and research and enthusiastic help in my life. He is not only my academic advisor but also a mentor in spirit. I also would like to thank my committee members Dr. Leslie A. Braby, Dr. John Ford and Dr. Nancy Turner...

Guan, Fada 1982-

2012-04-27T23:59:59.000Z

427

distribution, and while this precludes the study of dynamics, it does allow the trajectory to be optimized with it.1,2 The question naturally arises as to the physical meaning of these extensions and whether... This has been written in the simplest form. In practice one often solves the natural motion to higher order

Attard, Phil

428

Science Journals Connector (OSTI)

A recent mouse study demonstrated that gold nanoparticles could be safely administered and used to enhance the tumour dose during radiation therapy. The use of gold nanoparticles seems more promising than earlier methods because of the high atomic number of gold and because nanoparticles can more easily penetrate the tumour vasculature. However, to date, possible dose enhancement due to the use of gold nanoparticles has not been well quantified, especially for common radiation treatment situations. Therefore, the current preliminary study estimated this dose enhancement by Monte Carlo calculations for several phantom test cases representing radiation treatments with the following modalities: 140 kVp x-rays, 4 and 6 MV photon beams, and 192Ir gamma rays. The current study considered three levels of gold concentration within the tumour, two of which are based on the aforementioned mouse study, and assumed either no gold or a single gold concentration level outside the tumour. The dose enhancement over the tumour volume considered for the 140 kVp x-ray case can be at least a factor of 2 at an achievable gold concentration of 7 mg Au/g tumour assuming no gold outside the tumour. The tumour dose enhancement for the cases involving the 4 and 6 MV photon beams based on the same assumption ranged from about 1% to 7%, depending on the amount of gold within the tumour and photon beam qualities. For the 192Ir cases, the dose enhancement within the tumour region ranged from 5% to 31%, depending on radial distance and gold concentration level within the tumour. For the 7 mg Au/g tumour cases, the loading of gold into surrounding normal tissue at 2 mg Au/g resulted in an increase in the normal tissue dose, up to 30%, negligible, and about 2% for the 140 kVp x-rays, 6 MV photon beam, and 192Ir gamma rays, respectively, while the magnitude of dose enhancement within the tumour was essentially unchanged.

Sang Hyun Cho

2005-01-01T23:59:59.000Z

429

Purpose: To investigate recommendations for reference dosimetry of electron beams and gradient effects for the NE2571 chamber and to provide beam quality conversion factors using Monte Carlo simulations of the PTW Roos and NE2571 ion chambers. Methods: The EGSnrc code system is used to calculate the absorbed dose-to-water and the dose to the gas in fully modeled ion chambers as a function of depth in water. Electron beams are modeled using realistic accelerator simulations as well as beams modeled as collimated point sources from realistic electron beam spectra or monoenergetic electrons. Beam quality conversion factors are calculated with ratios of the doses to water and to the air in the ion chamber in electron beams and a cobalt-60 reference field. The overall ion chamber correction factor is studied using calculations of water-to-air stopping power ratios. Results: The use of an effective point of measurement shift of 1.55 mm from the front face of the PTW Roos chamber, which places the point of measurement inside the chamber cavity, minimizes the difference between R{sub 50}, the beam quality specifier, calculated from chamber simulations and that obtained using depth-dose calculations in water. A similar shift minimizes the variation of the overall ion chamber correction factor with depth to the practical range and reduces the root-mean-square deviation of a fit to calculated beam quality conversion factors at the reference depth as a function of R{sub 50}. Similarly, an upstream shift of 0.34 r{sub cav} allows a more accurate determination of R{sub 50} from NE2571 chamber calculations and reduces the variation of the overall ion chamber correction factor with depth. The determination of the gradient correction using a shift of 0.22 r{sub cav} optimizes the root-mean-square deviation of a fit to calculated beam quality conversion factors if all beams investigated are considered. However, if only clinical beams are considered, a good fit to results for beam quality conversion factors is obtained without explicitly correcting for gradient effects. The inadequacy of R{sub 50} to uniquely specify beam quality for the accurate selection of k{sub Q} factors is discussed. Systematic uncertainties in beam quality conversion factors are analyzed for the NE2571 chamber and amount to between 0.4% and 1.2% depending on assumptions used. Conclusions: The calculated beam quality conversion factors for the PTW Roos chamber obtained here are in good agreement with literature data. These results characterize the use of an NE2571 ion chamber for reference dosimetry of electron beams even in low-energy beams.
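The beam quality conversion factor discussed in this record is, at its core, a ratio of Monte Carlo dose ratios. A minimal sketch of that arithmetic follows; the tally values and the helper name `k_q` are placeholders of ours, not results or code from the paper.

```python
# Hedged sketch: forming a beam quality conversion factor k_Q from
# Monte Carlo dose tallies in the beam of quality Q and in a Co-60
# reference field. All numerical values below are illustrative only.
def k_q(dw_q, dgas_q, dw_co, dgas_co):
    """k_Q = (D_water / D_gas) at quality Q divided by the same ratio at Co-60."""
    return (dw_q / dgas_q) / (dw_co / dgas_co)

# placeholder tallies (e.g., Gy per source particle)
kq = k_q(dw_q=1.00e-11, dgas_q=1.12e-11, dw_co=1.00e-11, dgas_co=1.10e-11)
```

Because k_Q is a double ratio, many tally normalization factors (source strength, per-particle scaling) cancel, which is one reason this formulation is convenient for Monte Carlo work.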

Muir, B. R., E-mail: bmuir@physics.carleton.ca; Rogers, D. W. O., E-mail: drogers@physics.carleton.ca [Physics Department, Carleton Laboratory for Radiotherapy Physics, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada)]

2013-12-15T23:59:59.000Z

430

Radical-ion-pair reactions, central in photosynthesis and the avian magnetic compass mechanism, have recently been shown to be a paradigm system for applying quantum information science in a biochemical setting. The fundamental quantum master equation describing radical-ion-pair reactions is still under debate. We here use quantum retrodiction to produce a rigorous refinement of the theory put forward in Phys. Rev. E 83, 056118 (2011). We also provide a rigorous analysis of the measure of singlet-triplet coherence required for deriving the radical-pair master equation. A Monte-Carlo simulation with single-molecule quantum trajectories supports the self-consistency of our approach.

Kritsotakis, M

2014-01-01T23:59:59.000Z

431

A computational tool is described that can be used for designing magnetic focusing or defocusing systems. A fully three-dimensional classical trajectory Monte Carlo simulation has been developed. Ion trajectories are simulated in the presence of magnetic elements that can be modeled as any combination of current loops and current lines. Each current loop or line may be located anywhere in the system and oriented along any of the three coordinate axes. The configuration need not be axisymmetric. The solutions are obtained using normalized parameters, which can be used for easily scaling the results. Examples are provided of the utility of the code.
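A minimal sketch of the kind of trajectory integration such a tool performs is given below, assuming a single infinite current line along the z-axis and a Boris velocity push. The paper's code handles arbitrary combinations of current loops and lines with normalized parameters; everything here (field model, parameter values) is illustrative only.

```python
# Hedged sketch (not the paper's code): classical trajectory of an ion
# in the magnetic field of an infinite straight current line on the
# z-axis, advanced with the Boris pusher (E = 0). SI units throughout.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def b_field_line(pos, current=1000.0):
    """Azimuthal B field of an infinite current line along z: |B| = mu0*I/(2*pi*r)."""
    x, y, _ = pos
    r2 = x * x + y * y
    pref = MU0 * current / (2.0 * np.pi * r2)
    return np.array([-pref * y, pref * x, 0.0])

def boris_push(pos, vel, q_over_m, dt, field):
    """One Boris step with no electric field: rotate v about the local B, then drift."""
    t = 0.5 * q_over_m * dt * field(pos)
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = vel + np.cross(vel, t)
    vel_new = vel + np.cross(v_prime, s)
    return pos + vel_new * dt, vel_new

# proton-like ion launched parallel to the line, 5 cm away
pos = np.array([0.05, 0.0, 0.0])
vel = np.array([0.0, 0.0, 1.0e4])
Q_OVER_M = 9.58e7  # C/kg, proton charge-to-mass ratio
for _ in range(1000):
    pos, vel = boris_push(pos, vel, Q_OVER_M, 1e-8, b_field_line)
# the magnetic force only rotates the velocity, so the speed is conserved
```

The Boris scheme is a natural fit here because it conserves kinetic energy exactly in a pure magnetic field, which mirrors the physics of magnetic focusing elements.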

Lane, R. A.; Ordonez, C. A. [Department of Physics, University of North Texas, Denton, Texas (United States)

2013-04-19T23:59:59.000Z

432

We have developed a moment-based scale-bridging algorithm for thermal radiative transfer problems. The algorithm takes the form of well-known nonlinear-diffusion acceleration which utilizes a low-order (LO) continuum problem to accelerate the solution of a high-order (HO) kinetic problem. The coupled nonlinear equations that form the LO problem are efficiently solved using a preconditioned Jacobian-free Newton-Krylov method. This work demonstrates the applicability of the scale-bridging algorithm with a Monte Carlo HO solver and reports the computational efficiency of the algorithm in comparison to the well-known Fleck-Cummings algorithm. (authors)
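The defining trick of a Jacobian-free Newton-Krylov solver like the one used for the LO problem above is that Jacobian-vector products are approximated by finite differences of the residual, so the Jacobian matrix is never formed. The sketch below illustrates this on a toy two-equation system; the residual, function names, and the dense linear solve (standing in for GMRES, purely for brevity) are our own illustrative choices, not the paper's solver.

```python
# Hedged sketch of the Jacobian-free idea: J(u) @ v is approximated by
# (F(u + eps*v) - F(u)) / eps, so a Krylov method only ever needs
# residual evaluations. The toy residual below stands in for the LO system.
import numpy as np

def residual(u):
    """Toy nonlinear system F(u) = 0 with a root at u = (1, 2)."""
    return np.array([u[0] ** 2 + u[1] - 3.0, u[0] + u[1] ** 2 - 5.0])

def jv(u, v, eps=1e-7):
    """Matrix-free approximation of the Jacobian-vector product J(u) @ v."""
    return (residual(u + eps * v) - residual(u)) / eps

def newton_matfree(u, iters=20):
    """Newton iteration using only jv() for Jacobian access. A real JFNK
    code would hand jv() to GMRES; here we build J column-by-column and
    solve densely, which is equivalent for this 2x2 toy problem."""
    for _ in range(iters):
        J = np.column_stack([jv(u, e) for e in np.eye(len(u))])
        u = u + np.linalg.solve(J, -residual(u))
    return u

u = newton_matfree(np.array([1.0, 1.0]))
```

In the scale-bridging context, `residual` would encapsulate the nonlinear LO continuum equations, and preconditioning the Krylov solve is what makes the approach efficient.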

Park, H. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Densmore, J. D. [Bettis Atomic Power Laboratory, West Mifflin, PA 15122 (United States); Wollaber, A. B.; Knoll, D. A.; Rauenzahn, R. M. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

2013-07-01T23:59:59.000Z

433

Momentum spectra of hydrogen isotopes have been measured at 3.5 deg from C12 fragmentation on a Be target. Momentum spectra cover both the region of fragmentation maximum and the cumulative region. Differential cross sections span five orders of magnitude. The data are compared to predictions of four Monte Carlo codes: QMD, LAQGSM, BC, and INCL++. There are large differences between the data and predictions of some models in the high momentum region. The INCL++ code gives the best and almost perfect description of the data.

Abramov, B M; Borodin, Yu A; Bulychjov, S A; Dukhovskoy, I A; Krutenkova, A P; Kulikov, V V; Martemianov, M A; Matsyuk, M A; Turdakina, E N; Khanov, A I; Mashnik, S G

2015-01-01T23:59:59.000Z

434

Science Journals Connector (OSTI)

......assessing exposure from such communications infrastructure(1). Networks based on technologies...and TE polarisation to create a uniform grid of basic solutions. For each computation...Carlo method outlined in this study is a hybrid approach, using a small set of basic......

S. Iskra; R. McKenzie; I. Cosic

2011-11-01T23:59:59.000Z

435

Death ligand mediated apoptotic activation is a mode of programmed cell death that is widely used in cellular and physiological situations. Interest in studying death ligand induced apoptosis has increased due to the promising role of recombinant soluble forms of death ligands (mainly recombinant TRAIL) in anti-cancer therapy. A clear elucidation of how death ligands activate the type 1 and type 2 apoptotic pathways in healthy and cancer cells may help develop better chemotherapeutic strategies. In this work, we use kinetic Monte Carlo simulations to address the problem of type 1/type 2 choice in death ligand mediated apoptosis of cancer cells. Our study provides insights into the activation of the membrane-proximal death module, which results from a complex interplay between death and decoy receptors. The relative abundance of death and decoy receptors was shown to be a key parameter for activation of the initiator caspases in the membrane module. Increased concentration of death ligands frequently increased the type 1...

Raychaudhuri, Subhadip

2015-01-01T23:59:59.000Z

436

The Department of Energy (DOE) has given the Spallation Neutron Source (SNS) project approval to begin Title I design of the proposed facility to be built at Oak Ridge National Laboratory (ORNL) and construction is scheduled to commence in FY01. The SNS initially will consist of an accelerator system capable of delivering an {approximately}0.5 microsecond pulse of 1 GeV protons, at a 60 Hz frequency, with 1 MW of beam power, into a single target station. The SNS will eventually be upgraded to a 2 MW facility with two target stations (a 60 Hz station and a 10 Hz station). The radiation transport analysis, which includes the neutronic, shielding, activation, and safety analyses, is critical to the design of an intense high-energy accelerator facility like the proposed SNS, and the Monte Carlo method is the cornerstone of the radiation transport analyses.

Johnson, J.O.

2000-10-23T23:59:59.000Z

437

Monte Carlo (MC) criticality calculations are based on an iterative method. The method requires a converged fission source distribution before tallying the effective multiplication factor (K{sub eff}) or other quantities of interest. However, it is difficult to detect, during the run, when the source has converged, and scores may be biased by an initial transient. This paper deals with a method that locates and suppresses the transient due to the initialization in an output series, applied here to K{sub eff} and Shannon entropy. It relies on modeling stationary series by an order 1 autoregressive process and applying statistical tests based on a Student Bridge statistic. It should be noted that the initial transient suppression only aims at obtaining stationary output series and cannot guarantee any kind of convergence. The truncation method is applied to both K{sub eff} and Shannon entropy on three test cases. (authors)
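The core idea in this record, fitting an AR(1) model to a cycle-by-cycle output series and truncating the initial transient, can be illustrated with a much cruder stationarity check than the authors' Student Bridge statistic. The sketch below is our own simplified illustration on a synthetic K_eff series; nothing here reproduces the paper's test.

```python
# Simplified illustration (NOT the authors' Student-Bridge test): fit an
# order-1 autoregressive coefficient to a K_eff-by-cycle series and drop
# leading cycles until the remainder looks stationary, judged by a crude
# two-halves mean comparison.
import numpy as np

rng = np.random.default_rng(0)

def ar1_fit(x):
    """Least-squares estimate of the lag-1 autoregression coefficient."""
    d = np.asarray(x, float) - np.mean(x)
    return float(d[:-1] @ d[1:] / (d[:-1] @ d[:-1]))

def truncate_transient(x, z=2.0):
    """Return the first index from which the two halves of the remaining
    series agree in mean to within z combined standard errors."""
    x = np.asarray(x, float)
    for start in range(len(x) - 20):
        a, b = np.array_split(x[start:], 2)
        se = np.hypot(a.std(ddof=1) / np.sqrt(len(a)),
                      b.std(ddof=1) / np.sqrt(len(b)))
        if abs(a.mean() - b.mean()) < z * se:
            return start
    return len(x)

# synthetic K_eff series: decaying initial transient + AR(1) noise around 1.0
n = 400
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.5 * noise[i - 1] + rng.normal(scale=1e-3)
keff = 1.0 + 0.05 * np.exp(-np.arange(n) / 30.0) + noise
cut = truncate_transient(keff)  # cycles to discard before tallying
```

As the abstract cautions, this kind of truncation only yields a series that looks stationary; it cannot by itself guarantee source convergence.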

Jinaphanh, A.; Miss, J.; Richet, Y. [Inst. for Radiological Protection and Nuclear Safety IRSN, BP 17, 92262 Fontenay-Aux-Roses Cedex (France); Jacquet, O. [Independent Consultant (France)

2012-07-01T23:59:59.000Z

438

Science Journals Connector (OSTI)

We nonperturbatively investigate the ground state magnetic properties of the 2D half-filled SU(2N) Hubbard model in the square lattice by using the projector determinant quantum Monte Carlo simulations combined with the method of local pinning fields. Long-range Néel orders are found for both the SU(4) and SU(6) cases at small and intermediate values of U. In both cases, the long-range Néel moments exhibit nonmonotonic behavior with respect to U, which first grow and then drop as U increases. This result is fundamentally different from the SU(2) case in which the Néel moments increase monotonically and saturate. In the SU(6) case, a transition to the columnar dimer phase is found in the strong interaction regime.

Da Wang; Yi Li; Zi Cai; Zhichao Zhou; Yu Wang; Congjun Wu

2014-04-17T23:59:59.000Z

439

Analysis of insulation characteristics of c-C4F8 and N2 gas mixtures by the Monte Carlo method

Science Journals Connector (OSTI)

The motion of electrons in c-C4F8 and N2 gas mixtures for a pulsed Townsend discharge is simulated by the Monte Carlo method. The effective ionization coefficient over the E/N range from 160 to 480 Td is calculated by employing a set of cross sections available in the literature. From the variation of the effective ionization coefficient with the c-C4F8 mixture ratio k, the limiting field of the gas mixture at different gas contents is determined. The gas pressure ratios required to match the insulation properties of SF6, and the global warming potential (GWP) at those pressures, were also investigated. It is found that the insulation characteristics of the N2 and c-C4F8 gas mixtures are comparable with those of N2 and SF6 mixtures, but the GWP of the former is significantly lower than that of the latter. Simulation results show excellent agreement with experimental data available in the literature.
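Once a Monte Carlo sweep has produced the effective ionization coefficient as a function of E/N, the limiting field is simply the zero crossing of that curve. The sketch below shows that interpolation step with made-up values; the numbers are not from the paper, and the helper name `limiting_field` is ours.

```python
# Illustrative sketch (values are invented, not the paper's data): locate
# the limiting field (E/N)_lim where the effective ionization coefficient
# alpha_eff = alpha - eta crosses zero, by linear interpolation between
# the two bracketing Monte Carlo points.
import numpy as np

def limiting_field(e_over_n, alpha_eff):
    """E/N (Td) at which alpha_eff crosses zero, assuming a single
    negative-to-positive crossing within the sampled range."""
    s = np.sign(alpha_eff)
    i = int(np.argmax(s[:-1] * s[1:] < 0))  # first sign change
    x0, x1 = e_over_n[i], e_over_n[i + 1]
    y0, y1 = alpha_eff[i], alpha_eff[i + 1]
    return x0 - y0 * (x1 - x0) / (y1 - y0)

en = np.array([160.0, 240.0, 320.0, 400.0, 480.0])  # Td, as in the study's range
ae = np.array([-4.0, -1.5, 0.5, 2.8, 5.0])          # arbitrary units
lim = limiting_field(en, ae)  # crossing lies between 240 and 320 Td
```

Repeating this for each mixture ratio k traces out the limiting field versus gas content, which is how mixtures are compared against SF6.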

Bian-Tao Wu; Deng-Ming Xiao; Zhang-Sheng Liu; Liu-Chun Zhang; Xue-Li Liu

2006-01-01T23:59:59.000Z

440

The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.

Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.

2000-03-01T23:59:59.000Z


441

Quantum Monte Carlo calculations of electromagnetic moments and transitions are reported for A{<=}9 nuclei. The realistic Argonne v{sub 18} two-nucleon and Illinois-7 three-nucleon potentials are used to generate the nuclear wave functions. Contributions of two-body meson-exchange current (MEC) operators are included for magnetic moments and M1 transitions. The MEC operators have been derived in both a standard nuclear physics approach and a chiral effective field theory formulation with pions and nucleons including up to one-loop corrections. The two-body MEC contributions provide significant corrections and lead to very good agreement with experiment. Their effect is particularly pronounced in the A=9, T=3/2 systems, in which they provide up to ~20% (~40%) of the total predicted value for the {sup 9}Li ({sup 9}C) magnetic moment.

Pastore, S.; Pieper, S. C.; Schiavilla, R.; Wiringa, R.

2013-03-01T23:59:59.000Z